People can invest in markets without a 401k, with more options (plans commonly offer only a handful of funds) and lower fees (both admin fees and inflated fund expense ratios). And you may pay more taxes with a 401k than otherwise, depending on your future tax rate (which is unknowable).
The only pure advantage is employer matching if you have it and stay employed long enough for it to vest.
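To make the fee-drag point concrete, here's a rough sketch. All numbers are illustrative assumptions (a $10k lump sum, 30 years, a 7% nominal return, a 0.05% expense ratio for a cheap index fund versus 1% all-in for a pricey plan), not advice, and it ignores the tax treatment entirely:

```python
def grow(principal, years, annual_return, annual_fee):
    """Compound a lump sum, deducting a flat fee from each year's return."""
    balance = principal
    for _ in range(years):
        balance *= 1 + annual_return - annual_fee
    return balance

# Illustrative assumptions: $10k, 30 years, 7% nominal return.
low_fee  = grow(10_000, 30, 0.07, 0.0005)  # cheap index fund in a brokerage
high_fee = grow(10_000, 30, 0.07, 0.0100)  # plan with 1% all-in fees

print(f"low-fee account: ${low_fee:,.0f}")
print(f"1%-fee plan:     ${high_fee:,.0f}")
print(f"fee drag:        ${low_fee - high_fee:,.0f}")
```

Under these assumptions the 1% fee eats a five-figure chunk of the final balance, which is why the tax deferral alone isn't automatically a win.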
> It's infuriating. Nearly all of the agentic coding best practices are things that we should have just been doing all along
There's a good reason we didn't, though: we didn't see any obvious value in it, so it felt like a waste of time. Now it feels like time well spent.
It's the user's fault. They vote for this crap with their attention. Junk sites like this shouldn't exist, but they do and aren't going anywhere until people stop using them.
Some users might enable these kinds of features with their attention, but I don't think users actually want these features, and any kind of "voting" is likely unintentional. It's manipulation. The fault lies mainly with the company and its carefully planned dark patterns. Ideally, users would punish them by e.g. leaving the platform, but there's friction, which (depending on the user) may be a bigger problem than the dark patterns themselves. And I don't think there's any platform that can guarantee a good user experience now and in the future.
I'm not sure users even realize what the dark patterns are or do. Users aren't all-knowing, with endless time, carefully balancing their attention to provide markets with the optimal signal to wisely guide the misbehaving actors.
Is it really the users' fault when the apps are literally designed by neuroscientists who explicitly engineer them to be addictive to humans, all of it funded by monopolistic companies whose leadership tends to hold antidemocratic views about humanity?
Maybe we should finally regulate these addict boxes as the dangerous substances they are.
Users are not perfect agents. How can you expect the average non-technical person to figure out what is happening? For most people, if they don't visually see something happening on the screen, it doesn't exist. They simply have no frame of reference to figure out that LinkedIn is hijacking their scroll speed.
> I do regularly read the code that Claude outputs
You probably could have s/Claude/Human/ in your rant and been just as accurate. I don't know how many times I've flagged these issues in code reviews. And that's only assuming the human even bothered to write tests...
What I find is that when I ask AI to write tests it writes too many, and I agree with you that a lot of them are useless. But then I just tell it that, and it agrees with me and cleans it up. Much faster feedback loop and much better final result.
I feel like people who look at a poor result, stop there, and conclude it's useless have made up their minds and don't want to see the better results that are right in front of them if they'd just spend an extra 5 seconds trying.
How do you know whether the tests it spits out are bad if you don't read the tests?
We’re not dealing with AGI here. Tests aren’t strictly necessary for humans; they are for AI. AI requires guardrails to keep it from spinning out. That’s essentially the entire premise of the agentic workflow.
I’m pretty sure they just meant they do testing, not that they read the tests, and that’s how everyone else who responded interpreted it as well.
You can get Claude to write good tests, but based on what I’m seeing at work, that’s not what’s happening. They always look plausible even when they’re wrong, so people either don’t read them, skim them very quickly, or read the first few, assume the rest work, and commit.
I think Claude is great for testing because setting up test data and infrastructure is such a boring slog. But it almost always takes a lot of back and forth and careful handholding to get it right.
I read the tests. It also really helps to have Claude verify that removing the changes in question breaks the tests. This brings the quality way up for me.
> damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person
What damage are you talking about?
I'm not sure I understand why it matters that there is no real person there if you can't actually tell the difference. You're just demonstrating that you don't actually need a human for whatever it is you're doing.
Your wife or mother calls you or video calls you and says to meet her somewhere, or to send money, or to pick up groceries or whatever. Does it not matter that it wasn't her? Could it be someone trying to manipulate you into going somewhere, to be robbed or whatever? At any rate, you'll need to verify that information came from the source you trust before you act on it, and that verification has a cost.
The damage is to the trust we have in our communication media. The conclusion here is that every person is trivial to impersonate; that's the damage.
Ok fine, let's put it in the context of business. Your competitor impersonates your customer, gives you bad instructions. After following the bad instructions, you lose the contract with your customer, and your competitor (the attacker) is free to try and replace you.
If you got a suspicious text, the logical thing is to call up the person who sent it and try to verify it. AI impersonation makes that much harder.
Or even better, open the on-prem AI portal and type something like "I just got a suspicious call from client X, but I am on a lunch break. Call him and use a fake video of me. Ask him if what he said is true..."
Because what you are actually doing is exchanging symbols, tokens, if you will, that may be redeemed in a future meatspace rendezvous for a good or service (e.g. a job, a parcel). These tokens are handshakes, contracts, video calls, etc. to be exchanged for the actual things merely represented therein.
Instead what we have now with AI is people exchanging merely the tokens and being contented with the symbol in-and-of itself, as something valuable in its own right, with no need for an actual candidate or physical product underlying the symbol.
There is a clip by McLuhan I can't be assed to find right now where he says eventually people will stop deriving pleasure from the products themselves and instead derive the feelings of (projected) accomplishment and pleasure from viewing advertisements about the product. The product itself becomes obsolete, for all you actually need to evoke the desired response is the advertisement, or the symbol.
A hiring manager interviewing an AI and offering it a job is like buying the advertisement you just watched, and... that's it. Nothing more; the transaction is complete.
>Instead of tending towards a vast Alexandrian library the world has become a computer, an electronic brain, exactly as an infantile piece of science fiction. And as our senses have gone outside us, Big Brother goes inside. So, unless aware of this dynamic, we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed co-existence. [...] Terror is the normal state of any oral society, for in it everything affects everything all the time. [...] In our long striving to recover for the Western world a unity of sensibility and of thought and feeling we have no more been prepared to accept the tribal consequences of such unity than we were ready for the fragmentation of the human psyche by print culture.
The grandparent post believes that human interaction is intrinsically better. I'm not sure I agree, but I can understand the POV.
However, the increase in fake videos that are difficult to tell from real ones is indeed a potential issue. But the fact that misinformation is already so prevalent suggests that better fake video doesn't make things much worse than they already are, imho.
You're not sure if human to human interaction is intrinsically more valuable than a human talking to a facsimile? That feels like a very dangerous position to hold for one's ethical calculations and general sanity. I'm clinging tightly to the value of the bond with other people, even the passing connection, but certainly with my family members as this article is about.
I much prefer using the ATM, self-checkouts, and an e-commerce website over having to talk to somebody at a branch to get money, buy my groceries, or book a holiday.
Human-to-human may be more valuable, but that may not have much to do with the truth of their statements. For example, if your relatives are hooked up to a constant misinformation feed, it becomes problematic to communicate and deal with them.
What I'm saying is that LLMs don't have to do truly novel work in order to be useful. They are useful because the lion's share of all work is a variation on an existing theme (even if the creator may not realize it).
The point is that saying the LLM failed to do what the overwhelming majority of devs can't do either isn't exactly damning.
It's like Stephen King saying an AI-generated novel isn't as good as his. Fine, but most of us have much lesser ambitions than topping the work of the most successful people in the field.
Yes, the cohort of Islamists and anti-Semites, opposed to the decolonisation of historically Jewish lands which were deliberately conquered in the name of Islam.
> The only pure advantage is employer matching if you have it and stay employed long enough for it to vest.
So not exactly a necessity to build wealth.