I thought this was going to be about a problem we saw recently. Someone used an LLM to update the comment block at the start of each source file, and the tool the LLM wrote for itself ended up changing ALL of the line endings when it wrote the files back out with the corrected comment block. Instead of an LLM we could have used find and replace, but people now treat the LLM as the only tool.
Mario Zechner did a talk on this in which he says he just gets everything from an agent as an HTML slide show. It looks better, you can page through it, and it can include diagrams, etc. Unfortunately I couldn't find a link to the talk.
I recently watched some videos on the production of the Cybercab, which has now started public testing. They've still done some great engineering, to the point that the car is now assembled like a Matchbox car: all of the drive components are contained in a single package for a FWD configuration that the body simply drops down onto. The car has no controls besides the screen and the door pulls. The materials are all lower cost, and they even found a way to skip painting the cars. All of this should help them cut costs significantly.
As for the self-driving, they may still be far off; it's hard for me to get a read on that. This vehicle is a bet that they will be able to achieve it, right down to the braille in the cabin, so maybe that's why they could still fail. What I will say is that despite the PR disaster that the CEO is, which gives us that feeling that the company has lost its mind, it seems they are still quietly doing some advanced engineering.
Correction: shareholders don't keep the profit. The company keeps most of it on its balance sheet, which may cause a corresponding rise in the price of Apple's stock if people did not already anticipate that level of return (and if markets are rational).
The only money shareholders actually pocket is the dividend, which was $0.27 per share out of a profit of $2.01 per share.
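For scale, the payout ratio implied by those two per-share figures is small; a quick check:

```python
# Per-share figures from the comment above.
dividend_per_share = 0.27
earnings_per_share = 2.01

# Fraction of profit paid out as cash vs. retained by the company.
payout_ratio = dividend_per_share / earnings_per_share
retained_per_share = earnings_per_share - dividend_per_share

print(f"payout ratio: {payout_ratio:.1%}")        # ~13.4%
print(f"retained per share: ${retained_per_share:.2f}")  # $1.74
```

So roughly 87% of each dollar of profit stays with the company rather than reaching shareholders directly.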
I'm afraid of AI, but not because I think it's going to become Skynet tomorrow; it's because of all the social ills that are already clearly attached to it:
- Spam
- Deep Fakes
- Porn
- Buggy Software
- Economic Bubbles
- Degradation of people's abilities and learned dependence on ChatGPT for basic functions
- Job loss through enshittification, à la AI interviews and telemarketers
Yeah, this is definitely not sustainable. We're all getting tired of the content quality going downhill. If it's going to be like this for a while, I guess new social networks will have to emerge and moderate more? Maybe, especially since the government definitely isn't interested in moderating anything. They just want to win races.
How do you ever build a social media network that is immune to this from now on? You could, with the best intentions, start a non-profit, defederated, open-source, grassroots social network. It would go great, right up until the moment it hits critical mass and becomes prey for people who are willing to piss in the pond to make money.
There's no way to defend against it: anyone can just copy and paste text from an LLM into the reply box.