I also remember a similar wave around 10-15 years ago, when ML tooling and libraries became more accessible, with more open source releases etc. People whose value-add was knowing MATLAB toolboxes and keeping their code private got very afraid when Python, NumPy, scikit-learn, Theano etc. came to the forefront and people started releasing code alongside research papers on GitHub. Anyone could just grab that working code, start tweaking the equations, and put different tools and techniques together, even if you didn't work at one of the few companies in the know or didn't do an internship at one of those labs.
Or other people who just kept their research dataset private and milked it for years training incrementally better ML models on the same data. Then similar datasets appeared openly and they threw a hissy fit.
Usually there are a million little tricks and an oral culture around how to use various datasets, configurations, hyperparameters etc., and papers often only gave away the high-level ideas and the math. But when the code started to become open, it freaked out many who felt they wouldn't be able to keep up and just wanted to coast until retirement by simply guarding their knowledge and skills from becoming too widely known. Many of them were convinced it was going to go away. "Python is just a silly, free language. Serious engineers use MATLAB; after all, that's a serious paid product. All the kiddies stacking layers in Theano will just go away, it's just a fad and we will all go back to SVMs, which have real math backing them up from VC theory." (The Vapnik-Chervonenkis kind, not the venture capital kind.)
I don't want to be too dismissive though. People build up an identity, like the blacksmith of the village back in the day, and just want to keep doing it: build a life on a skill they learned in their youth, do it 9 to 5, focus on family etc. I get it. But wishing won't make it so.
Talented, skilled people with good intuition and judgement will be needed for a long time, but even that will require adapting to changing tools and workflows. And the bulk of the workforce is not that.
This is so true... I am having issues with the change right now.. being older and trying to incorporate agentic workflow into MY workflow is difficult as I have trust issues with the new codebase.. I do have good people skills with my clients, but my secret sauce was my coding skilz.. and I built my identity around that..
The cure for me has been to write an agent myself from first principles.
Tailored to my workflow, style, goals, projects and as close as possible to what I think is how an agent should work. I’m deliberately only using an existing agent as a rubber duck.
I agree with you and I'm generally an AI "defender" when people superficially dismiss AI capabilities, but this is a more subtle point.
If you prompt with little raw material and little actual specification of what you want to see in the end, e.g. you just say "make a detailed breakdown dashboard-like site that analyzes this codebase", the result will have this uncanny character.
I'd describe it as a kind of "fanfic". It (and now I'm not just talking about this website but my overall impression of this phenomenon) reminds me a bit of how, when I was 15 or so, I had an idea about how the world works, and then things turned out to be less flashy, less movie-like, less clear-cut, less-impressive-to-a-teenage-boy than I had thought.
If you know the concept of "stupid man's idea of a smart man", I'd say AI made stuff (with little iteration) gives this outward appearance of a smart man from the Reddit-midwit-cinematic-universe. It's like how guns in movies sound more like guns than real guns. It's hyperreality.
Again this is less about the capabilities of AI and it's more connected to the people-pleasing nature of it. It's like you prompt it for some epic dinner and it heaps you up some hmmm epic bacon with bacon yeah (referring to the hivemind-meme). Or BigMac on the poster vs the tray, and the poster one is a model made with different components that are more photogenic. It's a simulacrum.
It looks more like your naive, currently imagined version of what you think you need vs. what you'd actually need. It's like prompting your ideal girlfriend into AI avatar existence. I'm sure she will fit your ideal thought and imagination much better, but your actual life would need the actual thing.
This relates to the Persona thing that Anthropic has been exploring: each prompt guides the model towards adopting a certain archetypal fictional character as its persona, and there are certain attraction basins that get reinforced with post-training. And in the computer world, simulated action can easily be turned into real action with harnesses and tools, so I'm not saying that it doesn't accomplish the task. But it seems that there are sloppier personas, and that experts can more easily avoid summoning them by giving the model context that reflects mundane reality, compared to a novice, or to an expert who gives little context. Otherwise the AI persona will be summoned from the Reddit midwit movie.
I'm not fully clear about all this, but I think we have a lot to figure out around how to use and judge the output of AI in a productive workflow. I don't think it will go away ever, but will need some trimming at the edges for sure.
It's how people resisted CGI back in the day. What people dislike is low quality. There is a loud subset who are really against it on principle, just as there are people who insist on analog music, but regular people are much more practical; they just don't post about this all day on the internet.
perhaps one important detail is that cassette tape guys and Lucasfilm aren’t/weren’t demanding a complete and total restructuring of the economy and society
An excellent observation. When films became digital, the real backlash came when they stopped distributing film for the old film projectors and every movie theater had to invest in a very expensive DCP projector. Some couldn't and were forced to shut down.
If I had lost my local movie theater because of digital film, I would have a really good reason to hate the technology, even though the blame is on the studios forcing that technology on everyone.
It is not. People resisted bad CGI. During the advent of CGI, people celebrated masterpieces like The Matrix and even Titanic. They hated, however, The Scorpion King.
> I think less of someone as a person if they send me AI slop.
n=1 but working on side projects for others, i could easily generate ai images (instead of using stock photos) for a client, but i resist because i also feel this but as the sender...
there is the fact that such images 'look ai' but even if it were perfect, idk somehow i feel cheap doing that.
Agreed. Even in low value stuff I’d so much rather use basic stock images, ms paint drawings or almost anything over AI images. Seeing them is almost like being near someone who stinks or is sick/coughing. It’s a very visceral reaction.
No, I don't think most people are really against AI Gen works "on principle". Or at least not in any interpretation of "on principle" that would allow for you to be dismissive of complaints in this way.
I think principles are important. Especially when it comes to art, principle might be all we have. Going back to the crypto example, NFTs were art that real people had made. In some cases, very good art. People railed against NFTs despite the quality of the art. That is being against something on principle. Comparatively, if my local grocery chains were owned by neonazis, I'd have a much harder time standing on principle, given that doing so may have a negative impact on my ability to survive and prosper.
AI Gen works, on the other hand, most often do not come with readily available marking that it is AI Gen. What people are complaining about is the lack of quality in the work. If they accuse a poorly human-written article of being AI Gen, that's just a mistake. But the general case is a legitimate evaluation of the quality of the material and the conditions under which it was made and presented.
In my own case, while I certainly have plenty of "principled" reasons to dislike AI Gen works, I also dislike it because it's just garbage. Oh yeah, sure, it's impressive that a computer can spit out reasonable content at all. It would equally be impressive for a chimpanzee to start talking in full sentences. That doesn't mean I'm going to start going to the chimpanzee for dissertations on the human condition.
Not just in the obvious ways either, even good CGI has been detrimental to the film (and TV) making process.
I was watching some behind the scenes footage from something recently, and the thing that struck me most was just how they wouldn't bother with the location shoot now and just green-screen it all for the convenience.
Even good CGI is changing not just how films are made, but what kinds of films get shot and what kind of stories get told.
Regardless of the quality of the output, there's a creativeness in film-making that is lost as CGI gets better and cheaper to do.
it may be an unpopular opinion but i feel like that when watching any of the marvel movies... its like its just a showcase for green screens and ridiculous rubber-band acrobatics cgi everywhere...
that kind of stuff might work in anime or cartoons, but live action just looks ridiculous to me for the most part.
Not the same. The more effort you put into CGI the more invisible it becomes. But you can’t prompt your way out of hallucinations and other AI artifacts. AI is a completely different technology from CGI. There is no equivalence between them.
The story is that I was getting into a new genre of music, namely Japanese city pop from the 1980s. I was totally unfamiliar with the genre and started listening to it on YouTube. I found one playlist, which I listened to a lot, thinking: “wow, this is very formulaic, and the lyrics are very generic”, but I kind of thought that was just how the genre went. Finally I had planned to use it during a small local event, but when I went to find out who the artists were, I embarrassingly found out it was all AI generated.
Thing is, in this instance I knew nothing of the source material. When I went to get actual songs, written by actual people, the difference was stark. I would be able to recognize AI-generated city pop in an instant now, 8 months later. This experience kind of felt like I had been scammed. That my ignorance of the genre had been taken advantage of. It was not pleasant.
You don't understand. I mean content that even now, you don't know it is AI.
Obviously you think the AI content that you can identify is bad. But there is content you've encountered that you think is good and not AI content, that actually is AI generated.
I had a very similar experience, looking for music to play during D&D sessions. Not paying close attention to the music, it seemed like it fit the bill. Once I started listening more closely, there were lots of issues that became readily apparent.
My dad has also started sharing links with me on Facebook to pop songs that have been rearranged in different genres. This was a big area of fun for a number of folks in my family several years ago as we discovered YouTube artists like Chase Holfelder, who put significant effort into making very high quality rearrangements. But I kept noticing these weird issues in the new songs.
I've gotten to where I can identify an AI-generated song almost immediately: there's a weird, high-frequency hiss in the mix, like heavy noise fighting through compression artifacts, even when the source it's supposedly coming from should be clean. There's a general lack of enthusiasm in the vocals and a boring, nonsensical progression to the lyrics on original arrangements. Sometimes the person generating the song tries to hide that last issue by generating instrumentals only, or they use one of those try-too-hard-to-sound-badass country rock genres that are popular on TikTok to stick on top of clips from the TV show Yellowstone (WTF is with that?!), but then when I check the details, there's obviously AI cover art for artists I've never heard of. The accounts will be anthologies full of these artists that have never existed.
So, I know people keep parroting "a good artist can use any tool". But I've yet to see it. All this "democratizing art" (didn't know anyone was gate keeping it to begin with, certainly have not seen any lack of talent online in several years) doesn't seem to be producing results. It becomes pretty obvious very quickly it's all just a pump and dump scheme to Get Them Clicks.
i think they are referring to statements that they have "solved" hallucinations and it won't be a problem anymore (which obviously hasn't happened yet anyway)
My guess is that post-training has gotten a lot better in the last couple of years, and what people are attributing to better models is actually just traditional (non-LLM) models placed on top of the LLM, which makes it appear that the model has increased in quality (including by seemingly fewer hallucinations).
If this is the case, it would be observable with different prompting strategies, once you find a prompt that puts more weight on the post-training models.
It's not hard to find them; they are in clear text in the binary. You can search for known ones with grep and find the rest nearby. You could even replace them in place (but at that point it's effectively configurable).
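To make the above concrete, here is a minimal sketch of that workflow; the file name, the embedded string, and the replacement are all hypothetical stand-ins, not taken from the thread:

```shell
# Build a toy "binary" with an embedded plain-text string (illustrative only).
printf 'HDR\x00known system prompt\x00\x01\x02more data' > app.bin

# strings extracts printable runs; grep filters for the known text.
strings -a app.bin | grep "known system prompt"

# -a: treat the binary as text, -b: print the byte offset, -o: only the match.
grep -abo "known system prompt" app.bin

# An in-place patch should keep the same byte length (pad with NULs here)
# so that offsets elsewhere in the binary stay valid.
offset=$(grep -abo "known system prompt" app.bin | cut -d: -f1)
printf 'my replacement\x00\x00\x00\x00\x00' | \
  dd of=app.bin bs=1 seek="$offset" conv=notrunc 2>/dev/null

strings -a app.bin | grep "my replacement"
rm app.bin
```

The same-length padding is the important detail: overwriting with a shorter or longer string would shift or corrupt whatever follows in the file.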
Yes. Tools like Khan Academy help lots of talented kids to progress in the curriculum beyond what's available in physical classrooms available to them.
There are simply not enough teachers who can provide such an ideal, imagined education, at least not for the current rate of teacher salaries (and it's very far off). The educational strategy has to scale to real people, real teachers and real students as they are in the flesh, not some ivory tower pipe dream. We've had decades of this "we should teach how to think, not what to think".
Alternatively, if you don't care about scale, as in rolling out a system to the population at large, then yeah, this kind of advanced education exists; it's just very selective and found in advanced extracurriculars or obtained through private tutors.
This also assumes that universal education is a sensible aim. I think that's doubtful and that it contributes to these sorts of burdens and waters down the quality of education in the process.
As a concrete example, for a few decades now, we've been pushing primary school students toward university education quite aggressively and broadly. It was quite common to scare students toward university by claiming that without a university degree, they would be flipping burgers at McDonald's. This, of course, is completely false, and it is disgraceful that such dishonest and manipulative tactics were used. Today, because of rising university costs and the dubious value of most university education, we're seeing this idea challenged at the level of the university. Gen Z's interest in trades has increased by something like 1500%. I don't see this as a negative. In Germany, for instance, there is a more balanced distribution across trades and university.
Now, I admit that the situation is a bit different in the case of primary education, but here, too, I think we do well to think in terms of reform rather than technology and patching up a pedagogically and administratively broken system. The American education system spends an inordinate amount of money on each student with little to show for it. If, for instance, those funds were allocated wisely, then a number of problems would likely go away or become smaller issues.
Of course, what does "allocate wisely" mean? Education systems require a principled grasp of what education is for. If you don't have a sound anthropological grasp of what it means to be human and how education is supposed to enable one's humanity and serve human persons, then you are in no position to run an education system or decide school curricula. I cannot stress this enough. Our education system today is very "pragmatist"; we're constantly told we're being prepared for a career and a job market. That's not education: it's job training. Of course, schools are quite mediocre as training facilities, because they're sort of a halfway house between training and whatever residue of classical education still lingers. So that's one distinction: training vs. education. Now, if we simply accept this distinction, we should ask: how should one organize training on the one hand and education on the other to enable each to be successful within its own circumscribed domain? And what if we keep things as local and decentralized as possible? I guarantee you would not see the inept system we have today.
So, with this...
> There are simply not enough teachers who can provide such an ideal, imagined education
...I agree, but again, my view is that at best we are buying time with these sorts of technological gimmicks. We're also social animals. We cannot keep isolating ourselves behind technology under the pretext of "practicality".
Yes, Germany has different educational tracks that are decided fairly early, at 10 or 12 years old (with some opportunity to change tracks). I don't think Americans like this idea.
Still, 40% of young adults there have a tertiary degree (https://www.oecd.org/en/publications/2025/09/education-at-a-...) while it's 47% in the US, so I wouldn't say it's a huge difference. And it's not just a US thing; Denmark stands at 45%. So I wouldn't spin too big of a narrative around this.
Education is a field where, decade after decade, they try some new fad which is basically the old fad re-dressed, and never really learn much. That's because teachers and their methodologies don't really have that big of an effect. A stable, non-chaotic learning environment and access to the learning material through any kind of presentation, plus books, gets you pretty much as good as it gets. To have a real effect, you need private tutoring for the gifted, or very small talent-nurturing groups that go far beyond the default curriculum. But again, these don't fit the current zeitgeist, so they will keep on pushing "critical thinking" and "how to think", no matter how much they fail.
If you think "2 days" sounds like a lot... you'd be surprised how long it takes to actually make learning materials. I don't want to be too harsh, in case you're a high school student etc. I see it's in good faith, but do note the reaction here.
I read a couple of good analogies to predict how you and others will feel about your AI content: 1) telling people at the breakfast table about the dream you just had, 2) showing all your loose acquaintances the photos of your newborn baby.
That is, it's very precious and interesting to you, but it really isn't to anyone else. This is true about generated text, images and songs. I've generated a lot of what I think of as bangers with Suno but learned quickly that they have zero value to anyone else. Part of the value to me is the thrill and dopamine hits of having generated it. This simply doesn't translate to anyone else. It will take a while until society internalizes this.
This is not to say that AI can't have any role in the creative process. But the effort will be still high and original human thinking and intent and input is still very important.
it's a worthwhile lesson. thank you. There was a great deal of effort on my part, but not in the prose. You've taught me something and I appreciate it.
it's not that you're teaching the AI, it's that you're framing the conversation on a reference material and having a conversation around it. Exploring a problem with referential framing, like a white paper or a dense blog post is a useful cognitive hack. You just have to be careful to pin extraordinary claims to extraordinary evidence.
Just read a good textbook instead of this LLM-written stuff. For example those by Murphy or Prince or Bishop. Or one of many YouTube lecture series from MIT or Stanford. There are many primer 101 tutorials and Medium posts. But if you actually want to learn instead of procrastinating, pick up a real textbook or work through a course.
I've bounced off of many good textbooks. Even Karpathy's YouTube series was too dense for me. I'm trying to come in at a more palatable level.
This was a two day exploration where I provided the syllabus and ran through it with Claude Code, asking questions, trying to anchor it to stuff I understand well. I feel like the artifact has value.
I think chatting with an llm alongside a textbook can be helpful but producing learning material when you yourself are a novice is not really that valuable.