Hacker News

This is why I am concerned that really good AI tools (be they LLMs or not) are only going to make people dumber over time. It is true that those who are highly motivated will be digitally enhanced by LLMs, but the hard reality is that this does not describe most people. Most people will do the least amount of work possible to solve a task. Rather than taking the time to understand and critically reason about the output of an LLM, they will take it and run, only asking questions later [if at all].

And as we zoom out and think further into the future, I see it getting much worse. If AI is really doing all the "hard stuff", then the general human incentive to learn and do those things at all quickly trends to 0. This isn't going to be everyone; I think some people will absolutely become "10x developers" or the equivalent in other domains. But all this will produce is more inequity, in my naïve view. The universe has a fundamental property that things move from higher energy states into lower ones. From a human POV, I think you could apply a similar idea: if the need to be smart recedes, then in general unaugmented human collective intelligence may degrade over time.

I don't know, maybe we'll figure out something to make it much easier to learn and ingest new concepts, but it seems more and more to me that the high obstacles human brains face in learning things (with poor memory) are too big a bottleneck to overcome any time soon.



In Phaedrus, Plato has Socrates tell the story of a dialogue between Egyptian gods as a caution against writing—that writing would cause people to lose their ability to remember. On the one hand, Thamus's warning was accurate—cultures that rely on writing generally do not have robust memorized oral traditions—but on the other hand, we only have this story today because Plato wrote it down, and he cannot have been ignorant of the irony.

Every tool has this trade-off, and the existence of skills that will be lost is not evidence that the tool will do more harm than good. I don't think anyone here would argue that Socrates was correct that writing would be the end of memory and wisdom.

> To [Thamus] came Theuth and showed his inventions ... when they came to letters, "This," said Theuth, "will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit."

> Thamus replied: "O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality."

https://www.gutenberg.org/files/1636/1636-h/1636-h.htm


"This tech will only make people dumber over time" was also claimed about the Internet.


This is a tale as old as time.

In a conversation with Phaedrus, Socrates worries that writing could impair human memory, as people depend on written words instead of recalling information. [0]

[0]: https://en.wikipedia.org/wiki/Phaedrus_(dialogue)#Discussion...


It's true though, as human (+ tool) gets smarter, human (without tool) tends to get dumber in the domain the tool augments.

The question is: will we one day have tools so powerful that the human is vestigial, and tool (without human) is just as powerful as, and cheaper than, tool (+ human)?


True in general, yes, but writing is an elegant solution where the longer something is, the more likely you are to write it down because you are less likely to be able to remember it. The shorter something is, the less likely you are to write it down because it’s a pain to get out your scratchpad (or iPhone) for ten words or less.


Yes, writing can only augment human intelligence.

AI might replace it.


I do think it's murkier than that.

Writing "replaced" some forms of intellectual effort.

And it's yet to be seen how AI will play out.


This is the case today for AI chess engines.


But there are still people playing chess professionally because playing well is not enough. (You have to entertain an audience, which is only possible by "just playing well" if you're human)


It’s also an early example of the medium being the message. Socrates and his interlocutors sometimes read as a parable on the transition from oral to written culture.

I find it true that new media and new technologies for language numb certain senses and amplify others.

Writing is beneficial in one regard but does have an impact on memory. Epic poems of great length were memorized in their entirety, a skill that would be a lot easier to develop in a world without writing.


Personally speaking, I've got much worse at stripping tree bark off with my teeth since I evolved opposable thumbs.


Well, to this day one can consider this true, looking at TikTok et al.


> one can consider this true

Not without a solid result establishing so. But good luck with this, you'd probably need to compare our world with internet and a similar enough world but without it, which does not exist.

In any case, the internet can't be reduced to "TikTok et al".


TikTok may indeed be a contributing factor to the current collective brain rot, but I’ve learnt more so far from YouTube and access to random PDF documents on any technical subject I can imagine than I probably would have done in one lifetime without.


We'd have to define "dumber", because adult literacy and numeracy rates have largely increased globally since the 1990s.

https://ourworldindata.org/literacy


This is largely due to increasing access to education in less developed countries.

In already developed countries the Flynn effect seems to be reversing, i.e., IQ is leveling off or even dropping.


IQ is a very different thing from literacy (or generally any skills one learns). I think IQ is not relevant to this discussion.

It's basically impossible to improve an adult's g-factor, for example. For children, things like nutrition and hygiene (e.g. no parasites) play a big role. But the kinds of things a human learns, or the tools they use, don't significantly affect g-factor.


Well, it wasn’t wrong. The Internet has made people dumber.


Internet made us dumber but also made a lot of things easier. Just like industrial machines made us weaker (or rather, not as rugged as we used to be) but also allowed us to get much more work done.


And they seem to have been proven right. (probably not the way they think)


It was claimed about writing as well. Passing on knowledge orally was considered superior. The Quran puts great emphasis on its memorization so it can be transmitted and recited orally.


Same with PhD defenses.


PhD defenses are mostly ceremonial. What are you trying to say? Is anything important transmitted orally during a PhD defense?


Yes, but in this case we have actual research showing that the technology is degrading the quality of work.


Is that necessarily a bad thing? What if high quality isn't required to do the thing you wanted to accomplish?

Many other mass produced products we buy today are clearly lower quality than ones crafted by artisans 50 years ago, and yet they do what we need them to do at a fraction of the cost.


There is a threshold of quality; once it goes below, it's a broken product. Personally, I'd rather have a high-quality product crafted by artisans than mass-produced ones, similar to preferring a dinner at a restaurant made by a chef over fast food. However, in engineering, every choice involves tradeoffs.


I think that regardless of category, we should strive for making quality easier and cheaper to achieve and sustain. Unfortunately financial incentives favor velocity and volume at the cost of all else and so instead we’re increasingly pumping out vast amounts of garbage.


> regardless of category

I think that's a sweeping generalization. For example, it's much better to have a bunch of food that's mostly garbage than to have a famine where all food is super high quality. There were points in history where this choice was made (obviously unconsciously because the choice is too obvious to even think about). Other examples abound.

Technology often makes things much cheaper while reducing quality. Sometimes that's bad, sometimes it's great.


These cheaper products are also far more wasteful and taxing on natural resources. This is the “growth at any cost” mindset. It’s been great for us in the last 100 years, but I’m not convinced it’s actually sustainable in the long run.


We have research but I think it’s far from rock-solid proven.

Like any idea about squishy human brains and their products, it remains to be seen and can't be proven as easily as, for example, research in physics.

I would say current research has a probability of 50% of being correct at best.


...and about the written word, by Socrates

https://blogs.ubc.ca/etec540sept13/2013/09/29/socrates-writi...


In a sense Socrates was right. It's always better to be able to recall something from your memory rather than having to look it up in a book. The classic example is the multiplication table. The problem is that humans have rather limited capacity for such a 'recallable on demand' memory.


The problem is that you have to invest actual effort to memorize things. Memorizing things you very rarely need is not worth it, even if you're amazing at memorizing things. It's impossible to memorize everything you would benefit from having access to in written form.


On the other hand, just because something is written down somewhere is not at all equal to 'having access to it'. Once you forget it, it might as well not exist at all. The information needs to be internalized to be of any use, and for that at least some of it needs to be kept in your memory.


I'm terrible at recalling specifics but I feel that my "superpower" is that I'm really good at remembering if say a solution to a certain problem exists, and just enough breadcrumbs to find that solution again, be it a few key words that I can search for or similar.

This, coupled with a broad interest in just about anything, means I can often help people far outside my specific field, usually rather swiftly as well.


And that's why digital documents are better (for average use-cases) than paper. Another argument for using technology to make things easier at the cost of possibly losing skills you no longer urgently need (such as "maintaining a library of paper books").


"Internet makes smart people smarter and dumb people dumber". (Mark Solonin, a Russian historian).

Taking into account that such blunt statements always hide a lot of nuances, this seems to capture the reality.


The mistakes that will be made are obvious, and those who fear them are correct; yet they will also be overcome, because the ones not learning will fail and the ones learning will succeed.


Whoever said it was right. One of the dumbest public figures of our time was elected president as a direct result of the reach and amplification provided by the internet.

Qanon was purely a creation of the internet. Now go take a look at how many people believe one, many or all of the various Qanon alternative facts.


Those are convincing arguments that internet has (major) downsides and bad consequences. The scale of surveillance it enables is another one.

They don't prove that it makes people dumber. You have to quantify and qualify "people" and define "dumb".

Maybe people are not actually dumber because of the internet, but the internet is very good at spreading ideas, including incredibly dumb ones, especially (because of how human beings work) those likely to cause feeling of outrage.

Maybe people are not dumber, just too defenseless against the scale of bullshit they are faced with because of the internet. Maybe the internet is an incredibly good tool, but one that strongly requires good learning / training of critical thinking, and there's not enough of this yet.


People used to believe a lot of crazy conspiracies and superstitions throughout history. The reason why we're so appalled at Qanon is that "it should be so easy for them to correct their superstitions, given the tools we have".

It's hard to argue that nowadays people believe more crazy stuff than before the internet was invented. (It's very easy, of course, to claim this, as many like to do.)


Yes. It seems that the proportion of people for whom 50% of their beliefs constitute unfounded, unjustified nonsense is actually far lower than it was before the internet. One of the main things that has changed, though, is how quickly ideas (including bonkers ones) spread. So whereas we used to see a significantly new bonkers idea only every few centuries, we seemingly now see several a week.


And they were right.


Are you sure it didn't?


Didn't it?


It did.


This sounds pretty similar to all technology. I remember Slashdot threads about how IntelliSense was making software engineers dumber. I remember my teachers saying that allowing calculators on the SAT made kids dumber. I remember people saying that spell checkers would make people worse at writing. I remember people saying at the start of the pandemic that Zoom meetings would make everyone anti-social.

None of this really happened. I mostly use editor tools to automate away tedium that doesn't matter; I type "log.Inf<TAB>" and it adds "src/internal/log" as an import at the top of the file and types the o and open parenthesis for me. I have not forgotten how to do that myself, but it saves me a couple seconds. Calculators didn't really make people dumber, though I have to say that a lot of arithmetic I learned in elementary school did make me dumber ("touch math" was the killer for me; slows me down every time I do arithmetic in my head; I need some brainwashing program that deletes that from my brain). Spell check didn't make people worse at spelling; spelling things wrong still has a penalty (C-w to kill the last word and spell it correctly), so the incentive is still to lurn how to spel wrds rite. Zoom meetings didn't ruin the corporate world; I personally found them very helpful for memorizing people's names with a high degree of certainty. You see it under their face for 40 minutes at a time, and you learn it fast. In real life, it probably takes me a few weeks for people I only see once a week. So, honestly a benefit for me.

The current state of AI seems very similar to these technologies. I did a copilot free trial (and didn't renew it). With the free trial I think there were a couple things it was good at. One time I wanted a CPU profile for my app, so I just asked the AI to type it in. Open a file, check for an error, start profiling to the file, stop profiling, close the file. Would have taken me a minute or two to type in, but Copilot typed it in instantly. I also did something like "do the same as the function above, but for a YAML file instead of a JSON file". Again, super trivial to type in that code, it's really only one line, but the AI can handle that just fine in an instant. I don't really think it's more than slightly smarter IntelliSense, but without any access to the compiler toolchain, so it can sometimes just hallucinate stuff that doesn't exist.

I've found this to be kind of an interesting way to proofread documents and design APIs. Give ChatGPT a document you're working on, and then ask questions about it. If it gets the wrong answer, then your doc is no good! Similarly, ask it how to write some code using your new API (without showing it the API). Whatever it writes should be the surface area of your API. This avenue is pretty interesting to me, and it's not replacing humans, it's just a smarter rubber duckie.

Overall, I think we're in a little bit of a hype phase with AI. I look at it kind of like a dog that has read every book, if such a thing were possible. Pretty smart, but not quite human yet. As a result, it's not going to do well in the areas where people really want to apply it; customer service, insurance claims, loan underwriting, etc. But it is pretty good for asking questions like "does my document make sense" or "please find some boilerplate to cut-n-paste here, I am going to delete this before checking in anyway". Also not too bad at slightly modifying copyrighted images ;)



