
I'm surprised nobody else has gone meta yet, so I suppose I must. Anyway, "ChatGPT said this" ... about this thread.

----

In many of the Hacker News comments, a core complaint was not just that AI is sometimes used lazily, but that LLM outputs are fundamentally unreliable—that they generate confidently stated nonsense (hallucinations, bullshit in the Frankfurtian philosophical sense: speech unconcerned with truth).

Here’s a more explicitly framed summary of that sentiment:

Central Critique: AI as a Bullshit Generator

Many commenters argue that:

• LLMs don’t “know” things—they generate plausible language based on patterns, not truth.
• Therefore, any use of them without rigorous verification is inherently flawed.
• Even when they produce correct answers, users can’t trust them without external confirmation, which defeats many of the supposed productivity gains.
• Some assert that AI output should be treated not as knowledge but as an unreliable guess-machine.

Examples of the underlying sentiment:

• “LLMs produce bullshit that looks authoritative, and people post it without doing the work to separate truth from hallucination.”
• “It costs almost nothing to generate plausible nonsense now, and that cheapness is actively polluting technical discourse.”
• “‘I asked ChatGPT’ is not a disclaimer; it’s an admission that you didn’t verify anything.”

Philosophical framing (which commenters alluded to)

A few participants referenced Harry Frankfurt’s definition of bullshit:

• The bullshitter’s goal isn’t to lie (which requires knowledge of the truth), but simply to produce something that sounds right.
• Many commenters argue LLMs embody this: they’re indifferent to truth, tailored to maximize coherence, authority, and user satisfaction.

This wasn’t a side issue—it was a core rejection of uncritical AI use.

So to clarify: the strong anti-AI sentiment isn’t just about laziness.

It’s about:

• Epistemic corruption: degrading the reliability of discourse.
• False confidence: turning uncertainty into authoritative prose.
• Pollution of knowledge spaces: burying truth under fluent fabrication.


