Hacker News



I have a colleague that recently self-published a book. I can easily tell which parts were LLM driven and which parts represent his own voice. Just like you can tell who's in the next stall in the bathroom at work after hearing just a grunt and a fart. And THAT is a sentence an LLM would not write.


> And THAT is a sentence an LLM would not write.

Really?

Here are some alternatives. Some are clunky. But some aren't.

…just like you can tell whose pubes those are on the shared bar of soap without launching a formal investigation.

…just like you can tell who just wanked in the shared bathroom by the specific guilt radiating off them when they finally emerge.

…just like you can tell which of your mates just shitted at the pub by who's suddenly walking like they're auditioning for a period drama.

…just like you can tell which coworker just had a wank on their lunch break by the post-nut serenity that no amount of hand-washing can disguise.

…just like you can tell whose sneeze left that slug trail on the conference room table by the specific way they're not making eye contact with it.

…just like you can identify which flatmate's cum sock you've accidentally stepped on by the vintage of the crunch.

…just like you can tell who just crop-dusted the elevator by the studied intensity with which one person is suddenly reading the inspection certificate.


It's still on you to pick what the LLMs regurgitate. If you don't have a style or taste you will simply make choices that would give you away. And if you already have your own taste and style LLMs don't have much to offer in this regard.


Indeed. Wholeheartedly agree.

Just as it’s on you to pick the word you want when using Roget’s Thesaurus.

My workflow, when using it for writing, is different than when coding.

When coding, I want an answer that works and is robust.

When writing, I want options.

You pick and choose, run it through again, perhaps use different models, have one agent critique the output of another agent, etc.

This iterative process is much different than asking an LLM to ‘write an article about [insert topic]’ and hoping for the best.

In any case, I’ve found that LLMs, when properly used, greatly benefit prose, and knee-jerk comments about how all LLM prose sounds the same are a bit outdated… (understandable, as few authors are out there admitting they are using AI… there’s a stigma about it. But, trust me, there are some beautiful, soulful pieces of prose out there that came out of a properly used LLM… it’s just that the authors aren’t about to admit it.)




One shouldn’t expect the ‘joke’ to have identical tone. (As if that’s even measurable.)

The point was simply that these examples are not trending towards the average or ‘ablating’ things, as the article puts it. They seem fairly creative, some are funny, all are gross… and they are the result of a very brief prompt… you can ‘sculpt’ the output in ways that go way beyond the boring crap you typically find in AI-generated slop.


So what, even if that is true? You confirmed that it improved upon what he could produce manually, which is still a win. It doesn't always make sense to pay $20,000 to a professional author to turn it into a masterpiece.


The great promise and the great disaster of LLMs is that for any topic on which we are "below average", the bland, average output seems to be a great improvement.


Counterintuitively... this is a disaster.

We don't need more average stuff. Below-average output serves as a signal telling us where to direct our resources toward producing output of higher value.


My point is simply that the tell-tale marks of LLM prose can be remediated through prompts.

I have a very large ‘default prompt’ that explicitly deals with the more obnoxious grammatical structures emblematic of LLMs.

I would wager I deal with more amateurishly created AI slop on a daily basis than you do. (Legal field, where everyone is churning out LLM-written briefs.) Most of it is instantly recognizable. And all of it can be fixed with more careful prompt-engineering.

If you think you can spot well-crafted LLM prose generated by someone proficient at the craft of prompt-engineering by, to use an analogy to the early days of image creation, counting how many fingers the hand has, you’re way behind.



