Hacker News

Note I might be wrong on this one but it's just extremely annoying that I even have to consider if I am being manipulated by an AI while reading HN comments.

If I want to read AI stuff, I go to Clawdbook or OpenAI's Sora app.




Sure, and we've banned the account, but please email us about these at hn@ycombinator.com. @mentions don't work on HN; I only saw it because I was looking through the thread. We're also asking people not to make these accusations publicly, partly because they take longer for us to see than an email, and also because a false accusation is more harmful than a valid accusation is beneficial.

Okay, fair point about the mentions, but I don't think that email is a good process:

1. It puts more effort on me as a user: to report spam via email, I have to open my email client, compose a message by hand, and add my reasoning. The offending user, in comparison, is probably spamming automatically. Can't we at least have a button?

2. It doesn't make the community aware of the ongoing issue. Other community members could be primed to read comments more critically right now. At the moment that seems like the only detection that somewhat works, but if I silently send an email instead of commenting here, nobody else learns of my suspicion.


It’s fine to just flag things and move on. We’re considering adding additional parameters to the flag function, but till then, emailing us with “LLM?” in the subject and the comment ID/URL in the body is great. It should be faster for you than writing a comment, and faster for us to act on.

The community is well aware of the issue and off-topic meta discussion has always been against the guidelines here. We’ve discussed this publicly and privately with top HN contributors and the consensus is that this is the least-worst approach.


The fact that this comment got 10 replies and full comment chains without anyone noticing it was LLM generated tells me the community is not aware enough. It was also heavily upvoted.

I think there is significant value in making people second-guess content and look at it critically, especially in a time when it is so easy to fake expertise. We all need to train that skill these days anyway, for all online interactions.

10 years ago it was clickbait titles that we needed to learn to ignore, today it is LLM generated content. We will get there, but by not calling it out publicly we are making it easier for adversaries to fool everyone.

And yes, I don't want to falsely accuse anyone of LLM slop either, but they can defend themselves, and making mistakes is part of the learning process for all of us. Writers and commenters will learn how not to sound like an LLM, and we will attune more finely to the nuance between polished human writing and AI.


In another comment thread yesterday, I pointed out an effect I’ve observed, which is that people are less likely to notice or feel negatively towards LLM-generated content if they like it or agree with it. That’s just a reality of human psychology, and has always been the way swindlers and con artists have their way with people.

The answer is not to suddenly abandon one of HN’s most important principles, which is that we want to keep discussion on-topic and discourage off-topic meta discussion.

We’re developing software to respond to the new challenges that are emerging, and we already have plenty of people who email us when they notice generated content, which is great.

We know there is no perfect approach; there never is in a large community of humans. But principles matter, and we need to trust that the principles that have made HN what it is over nearly two decades will keep it strong for years to come.

By the way, your own original comment called on us to “Please do something more rigorous than manually deleting accounts”; we are doing that but these things take time to develop, test and perfect on a platform like ours.



