
Increasingly, when I ask someone for their opinion, they just tell me they've put that question into ChatGPT and this is what it said.

Well, okay, but if I wanted to hear that I would have asked ChatGPT directly, no?



As a counterpoint, isn't it implied that the person has read what ChatGPT wrote and, by forwarding it, agrees that it roughly reflects their own feelings on the issue? One would hope so, anyway.


Indeed that's what I thought, but I think it's fair to say, "I don't want to read ChatGPT's answers."

More precisely, I did get lazy and think, "well, I don't know what he meant, exactly, but he'd probably pick out one or two of these points, at least. ChatGPT tends to include everything, including the kitchen sink."


Plus, to me, "I don't know" is also an answer - I use it to gauge how niche a topic might actually be. LLMs don't do that at all.


Why not cut out the middleman then?

People who "agree with what ChatGPT wrote" are redundant in a discussion if they're merely relaying it.


If you're an expert in X and I'm not, you can easily google "intro to X", skim the first handful of pages, and send me the best one. This is the same idea IMHO.

Telling me that ChatGPT's answer is not horribly wrong could be a very useful data point.



