

I don’t think this is a fair retort. This is not being marketed towards people who have any inkling about how any of this works. The linked press release is clearly trying to get the average person jazzed up about wiring their medical history and fitness data to ChatGPT.

ChatGPT is just supposed to “work” for the lay person, and quite often it just doesn’t. OpenAI is already being sued by people for stochastic parroting that ended in tragedy. In one case they’ve tried the rather novel affirmative defense that they’re not liable because using ChatGPT for self-harm was against the terms of service the victim agreed to when using the service.


Doctors get sued all the time. It doesn't mean doctors are no good. I also don't think OpenAI will pretend ChatGPT is replacing doctors / committing to a diagnosis with this tool. They will cover their ass legally.

Right. GPT is a glorified keyboard predictor, and people should treat it as such. I don’t get it when people get mad at the output.
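
To make that concrete: generation really is just repeated next-token sampling. A minimal Python sketch of the loop, where next_token_distribution is a hypothetical stand-in for a real model's forward pass, not any actual API:

    import random

    def next_token_distribution(tokens: list[str]) -> dict[str, float]:
        """Placeholder: a real model scores every vocabulary item here."""
        raise NotImplementedError("plug in a real model's forward pass")

    def generate(prompt_tokens: list[str], max_new: int = 50) -> list[str]:
        tokens = list(prompt_tokens)
        for _ in range(max_new):
            dist = next_token_distribution(tokens)
            # Sample the next token in proportion to its predicted probability,
            # like a phone keyboard's suggestion bar, just much bigger.
            next_tok = random.choices(list(dist), weights=list(dist.values()))[0]
            tokens.append(next_tok)
        return tokens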

I mean, if someone talked to you your whole life assuming you were autistic, that's kind of fucked up?

This response is a non sequitur: this isn't _someone_, it's an inanimate program that hallucinates responses.

If every building I went to in the US had ramps and elevators even though I'm not in a wheelchair, would it be "fucked up" that the builders and architects assume I'm a cripple?

There's just as much meaning in ChatGPT saying "As you said, you have ADHD" as a building having an elevator.

In ChatGPT's training data, the word ADHD existed and was associated with something people call each other online. Cool. How deep.

Anyway, I do assume every single user of this website, including myself, has autism (possibly undiagnosed), so do with that information what you will. I'm pretty sure most HN posters make the same assumption.


That's kind of how it works, though. People who know you very much do associate certain traits and labels with you.

It’s an unpleasant experience to have people who think they know you, but clearly don’t, project their opinions of what they think you’re like.

It’s probably a very human trait to do that but it is a bad habit.


Yeah, and it's fucked up, so being dramatic is warranted.

ChatGPT is, to my knowledge, trained on Reddit, and at least certain subreddits are basically people (or bots) telling others that they probably have ADHD/ADD. These are the "AskReddit" type of subreddits. There's a Danish subreddit for everyday questions (advice-column-style posts), and like 80% of the people there are apparently either autistic or have ADHD.

So I'm not entirely surprised that an LLM would start assuming that the user has ADD, because that's what part of its training data suggests it should.


In your scenario, maybe yes.

The issue is that it doesn't apply here, as it's neither a person nor a coherent thinking being with memory.

"Thinking" models are basically just a secondary separately prompted hidden output that prefaces yours so your output is hopefully more aligned to what you want, but there's no magic other than more tokens and trying what works.


It's not a person and it's not a thinking being.

I think you are definitely right. People need to learn to be more resilient. People are in such a hurry to give over their lives to Sam Altman (cue the "decentralizers and democratizers").


