
> single cases

The problem is it's becoming common. How many people have to be convinced by ChatGPT to commit murder-suicide before you think it's worth doing something?



How common? Can you quantify that and give us a rough estimate of how many murders and/or suicides were at least partially caused by LLM interactions?


Since OpenAI is hiding the data, it's impossible to know.


So we don't actually know whether this is common or uncommon.


https://michaelhalassa.substack.com/p/llm-induced-psychosis-...

There are more ways to reason than just quantitatively.


What's your acceptable number of murder/suicides?


That is a bad faith argument. Unless we take away agency the number will always be non-zero.

It’s the type of question asked by weasel politicians to strip away fundamental human rights.


But we can aim for zero, right?


Some countries such as Canada are aiming to increase the suicide rate. We can argue about whether that's a good or bad thing but the aim is obviously not zero.

https://www.bbc.com/news/articles/c0j1z14p57po

All else being equal a lower murder rate would obviously be good, but not at the cost of increasing government power and creating a nanny state.


I want my service to have 100% uptime. How is that an actionable statement?


This is still a bad faith argument.

No one wants suicides increasing as a result of AI chatbot usage. So what is the point of your question? You are trying to drain nuance from the conversation to turn it into a black and white statement.

If “aim for zero” means we should restrict access to chatbots with zero statistical evidence, no. We should not engage in moral panic.

We should figure out what dangers these pose and then decide what appropriate actions, if any, should be taken. We should not give in to knee jerk reactions because we read a news story.


This is a doubly dishonest question.

It’s dishonest firstly for intending to invoke moral outrage rather than actual discussion. This is like someone chiming into a conversation about swimming pool safety by saying “How many children drowning is acceptable?” This is not a real question. It’s a rhetorical device to mute discussion because the emotional answer is zero. No one wants any children drowning. But in reality we do accept some children drowning in exchange for general availability of swimming pools and we all know it.

This is secondly dishonest because the person you are replying to was specifically talking about murder-suicides associated with LLM chatbots and you reframed it as a question about all murder-suicides. Obviously there is no number of murder-suicides that anyone wants, but that has nothing to do with whether ChatGPT actually causes murder-suicides.


ChatGPT usage is becoming common, so naturally more of the ~1500 annual US murder-suicides that occur will be committed by ChatGPT users who discussed their plans with it. There's no statistically significant evidence of ChatGPT increasing the number of suicides or murder-suicides beyond what it was previously.


Smoking doesn't cause cancer either. It's just a coincidence the people w/ lung cancer tend to also be smokers. You can not prove causation one way or the other. While I am on the topic, I should also mention that capitalism is the best system ever devised to create wealth & prosperity for everyone. Just look at all the tobacco flavors you can buy as evidence.


Are you really trying to parlay the common refrain that correlation and causation are not the same into a claim that no correlation is the same as correlation?

GP asserted that there is no correlation between ChatGPT usage and suicides (true or not, I do not know). This is not a statement about causation. It’s specifically a statement that the correlation itself does not exist. This is absolutely not the case for smoking and cancer, where even if we wanted to pretend that the relationship wasn’t causal, the two are definitely correlated.


How many more cases will be sufficient for OP to conclude that gaslighting users & encouraging their paranoid delusions is detrimental for their mental health? Let us put the issue of murders & suicides caused by these chat bots to the side for a second & simply consider the fact that a significant segment of their user base is convinced these things are conscious & capable of sentience.


> the fact that a significant segment of their user base is convinced these things are conscious & capable of sentience.

Is this a fact? There’s a lot of hype about “AI psychosis” and similar but I haven’t seen any meaningful evidence of this yet. It’s a few anecdotes and honestly seems more like a moral panic than a legitimate conversation about real dangers so far.

I grew up in peak D.A.R.E. where I was told repeatedly by authority figures that people who take drugs almost inevitably turn to violence and frequently succumb to psychotic episodes. Turns out that some addicts do turn to violence and extremely heavy usage of some drugs can indeed trigger psychosis, but this is very fringe relative to the actual huge amount of people who use illicit drugs.

I can absolutely believe that chatbots are bad for the mental health of people already experiencing significant psychotic or paranoid symptoms. I have no idea how common this is or how outcomes are affected by chatbot usage. Nor do I have any clue what to do about it if it is an issue that needs addressing.


> Nor do I have any clue what to do about it if it is an issue that needs addressing.

What happened with cigarettes? Same must happen with chat bots. There must be a prominent & visible warning about the fact that chat bots are nothing more than Markov chains, they are not sentient, they are not conscious, & are not capable of providing psychological guidance & advice to anyone, let alone those who might be susceptible to paranoid delusions & suggestions. Once that's done the companies can be held liable for promising what they can't deliver & their representatives can be fined for doing the same thing across various media platforms & in their marketing.


> What happened with cigarettes?

We assembled a comprehensive body of data establishing correlation with a huge number of illnesses, including lung cancer, to the point that nearly all qualified medical professionals agreed the relationship was causal.

> There must be a prominent & visible warning

I have no problem with that. I’m a little surprised that ChatGPT et al don’t put some notice at the start of every new chat, purely as a CYA.

I’m not sure exactly what that warning should say, and I don’t think I’d put what you proposed, but I would be on board with warnings.


That's just the thing though. OpenAI and the LLM industry generally are pushing so hard against any kind of regulation that the likelihood of this happening is definitely lower than the percentage of ChatGPT users in psychosis.


Ah yes, let's run a statistical study: give some mentally unstable people ChatGPT and others not, and see if more murder-suicides occur in the treatment group.

Oh you mean a correlation study? Well now we can just argue nonstop about reproducibility and confounding variables and sample sizes. After all, we can't get a high power statistical test without enough people committing murder-suicides!

Or maybe we can decide what kind of society we want to live in without forcing everything into the narrow band of questions that statistics is good at answering.


I would rather live in a society where slow, deliberative decisions are made based on hard data rather than one where hasty, reactive decisions are made based on moral panics driven by people trying to push their own preferred narratives.



