How many people are stuck in the middle, having less extreme beliefs reinforced by a sycophantic AI?
I've started to hear whispers among friends that there are many founders stuck in loops of "planning" with AI, reinforcing banal beliefs and creating schizophrenia-like symptoms.
While I'm sympathetic to bereaved families, I find it difficult to assign much blame to AI providers for this sort of thing.
Developed countries have a suicide rate of around 11 per 100,000 people per year [1]. So if an AI provider has 700 million weekly active users, we'd expect roughly 77,000 suicides per year among people who'd used the service in the previous 7 days.
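That base rate works out as follows (a trivial sanity check; it naively assumes the user population matches the general-population rate):

```python
# Back-of-the-envelope base-rate check using the figures above.
rate_per_100k = 11              # suicides per 100,000 people per year
weekly_active_users = 700_000_000

expected_per_year = weekly_active_users * rate_per_100k / 100_000
print(int(expected_per_year))   # 77000
```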
Blaming these deaths on chatbots seems kinda sketchy. Many of these people likely had preexisting mental health issues, and might have died whether they used ChatGPT or not.
This reminds me of the moral panic over video game addiction in the 90s.