I strongly suspect ChatGPT has decreased the suicide rate overall. When my wife has been in her worst places, ChatGPT said only valuable things. I'd say it's better at dealing with a suicidal person than most real people would be. Especially since it's very exhausting to talk with someone going through mental problems over a long period, AI is well suited for it.
That doesn't really solve the problem - what are the human's parameters? Protect the company from lawsuits? Spit out a link to a suicide hotline?
What does the human know - do they know all the slang terms and euphemisms for suicide? That's something most counselors don't know.
And what about euthanasia? Even as a public policy question - not in reference to the user. "Where is assisted suicide legal? Do the poor use assisted suicide more than the rich?"
Smart features like browser recommendations have handled this very inconsistently.
Alright, as someone currently suffering from burnout (which is classified as a form of depression in my country, so I have to swear a holy oath to my doctor once a month that I do not think about, intend, or plan to end it): this is probably the worst possible conclusion you could draw.
It will breed paranoia. "If I use the wrong words, will my laptop rat me out, and the police kick in my door to 'save me' and drag me into the psych ward against my will, ruining my life and making my problems just so much more difficult?"
Instead of a depressed person using cheap and, more importantly, available resources to manage their mood, you will push them into a state of helplessness and fear that some computer in Florida will decide to trigger an intervention at 2am. What do you think happens next? Is such a person more or less likely to make a decision you'd rather they didn't make?