> Mason Marks ... argues that Facebook’s suicide risk scoring software, along with its calls to the police that may lead to mandatory psychiatric evaluations, constitutes the practice of medicine. He says government agencies should regulate the program, requiring Facebook to produce safety and effectiveness evidence.
Safety/effectiveness studies certainly seem like a good idea, though I'd be really hesitant to prohibit Facebook from doing this, at least not out of worry about false positives (the privacy implications are a different issue).
Facebook is just the first in a line of common-sense checks: a Facebook algorithm detects that someone seems to be suicidal; that escalates to a Facebook employee who reviews it and, in extreme cases, escalates to law enforcement; law enforcement gets the information from Facebook, decides how to respond, and may end up pulling in psychiatric medical help, which of course involves trained people making judgement calls.
Facebook's role is to escalate, not to intervene, and even that escalation process is done by a human. It seems very unlikely that it would ever result in a situation where someone gets involuntarily hospitalized because some machine learning algorithm had a bug.
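To make that chain concrete, here's a rough sketch of the shape I'm describing. Purely illustrative: the names, thresholds, and structure are my guesses, not Facebook's actual system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    NONE = auto()
    HUMAN_REVIEW = auto()        # queue for a trained reviewer
    NOTIFY_AUTHORITIES = auto()  # a reviewer judged it an emergency

@dataclass
class Post:
    author_id: str
    text: str

def model_risk_score(post: Post) -> float:
    """Stand-in for the ML classifier (hypothetical). Returns a risk in [0, 1]."""
    # A real system would run a trained text model; this placeholder just
    # flags an obvious phrase so the sketch is runnable.
    return 1.0 if "kill myself" in post.text.lower() else 0.0

def triage(post: Post, review_threshold: float = 0.8) -> Action:
    """The algorithm only flags; it never contacts anyone directly."""
    if model_risk_score(post) < review_threshold:
        return Action.NONE
    return Action.HUMAN_REVIEW  # a person makes every further call

def human_review(post: Post, reviewer_thinks_imminent: bool) -> Action:
    """Reviewer judgment, not the model score, gates escalation to police."""
    return Action.NOTIFY_AUTHORITIES if reviewer_thinks_imminent else Action.NONE
```

The point is that no path reaches NOTIFY_AUTHORITIES on a model score alone; a buggy classifier can at worst waste a reviewer's time.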
I'm bipolar, and whenever I start losing some sleep I retake an online version of the Young Mania Rating Scale. On my own I can tell "this is a sign", but the test is an inventory of how many signs there are, and an indication of whether I should call the head shrinker outside my regular schedule (which costs money) or just try to hold back on prodromal behaviors.
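The scoring itself is dead simple. A sketch, with the caveat that the alert cutoff below is my own illustrative pick, not clinical guidance:

```python
# Sketch of how an inventory like the YMRS turns "signs" into a number.
# As I understand it, the YMRS has 11 items, four of which (irritability,
# speech, thought content, disruptive/aggressive behavior) are scored 0-8
# and the rest 0-4, giving a 0-60 total.

DOUBLE_WEIGHTED = {"irritability", "speech", "content", "disruptive_behavior"}

def ymrs_total(item_scores: dict[str, int]) -> int:
    for item, score in item_scores.items():
        limit = 8 if item in DOUBLE_WEIGHTED else 4
        if not 0 <= score <= limit:
            raise ValueError(f"{item}={score} is outside 0-{limit}")
    return sum(item_scores.values())

def out_of_schedule_call_warranted(total: int, cutoff: int = 12) -> bool:
    # Pick the actual cutoff with your own clinician; sources vary.
    return total >= cutoff
```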
I'm no longer on Facebook, but it would be really useful if something I open daily would tell me "hey dude, you're overtalkative and online at odd hours, maybe you should get it checked" -- but leave me the agency to act on it. I understand perfectly well that there are mental illness scenarios where a person's capacity to help himself is lost (making involuntary intervention the humane course of action), but even in mania this is the exception rather than the norm.
I can't think of a model of illness severity where machine learning can produce a score but the time course of that score is expected to be discontinuous. So before there's any talk of escalation behind my back, I would expect tools to be developed that help me realize I'm getting ill before I lose agency. Everything I just said works for depression too.
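Put differently: if the score evolves continuously, a smoothed trend should cross a gentle "maybe get it checked" threshold well before any crisis-level threshold, which is exactly the window where I still have agency. A toy sketch (all numbers made up):

```python
# Self-alert on a smoothed severity trend. If severity rises gradually,
# the trend trips a gentle nudge threshold long before a crisis level.

def ewma(scores: list[float], alpha: float = 0.3) -> list[float]:
    """Exponentially weighted moving average of daily scores."""
    smoothed = []
    for s in scores:
        prev = smoothed[-1] if smoothed else s
        smoothed.append(alpha * s + (1 - alpha) * prev)
    return smoothed

def first_self_alert_day(scores: list[float], soft: float = 0.4) -> int | None:
    """Day the user should be nudged to get checked, if any."""
    for day, value in enumerate(ewma(scores)):
        if value >= soft:
            return day
    return None

# A gradually rising score trips the self-alert (day 7 here) well before
# it would reach a hypothetical crisis level of, say, 0.8.
daily = [0.1, 0.15, 0.2, 0.3, 0.35, 0.45, 0.5, 0.6, 0.7]
print(first_self_alert_day(daily))
```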
My understanding is that this Facebook program only focuses on cases where involuntary intervention feels justified, e.g. someone publicly saying that they're going to kill themselves.
But I agree that it would be great if there were more tools to help with self-awareness and guide people to get help. I've had plenty of mental health problems in the past, and any source of better self-awareness seems useful, even outside the context of seeking professional help. Still, at least with my Facebook usage, it seems very unlikely that Facebook would ever be able to tell when I'm depressed, except when I'm already in the process of getting help from a friend.
This sort of thing already exists a little bit. A much tamer variant I've seen is Nintendo games that tell you to take a break when you've been playing for a while. And Google searches involving suicide show the message "You're not alone. Confidential help is available for free." and direct you to the suicide prevention hotline.
> Facebook's role is to escalate, not to intervene, and even that escalation process is done by a human. It seems very unlikely that it would ever result in a situation where someone gets involuntarily hospitalized because some machine learning algorithm had a bug.
Per TFA they don't actually track the outcomes of their calls because that would be too much of a privacy violation. I'd expect the resulting lack of feedback to be problematic.
Also, I'd say that escalating to the people authorized to use force is a form of intervening.
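To put a point on the feedback problem: without outcome labels, the false-positive rate of these calls isn't just unknown, it's uncomputable. A toy illustration (field names hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Escalation:
    case_id: str
    outcome_was_genuine: Optional[bool]  # None = outcome never tracked

def escalation_precision(cases: list[Escalation]) -> Optional[float]:
    """Fraction of escalations that were warranted -- needs outcome data."""
    labeled = [c for c in cases if c.outcome_was_genuine is not None]
    if not labeled:
        return None  # per TFA this is Facebook's situation: no feedback signal
    return sum(c.outcome_was_genuine for c in labeled) / len(labeled)
```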
Is this a 'we spy on them for their own protection' sort of argument? I think there should be extreme privacy protections against profiling like this. It's potentially a great service, but not from Facebook, who might sell the info to recruiters 10 years later or commit some other horrible breach of trust. There need to be oversight and rules.
Agreed that privacy is a real concern with a system like this, though my comment was specifically focusing on the "worry of false positives" concern. I certainly don't claim that Facebook is in the clear from a privacy standpoint, I'm just digging into one detail.
>It seems very unlikely that it would ever result in a situation where someone gets involuntarily hospitalized because some machine learning algorithm had a bug.
Perhaps, but people being involuntarily detained because outsiders misunderstand communications those people thought were private seems much more likely. I have a group of friends who share memes about suicide and make jokes about it constantly. This is actually pretty common. If some uninitiated outsider read some of my DMs, I'd expect some people I know to be on their way to hospital in handcuffs.
More importantly, why does Facebook think this is their job, and why would anybody else agree with them about it? The idea of the postal service sending police to my house because they were concerned about the contents of a letter I wrote is horrifying, and the same goes for the phone company over a call I made. I'm not any less horrified about Facebook doing this, especially considering how much less I trust them than the postal service or Verizon.
That's fair. In my ideal world, law enforcement would just act on the factual information rather than being biased by the fact that a Facebook reviewer reported it as a concern. But as evidenced by things like swatting, police are certainly not the best at responding to reports in a calm and unbiased way. My hope is that law enforcement can learn to appropriately interpret and respond to these reports (including deeming that something is actually not a concern), but maybe that's unrealistic.