On February 25th, Facebook’s safety division announced an extension of its suicide prevention initiative. They describe the initiative as being based on work with suicide prevention organisations, clinical research, and the lived experiences of mental health survivors.
From what I’ve seen so far, parts of this initiative seem beneficial, and useful for helping people through a bad night or a self-destructive impulse. However, there are still some concerning areas, and there has already been at least one example of just how badly this initiative can go wrong.
The Benefits
Firstly, I’ll go through its helpful aspects. The idea of pointing out that a post suggests someone is upset or distressed could be effective. Receiving this message might be the jolt that lets someone realise they are having difficulties beyond typical ups and downs, and so might encourage them to look at the help on offer.
From the other side, allowing people to send an anonymous “someone thinks you might be in trouble” message reduces one of the barriers people often face in talking about mental health issues. It starts the conversation in a low-risk way, without requiring the face-to-face questions that many people just don’t know how to ask.
Facebook’s post showed some pictures of the support options. The support page offers the following message:
“You’re not alone. We do this for many people every month”
This is nicely worded, because it should make people feel like the service is general help for all sorts of difficulties, rather than something that singles them out or marks them as “in trouble”. This should hopefully reduce their anxiety about responding to the prompt and make them less likely to ignore it.
When a status is flagged as containing potentially suicidal content, the service offers a few next steps. The person in need has the option to contact a friend or a helpline (currently the National Suicide Prevention Lifeline, as the service is US-only for now), or to read some advice and tips. None of the posts showed the content of this advice, so its utility remains to be seen.
There are also options for the person who reported a status. They can choose to call or message the person in need directly, or contact a suicide helpline volunteer themselves. If the service stopped here, and only escalated once the wellbeing of the person in need had been checked, I would consider it a sensitive yet useful initiative. But Facebook’s additional options may be a cause for concern, and something that puts people off using the service.
The Problems
Here’s where issues with the service start:
“If someone on Facebook sees a direct threat of suicide, we ask that they contact their local emergency services immediately. We also ask them to report any troubling content to us. We have teams working around the world, 24/7, who review any report that comes in. They prioritize the most serious reports, like self-injury, and send help and resources to those in distress.”
Reporting troubling content to Facebook means the decision is outsourced to a moderator. The moderator reads the flagged status, decides how serious or risky the content is, then categorises it and passes it on to Facebook.
A stranger reading someone’s intensely personal information is an invasion of privacy, albeit one that would be worthwhile if it saved a life. But a stranger analysing a status for something as serious and complex as suicide risk, with no idea of what the person is normally like and no context to put their status in? That’s going to be inaccurate. People will fall through the cracks just because of how they express themselves.
More probably, though, the opposite will happen. In fact, it happened straight away. After the program was announced, a man tested the initiative by posting a fake suicidal status. He informed his friends it was fake. Yet local police were called, and his access to Facebook was blocked. When the police arrived, he was handcuffed and placed on a 72-hour mental health hold. Despite his (and his wife’s) full explanation of the experiment, he was held for the full length of time and subjected to unnecessary medical procedures.
The connection between the suicide prevention feature and Facebook itself calling local police to “check up” on the person is what turns this service from caring to invasive. In a perfect world, involving the police would be a benefit, an extra layer of security.
But in practice, it’s not.
People experiencing mental health issues who aren’t diagnosed or haven’t used any services will usually have kept their experiences hidden, for fear of drawing attention to them. Calling the police to their house won’t help with that. Some people feel unable to ask for professional help for fear of losing their autonomy and the little control they do have over the situation. Bringing the police in won’t help with that either.
Some people diagnosed with mental health issues will already have had very negative experiences with medical staff or police: having a mental health condition often (unfortunately) means drawing the short straw in terms of compassionate medical or legal treatment.
In the UK, people as young as 14 have been left in adult prisons overnight because the NHS hasn’t been able to find a bed in a mental health ward for them. In the US, the stakes are even higher, as the majority of people killed by police have a mental illness.
Until there is better police training for handling situations involving mentally ill and/or suicidal people, police services simply aren’t the best or most appropriate form of help. It only makes sense to involve the police if all other options have failed, or if there is a definite reason to need police support, such as the person in need showing homicidal intentions or saying that they intend to commit a crime.
Facebook could make this service a lot more trustworthy by prioritising friends and helplines. Helplines are designed for this exact purpose and staffed by people trained for this situation. They will be more personal and less anxiety-inducing than the police, and they allow the person in need to retain greater autonomy. Having a friendlier first experience of contacting help means people will be more likely to follow up, and more likely to use those services again themselves in the future.
For now, Facebook’s service is a good idea that has been executed far too invasively. It risks compromising its own goals by jumping straight to the highest level of emergency, making itself less trustworthy (and therefore less usable) in the process.