Facebook has started using artificial intelligence to identify users who may be suicidal. For this tool, Facebook uses natural language processing and pattern recognition.

Wikipedia defines pattern recognition as follows:


“Pattern recognition is a branch of machine learning that focuses on the recognition of patterns and regularities in data, although it is in some cases considered to be nearly synonymous with machine learning” (https://en.wikipedia.org/wiki/Pattern_recognition, 23.10.2017)


The algorithm scans posts for sad and painful content. It also flags posts whose comments include phrases like “Are you okay?” or “I’m worried about you”.
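Facebook has not published how its detector actually works. As a rough illustration only, a naive phrase-matching flagger, a hypothetical sketch and not Facebook’s real system (which would use trained NLP classifiers, not fixed keyword lists), might look like this:

```python
# Hypothetical sketch: a naive phrase-based flagger, NOT Facebook's actual system.
# The phrase lists below are invented for illustration.

CONCERNING_POST_PHRASES = ["i want to die", "no reason to live", "i can't go on"]
CONCERNING_COMMENT_PHRASES = ["are you okay", "i'm worried about you"]

def flag_post(post_text: str, comments: list[str]) -> bool:
    """Return True if the post or one of its comments matches a concerning phrase."""
    text = post_text.lower()
    if any(phrase in text for phrase in CONCERNING_POST_PHRASES):
        return True
    # A post can also be flagged by what friends write underneath it.
    return any(
        phrase in comment.lower()
        for comment in comments
        for phrase in CONCERNING_COMMENT_PHRASES
    )

print(flag_post("Feeling like there is no reason to live.", []))  # True
print(flag_post("Great day at the beach!", ["Are you okay?"]))    # True
print(flag_post("Great day at the beach!", ["Nice photo!"]))      # False
```

In this sketch, a flagged post would then be queued for human review rather than acted on automatically.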

When Facebook flags such a post, it is sent for rapid review to the network’s community operations team.

But what happens next?

Facebook then gives the user advice on how to seek help. It might suggest contacting friends to talk about their problems, or seeing a psychologist.


“The director of the US National Suicide Prevention Lifeline praised the effort, but said he hoped Facebook would eventually do more than giving advice, by also contacting those that could help.” (http://www.bbc.com/news/technology-39126027 23.10.2017)


But what should Facebook do?

What should Facebook be permitted to do?

Should Facebook be allowed to contact friends without the user’s prior consent?

I think this is a genuinely critical issue, one of crucial importance that needs to be discussed openly. In the physical world, a person who signals an intent to commit suicide is given help directly and may be taken into psychiatric care; they do not get to choose between talking to a friend and being admitted to psychiatry.

In the virtual world, however, things are a little different.

In my opinion, Facebook should cooperate with suicide prevention organizations.