By guest blogger Jack Barton
Since the exposure of Cambridge Analytica in 2018, it is no longer surprising that tech giants are using our information in ways we may not be explicitly aware of. Companies such as Facebook are already using computer algorithms to identify individuals expressing thoughts of suicide and to provide targeted support, such as displaying information about mental health services or even contacting first responders.
However, these features are barely visible to users, and it remains unclear whether the public even wants them in the first place. Now a study in JMIR Mental Health has asked whether the general public would be happy for tech companies to use their social media posts to look for signs of depression. The study found that although the public sees the benefit of using algorithms to identify at-risk individuals, privacy concerns still surround the use of this technology.
Dr Elizabeth Ford at Brighton and Sussex Medical School and her colleagues surveyed participants (recruited via mental health charities, many of whom reported depressive symptoms) about their social media use and mental health. They then asked participants for their opinions on a hypothetical situation in which Facebook analysed their social media information to screen for depression, in order to provide targeted mental health advice or information from charities like the Samaritans.
Of the 183 people who completed the survey, just over a fifth felt that their social media activity would highlight their low moods, and only 3% felt that the specific content of their posts would betray how they were feeling.
Participants tended to acknowledge the potential positives of Facebook classifying users’ mental health, such as widening access to health services. But, overall, they also tended to feel that the potential benefits were not worth the risks to privacy. And although 60% supported the idea of automated algorithms providing healthcare information to users potentially experiencing depression, only 15% were happy for this to occur without their explicit consent.
Participants also had the chance to note down the benefits and risks they saw in Facebook analysing their posts. On the one hand, they felt that this could improve access to mental health services through targeted advertising. On the other, they voiced concerns that companies tracking social media content in this way posed a risk to online privacy and security (for example, “with the number of data leaks we have by large tech companies, this is a risk too far for many people”). Respondents also thought that such algorithms might not be particularly accurate and could lead to over-diagnosis, and the stigma of being falsely identified as distressed was a particular concern.
Overall, the study showed that the public understood the potential benefits of analysing social media posts to detect depression and intervene, but believed that concerns around individual privacy outweighed these positives. It’s important to note that the study was conducted just as the Cambridge Analytica scandal was being exposed, and the survey responses may have been influenced by this, as one participant’s comment suggests: “In light of recent revelations about the questionable ethics of Facebook I would find it extremely disturbing if they were using my data”. Still, it’s clear that public trust in tech giants as gatekeepers of our personal data is low, and even campaigns championed by trusted mental health charities have met considerable backlash when they were seen to intrude on people’s privacy.
Moreover, it could be argued that analysing the public’s data for signs of depression or suicidal ideation is premature, given concerns about the scientific rigour of the evidence supporting algorithms that predict mental health difficulties. For a start, informed consent is notably absent for online “participants”. And because tech companies are unwilling to share their algorithms, it is hard for independent scientists to verify that these algorithms accurately detect mental health difficulties. Some researchers have also questioned whether the content of social media posts is actually predictive of emotion.
For now, it seems that privacy is paramount to a public who are eager to post content online but who also, understandably, want to know how this information is being used. Until trust can be re-built by large tech companies such as Facebook, their attempts to intervene in our lives will likely have to be limited to targeted ads for those cat socks you mentioned once to your friend.
Post written by Dr Jack Barton (@Jack_bartonUK) for BPS Research Digest. Jack is a freelance science writer based in Manchester, UK, whose research focuses on understanding the link between sleep and mental health.
At Research Digest we’re proud to showcase the expertise and writing talent of our community. Click here for more about our guest posts.