By Emma Young
Is it really believable that Hillary Clinton operated a child sex ring out of a pizza shop — or that Donald Trump was prepared to deport his wife, Melania, after a fight at the White House? Though both these headlines seem obviously false, they were shared millions of times on social media.
The sharing of misinformation — including such blatantly false “fake news” — is of course a serious problem. According to a popular interpretation of why it happens, when deciding what to share, social media users don’t care if a “news” item is true or not, so long as it furthers their own agenda: that is, we are in a “post-truth” era. One recent study suggested, for example, that knowing something is false has little impact on the likelihood of sharing. However, a new paper by a team of researchers from MIT and the University of Regina in Canada further challenges that bleak view.
The studies reported in the paper, available as a preprint on PsyArXiv, suggest that in fact, social media users do care whether an item is accurate or not — they just get distracted by other motives (such as wanting to secure new followers or likes) when deciding what to share. As part of their study, the researchers also showed that a simple intervention, aimed at a group of Twitter users who had previously shared misinformation, increased the quality of the news that those users went on to share. “Our results translate directly into a scalable anti-misinformation intervention that is easily implementable by social media platforms,” they write.
For the first study, Gordon Pennycook and colleagues presented more than 1,000 online participants with the headlines, initial sentences and accompanying images of 36 actual news stories taken from social media. (They didn’t use full stories as research shows that people often share news stories without clicking through to the full text.) Half of these stories were true and half were false. Also, half were favourable to Republicans and half to Democrats. Some participants were asked to judge whether each headline was accurate. The others were asked if they’d consider sharing the story online.
The headlines of genuine stories received much higher accuracy ratings than the false headlines, and participants were only slightly more likely to rate headlines that supported their own political ideology as accurate than those that conflicted with it. However, those asked about sharing were only slightly more likely to consider sharing true rather than false headlines, and much more likely to consider sharing headlines that agreed with their political opinions.
For example, only 15.7% of Republicans rated the headline “Over 500 ‘Migrant Caravaners’ Arrested with Suicide Vests” as accurate, but 51.1% said they would consider sharing it. “Together these results indicate that our participants can effectively identify the accuracy of true versus false headlines when asked to do so — but they are nonetheless willing to share many false headlines that align with their partisanship,” the researchers write.
So far, the results were potentially consistent with the post-truth interpretation. However, a questionnaire administered at the end of this study revealed that most people thought it was “extremely important” to share only accurate news. (Only 7.9% said it was “not at all important”.)
Further studies, involving a total of more than 2,700 online participants, revealed that simply prompting people to think about whether a particular headline was accurate made them less likely to consider sharing false headlines (an effect that was even stronger for headlines congruent with their own political leanings).
For the final study, the researchers identified 5,482 Twitter users who had previously shared links to websites that professional fact-checkers had rated as highly untrustworthy. These users were sent a message asking for their opinion on the accuracy of a non-political headline, lead paragraph and image. The team then analysed which news stories these people shared over the following 24 hours.
They found small but significant effects, with increases in the quality of the news sources that the users subsequently shared, compared with before the message intervention. The researchers noted an increase in the proportion of high quality stories (from the New York Times, for example) and a decrease in posts from relatively untrustworthy, politically hyper-partisan sources. “In sum, our accuracy message successfully induced Twitter users who regularly shared misinformation to increase the quality of the news they shared,” the researchers write.
Firstly, their findings suggest that just because someone shares a piece of fake news on social media, it doesn’t necessarily mean that they believe or endorse it. Also, importantly, “our results suggest that many people mistakenly choose to share misinformation because they were merely distracted from considering the content’s accuracy.” Future work should explore which motives (such as gaining followers, for example) are most relevant to this, they add.
The work also suggests that simple interventions could help to improve the quality of items that are shared. In the future, the team suggests, social media platforms might intermittently ask users to rate the accuracy of randomly selected headlines, bringing accuracy to the forefront of users’ minds. “Approaches such as the one we propose could potentially increase the quality of news circulating online without relying on a centralised institution to certify truth and censor falsehood,” they conclude.
– Understanding and reducing the spread of misinformation online [this paper is a preprint, meaning that it has not yet been subjected to peer review, and the final published version may differ from the version this report was based on]
Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest