Facebook Restricts Content Sharing in Ethiopia to Limit the Spread of Misinformation and Hate Speech

Among the various issues highlighted by the recent ‘Facebook Files’ internal data leak was the suggestion that content sharing is actually one of the most harmful actions on Facebook, as the ease of amplifying questionable content by simply tapping ‘Share’ significantly increases the number of people doing exactly that.

Indeed, one of the most recent reports shared by Facebook whistleblower Frances Haugen indicated that Facebook’s own research has shown that the ‘Share’ option is harmful, particularly in relation to shares of shares.

As reported by Alex Kantrowitz in his newsletter Big Technology:

“The report noted that people are four times more likely to see misinformation when they encounter a post via a share of a share – kind of like a retweet of a retweet – compared to a typical photo or link on Facebook. Add a few more shares to the chain, and people are five to ten times more likely to see misinformation. It gets worse in certain countries. In India, people who encounter “deep reshares,” as the researchers call them, are twenty times more likely to see misinformation.”

In other words, the content that tends to see repeated shares is far more likely to include misinformation – which makes sense given the more salacious and divisive nature of such claims.

The question, then, is what Facebook, or Meta, is going to do about it, with Haugen claiming that the company has ignored these findings.

Though that isn’t entirely correct. Today, in an update on the measures implemented on Facebook to curb the spread of misinformation and hate speech in Ethiopia ahead of the nation’s recent elections, Meta included this note:

“To address possible viral content, we’re continuing to reduce content that has been shared by a chain of two or more people. We’re also continuing to reduce the distribution of content that our proactive detection technology identifies as likely to violate our policies against hate speech as well as from accounts that have recently and repeatedly posted violating content.”

So Meta is actually looking to implement certain restrictions on post sharing, in line with its previous findings.
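
To make that mechanism more concrete, here’s a minimal sketch of what a reshare-depth demotion could look like inside a feed ranker, based purely on Meta’s description above. The Post structure, the depth threshold, and the demotion factor are all illustrative assumptions, not Meta’s actual ranking system.

```python
from dataclasses import dataclass

# Hypothetical sketch of deep-reshare demotion, assuming a simple
# multiplicative penalty. Names and thresholds are illustrative only.

RESHARE_DEPTH_THRESHOLD = 2  # assumed cutoff: "a chain of two or more people"
DEMOTION_FACTOR = 0.5        # assumed strength of the demotion

@dataclass
class Post:
    post_id: str
    reshare_depth: int  # 0 = original post, 1 = a share, 2 = a share of a share
    base_score: float   # whatever relevance score the feed ranker produced

def rank_score(post: Post) -> float:
    """Apply the deep-reshare demotion on top of the base ranking score."""
    if post.reshare_depth >= RESHARE_DEPTH_THRESHOLD:
        return post.base_score * DEMOTION_FACTOR
    return post.base_score

# Demo: otherwise identical posts rank lower once the reshare chain gets deep.
posts = [
    Post("original", reshare_depth=0, base_score=1.0),
    Post("share", reshare_depth=1, base_score=1.0),
    Post("share_of_share", reshare_depth=2, base_score=1.0),
]
for p in sorted(posts, key=rank_score, reverse=True):
    print(p.post_id, rank_score(p))
```

The key point is that nothing is removed outright: deep-reshared posts remain eligible to appear, they’re simply distributed to fewer people.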

Which is good and, given the research, makes sense. But if Meta is acknowledging that shares of shares are a potential problem, one that can contribute to the amplification of harmful posts, why not implement this as a blanket rule? Or, going further, why not remove the ‘Share’ option entirely to eliminate this type of rapid amplification?

To be clear, if Facebook were to remove the ‘Share’ button, users would still be able to share content.

  • Users would still be able to post article links in their own updates, but they would be more likely to include their own personal thoughts on each, given they’d have to create a new post
  • Users would still be able to react to and ‘Like’ posts, which then increases exposure to their connections, and broader networks, through engagement activity
  • Users would still be able to comment on posts, which also increases exposure based on the algorithm seeking to show the most engaging content to more users

People would theoretically still be able to share posts via message as well, as per this iteration of the Facebook post UI, which Facebook tested in 2018, and which replaced the ‘Share’ button with a ‘Message’ option.

[Image: Facebook’s 2018 test of an alternative post UI, with ‘Message’ in place of ‘Share’]

So there would still be options for engaging with content via Facebook, but the research suggests that having a quick ‘Share’ option can significantly contribute to the rapid spread of questionable claims.

Maybe, by removing it, and forcing users to put more time and thought into the process, Facebook could lessen blind sharing and slow the spread of such posts.

That’s the same theory that Twitter used when it removed straight retweeting as an option for US users in October last year, in the lead-up to the Presidential Election.

[Image: Twitter’s retweet change, defaulting users to Quote Tweets]

As you can see here, instead of allowing users to blindly, and rapidly, retweet any claim, Twitter defaulted users to its ‘Quote Tweet’ option, ideally prompting people to think a little more deeply about what they were sharing, rather than simply re-amplifying content.

That did have some impact. After reinstating regular retweets in December, Twitter noted that the use of Quote Tweets did increase as a result, “but 45% of them included single-word affirmations and 70% had less than 25 characters”.

In other words, users were a little more hesitant in their sharing activity, but it didn’t inspire a lot more context in the process.

But then again, maybe that’s all that’s required. Maybe getting people to pause and actually consider the message, even for a second, is enough to stop them from spreading viral misinformation and false claims.

That’s worked with Twitter’s pop-up alerts, shown when users attempt to retweet an article without actually opening the link and reading the post first, with users opening articles 40% more often as a result of that added friction.

[Image: Twitter’s prompt when retweeting an article that hasn’t been opened]
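
As a rough illustration of how lightweight this kind of friction can be, here’s a minimal sketch of a ‘read before you share’ check, assuming the client simply tracks which links the user has opened. The function names and flow are hypothetical, not Twitter’s or Meta’s actual implementation.

```python
from typing import Callable

# Hypothetical sketch of a "read before you share" prompt. The tracking
# model and the confirmation flow are illustrative assumptions only.

opened_links: set[str] = set()  # links this user has actually opened

def open_link(url: str) -> None:
    """Record that the user opened (and presumably read) the article."""
    opened_links.add(url)

def try_share(url: str, confirm_anyway: Callable[[str], bool]) -> bool:
    """Share immediately if the article was opened; otherwise add friction."""
    if url in opened_links:
        return True
    # Friction step: ask the user to confirm sharing unread content.
    return confirm_anyway(url)

# Demo: the unread article triggers the confirmation prompt first.
print(try_share("https://example.com/story", lambda url: False))  # False: prompt declined, share blocked
open_link("https://example.com/story")
print(try_share("https://example.com/story", lambda url: False))  # True: article was read, no prompt
```

The share action still exists; the check only interposes a moment of reflection when the user hasn’t actually opened what they’re about to amplify.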

Facebook has now adopted the same approach, again indicating that there’s value in this kind of friction. And with its own research showing that shares can be a negative element, why not simply remove the option, prompting more consideration in the process?

Of course, there would likely be impacts on publishers, who might see their referral traffic drop, and it would also reduce Facebook engagement overall by limiting the options for post interaction.

Is that why Meta wouldn’t do it? I mean, it has the data, and it’s already implementing its findings in certain situations to avoid potential harm. Meta knows that a change in its sharing process could have a positive impact.

Why not implement restrictions across the board?

It would be a big step, for sure, and there are various considerations at play. But the research and other indicators all suggest that Meta knows this would be effective.

So why not do it, and reduce potential harm through blind re-distribution?
