Facing the spread of hate speech on its platform, Facebook is introducing changes that limit the spread of messages in two countries where it has come under fire in recent years: Sri Lanka and Myanmar.
In a blog post Thursday night, Facebook said it was “adding friction” to message forwarding for Messenger users in Sri Lanka, so that people can only forward a particular message a limited number of times. The limit is currently set at five people.
This is similar to a limit that Facebook introduced in WhatsApp last year. In India, a user can forward a message to only five other people on WhatsApp; in other markets, the limit is 20. Facebook said that some users had also requested this feature because they were fed up with receiving chain messages.
In early March, Sri Lanka grappled with mob violence targeting its Muslim minority. In the midst of all this, hate speech and rumors began to spread like wildfire on social media services, including those operated by Facebook. The country’s government then briefly shut down citizens’ access to social media services.
In Myanmar, social media platforms have faced a similar and enduring challenge. In particular, Facebook has been blamed for allowing the spread of hate speech that fueled violence against the Rohingya ethnic group. Critics have claimed that the company’s efforts in the country, where it did not have a local office or employees, are simply not enough.
In its blog post, Facebook said that it has started to reduce the distribution of content from people in Myanmar who have systematically violated its community standards with previous posts. Facebook said it will use what it learns to explore expanding this approach to other markets in the future.
“By limiting visibility in this way, we hope to mitigate the risk of harm and violence offline,” wrote Samidh Chakrabarti, director of product management and civic integrity, and Rosa Birch, director of strategic response, in the post.
In cases where it identifies individuals or organizations that “most directly promote or engage in violence,” the company said it would ban those accounts. Facebook is also expanding the use of AI to recognize posts that may contain graphic violence and comments that are “potentially violent or dehumanizing.”
In the past, the social network has banned armed groups and accounts run by the military in Myanmar, but it has been criticized for reacting slowly and for promoting a false narrative suggesting that its artificial intelligence systems would do the job.
Last month, Facebook said it proactively detected 65% of the hate speech content it removed (relying on user reports for the rest), up from 24% a year earlier. In the quarter ending in March this year, Facebook said it removed 4 million hate speech posts.
Facebook continues to face similar challenges in other markets, such as India, the Philippines and Indonesia. After riots last month, Indonesia restricted the use of Facebook, Instagram and WhatsApp in an attempt to contain the flow of false information.