Did Facebook interfere in Myanmar’s elections?

  • Myanmar’s recent elections saw Facebook fighting the hate speech and misinformation that have dogged the country’s previous polls

On November 8, voters in Myanmar went to the polls for only their second democratic election since the end of army junta rule in 2011. Since then, the fledgling democracy has grappled with a political firestorm around various issues, including the spread of misinformation and hate speech on Myanmar’s most pervasive digital platform, Facebook.

Facebook’s hold on the country’s internet users is substantial: over half of Myanmar’s 53 million people are active on the platform, and for many Burmese the service is synonymous with the internet itself, much as it is in neighboring Cambodia.

Myanmar’s previous election five years ago was marred by disinformation campaigns targeting ethnic minority groups, including allegedly falsified news, hate speech, and ethnically and religiously charged misinformation aimed at rival candidates. In 2017, violent speech on Facebook was blamed for helping incite an army crackdown on Rohingya Muslims that drove more than 730,000 of them to flee Myanmar.

With both Myanmar’s and the US federal elections taking place earlier this month, Facebook came under intense media scrutiny following widespread claims of ‘fake news’ and voter suppression campaigns in the years leading up to the polls.

“There’s a short-term immediate concern of all this disinformation and hate speech fuelling real-world violence,” said Jes Kaliebe Petersen, a leading member of a civil society group coordinating efforts to reduce risks posed by social media.

In the lead-up to this year’s election, Facebook unveiled a raft of initiatives on its service in a bid to curb unscrupulous chatter and outright fraudulent posts from fake accounts. While acknowledging the country’s “complex social and political context”, the social media giant is also mindful of the violence that flared just before Myanmar’s last election in 2015.

Facebook now actively removes content that has been proven or flagged as obvious misinformation. Flagged material is verified with local partners to confirm whether it is authentic and to protect the integrity of the election process.

“For example, we would remove posts falsely claiming a candidate is a Bengali, not a Myanmar citizen, and thus ineligible,” Facebook said in its blog post on the subject.

To combat hate speech violations, Facebook harnesses artificial intelligence to actively identify flagged words and phrases in 45 languages, including Burmese. “In the second quarter of 2020, we took action against 280,000 pieces of content in Myanmar for violations of our Community Standards prohibiting hate speech, of which we detected 97.8% proactively before it was reported to us,” said Facebook. “This is up significantly from Q1 2020, when we took action against 51,000 pieces of content for hate speech violations, detecting 83% proactively.”
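Facebook has not published the details of its detection models. Purely as an illustration of the general idea behind proactive flagging of words and phrases across languages, and not a description of Facebook’s actual system, the hypothetical Python sketch below checks a post against per-language lists of flagged phrases; the function names and phrase lists are invented, and production systems rely on machine-learned classifiers rather than simple keyword matching.

```python
# Illustrative toy sketch only; real hate speech detection uses trained ML models.
import re
from typing import Dict, List

# Hypothetical per-language lists of flagged phrases (placeholders, not real data).
FLAGGED_PHRASES: Dict[str, List[str]] = {
    "en": ["example slur", "example threat"],
    "my": ["placeholder phrase 1", "placeholder phrase 2"],  # Burmese entries would go here
}

def flag_post(text: str, language: str) -> List[str]:
    """Return the flagged phrases found in a post, for further review."""
    matches = []
    for phrase in FLAGGED_PHRASES.get(language, []):
        # Case-insensitive whole-phrase match; real systems use far richer signals.
        if re.search(re.escape(phrase), text, flags=re.IGNORECASE):
            matches.append(phrase)
    return matches

if __name__ == "__main__":
    post = "This post contains an example threat."
    print(flag_post(post, "en"))  # -> ['example threat']
```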

The platform also introduced greater transparency measures, such as labeling electoral and political ads so they can be easily identified. Facebook also worked with partners in Myanmar to verify the official national Facebook Pages of political parties, making it easier for users to distinguish genuine profiles from imitations.

Users are also limited in resharing images out of context, via an Image Context reshare product introduced in Myanmar in June. Pivotal to the prevention efforts is the third-party fact-checking program, under which content is independently verified by Facebook’s partners in Myanmar: BOOM, AFP Fact Check, and Fact Crescendo.