Can Facebook be trusted with combating fake news?
SOCIAL media platforms have enabled global societies to become interconnected at an unprecedented scale.
Unfortunately, along with the seemingly limitless benefits social media brings, the platforms are also being used to amplify the spread of false information and hate speech.
At the moment, the onus seems to be on the platforms themselves to filter the questionable content they broadcast.
But can social media operators such as Facebook be trusted to regulate and scrub the misinformation running rampant on their platforms?
This question was posed to Facebook’s vice-president of policy solutions, Richard Allan, in London last Tuesday by Singapore lawmaker Edwin Tong.
Tong was among the 24 parliamentarians from nine countries — Argentina, Belgium, Brazil, Britain, Canada, France, Ireland, Latvia, and Singapore — who took part in a first-of-its-kind hearing on the proliferation of fake news and disinformation.
In his exchange with Allan, Tong zeroed in on Facebook’s apparent resistance to taking down one particular post in Sri Lanka that was deemed “inciting” during the country’s turbulent period of religious riots.
Singapore’s Straits Times reported that a user had flagged the post as violating Facebook’s hate speech policy, but the platform responded that the post did not deviate from its “community standards” and suggested, among other options, that the user block the people circulating the content.
The incident eventually escalated and contributed to the Sri Lankan government’s decision to ban Facebook altogether in the country.
Allan, in his response to Tong’s query in London, admitted that the failure to remove the post was a “serious and egregious” error.
“We make mistakes. Our responsibility is to reduce the number of mistakes,” said Allan, before suggesting that the solution to reducing these errors may lie beyond human staff.
“We are investing very heavily now in artificial intelligence, where we would precisely create a dictionary of hate-speech terms in every language,” Allan explained.
Facebook, moving forward, will work with relevant local authorities to stop the spread of inflammatory content and disinformation.
The relevant judicial authority in any country is the best party to determine whether any claim is valid or not, Allan told the lawmakers.
In its effort to combat foreign meddling in national elections, such as what is alleged to have happened in the 2016 US presidential race, Facebook will be setting up dedicated task forces — made up of security and legal specialists — to guard against interference in important elections.
He said, “Our current resourcing allows us to look at all national elections. So, if there’s a national election in Singapore, for example, that would be covered.”
Following the close to three-hour-long testimony by Allan, lawmakers from various countries signed a declaration called “International Principles for the Law Governing the Internet.”
The declaration, among other things, called for technology firms around the world to “recognize their great power and demonstrate their readiness to accept their great responsibility as holders of influence.”
It also urged social media platforms to be accountable to users and be “answerable to national legislatures and other organs of representative democracy.”
Facebook, in recent times, has been scrambling to assure the public that it is taking all the necessary steps to protect users’ data and defend against purveyors of misinformation.
Calls have been made for co-founder Mark Zuckerberg to step down as chief executive, owing to the controversies the social network has found itself in since the 2016 election.
While the jury is still out on whether Facebook can be trusted to tackle these issues, the prospect of relying on AI (a technical solution to a complex human problem) may not be straightforward either: differentiating between what is fake and what is real may not be so binary after all.