Twitter said it would offer a cash "bounty" to users and researchers to help root out algorithmic bias on the social media platform. (Photo by Lionel BONAVENTURE / AFP)

Twitter wants to exterminate AI biases in a “bug bounty” program

Artificial intelligence (AI) brings with it the promise of a world that’s better and more equitable, such as in cases where its use uplifts people or disrupts monopolies.

However, the design of algorithms and models can also perpetuate inequities, which is known as AI bias. 

Industry’s first AI bias “bounty”

Twitter said last week that it would offer a cash “bounty” to users and researchers to help root out algorithmic biases on the social media platform.

The tech giant said this would be “the industry’s first algorithmic bias bounty competition,” with prizes to the tune of US$3,500 up for grabs.

The competition is based on the “bug bounty” programs some websites and platforms offer to find security holes and vulnerabilities, according to Twitter executives Rumman Chowdhury and Jutta Williams.   

“Finding bias in machine learning models is difficult, and sometimes, companies find out about unintended ethical harms once they’ve already reached the public,” Chowdhury and Williams wrote in a blog post.

“We want to change that.”

They said the hacker bounty model offers promise in weeding out AI biases.

“We’re inspired by how the research and hacker communities helped the security field establish best practices for identifying and mitigating vulnerabilities in order to protect the public,” they wrote.

“We want to cultivate a similar community… for proactive and collective identification of algorithmic harms.”

The problem with AI biases

The move comes amid growing concerns about automated algorithmic systems, which, despite an effort to be neutral, can incorporate racial or other forms of bias, reports AFP.

Twitter, which earlier this year launched an algorithmic fairness initiative, said in May it was scrapping an automated image-cropping system after its review found bias in the algorithm controlling the function.

The messaging platform said it found the algorithm delivered “unequal treatment based on demographic differences,” with white people and males favored over Black people and females, and “objectification” bias that focused on a woman’s chest or legs, described as “male gaze.”

Implications of AI biases

In the US, a recent study by the US Department of Commerce found that facial recognition AI misidentifies people of color more often than white people. This raises concerns that such systems, when used by entities like law enforcement, could exacerbate injustice against people of color, especially given that bias against POC is already a prevailing problem in the US.

Another study, by Georgia Tech, found that the object-detection systems used in self-driving cars were worse at detecting people with darker skin, regardless of the time of day. This puts the lives of darker-skinned pedestrians at risk.

Another example is Amazon’s “secret” recruitment tool, which turned out to discriminate against women applicants.

“The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters. […] 

“In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word ‘women’s,’ as in ‘women’s chess club captain.’ And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter,” Reuters reported.

While AI algorithms are widely deployed, it is important for the companies that use them to reduce bias as much as possible.

This includes training models on clean, representative data, and auditing models for bias throughout development, from data collection to deployment.
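As a minimal illustration of what such an audit can involve, the sketch below (with hypothetical predictions and group labels, not Twitter's actual method) computes the demographic parity gap: the difference in positive-prediction rates between demographic groups. A large gap flags a potential bias worth investigating.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Compute the positive-prediction rate for each demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is 1 (positive) or 0 (negative)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups.

    A gap near 0 suggests the model treats groups similarly on this
    metric; a large gap is a signal to investigate further.
    """
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: binary model decisions and each subject's group.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

Demographic parity is only one of several fairness metrics; a thorough audit would also examine error rates (false positives and false negatives) per group.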

How to take part

The deadline for entry is August 6th, 2021, at 11:59 pm PT. Winners will be announced at the DEF CON AI Village workshop hosted by Twitter on August 9th, 2021.

Winners will be invited to present their work during the workshop at DEF CON although conference attendance is not a requirement to compete.

The winning teams will receive cash prizes via HackerOne:

  1. US$3,500 1st Place
  2. US$1,000 2nd Place
  3. US$500 3rd Place
  4. US$1,000 for Most Innovative
  5. US$1,000 for Most Generalizable (i.e., applies to most types of algorithms).

More information about Twitter’s AI bias bounty challenge can be found on their HackerOne page.