Large organizations need to start thinking about AI bias. Source: Shutterstock

Why large organizations are looking for AI behavior forensics experts

THERE is no doubt that artificial intelligence (AI)-powered applications and solutions are gaining popularity across the world.

A recent Gartner survey showed that 37 percent of organizations have implemented AI in some form, despite talent shortages.

“Four years ago, AI implementation was rare, only 10 percent of survey respondents reported that their enterprises had deployed AI or would do so shortly. For 2019, that number has grown to 37 percent — a 270 percent increase in four years,” said Gartner Distinguished Research VP Chris Howard.

However, another study warns that large organizations are increasingly anxious about the brand and reputation risks associated with using AI-powered solutions.

Users’ trust in AI and machine learning (ML) solutions is plummeting as privacy breaches and incidents of irresponsible data misuse keep occurring.

Despite rising regulatory scrutiny to combat these breaches, Gartner predicts that, by 2023, 75 percent of large organizations will hire AI behavior forensic, privacy, and customer trust specialists to reduce brand and reputation risks.

Bias based on race, gender, age or location, and bias based on a specific structure of data, have been long-standing risks in training AI models.

In addition, opaque algorithms such as deep learning can incorporate many implicit, highly variable interactions into their predictions that can be difficult to interpret.
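To make that concrete, here is a minimal sketch of the kind of model-agnostic probing a forensics specialist might use, built on scikit-learn's permutation importance. The dataset is synthetic and the model is arbitrary; both are illustrative assumptions, not any particular production system.

```python
# A hedged sketch: probe an opaque model to see which inputs drive its
# predictions. The dataset is synthetic and the model is arbitrary; the
# technique (permutation importance) is what matters.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy; large
# drops flag features the model leans on heavily, a starting point for
# asking whether those features proxy for race, gender, age, or location.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```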

“New tools and skills are needed to help organizations identify these and other potential sources of bias, build more trust in using AI models, and reduce corporate brand and reputation risk,” said Gartner Research VP Jim Hare.

“More and more data and analytics leaders and chief data officers (CDOs) are hiring ML forensic and ethics investigators.”

Being able to understand the algorithms used in AI and ML solutions, particularly in the government and financial services sectors, is critical to avoiding the unintentional yet harmful targeting of certain groups of people.

Imagine a bank using an AI algorithm it believes to be sound to evaluate loan applications in a small town. If the model were trained on biased data, it might decline applications from certain groups of people without any sound basis.
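As a simplified illustration of how such a problem might be caught, a forensics team could compare approval rates across groups. The data below is invented, and the 80 percent threshold (the so-called four-fifths rule) is one common heuristic rather than a legal standard.

```python
# Hypothetical check on a loan-approval model's decisions: compare
# approval rates across groups. All data here is invented for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Four-fifths rule heuristic: if one group's approval rate falls below
# 80 percent of another's, the model's decisions deserve closer scrutiny.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
```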

For such organizations, hiring AI behavior forensics experts is very important. According to Gartner, companies such as Bank of America and MassMutual have already made such hires, or are at least in some phase of the hiring process.

The job descriptions of these specialists include validating models during the development phase and continuously reviewing them once they are released into production, as unexpected bias can be introduced by divergence between training data and real-world data.
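Here is a minimal sketch of the kind of ongoing production check that description implies: testing whether a feature's live distribution has drifted away from what the model saw in training. The distributions and significance threshold below are illustrative assumptions.

```python
# Hedged sketch of drift detection: compare a training-time feature
# distribution against the distribution observed in production.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_income = rng.normal(50_000, 12_000, size=5_000)    # seen at training time
production_income = rng.normal(58_000, 12_000, size=5_000)  # seen after deployment

# A Kolmogorov-Smirnov test flags a significant shift between the two samples.
statistic, p_value = ks_2samp(training_income, production_income)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {statistic:.3f}); model review recommended.")
```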

Demand for AI behavior forensics experts is expected to grow, not just in the financial services industry but across the board, as AI solutions become part of everyday business functions such as human resources, operations, and finance.