Has artificial intelligence revolutionized recruitment?

AI and ML inherit and replicate human bias in the recruitment process

IN THE TYPICAL recruitment process, professionals skim through thousands of resumes to filter and interview the best candidates for the role that needs to be filled.

The process results in hiring decisions, which in turn serve as data for new-age recruitment solutions. These solutions use artificial intelligence (AI) and machine learning (ML) to process volumes of records on recruitment patterns, successful resumes, and traits of hired personnel.

In time, AI and ML can learn from this data and automate the recruitment process.
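
To make this concrete, here is a minimal sketch of how such a screening model might be trained on past hiring decisions; the resumes, labels, and library choices below are hypothetical, for illustration only, and not drawn from any real product:

```python
# Minimal sketch: a screening model trained on past hiring decisions.
# The resumes and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer python machine learning",
    "marketing communications social media",
    "data scientist statistics modelling",
    "administrative assistant scheduling filing",
]
hired = [1, 0, 1, 0]  # past human decisions become the training labels

# The model learns whatever patterns the historical decisions contain,
# including any bias baked into them.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# New resumes are then scored against the learned pattern.
new_resume = vectorizer.transform(["python developer with statistics background"])
print(model.predict_proba(new_resume))
```

Whatever regularities exist in the historical labels, legitimate or biased, the model will reproduce when scoring new candidates.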

The problem here is that recruiters, or even the enterprise itself, can sometimes be biased. Recruiters may also let personal preferences – such as a higher tendency to choose candidates of the same gender or race – shape hiring decisions, and those decisions make up the data.

So, eventually, AI and ML inherit that bias and continue to make choices that match the established pattern.

Further, both can learn from data in the form of human language (qualitative) or numbers (quantitative), analyze it, produce insights, and carry similar patterns forward into future hiring decisions.

When qualitative data is used, AI and ML infer that past hiring decisions were based on keywords in resumes that are associated with certain traits.

For example, the words ‘female’ or ‘women’ are associated with humanities careers, whereas ‘male’ and ‘men’ are associated with engineering or science careers.
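
A toy example can show how such associations arise from text alone; the mini-corpus below is fabricated purely to illustrate the mechanism:

```python
# Illustrative only: counting how often gendered words co-occur with
# career fields in a fabricated corpus of hiring-related text.
corpus = [
    "women humanities teacher arts",
    "men engineering science developer",
    "women arts humanities writer",
    "men science engineering analyst",
]

def cooccurrence(word_a, word_b, documents):
    """Count documents in which both words appear."""
    return sum(
        1 for doc in documents
        if word_a in doc.split() and word_b in doc.split()
    )

# Skewed counts like these are exactly what a model picks up on.
for gendered in ("women", "men"):
    for field in ("humanities", "engineering"):
        print(gendered, field, cooccurrence(gendered, field, corpus))
```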

As a result, AI and ML perform gender-biased recruitment operations, because that is what they have learned to do, again reflecting the biased decisions made by organizations and recruiters.

Of course, prejudice and bias are not explicitly spelled out as building blocks of the data, but they become part of its inherent nature. Moreover, when there is a lack of diversity among the developers of the algorithms, some skew is bound to creep in as well.

So when biased human language becomes the basis of reasoning for these technologies, skills recognition is not prioritized as much as other attributes. The only way to change this is to feed them new data – preferably quantitative data – that is objective and that ties performance, evaluations, and critical skills to hiring decisions.
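
As a rough sketch of what that retraining might look like, the example below uses only objective, quantitative features; the column meanings and values are hypothetical, and protected attributes are excluded before the model ever sees the data:

```python
# Hedged sketch: retraining on objective, quantitative signals only.
# Feature meanings and values are hypothetical; protected attributes
# such as gender or race are dropped before training.
from sklearn.linear_model import LogisticRegression

# Each record: [skills_test_score, performance_rating, years_experience]
X = [
    [88, 4.5, 6],
    [55, 2.9, 2],
    [91, 4.8, 8],
    [60, 3.1, 3],
]
y = [1, 0, 1, 0]  # hiring decisions tied to measured performance

model = LogisticRegression().fit(X, y)
print(model.predict([[85, 4.2, 5]]))  # scored on skills and performance alone
```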

In this respect, the saying “you are what you eat” applies perfectly to AI and ML. Just as they can unintentionally learn to be biased, they can also be intentionally taught to be impartial, objective and ‘moral’.

New algorithms must be developed to first identify the biased data, and then to apply new, corrected practices that generate quality data free from bias, feeding new patterns to the system.
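
One well-known heuristic for the first step, spotting bias in the data, is the ‘four-fifths rule’ from disparate-impact analysis: a group whose selection rate falls below 80 percent of the highest group’s rate is flagged for review. A sketch with made-up numbers:

```python
# Sketch of the "four-fifths rule": flag any group whose selection
# rate is below 80% of the highest group's rate. Data is made up.
def selection_rates(records):
    """records: list of (group, was_selected) pairs."""
    totals, selected = {}, {}
    for group, chosen in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(records, threshold=0.8):
    rates = selection_rates(records)
    highest = max(rates.values())
    return {g: rate / highest < threshold for g, rate in rates.items()}

history = [("group_a", True), ("group_a", True), ("group_a", False),
           ("group_b", True), ("group_b", False), ("group_b", False)]
print(selection_rates(history))         # ≈ {'group_a': 0.67, 'group_b': 0.33}
print(disparate_impact_flags(history))  # {'group_a': False, 'group_b': True}
```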

From this new data, they will develop the intelligence to recognize bias, identify it as a mistake, and learn to make better decisions. Technology giants in the industry are already working to address this issue.

Failing to be recruited because of one’s gender or race not only hurts the economy and industrial growth but also impacts the livelihoods of many. Diversity in data is as important as diversity in AI and ML development.

The abilities and possibilities of AI and ML far outweigh the limiting bias they inherit from us. Still, change is only possible if enterprises recognize the bias they practice.