We promise you, AI used in courts in China won't look like this. It's still pretty vector art, though. (IMG/studiostoks/Shutterstock)

China has developed an… AI prosecutor?

Name a better love story than the one between China and AI. We bet you can't.

AI is so pervasive in China that it's used in everything from online shopping to… let's just call it Big Brother activities.

Now, Chinese scientists have developed an AI “prosecutor” that can charge people with crimes. It was developed by a team led by Professor Shi Yong, director of the Chinese Academy of Sciences’ big data and knowledge management laboratory. 

Professor Shi claims the machine is able to file a charge with a whopping 97% accuracy based on a verbal description of the case.

Theoretically, the machine would be able to reduce the workloads of prosecutors, so they can focus their time and efforts on more difficult tasks. 

“The system can replace prosecutors in the decision-making process to a certain extent,” said Shi and his colleagues in a paper published this December in the domestic peer-reviewed journal Management Review.

We know it sounds like an android judge fitted with a wig and robes will be banging a gavel and calling for silence in the courtroom, but that's not really how it works. It's just an AI running on a desktop computer, processing cases.

Not the first time China has used AI in the judiciary

Despite the aplomb with which the news broke, this isn't actually China's first foray into using AI in its legal system. AI was introduced into the court process as early as 2016, through a tool known as System 206, according to SCMP.

System 206 can evaluate the strength of evidence, the conditions for an arrest, and the level of danger a suspect poses to society.

However, existing AI tools such as System 206 are limited in that they were not designed to take part in the decision-making process of filing charges and suggesting sentences, according to Shi.

Such higher-level decision-making requires the AI machine to identify and sort details of a case file and remove data that are extraneous or irrelevant to the crime whilst still keeping pertinent information. 

Furthermore, it would need to "convert complex, ever-changing human language into a standard mathematical or geometric format that a computer could understand."

According to SCMP, charges can be filed against suspects based on 1,000 traits (or variables) pulled from the human-generated case description text. The evidence would then be left to System 206 for assessment.
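Neither the paper nor SCMP spells out exactly how that conversion is done, but the general idea, turning free text into a fixed-length vector of numeric features, is standard in text classification. Here's a minimal sketch in Python, assuming a TF-IDF bag-of-words encoding capped at 1,000 features; the library choice and the sample descriptions are our illustrations, not Shi's method:

```python
# Minimal sketch: turning free-text case descriptions into fixed-length
# numeric vectors. TF-IDF and scikit-learn are illustrative assumptions;
# the paper does not disclose the actual encoding it uses.
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical case descriptions (the real system was trained on
# Chinese-language case files).
case_descriptions = [
    "Suspect used a cloned credit card to withdraw cash at several ATMs.",
    "Suspect operated an unlicensed gambling den out of a rented flat.",
    "Suspect drove recklessly through a crowded market, injuring two.",
]

# Cap the vocabulary at 1,000 features, echoing the '1,000 traits'
# reported by SCMP.
vectorizer = TfidfVectorizer(max_features=1000)
X = vectorizer.fit_transform(case_descriptions)

print(X.shape)  # (3, n_features): one fixed-length vector per case
```

Once every case file is reduced to a vector like this, "filing a charge" becomes a plain prediction problem for the machine.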

The machine was fed more than 17,000 cases from 2015 to 2020 in order for it to learn how to recognize, sort, and include or exclude pertinent information.

So far, it is able to press charges for eight of the most common crimes with 97% accuracy. They include credit card fraud, illegal gambling operations, reckless driving, intentional injury, obstruction of official duties, theft, and fraud.
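Put together, the setup described above amounts to supervised multi-class text classification: labelled case descriptions go in, a charge comes out. A minimal sketch, with a logistic regression model and toy data standing in for the 17,000 real cases (both are our assumptions; the paper doesn't disclose its model), might look like this:

```python
# Minimal sketch of the classification step: fit a multi-class text
# classifier on labelled case descriptions, then 'file a charge' for a
# new one. Logistic regression and the toy data are illustrative
# assumptions, not the researchers' actual setup.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for the 17,000+ labelled cases from 2015 to 2020.
texts = [
    "Withdrew cash repeatedly with a counterfeit credit card.",
    "Forged card details were used for online purchases.",
    "Ran an illegal betting ring taking wagers on football matches.",
    "Hosted underground poker games for paying customers.",
    "Drove at high speed through a red light, hitting a pedestrian.",
    "Overtook on a blind corner and collided with an oncoming car.",
]
labels = [
    "credit card fraud", "credit card fraud",
    "illegal gambling operation", "illegal gambling operation",
    "reckless driving", "reckless driving",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# A new, unseen case description gets a predicted charge.
print(model.predict(["Suspect ran a betting den above a restaurant."]))
```

In the real system, the reported 97% accuracy would come from comparing predictions like this against charges filed by human prosecutors on held-out cases.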

In typical China fashion, "picking quarrels and provoking trouble" is also a criminal offense, which the AI is able to recognize too… obviously.

Shi and his colleagues expect the AI prosecutor to grow in accuracy and scope over time and with improvements, eventually recognizing less common crimes and filing multiple charges against a single suspect.

China not the first to use AI in sentencing

This is not the first instance of AI being used in a judicial system, either.

In February 2020, Malaysia made history as its judiciary became the first to use AI in sentencing.

Local reports said the AI would analyze a database of cases from 2014 to 2019 in the East Malaysian states of Sabah and Sarawak before recommending actions to the court.

Currently, the AI system in the East Malaysian judiciary is used for crimes such as drug possession and rape.

The danger of AI biases

Importantly, when it comes to machine learning, bias plays a massive role in determining outcomes. Feed the machine the wrong kind of data, and you'll get screwed-up results that can maim, kill, or put the wrong people behind bars for life.

AI bias can be so pervasive, silent, and invisible that many do not even notice it exists, not just in the data fed to the machine, but also in how the entire system is designed, and who designs it.

Human beings are already biased by default, especially when bias is deeply and systemically entrenched in societies.

This makes it rather difficult to engineer a bias-free machine learning system that doesn't wreck lives.

Tech companies are quickly realizing this, and some, such as Twitter, have even embarked on programs to weed out AI bias.

We’ve already seen how AI bias has caused deaths involving autonomous cars, skewed healthcare provision on the basis of race, and discriminated against female job applicants, among a litany of other problems.

In Wisconsin, an AI risk assessment tool called COMPAS was used in sentencing. COMPAS estimates the likelihood of a criminal re-offending based on their responses to 137 survey questions.

However, a study found discrimination in how it assessed criminals based on their ethnicity.

Black criminals were often labeled as higher-risk re-offenders even when they did not go on to re-offend.

Conversely, it produced the opposite result for white criminals, labeling them as lower-risk re-offenders even when they did go on to re-offend.
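Disparities like this are usually quantified by comparing error rates across groups, for instance the false positive rate: how often people who did not re-offend were still flagged as high risk. Here's a toy sketch with made-up records, purely to show the arithmetic behind the finding:

```python
# Toy illustration of the disparity described above: compare false
# positive rates (flagged 'high risk' but did not re-offend) across
# two groups. The records below are invented for illustration only.
def false_positive_rate(flagged_high_risk, reoffended):
    # FPR = (flagged high risk AND did not re-offend) / (all who did not re-offend)
    fp = sum(1 for f, r in zip(flagged_high_risk, reoffended) if f and not r)
    negatives = sum(1 for r in reoffended if not r)
    return fp / negatives

# Hypothetical records: (flagged high risk?, actually re-offended?)
group_a = [(True, False), (True, False), (True, True), (False, False)]
group_b = [(False, False), (False, False), (True, True), (False, False)]

for name, records in [("group A", group_a), ("group B", group_b)]:
    flags, outcomes = zip(*records)
    print(name, false_positive_rate(flags, outcomes))  # 0.67 vs. 0.0
```

A gap like that between groups is exactly the kind of pattern the COMPAS study flagged.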

Important questions still remain when it comes to AI's use in cases impacting actual human lives. AI bias is one, but ultimately, there is the question of who eventually takes responsibility.

In the case of China, will it be the prosecutors, the AI machine, or the algorithm's designer(s)?