
Is the AI ethics issue hindering innovation?

  • Cases of discrimination in AI continue to mount and, in some instances, organizations are even shelving plans to adopt the technology altogether

Beneath the exciting tide of artificial intelligence (AI) applications permeating industries and consumers’ daily lives, an undercurrent has been growing in strength for years: the question of whether we can trust the decisions of morally void autonomous systems, which are informed by, and interpret only, the datasets they receive.

The challenges around ethical AI have, for several years, been viewed as the biggest obstacle facing its users, but now some organizations are actually killing plans for adoption because of the potential dangers, or avoiding embarking on projects altogether.

As reported by Yahoo, Alex Spinelli, chief technologist at business software maker LivePerson, said he had canceled some AI projects at his current and previous employers over ethical concerns – in particular, around the use of machine learning to analyze customer data and make predictions about behavior.

Spinelli blamed AI systems – particularly those Facebook uses to target users with content and pages it thinks will interest them – for the spread of the disinformation that helped fuel the pro-Trump Capitol riots last month.

AI has shown transformative potential in its ability to undertake complex tasks at lower cost and with fewer resources. Use cases are proliferating, from detecting fraud and increasing sales to improving customer experience, automating routine tasks, and providing predictive analytics, while automated chatbots remain the most widely adopted machine learning application.

But when it comes to the question of AI ethics, there are plenty of examples to show we’re a long way off the mark.

Bias in AI-powered facial recognition systems is perhaps the most prolific example. In 2018, an MIT study of three different facial recognition programs found that, when determining gender, the error rate for light-skinned men was 0.8%, while darker-skinned women were misgendered 20% to 34% of the time.

Amid Black Lives Matter protests last year – and a flurry of nationwide incidents of what was deemed excessive force by law enforcement – IBM withdrew its facial recognition technology, condemning the wider technology’s use “for mass surveillance, racial profiling, violations of basic human rights and freedoms.”

Amazon then also withdrew its Rekognition software from use by law enforcement. But the company had earlier been forced to park its own AI candidate-screening technology because of an inherited lack of gender neutrality. The ‘secret’ tool was supposed to rank candidates with a five-star rating system, but Amazon canned the program after discovering that the 10 years’ worth of successful applications it consulted to make decisions were male-dominated, and that it was therefore unfavorably discounting women.

Most recently, Timnit Gebru, Google’s former co-head of AI ethics and a prominent Black female researcher at the company, claimed to have been fired after the company blocked the publication of a paper she co-authored raising ethical questions around the use of large, data-hungry language models, a field in which Google is one of the leaders. Following her departure, the search giant went on to suspend the computer access of another of the firm’s AI ethics researchers who had been critical of the company.

A need for regulation

With machine learning models relying on algorithms that learn patterns from vast pools of data, models are at risk of perpetuating any bias present in the information they are fed. AI’s mimicking of real-world, human decisions is both a strength and a great weakness for the technology – it is only as ‘good’ as the information it accesses. Of course, this challenge isn’t new; as innovation continues, AI and machine learning ethics are regularly touted as crucial to the technology’s development, and the issue is on the radar of organizations, world governments, and the machine learning community. To date, there has been a growing body of work on ethical AI principles, guidelines, and standards across different organizations, including IEEE, ISO, and the Partnership on AI. But binding guidelines are still lacking, and many organizations are navigating the complex waters of self-governance.
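
To make that mechanism concrete, the sketch below shows one simple way an organization might audit a model for disparate performance: compare error rates across demographic groups, the kind of gap the MIT study exposed. The data, group names, and record format here are purely illustrative assumptions, not drawn from any system mentioned in this article.

```python
# Minimal, hypothetical audit sketch: per-group error rates for a classifier.
# All records below are synthetic placeholders; a real audit would use the
# model's actual predictions and ground-truth labels.
from collections import defaultdict

# Each record is (demographic_group, true_label, predicted_label).
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    errors[group] += int(truth != prediction)

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.1%} across {totals[group]} samples")

# A wide gap between groups (the MIT study found 0.8% vs. 20-34%) is a signal
# that the training data, or the model itself, treats those groups unequally.
```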

In 2019, a Vanson Bourne study revealed that 89% of IT heads believe AI development should be regulated, deeming a level of control and central oversight necessary even if it slows the pace of the technology’s evolution and its application by organizations.

Self-regulation and governance – and the creation of internal AI ethics panels – aren’t keeping pace with AI’s growing scale and sophistication. A report by Pegasystems found that 65% of respondents felt current external governance was insufficient to manage AI adoption; 70% of respondents expressed fear about AI.

Effective self-governance requires enterprises to check both that their AI software and algorithms work correctly, and that those algorithms behave ethically. But despite AI’s proliferation, just 27% of respondents have a designated leader in AI governance, with Manufacturing, Healthcare, and Financial Services all reporting significant gaps in internal leadership and formal strategies.

There is plenty of advice out there for organizations to stay on the ethical path with AI, but many organizations seem to be crying out for more hardline guidelines, consistent across industries. 

As far back as 2017, Elon Musk called for the regulation of AI development, despite being “clearly not thrilled” to be advocating for government scrutiny that could impact his own industry. The Tesla CEO believed the risks of going without were simply too high. 

“Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever,” he told NPR.

“That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization.”

So far in the US (and preceding the new administration), there has been an appetite for “light touch” regulation built on flexible frameworks, with the intent to do as little as possible to stymie the growth of the country’s technology industry. Under the Trump administration, the Office of Science and Technology Policy warned against the more hardline policies being nudged forward in Europe: “Europe and our allies should avoid heavy-handed innovation-killing models, and instead consider a similar regulatory approach.”

In an article entitled “AI That Reflects American Values”, the United States’ chief technology officer, Michael Kratsios, wrote: “The U.S. will continue to advance AI innovation based on American values, in stark contrast to authoritarian governments that have no qualms about supporting and enabling companies to deploy technology that undermines individual liberty and basic human rights.

“The best way to counter this dystopian approach is to make sure America and our allies remain the top global hubs of AI innovation.”

In that same Pegasystems survey, however, the chief concern was not so much that regulation would dampen innovation or make adoption more complex and expensive; rather, respondents worried that regulation would be insufficient to manage AI adoption.

With no universal regulation set in stone, businesses are left to make their own assessments of AI and of how to ensure the way it is applied is ethical. They must consider whether the business benefits they gain from the technology are worth the risk of discrimination.

The fact that the question of AI ethics is giving some business leaders pause is a good sign, at least, and some organizations are making notable progress themselves.

Last year, engineering giant Rolls-Royce unveiled a “workable, peer-reviewed AI ethics framework,” published under a Creative Commons license, which it said can help any organization ensure that the decisions it takes to use AI in critical and non-critical applications are ethical. The company has been using AI for around two decades, including to monitor jet engines in service in real time. But as it looked to extend AI to more parts of the business, such as robotic inspections of critical components, it became increasingly important to address rising concerns around ethical and transparent AI.

“Rolls-Royce’s AI capabilities are embedded deeply into other companies’ products and services and so aren’t widely known. Rolls-Royce’s AI doesn’t often feature in consumers’ understanding of how the digital world is changing their lives,” said Caroline Gorski, Global Director of R2 Data Labs.

“The current debate about the use of AI is focused on the consumer and the treatment of consumer and personal data. But we believe that what we have created – by dealing with a challenge rooted squarely in the industrial application of AI – will help not only with the application of AI in other industries but far more widely,” she added. 

Rolls-Royce chief executive, Warren East, stressed that the firm wants to move from “conversation” around concepts and guidelines of AI ethics to “applying it”.

“There is no practical reason why trust in AI cannot be created now. And it’s only with the acceptance and permission of our society – based on that trust – that the full benefits of AI can be realized, and it can take its place as a partner in our lives and work.”