EU Commissioner for Research, Science and Innovation Carlos Moedas. Source: Etienne LAURENT / POOL / AFP

Will the world warm up to EU’s ethics guidelines for AI?

UNDERSTANDING artificial intelligence (AI) isn’t an easy task. It’s a complicated technology that businesses have only recently started experimenting with, and academics are not far ahead.

However, there’s no doubt in anyone’s mind that the technology is incredibly powerful.

Recently, for example, Mark Zuckerberg‘s Facebook shut down an AI experiment after its chatbot agents developed a negotiation shorthand of their own that researchers could not readily interpret.

Last year, Tesla and SpaceX founder Elon Musk remarked that AI is more dangerous than nuclear weapons.

The reality is, AI will make a big impact on people, businesses, and the world around us in the next few years — there’s no stopping that. But shaping that future is in the hands of businesses investing in the technology today.

The European Commission recognizes this, which is why they’ve recently issued a new set of guidelines calling for the development of trustworthy AI.

“AI is developing at an exponential pace. We don’t want to stop innovation, but the added value of the EU approach is that we are making it a people-focused process. People are in charge,” EU Commissioner for Digital Economy and Society Mariya Gabriel told the Financial Times.

A peek into the EU’s new AI ethics guidelines

The 41-page guidelines document issued by the European Commission provides a pilot version of a trustworthy AI assessment list, spanning six pages and covering seven key requirements. Here’s a summary:

# 1 | Human agency and oversight

AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.

Through a series of questions, the guidelines prompt AI developers and business owners to consider the impact of their systems on humans everywhere, both immediately and in the future.

# 2 | Robustness and safety

Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.

Given the rise of cyberattacks, this is something that must be top of mind for all AI developers. Naturally, it’s right at the top of the EU Commission’s new guidelines.

The key is that the document not only asks AI creators to bake resilience to attacks into the system but also to create a fallback plan and ensure general safety when planning and executing AI projects.

# 3 | Privacy and data governance

Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.

Most importantly, depending on the use case, the guidelines emphasize the establishment of a mechanism allowing others to flag issues related to privacy or data protection in the AI system’s processes of data collection (for training and operation) and data processing.

Given the recent introduction of the GDPR, the document also highlights the importance of maintaining the quality and integrity of data and ensuring access to data.

# 4 | Transparency

The traceability of AI systems should be ensured, says the EU Commission.

The guidelines call for documenting the methods used to design and develop the algorithmic system, the methods used to test and validate it, and its outcomes.

The EU Commission also suggests that businesses should build explainability into trustworthy AI systems and communicate clearly about how those systems work.

# 5 | Diversity, non-discrimination and fairness

AI systems should consider the whole range of human abilities, skills, and requirements, and ensure accessibility.

The EU Commission seems to have put considerable thought into this and has raised several intelligent questions to help organizations think about putting in place systems and processes to avoid unfair bias and improve accessibility.

The document makes a case for involving a range of stakeholders in the AI system’s development and use, and for paving the way for the system’s introduction in an organization by informing and involving affected workers and their representatives in advance.

# 6 | Societal and environmental well-being

AI systems should be used to foster positive social change and enhance sustainability and ecological responsibility.

The EU Commission expects AI developments to be sustainable and environmentally friendly, and ensure that the social impact is not only positive but also supportive of humans in general.

The guidelines urge organizations and AI developers to assess the broader societal impact of the AI system’s use beyond the individual end user, including on stakeholders who may be indirectly affected.

# 7 | Accountability

Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

This section emphasizes the need for auditability of AI algorithms and solutions, minimizing and reporting negative impact, and documenting trade-offs. It also highlights the need for organizations to build in the ability to redress in case any harm is caused.

Guidelines are only a first step

While there’s been no announcement that the guidelines will become any kind of law in the near future, it’s something that many businesses are preparing for.

Recently, Google tried to set up an AI ethics board. Although the company shut it down almost immediately because of internal conflict, the effort shows that the idea is on its mind right now.

Similar efforts are being made by almost all major AI developers, including Microsoft and Amazon.

Clearly, ethics in AI is a cause for concern for businesses, and they understand that the sooner they can agree on a baseline for AI development, the sooner they will be able to venture into the unknown and make real progress with cutting-edge AI applications.

The future is going to be driven by AI. The question, as the European Commission has framed it, is this: how trustworthy will our AI be?