Australia joins ‘world-first’ ethical AI forum
- Australia has joined a ‘world-first’ global forum dedicated to the development of ethical AI technology
- Discussions of ethical AI have been brought to the fore most recently by US tech giants withdrawing their facial recognition technology
With Chinese giants like Tencent and Alibaba in its backyard, Australia faces fierce rivals in APAC artificial intelligence (AI). And while it has been something of a late starter, that hasn’t stopped it doubling down on its ambition to become a global AI powerhouse.
Now it wants to ensure it’s contributing to the technology’s greater good.
Australia will join forces with 11 countries and the European Union to form the world’s first forum dedicated to the technology’s responsible development and innovation, or ethical AI.
The forum, the Global Partnership on Artificial Intelligence, or GPAI (pronounced ‘gee-pay’), will be dedicated to tackling issues of ethics in AI, including the use of facial recognition by law enforcement, a debate thrust into the spotlight by the Black Lives Matter protests.
Last week, IBM announced it was ceasing its facial recognition technology business, while Amazon said it was issuing a year-long moratorium on the sale of the technology to police, pending the introduction of sufficient regulation.
It will be bodies like GPAI that push ‘human-centric’ law changes like these through. Australia joins the forum alongside Canada, Germany, France, India, Italy, Japan, New Zealand, South Korea, Singapore, Slovenia and the UK.
With all members aligned on beliefs regarding AI’s ethical development, and bringing together “leading experts from industry, civil society, governments, and academia,” GPAI will first aim to develop methodologies that show how AI can be further leveraged to respond to the COVID-19 crisis.
A future of ethical AI
With AI applications now surging, there is increasing focus on ensuring models are built on ethical principles, unbiased datasets and transparency.
Outside of the facial recognition debate, for example, bias has been reported in HR systems that ‘vet’ applicants for job vacancies, while transparent and trustworthy systems are crucial for highly regulated industries like healthcare and finance. Plenty of debate also remains over the human rights implications of the technology’s use by the military in autonomous warfare.
GPAI will comprise four working groups, focused on responsible AI, data governance, the future of work, and innovation and commercialization. Australian experts, including academics in areas such as machine learning, anthropology and computer science, will contribute to all four areas.
Australian National University dean of engineering and computer science Professor Elanor Huntington told The Australian Financial Review that, given the scale of the work ahead, the group would have to tackle its priorities in a stepped approach in order to develop practical ways of effecting change globally.
“Part of the work we have to do is decide what to prioritize,” said Professor Huntington. “Some of the things we’re talking about is how to understand the data economy, how all the open data initiatives are going around the world, the way people trust data or not and how decisions get used.”
She added that there is “strong interest” in sustainable development goals which, while seemingly detached from the conversation around AI ethics, increasingly overlap with it as internet- and cloud-connected devices and systems consume a growing share of the world’s energy.
“We’re keen to ensure our understanding of the Australian landscape is brought to this global conversation and then we can take those insights back to Australia, too, to make sure we stay connected,” Professor Huntington said.