The US and China are working towards regulating ChatGPT-like AI tools. Here's what we know so far

  • The Biden administration is seeking public comments on potential accountability measures for AI tools.
  • Meanwhile, China’s internet watchdog has unveiled a set of strict draft rules targeting ChatGPT-like services.

Artificial intelligence (AI) tools have proliferated rapidly, especially since the emergence of OpenAI’s ChatGPT. Nations worldwide are competing to win the “AI race,” and along with the enthusiasm come challenges exacerbated by AI’s complexity. Experts say we have yet to grasp the actual risks AI systems can pose to our societies.

The alarm went off within the industry this month when more than 5,000 people signed an open letter urging a pause in AI development, warning that if researchers do not pull back from this “out-of-control race,” governments should step in. A day later, Italy became the first Western country to temporarily ban ChatGPT.

What is clear now is that with the launch of hugely influential text and image generation models such as GPT-4, the risks and challenges these systems pose have come into sharper focus. The open letter, penned by the Future of Life Institute, cautioned that AI systems with “human-competitive intelligence” could become a significant threat to humanity. Among the risks is the possibility of AI outsmarting humans, rendering us obsolete, and taking control of civilization.

That is where the question of regulating AI comes into play: a necessary, but by no means easy, feat. The battle for regulation has often pitted governments and large technology companies against one another, and it appears the same dynamic may play out in regulating AI and the tools built on the technology.

China has carved out regulations for AI tools

It is unsurprising that Europe and China would be the first to chart the path of AI regulation. The Italian data protection authority recently issued a temporary ban on ChatGPT while it scrutinizes whether the generative AI chatbot complies with privacy regulations.

Italy opened an investigation into OpenAI, the company behind the massively popular chatbot, citing data privacy concerns after ChatGPT suffered a data breach involving user conversations and payment information. Italy’s decision was followed by the European Consumer Organisation (BEUC) calling on all authorities to investigate all major AI chatbots.

China, meanwhile, despite ChatGPT being inaccessible there, has unveiled a new set of draft rules targeting ChatGPT-like services. According to the proposed regulation published by the Cyberspace Administration of China (CAC) on April 11, companies that provide generative AI services and tools in China must prevent discriminatory content, false information, and content that harms personal privacy or intellectual property.

In short, providers should avoid various forms of discrimination, fake news, terrorism, and other anti-social content. If any banned content is discovered or reported, providers must retrain their models within three months to prevent a recurrence. The draft regulations also set out detailed requirements for the manual tagging or labeling of data used to train AI models.

No other country has yet developed regulations targeting AI tools, but China’s speed is not surprising given the government’s stance on data privacy. Violations of the rules can result in fines of up to 100,000 yuan (approximately US$14,520) and, worse, service termination. The draft regulations are open for public comment until May 10.

Some experts told Bloomberg that China would probably also bar foreign AI services, such as those from OpenAI or Google, as it did with American search and social media offerings. Separately, the US is considering walking down the same path as China and Italy on AI tools like ChatGPT.

On the same day the Chinese regulator published its draft rules, the Biden administration said it was seeking public comments on potential accountability measures for AI tools and systems.

Reuters reported that the National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, wants to know whether there are measures that could be put in place to provide assurance “that AI systems are legal, effective, ethical, safe, and otherwise trustworthy.”

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” said NTIA Administrator Alan Davidson.