ChatGPT: Did the CEO of OpenAI voluntarily call for AI to be regulated? Here's why


  • Sam Altman, the CEO of OpenAI, testified for the first time since ChatGPT exploded in popularity.
  • Senators appeared to accept Altman’s warnings that AI could “cause significant harm to the world” and his suggestion that a new agency could set rules.
  • Altman admitted that OpenAI is concerned about the impact AI could have on elections.

You read that right – the CEO of OpenAI appeared before Congress earlier this week and told US lawmakers that regulating artificial intelligence (AI) is essential. “If this technology goes wrong, it can go quite wrong,” Sam Altman said in his first appearance before Congress on May 16.

The CEO of OpenAI, the company behind ChatGPT, the sensational generative AI chatbot, testified before a US Senate committee on Tuesday. He was the latest Silicon Valley figure to come under the congressional spotlight; however, unlike other CEOs, from Facebook’s Mark Zuckerberg to TikTok’s Shou Zi Chew, Altman was welcomed far more warmly and earnestly.

Altman spoke openly about the new technology’s possibilities – and its pitfalls. Somewhat surprisingly, the senators present appeared to accept his warnings more willingly than not. The CEO of OpenAI reiterated that AI could “cause significant harm to the world,” and pleaded for regulatory guardrails around the emerging technology.

What led to the testimony of the CEO of OpenAI?

WASHINGTON, DC – MAY 16: Sen. Cory Booker (D-NJ) asks questions as Samuel Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16, 2023, in Washington, DC. The subcommittee held an oversight hearing to examine AI, focusing on rules for artificial intelligence. (Photo by Win McNamee / Getty Images via AFP)

Altman was present at a Senate Judiciary subcommittee hearing with a simple but tricky question at the top of the agenda: what is AI? After all, to regulate technology, especially something as complex and fast-moving as AI, Congress must first understand it.

So having the CEO of OpenAI, the Microsoft-backed startup behind ChatGPT, offer some insights was the lawmakers’ best shot. To top it off, it was the Senate’s first major hearing on AI. “As this technology advances, we understand people are anxious about how it could change our lives. We are, too,” the OpenAI CEO said at the hearing.

South Carolina Republican Lindsey Graham compared AI technology to a nuclear reactor, which requires a license and must answer to a regulator, and other senators echoed the comparison.

“I would form a new agency that licenses any effort above a certain scale of capabilities — and can take that license away and ensure compliance with safety standards,” Altman said, according to a Bloomberg report; he added that such a US authority could shape the global consensus on AI regulation. 

In response, the lawmakers present agreed that Congress moves too slowly to keep up with the pace of innovation, especially in AI, and that developing rules for such a dynamic industry is best left to a new agency.

Senator Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law, argued that AI companies should be required to test their systems and disclose known risks before releasing them. Blumenthal also expressed concern about future AI systems destabilizing the job market.

Altman agreed, though with a more optimistic take on the future of work. When pressed on his own worst fear about the technology, he mostly avoided specifics, conceding only that the industry could cause “significant harm to the world” and that “if this technology goes wrong, it can go quite wrong.”

However, he later proposed that the new regulatory agency impose safeguards to block AI models that could “self-replicate and self-exfiltrate into the wild.” Altman also admitted that OpenAI is concerned about the impact the technology could have on elections. “This is not social media. This is different. So the response that we need is different,” he said.

When the discussion turned to whether companies like OpenAI should halt the development of generative AI tools, the senators, like the hearing’s witnesses, said pausing innovation in the US would be unwise while competitors like China pursue their own AI advances.

Altman did, however, make it clear that OpenAI has no plans yet to push forward with the next iteration of its large language model-based tools. “We are not currently training what will be GPT-5,” he said, adding that there are no plans to start in the next six months.