AI regulations in the making? Biden administration finally considers rules to govern ChatGPT and Bing Chat

US government regulations on AI
(Image credit: Bing Image Creator)

What you need to know

  • The Biden administration has begun examining whether checks are needed on artificial intelligence.
  • There are growing concerns that AI could be implicated in discrimination or spread harmful information.
  • The Commerce Department on Tuesday put out a formal public request for comment on whether potentially risky new AI models should go through a certification process before they are released. 

The US government, like most prominent institutions, is behind on tech, but it is finally coming around to the idea that maybe AI needs some regulation before bad things happen.

Announced today, the US Commerce Department is formally requesting public comment on whether “potentially risky new AI models” should go through a certification process before they are released. The Commerce Department calls these “accountability measures.”

OpenAI’s ChatGPT launched late last year and quickly became an internet hit, reaching one million users within its first week and an estimated 100 million by January 2023, making it one of the fastest-growing consumer apps ever. Just a month later, Microsoft unveiled Bing Chat, its AI “copilot for the web,” which leverages ChatGPT technology along with Microsoft’s own to bolster its search and knowledge engine. Since then, Microsoft has been on a tear, releasing AI features in its Start and Bing apps, Microsoft Edge (desktop and mobile), the web, SwiftKey, and Skype, with further plans for Office, Teams, and Windows.

GPT-5, expected to be another major evolution of the technology behind ChatGPT, is rumored to be due by the end of 2023.

The Wall Street Journal, which first reported today's news from the Commerce Department, noted that comments from the public will be accepted for the next 60 days. Those comments will then inform the department's advice to the president; the agency's role is to advise policymakers rather than to write or enforce regulations itself.

President Biden discussed AI with an advisory council of scientists last week. That council, the National AI Advisory Committee (NAIAC), was created nearly a year ago by the U.S. Department of Commerce and has 27 members, including representatives from Microsoft and Google.

Microsoft and OpenAI have been quite open about the need for regulation, with the former outlining its Responsible AI mandate, in which transparency and safety are imperative. However, a recent report claimed Microsoft laid off one of the teams responsible for guiding other groups within the company on responsible AI. OpenAI has also welcomed regulation, noting, “Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take.”

Recently, Italy banned ChatGPT over data-privacy concerns, and Germany was reportedly considering similar action.

Windows Central’s Take

Should Bing Chat be regulated? (Image credit: Daniel Rubino)

The US government is not only typically slow to react to new technology, it also tends to get things wrong (partly due to heavy lobbying by companies with vested interests). That said, this step toward official regulation is needed before rapidly advancing AI gets too far ahead of everyone.

Recently, Elon Musk and others signed an open letter calling for a six-month pause on AI development. Honestly, that letter was the dumbest thing I have ever read. I get the spirit behind it, as I agree AI needs to be reined in, but the idea that for-profit companies will “pause” AI is absurd. That’s simply not how capitalism works, and it’s not as if China, Russia, or other countries were going to abide by such a measure. (There is also the irony of the man who rolled out “Full Self-Driving” cars with no government oversight, resulting in actual deaths, now calling for caution over the risks of AI.)

Of course, what’s needed long term is for the UN to get involved as well. All countries need to reach some accord on AI systems, regulation, and control, and put measures in place to prevent significant problems and react to the ones that will inevitably crop up in the coming years. For instance, there should be a general agreement not to link AI to weapons of mass destruction, for what I think are obvious reasons. Likewise, critical infrastructure could throw a country into chaos were it shut down or “taken over,” as with the ongoing threats to the computer systems behind the US power grid.

Without global cooperation and consensus, trying to limit AI on a country-by-country basis will be problematic. You can’t put this genie back in a bottle.

Daniel Rubino

Daniel Rubino is the Editor-in-chief of Windows Central. He is also the head reviewer, podcast co-host, and analyst. He has been covering Microsoft since 2007, when this site was called WMExperts (and later Windows Phone Central). His interests include Windows, laptops, next-gen computing, and watches. He has been reviewing laptops since 2015 and is particularly fond of 2-in-1 convertibles, ARM processors, new form factors, and thin-and-light PCs. Before all this tech stuff, he worked on a Ph.D. in linguistics, watched people sleep (for medical purposes!), and ran the projectors at movie theaters because it was fun.