Speaking at the United Nations General Assembly on Tuesday, U.S. President Joe Biden outlined his plan to work with competitors around the world “to ensure we harness the power of artificial intelligence for good while protecting our citizens from this most profound risk.”
“Emerging technologies such as artificial intelligence hold both enormous potential and enormous peril,” Biden said at the U.N. Tuesday. “We need to be sure they’re used as tools of opportunity, not as weapons of oppression. Together with leaders around the world, the United States is working to strengthen rules and policies so AI technologies are safe before they’re released to the public, to make sure we govern this technology, not the other way around, having it govern us.”
His comments come as U.S. policymakers have endeavored to learn more about how the technology works in order to determine the proper guardrails to protect Americans without stifling positive innovation. The discussion is taking place against the backdrop of an intense competition with China, which is also seeking to be a world leader in the technology.
On Wednesday, Senate Majority Leader Chuck Schumer, D-N.Y., hosted prominent tech CEOs including Tesla and SpaceX’s Elon Musk and Meta’s Mark Zuckerberg, as well as labor and civil rights leaders, to speak with senators about AI as the lawmakers contemplate legislative protections. Following the meeting, Schumer told reporters that everyone in the room agreed that government needs to play a role in regulating AI.
How exactly that will look is still up for debate. Lawmakers differ on which body should regulate AI, as well as how light a regulatory touch policymakers should apply. Schumer warned it would be counterproductive to move too fast, pointing to the European Union, which has moved quickly to create its AI Act.
But, Schumer said, “on a timeline, it can’t be days or weeks, but nor should it be years. It will be in the general category of months.”
In the meantime, several agencies have asserted their ability to rein in the abuses of AI with existing legal power. And the National Institute of Standards and Technology (NIST) in the Department of Commerce released a voluntary risk management framework for AI earlier this year.
The Biden administration has also secured voluntary commitments from leading AI companies to test their tools for security before they release them to the public.