US president Joe Biden has announced an executive order that establishes ambitious guidelines on safety and security for artificial intelligence, but political will is still needed to put regulatory teeth and resources behind it
By Jeremy Hsu
30 October 2023
US president Joe Biden announced new guidelines for the safe development of AI
AFP via Getty Images
An executive order on artificial intelligence issued by US president Joe Biden aims to show leadership in regulating AI safety and security – but most of the follow-through will require action from US lawmakers and the voluntary goodwill of tech companies.
Biden’s executive order directs a wide array of US government agencies to develop guidelines for testing and using AI systems, including having the National Institute of Standards and Technology set benchmarks for “red team testing” to probe for potential AI vulnerabilities prior to public release.
“The language in this executive order and in the White House’s discussion of it suggests an interest in being seen as the most aggressive and proactive in addressing AI regulation,” says Sarah Kreps at Cornell University in New York.
It is probably “no coincidence” that Biden’s executive order came out just before the UK government convened its own AI summit, says Kreps. But she cautions that the executive order alone will not have much impact unless the US Congress can produce bipartisan legislation and resources to back it up – something she sees as unlikely during the 2024 US presidential election year.
This follows a trend of non-binding actions by the Biden administration on AI. For example, last year the administration issued a blueprint for an AI Bill of Rights, and it recently solicited voluntary pledges from major companies developing AI, says Emmie Hine at the University of Bologna, Italy.
One potentially impactful part of Biden’s executive order covers foundation models – large AI models trained on huge datasets – if they pose “a serious risk to national security, national economic security, or national public health and safety”. The order uses another piece of legislation called the Defense Production Act to require companies developing such AIs to notify the federal government about the training process and share the results of all red team safety testing.