President Biden today issued an executive order outlining the federal government’s first regulations on AI.
The order requires developers of large AI systems to perform safety tests before selling to the government and requires federal agencies to continuously monitor and evaluate their deployed AI. It also directs the government to develop standards
for companies to label AI-generated content with watermarks.
New AI Regulatory Oversight: What You Need to Know
With AI evolving at breakneck speed and Congress gridlocked, the Biden administration has established some regulatory oversight:
- This is a regulatory baseline. AI regulation globally is still nascent, and this executive order aims to set a baseline for future rules: “It’s the next step in an aggressive strategy to do everything
on all fronts to harness the benefits of AI and mitigate the risks,” stated Bruce Reed, White House deputy chief of staff.
- The federal government is setting standards. The White House is using the power of federal purchasing to incentivize AI developers to build safer, more secure systems: “This
is an important first step and, importantly, executive orders set norms,” stated Georgetown’s Lauren Kahn.
- Is Congress next? This executive order’s impact is limited and will likely face political and legal challenges; any substantive regulation must come from Congress: “There’s a limit to what you can
do by executive order,” stated Sen. Chuck Schumer (D-NY). “They’re doing a lot regulatorily, but everyone admits the only real answer is legislative.”
READ: Exploring the Business Risks and Challenges of ChatGPT
3 Takeaways on the Biden AI Executive Order
IANS Faculty member Jake Williams noted that while it is significant that the AI executive order regulates foundation models, most organizations
won't be training foundation models. “This provision is meant to protect society at large and, for most organizations, will have minimal direct impact,” he added.
Here are Jake’s top three takeaways from the executive order:
- It emphasizes detecting AI-generated content and establishing measures to ensure content authenticity. While this will likely appease many in government who are profoundly concerned about deepfakes, as a practical matter,
generation technologies will always outpace detection technologies. Furthermore, many AI detection systems require a level of privacy intrusion that most people would find unacceptable.
- The risk of using generative AI for biological material synthesis is very real. Early ChatGPT boosters were quick to note the possibility of using the tool to “brainstorm” new drug compounds, as if this could replace
pharmaceutical researchers (or as if they weren't already using more specialized AI tools). Using generative AI to synthesize new biological mutations without any understanding of the consequences is a genuine risk, and it's encouraging to see federal funding tied to the newly proposed AI safety standards.
- Perhaps the most significant contribution of the executive order is its dedicated funding for research into privacy-preserving AI technologies. An emphasis on privacy and civil rights in AI use permeates the order. At a societal level,
the largest near-term risk of AI technologies is how they are used and what tasks they are entrusted with. The executive order makes it clear: privacy, equity, and civil rights in AI will be regulated. In the startup world of “move fast
and break things”, where technology often outpaces regulation, it sends a clear message about where startups should expect more regulation in the AI space.
How IANS Faculty Expertise Benefits You
Cybersecurity today faces a myriad of complex challenges, and the IANS Faculty will help you make informed security decisions that protect your business. By focusing on the AI takeaways in this Faculty piece, you can strengthen both your organization’s
security posture and your own.
Whether you need guidance on program direction, a tie-breaking opinion on architectural considerations, tool implementation advice, a comprehensive security assessment, a penetration test, or mapping controls to a regulatory standard, we are a trusted
partner to provide the best decision support for your security team.
Our mission is to help you make better, faster decisions, grow professionally, and stay compliant. Get in touch with IANS to learn more about how we
can help move your security program forward.
Although reasonable efforts will be made to ensure the completeness and accuracy of the information contained in our blog posts, no liability can be accepted by IANS or our Faculty members for the results of any actions taken by individuals or firms in connection with such information, opinions, or advice.