By Loc Le
U.S. President Joe Biden took a significant step toward mitigating the potential risks of AI by issuing a new executive order on Monday, October 30th. The order aims to ensure that AI is developed and used safely and responsibly by creating new standards for AI safety and security, protecting Americans’ privacy, advancing equity and civil rights, standing up for consumers, patients, and students, supporting workers, promoting innovation and competition, and advancing American leadership abroad. President Biden declared that the move was necessary “to realize the promise of AI and avoid the risk,” warning that “in the wrong hands AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run.”
Executive Order Guides AI Usage
Given the rapid growth and remarkable capabilities of AI, the Biden administration wants to ensure that the technology is not put to malicious uses such as deepfakes, cyberattacks, or the development of weapons. Deepfakes, which Biden cited as an example, are currently among the more common risks associated with AI, since the technology can convincingly alter the audio and video of almost any content to “smear reputations, spread fake news and commit fraud.” Under the order, the Department of Commerce will therefore help develop guidelines for authenticating official content and for labeling AI-generated content with watermarks. This would help Americans verify that communications they receive from the government are legitimate rather than fabricated, and it would set an example for developers, businesses, and other governments to follow.
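To make the authentication idea concrete, here is a minimal, purely illustrative Python sketch of how a publisher could attach a cryptographic tag to official content so that recipients can detect tampering. This is not the mechanism the order or the Department of Commerce prescribes; the key and function names are hypothetical, and real media watermarking, which must survive re-encoding and editing, is considerably harder than this simple digest check.

```python
# Illustrative sketch only: a toy HMAC-based "authenticity tag" for official
# content, loosely analogous to the authentication goal in the order.
# The signing key and all names here are hypothetical.
import hmac
import hashlib

SECRET_KEY = b"official-agency-signing-key"  # hypothetical key held by the publisher

def sign_content(content: bytes) -> str:
    """Produce an authentication tag for official content at publication time."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content still matches the tag it was published with."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"Official statement: the executive order takes effect today."
tag = sign_content(original)

print(verify_content(original, tag))               # True: content is authentic
print(verify_content(b"Altered statement.", tag))  # False: content was tampered with
```

Even this toy example captures the core principle behind the order’s guidance: give recipients a reliable way to distinguish genuine official content from fabricated or altered material.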
The Biden administration also wants powerful AI systems to be thoroughly tested, so that they are safe, secure, and trustworthy before companies release them for public use. Invoking the Defense Production Act, the order will require any company developing an AI system that could pose a serious risk to national security, economic security, or public health to share its safety test results and other critical information with the government. The standards for these safety tests will be set by the National Institute of Standards and Technology, and the Departments of Energy and Homeland Security will apply them to address the threats AI systems pose to “critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.”
Beyond the Executive Order, Concerns Remain
While it is encouraging that the government is actively addressing the risks associated with AI, reactions to the executive order have been mixed. For instance, Ziven Havens, policy director of the Bull Moose Project, believes the order is a “decent first attempt at AI policy” and that most of the new rules “are crucial in the future of this new technology.” However, Havens is also concerned about how long it will take to fully develop the guidelines, warning that “falling behind in the AI race due [to] a slow and inefficient bureaucracy will amount to total failure.” Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation, likewise cautioned that the order, although comprehensive, may prove ineffective, since “there is only so much that can be done in an executive order anyway, and it is necessary for Congress to engage with the White House to make some of this into law.”
The Path Ahead
Though arguably long overdue, President Biden’s executive order on AI represents a pivotal move toward the healthy development and use of the technology. By setting safety standards, enhancing transparency, and emphasizing rigorous testing, the order is a step in the right direction to help protect the privacy and security of Americans as AI continues to evolve and become an integral part of our future.