By Emma Kuruppacherry
On Friday, July 21st, seven leading AI companies came to the White House to form an agreement on basic guidelines for developing AI. These companies were Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. Each signed a voluntary agreement covering several commitments: security testing of their programs by internal and external experts prior to release, labeling AI-generated content with watermarks, publicly reporting the capabilities and limitations of their AI systems, and researching potential risks such as bias, discrimination, and invasion of privacy.
With mounting pressure from governments and the public alike for stronger controls, many in the AI industry were happy to comply with these standards. Sir Nick Clegg, president of global affairs at Meta, said in a statement, “We are pleased to make these voluntary commitments alongside others in the sector. They are an important first step in ensuring responsible guardrails are established for AI, and they create a model for other governments to follow.” Similarly, President Biden said in his remarks on Friday, “We must be clear-eyed and vigilant about the threats emerging technologies can pose – don’t have to but can pose – to our democracy and our values.”
Fear of the dangers AI technology could bring remains a growing concern among the American public, especially with the upcoming 2024 presidential election, which many fear will bring a greater spread of disinformation and misinformation. The White House is also considering how to restrict competitors’ access to AI programs and their components while keeping up in the technology race.
But creating these guidelines comes with several challenges. Lawmakers have long struggled to write laws that keep pace with rapidly advancing technology, as they did with social media. A presidential executive order is expected to place restrictions on advanced semiconductors and on exporting large language models; however, the Biden administration is wary of enforcing anything so strict. Since the agreement signed Friday was purely voluntary, officials worry that too many regulations will scare off AI companies. Even the current guidelines are extremely vague, leaving companies to interpret them on their own. For example, one of the commitments is to enforce strict cybersecurity around the language models used to build AI programs, but there is no specificity as to what that entails; companies are free to choose their own means of fulfilling the agreement. The goal of these guidelines is to strike a balance between protecting consumers and staying ahead of competitors.
One of the more prominent ideas is to use watermarks to identify AI-generated content. European regulators are also set to adopt AI regulations later in the year, encouraging American legislators to do the same. EU Commissioner Thierry Breton and OpenAI CEO Sam Altman discussed the topic during Breton’s visit to San Francisco in June; in a tweet, Altman said he would “love to show” what OpenAI was doing with watermarks soon.
Other proposed bills include creating a federal agency to oversee the AI industry, imposing data privacy requirements, and requiring companies to obtain licenses before releasing their AI technologies. However, given the wide scope of AI issues, Congress has yet to agree on anything.
Looking forward, the White House hopes to act proactively in managing risks and to help companies develop their AI programs responsibly. A statement from the Biden administration called for industry to ensure that “innovation doesn’t come at the expense of Americans’ rights and safety.”
Photo Credit: AP Photo/Manuel Balce Ceneta