| By Xuan Zhong |
Advances in AI continue to shape government policy worldwide. Developed countries such as the United States are tightening controls on AI. The Biden Administration announced an executive order on AI on October 30th. On the same day, the Group of Seven agreed to develop a voluntary code of conduct for companies developing advanced AI systems. The following day, US Vice President Harris announced that an AI Safety Institute would be established. This series of initiatives aims to strengthen regulation of the research, use, and promotion of AI technology and to improve its safety.
In response to this rapidly developing technology, countries are realizing that existing legal norms are far from adequate to guard against the risks it poses. Those risks to society's development are comprehensive and far-reaching. Governments, individuals, and other users are at a clear disadvantage when it comes to protecting their privacy from AI. Abuses such as AI face-swapping and AI voice imitation lower the cost for criminals of impersonating others, and they bring undeserved harm to the victims of information theft.
Judging from the policies enacted so far, governments have mainly built AI management mechanisms by urging AI technology companies to strengthen self-regulation. AI companies should proactively inform users of the scope in which their information is used, clarify how their products and business practices operate, and explain how user information is kept secure. Furthermore, the U.S. government should continue establishing agencies and organizations to test AI systems and to provide guidance on AI safety. This could be one approach for governments seeking to manage advanced technologies.
Photo Credit: Al Jazeera