On July 21, the White House announced that prominent artificial intelligence (AI) companies have committed to developing AI technology that is safe, secure, and transparent. OpenAI, Google, and Microsoft were commended for their commitments, along with Amazon, Anthropic, Meta, and Inflection, which also pledged to prioritize AI safety.
The Biden Administration stressed that these companies are responsible for ensuring the safety of their AI products while harnessing AI's potential and upholding high standards in its development. The goal is to manage the risks associated with AI and guard against potential misuse.
Kent Walker, Google’s president of global affairs, acknowledged that collaboration is essential to AI's success. He said Google was pleased to join other leading AI companies in supporting these commitments, and that the company plans to share information and best practices with them to ensure responsible AI development.
The commitments made by the companies include pre-release security testing for AI systems, sharing best practices in AI safety, investing in cybersecurity and insider threat safeguards, and enabling third-party reporting of vulnerabilities in AI systems. Anna Makanju, OpenAI’s vice president of global affairs, mentioned that policymakers worldwide are currently considering new regulations for advanced AI systems.
In June, bipartisan lawmakers in the United States introduced a bill to create an AI commission to address concerns in the rapidly growing AI industry. The Biden Administration is collaborating with global partners, including Australia, Canada, France, Germany, India, Israel, Italy, Japan, Nigeria, the Philippines, and the United Kingdom, to establish an international framework for AI.
Microsoft, represented by its president Brad Smith, endorsed the White House’s voluntary commitments. The company has independently committed to additional practices that align with the administration's objectives, aiming to expand its safe and responsible AI practices and collaborate with other industry leaders.
Global leaders, including the United Nations secretary-general, have voiced concerns about the potential misuse of generative AI and deepfake technology in conflict zones. In May, U.S. Vice President Kamala Harris led a meeting with AI leaders to lay the groundwork for ethical AI development. The administration also announced a $140 million investment in AI research and development through the National Science Foundation.
These initiatives and commitments signify a collective effort to ensure the responsible development and deployment of AI technology. By prioritizing safety, security, and transparency, these companies and government bodies seek to address concerns and establish best practices for the future of AI.