Seven tech companies, including Microsoft and Google, pledge to build AI responsibly
At the urging of the White House, seven companies have pledged to ensure the safety of AI technology: Microsoft, Google, Meta, OpenAI, Amazon, Anthropic (an AI startup founded by former OpenAI members), and Inflection (another AI startup backed by investors including NVIDIA).
In a meeting with President Biden, these seven firms, all heavily invested in AI development, committed to a set of AI safety principles. In other words, they pledged to develop AI responsibly so as to avoid potential harm to public safety, individual rights, and democratic values.
Under the agreement, the companies will allow independent experts to audit their AI systems for harmful effects. They also committed to maintaining strong cybersecurity practices and to encouraging third-party organizations, teams, and individuals to report security vulnerabilities. In addition, they pledged to guard against bias and inappropriate uses in their AI systems and to anticipate the societal risks AI may pose.
Further commitments include establishing trust mechanisms and sharing safety information with industry peers and government bodies. The companies must also apply identifiable markers to AI-generated content, such as videos and audio, so that others can easily recognize it. Finally, they agreed to apply cutting-edge AI technology to help solve societal problems.
At the same time, the White House stated that the future of AI must rest on three central principles, safety, security, and trust, and must be developed responsibly.