Anthropic, Google, Microsoft, and OpenAI jointly form the “Frontier Model Forum” to promote safer and more responsible artificial intelligence development
Under the aegis of the White House, a consortium of companies invested in artificial intelligence, including Microsoft, Google, Meta, OpenAI, Amazon, Anthropic (an AI start-up founded by former members of OpenAI), and Inflection (an AI start-up backed by NVIDIA and other industry leaders), has pledged to ensure the safety of AI technology. Four of these companies, Anthropic, Google, Microsoft, and OpenAI, are now collaborating to establish an organization called the “Frontier Model Forum”, intended to foster safer, more responsible development of frontier AI models.
During prior discussions with President Biden, these seven technology companies, each invested in AI development, committed to adhering to AI safety principles. In short, they aim to ensure AI is developed responsibly, so that it does not harm public safety, individual rights, or democratic values.
The nascent “Frontier Model Forum” plans to establish an advisory council, organizational bylaws, and operational funding, and to formulate a set of core principles, with the goal of completing these tasks within the next year. The effort aims to advance research and development of safer AI technology, working in close collaboration with policymakers, academic institutions, and civil-society organizations to build AI technologies that benefit society.
The “Frontier Model Forum” has also outlined membership criteria, including a mandatory commitment to developing frontier models safely and to ensuring that AI delivers broad societal benefits.
As part of previous agreements reached with the White House, these tech companies will ensure that their AI systems can be independently tested by outside experts for potential adverse effects. They will also continually strengthen their cybersecurity practices and encourage third parties, whether organizations, teams, or individuals, to report security vulnerabilities. In addition, they must ensure that AI is neither biased nor misused, and must proactively study the societal risks AI may pose.
Other commitments include building trust and sharing safety information with other industry players and government agencies. The companies also pledge to label AI-generated video and audio content so that others can more easily identify it, and to apply cutting-edge AI technology to solving societal problems.
In parallel, the White House has announced that future AI must adhere to three guiding principles: safety, security, and trust, and must be developed in a responsible manner.