Google, Microsoft, OpenAI, and Anthropic are coming together to form an industry body focused on ensuring the safe and responsible development of frontier AI models.
The group, named the Frontier Model Forum, will draw on the collective technical and operational expertise of its member companies to benefit the AI ecosystem through technical evaluations and benchmarks, as well as a public library of solutions covering best practices and standards.
Some of the core objectives of the Forum involve advancing AI safety research to minimize risks and enable independent evaluations of capabilities. In addition, the Forum will collaborate with policymakers, academics, civil society, and companies to exchange knowledge about trust and safety risks.
Brad Smith, Vice Chair & President of Microsoft, said: “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
The news about the Forum follows the pledge that these four firms, along with Amazon and Meta, made to the Biden administration to have their AI systems undergo third-party testing before public release and to distinctly label AI-generated content.
Membership criteria
To become a member of the Forum, an organization must develop and deploy frontier models as defined by the Forum, demonstrate a strong commitment to frontier model safety, and be willing to contribute to the group's efforts through joint initiatives.
The Forum defines frontier models as large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models and can perform a wide variety of tasks.
Anna Makanju, Vice President of Global Affairs, OpenAI, said: “Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies, especially those working on the most powerful models, align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”
Future plans
The group plans to serve as "one vehicle for cross-organizational discussions and actions on AI safety and responsibility." It will focus on three areas over the coming year: identifying best practices, advancing AI safety research, and facilitating information sharing among companies and governments.
According to the official announcement, the group will consult with civil society and governments in the coming weeks on the design of the Forum and on "meaningful ways" to collaborate.