Meta and IBM Join Forces to Advocate "Open Science" Approach to AI

Meta has joined forces with IBM to unveil the AI Alliance, advocating for an "open-science" approach to AI development. This places them at odds with counterparts such as Google, Microsoft, and OpenAI, who lean towards a more closed approach to AI technology.  

The AI Alliance's mission revolves around tapping into "pre-existing collaborations" to identify new opportunities for developing open AI resources that cater to the diverse needs of both business and society. The Alliance plans to establish working groups, a governing board, and a technical oversight committee to address critical areas such as AI trust and validation metrics, hardware infrastructure for AI training, and open-source AI models and frameworks.

“In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we need the platforms to be open source and freely available so that everyone can contribute to them. Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture,” wrote Yann LeCun, Chief AI Scientist, Meta, in a statement on X.

Complementing rather than duplicating 

The Alliance emphasizes its commitment to complementing existing efforts in the AI landscape, aiming to avoid unnecessary duplication. This move comes amid industry criticism about the potential dangers and disinformation risks associated with open-source AI models.

Diverse membership 

The AI Alliance's membership, numbering around 45 organizations, reflects a diverse array of contributors, including AMD, Intel, CERN, Yale, Imperial College London, and AI startups such as Stability AI and Hugging Face. The Alliance places a strong emphasis on fostering an "open" community and enabling responsible innovation in AI. Key focus areas include ensuring scientific trust, safety, and economic competitiveness. Notably, MLCommons, the engineering consortium behind MLPerf, is a founding member, underscoring the Alliance's commitment to benchmarking AI hardware performance.

What are the main concerns?  

There is widespread debate about whether AI should be developed openly, ensuring broad accessibility, or under a proprietary, closed model. Safety concerns are at the forefront, alongside the crucial question of who stands to profit from advances in AI technology. Open advocates, including IBM senior vice president Darío Gil, champion a non-proprietary approach, envisioning AI as a collaborative, transparent endeavor.

The term "open-source" has its roots in a long-standing practice of building software where the code is freely accessible for examination, modification, and expansion. However, the definition of open-source AI varies among computer scientists, encompassing public availability of different technology components and potential use restrictions. The AI Alliance, spearheaded by Meta and IBM, aims to highlight the future of AI's foundation, emphasizing open scientific exchange, innovation, and the utilization of open-source and open technologies.

The debate gains complexity because OpenAI, the company behind ChatGPT and DALL-E, develops notably closed AI systems despite its name. This raises questions about what truly constitutes open-source AI and fuels the growing public discourse on the benefits and risks of this approach to AI development.

Meta's Chief AI Scientist, Yann LeCun, takes a critical stance on the matter, expressing concern about what he perceives as "massive corporate lobbying" by OpenAI, Google, and Anthropic. LeCun's worry centers on the potential concentration of power over AI technology's development, and on how fearmongering about AI "doomsday scenarios" is being used to push for restrictions on open-source research and development.

For IBM, a longtime supporter of open-source initiatives dating back to its backing of Linux in the 1990s, this dispute is part of a longer-running competition that predates the current AI boom. Chris Padilla, who leads IBM's global government affairs team, frames the opposing perspective as a "classic regulatory capture approach," drawing parallels to Microsoft's historical opposition to open-source programs that could compete with its proprietary software.