Meta's Open Source AGI: Llama 3 Competes with Closed Models

Meta has officially escalated the global race for artificial intelligence supremacy by forming a new product group dedicated to building superintelligence. This move, announced by CEO Mark Zuckerberg, formalizes a strategic pivot that has been underway for over a year, combining the company’s fundamental research arm with its product teams under a singular, ambitious goal. Backing this declaration is a colossal investment in hardware, part of Zuckerberg’s plan to accumulate one of the world’s largest AI training infrastructures. This development solidifies the competitive landscape, positioning Meta’s distinct open-source philosophy directly against the closed, proprietary systems of its AGI-race rivals OpenAI and Google, and signals that the battle for the future of intelligence is entering a new, capital-intensive phase.
Key Points
• Meta’s infrastructure plan includes acquiring approximately 350,000 NVIDIA H100 GPUs by the end of 2024, creating a compute platform with power equivalent to nearly 600,000 H100s.
• The open-source Llama 3 70B model achieves an 82.0 score on the MMLU benchmark, demonstrating performance competitive with closed models like Google’s Gemini Pro 1.0.
• Research from the International Energy Agency (IEA) indicates data center energy consumption could double by 2026, with AI as a primary driver, highlighting a critical sustainability challenge for all AGI efforts.
• Meta’s Chief AI Scientist, Yann LeCun, advocates for “World Models,” an alternative to the dominant scaling approach, indicating an internal debate on the best path to true machine reasoning.
FAIR Fusion: When Research Meets Production
Meta’s pursuit of superintelligence is built upon a significant organizational and infrastructural foundation laid over the past year. A critical step was the merger of its long-standing Fundamental AI Research (FAIR) group with its product-focused GenAI team. This restructuring was explicitly designed to create tighter feedback loops, aligning the company’s exploratory research talent directly with the capital-intensive mission of building Artificial General Intelligence (AGI).
This strategic consolidation is powered by a compute buildout of staggering proportions. Zuckerberg announced plans to create a compute platform with power equivalent to nearly 600,000 NVIDIA H100s by the end of 2024. This immense investment, costing billions, is a direct response to the scaling requirements for training next-generation foundation models and ensures Meta has the raw power to compete with the infrastructures supporting OpenAI and Google.
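The scale of that figure is easier to grasp with some back-of-envelope arithmetic. The sketch below uses only the numbers cited above (roughly 350,000 H100s purchased directly, a total fleet worth roughly 600,000 H100-equivalents); the per-GPU throughput constant is an assumed datasheet value, and the composition of the non-H100 remainder is not specified by Meta.

```python
# Back-of-envelope sketch of Meta's stated 2024 compute target.
# Figures from the article: ~350,000 H100s acquired directly, with the
# total fleet equivalent to ~600,000 H100s once other accelerators
# (exact mix unspecified) are converted into H100-equivalents.

H100_BF16_TFLOPS = 989  # assumed peak dense BF16 throughput per H100

direct_h100s = 350_000
total_h100_equivalents = 600_000

# Compute contributed by non-H100 hardware, expressed in H100-equivalents.
other_equivalents = total_h100_equivalents - direct_h100s

# Aggregate peak throughput of the whole fleet, in exaFLOPS.
peak_exaflops = total_h100_equivalents * H100_BF16_TFLOPS * 1e12 / 1e18

print(f"Non-H100 hardware: ~{other_equivalents:,} H100-equivalents")
print(f"Aggregate peak compute: ~{peak_exaflops:,.0f} exaFLOPS (BF16, dense)")
```

Under these assumptions the fleet's theoretical peak sits near 600 exaFLOPS of dense BF16 compute, which is the order of magnitude frontier-model training now demands.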
Open Weights, Open Innovation
Meta’s most significant differentiator in the AI race is its unwavering commitment to open source. The release of the Llama model series, culminating in Llama 3, is the cornerstone of Meta’s open-source AGI strategy. The Llama 3 70B model’s performance, achieving a score of 82.0 on the MMLU benchmark and 8.5 on MT-Bench, demonstrates that it is highly competitive with contemporary closed models like Google’s Gemini Pro 1.0 and Anthropic’s Claude 3 Sonnet.
This approach stands in sharp contrast to the walled gardens of OpenAI and Google. By making its powerful models publicly available on platforms like Hugging Face, Meta accelerates industry-wide innovation and crowdsources research, safety testing, and application development. While competitors argue that frontier models must be kept closed for safety, Meta contends that openness leads to more robust and secure AI by allowing a global community to scrutinize and improve the technology, a debate central to the future of AI development.
World Models vs. Brute Force
While Meta scales its Llama models, a key internal debate reflects a wider challenge in the AI field. The dominant “scaling hypothesis”—that bigger models and more data will inevitably lead to AGI—is the primary driver at OpenAI and Google. However, Meta’s own Chief AI Scientist, Yann LeCun, is a prominent skeptic of this approach, arguing that current architectures are “doomed” because they lack true reasoning and planning capabilities.
LeCun champions alternative architectures like his Joint Embedding Predictive Architecture (JEPA), a type of “World Model” designed to learn abstract representations of the world to enable reasoning. This internal tension between scaling what works and researching what’s necessary for the next leap is critical. Furthermore, the entire endeavor faces monumental hurdles. The pursuit of superintelligence confronts profound safety questions, with one 2023 survey of AI researchers finding a median estimate of a 10% chance of an AI-related existential catastrophe. The difficulty of this alignment problem is underscored by competitors’ struggles, such as the recent dissolution of OpenAI’s team focused on long-term AI risks. Simultaneously, the IEA warns that the energy consumption for these massive data centers is on an unsustainable trajectory, presenting a severe resource constraint on the road to AGI.
Silicon Chessboard: The AGI Endgame
Meta’s formation of a superintelligence group is a definitive statement of intent. The company is leveraging its unique strategy—marrying a colossal compute arsenal with a philosophy of open-source development—to challenge the established leaders in the race for AGI, who are pushing forward with their own powerful agents like OpenAI’s recently unveiled GPT-4o. Its success hinges on its ability to attract elite talent and solve fundamental architectural problems that may lie beyond simple scaling. With the technical, ethical, and resource battle lines clearly drawn between open and closed development, which path will ultimately define the trajectory toward advanced artificial intelligence?