Meta vs OpenAI: Zhao Hire Escalates Superintelligence Race

In a move that sends a clear signal across the artificial intelligence landscape, Meta has appointed former OpenAI researcher Shengjia Zhao to lead its newly formed AI Superintelligence unit. The development arrives just months after the high-profile collapse of OpenAI’s own Superalignment team, positioning Meta to capitalize on the fallout. The hire represents a calculated escalation in the AI talent wars and a doubling-down on Meta’s open-source philosophy as the primary path toward advanced AI. The maneuver is not merely about acquiring talent; it is a direct challenge to the closed, proprietary models that have dominated the race toward artificial general intelligence, framing the competition as a fundamental clash of development ideologies.
Key Points
• Meta’s recruitment of Shengjia Zhao exemplifies the escalating “AI talent wars,” where top researchers command compensation packages exceeding $1 million, and CEOs like Mark Zuckerberg are personally involved in recruitment.
• The move highlights a strategic divergence in AI development, pitting Meta’s open-source Llama 3 model against OpenAI’s proprietary, closed-architecture approach for models like GPT-4.
• Meta is launching its superintelligence initiative in the wake of the OpenAI superalignment collapse, where OpenAI’s team disbanded amid internal disagreements over prioritizing safety versus “shiny products.”
• Shengjia Zhao’s documented expertise in model efficiency, compression, and large-scale training directly aligns with the immense computational challenges of building superintelligent systems, as detailed by the Stanford AI Index.
The Million-Dollar Talent Chess Match
Meta’s appointment of Shengjia Zhao is the latest and most pointed move in an industry-wide talent acquisition frenzy. The competition for elite AI researchers has driven compensation to extraordinary levels, with recruiting firm Heidrick & Struggles noting that AI vice presidents can command packages up to $5 million. As reported by Reuters, even researchers with just a few years of experience can expect annual compensation well over $1 million, fueled by intense demand for the small pool of individuals who have built large-scale models.
This aggressive environment is one Meta has actively cultivated. Reports from The Verge have documented CEO Mark Zuckerberg’s personal involvement in poaching talent, sending direct emails to researchers at competitors like Google’s DeepMind. By framing Meta as the premier destination for ambitious builders, Zuckerberg has established a clear pattern of behavior. The hiring of a key researcher from OpenAI is a direct continuation of this strategy, underscoring the immense value Meta places on securing the architects of future AI systems.
Open Weights, Open Future
The significance of Zhao’s move extends beyond talent acquisition into the core philosophical debate shaping AI: open versus closed development. Zhao leaves an organization that has become increasingly proprietary. As WIRED notes of GPT-4, OpenAI has disclosed “almost nothing about the data used to train it or the specifics of its architecture.” That secrecy makes it difficult for external experts to assess the flagship model’s risks or verify its capabilities.
In stark contrast, Meta has staked its AI future on an open approach. In its announcement for Llama 3, the company asserted that “openness leads to better, safer AI.” Zhao’s expertise, evidenced by his Google Scholar profile, focuses on model efficiency, quantization, and compression: skills essential for managing the colossal computational costs of frontier models. The Stanford AI Index 2024 highlights these soaring costs, estimating GPT-4’s training at $78 million and Google’s Gemini Ultra at a staggering $191 million in compute. Zhao’s background suggests that Meta’s superintelligence effort is not just about building bigger models, but about building them more efficiently and sustainably, a goal well suited to an open, collaborative ecosystem.
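To make the efficiency angle concrete: quantization shrinks a model by storing its weights at lower numeric precision, trading a small rounding error for a large memory saving. The sketch below is a generic illustration of one common scheme (symmetric per-tensor int8 quantization in NumPy), not a description of Zhao’s research or Meta’s actual training stack.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map float weights onto
    the integer range [-127, 127] using a single scale factor."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)  # stand-in weight matrix

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Storage drops 4x (int8 vs. float32); the per-weight rounding
# error is bounded by half the quantization step, scale / 2.
max_err = np.max(np.abs(w - w_hat))
```

At billion-parameter scale, the same 4x reduction is the difference between a model that fits on commodity hardware and one that does not, which is why compression research matters so much to the cost figures cited above.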
From Collapse to Construction
Meta is launching its superintelligence unit in the shadow of a major competitor’s public failure. In July 2023, OpenAI announced its Superalignment team, a high-profile effort to solve AI control problems, committing 20% of its secured compute to the task. Less than a year later, that team was gone.

As TIME reported, the team disbanded following the resignations of its leaders, Ilya Sutskever and Jan Leike. Upon his departure, Leike publicly stated that at OpenAI, “safety culture and processes have taken a backseat to shiny products.” This internal conflict between advancing capabilities and ensuring safety offers a critical lesson. The contrast between Meta’s superintelligence push and OpenAI’s superalignment effort is now one of a fresh start versus a troubled past. By forming its own group now, Meta can structure its long-term research to learn from OpenAI’s missteps, potentially integrating safety and alignment as foundational principles of its open development process rather than as a separate, and ultimately conflicting, priority.
The Efficiency Gambit in AGI’s Marathon
Meta’s formation of an AI Superintelligence unit under Shengjia Zhao is a multifaceted strategic play. It is simultaneously an aggressive talent acquisition, a firm bet on its open-source philosophy, and a direct response to a rival’s organizational stumble. This move acknowledges the reality that the pursuit of AGI is a marathon fueled by immense capital and computational power, where strategic direction and research culture are as vital as technical prowess. The central question this development poses is clear: can a strategy rooted in openness and efficiency outmaneuver a closed, product-first approach in the long and arduous race to build truly general artificial intelligence?