Meta CEO Mark Zuckerberg announced a major overhaul of Meta’s AI efforts in a memo to staff on Monday, including a clutch of significant hires from rival AI companies.
The restructuring comes after Meta has struggled in recent months to stay at the cutting edge of AI. Zuckerberg has largely staked Meta’s future on the rapidly evolving technology, saying the company will spend between $64 billion and $72 billion this year building out data centers to handle AI workloads, up from $28 billion in annual capital spending in 2023.
Zuckerberg’s announcement signals a major strategic shift and an aggressive investment in AI, with the company reportedly spending billions of dollars to secure key AI talent. Here are the key takeaways from Zuckerberg’s memo and the background on the rationale behind Meta’s moves:
- Creation of Meta Superintelligence Labs (MSL): Meta is consolidating all of its efforts to build large AI models—including the teams working on its Llama models, its product teams, its Fundamental AI Research (FAIR) team, and a brand-new unit focused on developing the next generation of cutting-edge AI—under a new division, MSL. MSL will be co-led by ex-Scale AI CEO and cofounder Alexandr Wang, who becomes Meta’s “Chief AI Officer,” and ex-GitHub CEO and AI investor Nat Friedman.
- Strategic Goal: The explicit aim is to build “personal superintelligence for everyone,” Zuckerberg said in the memo. Superintelligence refers to AI that can perform beyond human level at most cognitive tasks.
- Aggressive Talent Acquisition: Meta is poaching top AI talent from rivals such as OpenAI, Google, and Anthropic. To do so, Zuckerberg is offering unprecedented signing bonuses—up to $100 million, according to comments made by OpenAI CEO Sam Altman. In his memo, Zuckerberg announced the hiring of 11 well-respected AI researchers from these other labs. Meta also invested $14.3 billion in Scale AI as part of the deal to bring Wang to Meta, and reportedly invested billions in Friedman’s AI-focused venture capital firm to secure his move.
- Failed Acquisition Attempts: The hiring spree follows rebuffed attempts to acquire key AI startups, including former OpenAI Chief Scientist Ilya Sutskever’s company Safe Superintelligence, as well as Perplexity AI.
- Llama Struggles and Competitive Pressure: Meta’s latest Llama model family, Llama 4, has underperformed expectations. The company faced criticism that it published misleading benchmark figures for Llama 4, designed to make the models appear more competitive than they actually are. The release of Llama 4 Behemoth, the largest—and, according to Meta, the most powerful—model it has produced, has been repeatedly delayed. Meta has yet to debut models with “reasoning” capabilities, losing ground to rivals such as OpenAI, Anthropic, Google, DeepSeek, and Alibaba’s Qwen. This has led to internal debate about Meta’s AI direction and increased urgency to revitalize its AI portfolio.
- Retention and Reputation Challenges: Meta has lost key Llama researchers to competitors, further increasing the need to attract and retain world-class AI talent. The company also faces ongoing legal and ethical scrutiny over its data practices.
- Massive Capital Commitment: Meta is investing tens of billions of dollars in infrastructure, data centers, and custom hardware in an attempt to secure a leading role in the AI era.
What follows is the full text of Zuckerberg’s memo, which was obtained by Fortune reporter Sharon Goldman:
As the pace of AI progress accelerates, developing superintelligence is coming into sight. I believe this will be the beginning of a new era for humanity, and I am fully committed to doing what it takes for Meta to lead the way. Today I want to share some details about how we’re organizing our AI efforts to build towards our vision: personal superintelligence for everyone.
We’re going to call our overall organization Meta Superintelligence Labs (MSL). This includes all of our foundations, product, and FAIR teams, as well as a new lab focused on developing the next generation of our models.
Alexandr Wang has joined Meta to serve as our Chief AI Officer and lead MSL. Alex and I have worked together for several years, and I consider him to be the most impressive founder of his generation. He has a clear sense of the historic importance of superintelligence, and as co-founder and CEO he built ScaleAI into a fast-growing company involved in the development of almost all leading models across the industry.
Nat Friedman has also joined Meta to partner with Alex to lead MSL, heading our work on AI products and applied research. Nat will work with Connor to define his role going forward. He ran GitHub at Microsoft, and most recently has run one of the leading AI investment firms. Nat has served on our Meta Advisory Group for the last year, so he already has a good sense of our roadmap and what we need to do.
We also have several strong new team members joining today or who have joined in the past few weeks that I’m excited to share as well:
- Trapit Bansal — pioneered RL on chain of thought and co-creator of o-series models at OpenAI.
- Shuchao Bi — co-creator of GPT-4o voice mode and o4-mini. Previously led multimodal post-training at OpenAI.
- Huiwen Chang — co-creator of GPT-4o’s image generation, and previously invented MaskGIT and Muse text-to-image architectures at Google Research.
- Ji Lin — helped build o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 4o-imagegen, and Operator reasoning stack.
- Joel Pobar — inference at Anthropic. Previously at Meta for 11 years on HHVM, Hack, Flow, Redex, performance tooling, and machine learning.
- Jack Rae — pre-training tech lead for Gemini and reasoning for Gemini 2.5. Led Gopher and Chinchilla early LLM efforts at DeepMind.
- Hongyu Ren — co-creator of GPT-4o, 4o-mini, o1-mini, o3-mini, o3 and o4-mini. Previously leading a group for post-training at OpenAI.
- Johan Schalkwyk — former Google Fellow, early contributor to Sesame, and technical lead for Maya.
- Pei Sun — post-training, coding, and reasoning for Gemini at Google Deepmind. Previously created the last two generations of Waymo’s perception models.
- Jiahui Yu — co-creator of o3, o4-mini, GPT-4.1 and GPT-4o. Previously led the perception team at OpenAI, and co-led multimodal at Gemini.
- Shengjia Zhao — co-creator of ChatGPT, GPT-4, all mini models, 4.1 and o3. Previously led synthetic data at OpenAI.
I’m excited about the progress we have planned for Llama 4.1 and 4.2. These models power Meta AI, which is used by more than 1 billion monthly actives across our apps and an increasing number of agents across Meta that help improve our products and technology. We’re committed to continuing to build out these models.
In parallel, we’re going to start research on our next generation of models to get to the frontier in the next year or so. I’ve spent the past few months meeting top folks across Meta, other AI labs, and promising startups to put together the founding group for this small talent-dense effort. We’re still forming this group and we’ll ask several people across the AI org to join this lab as well.
Meta is uniquely positioned to deliver superintelligence to the world. We have a strong business that supports building out significantly more compute than smaller labs. We have deeper experience building and growing products that reach billions of people. We are pioneering and leading the AI glasses and wearables category that is growing very quickly. And our company structure allows us to move with vastly greater conviction and boldness. I’m optimistic that this new influx of talent and parallel approach to model development will set us up to deliver on the promise of personal superintelligence for everyone.
We have even more great people at all levels joining this effort in the coming weeks, so stay tuned. I’m excited to dive in and get to work.
***
Disclaimer: For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing.