Meta CEO Mark Zuckerberg said in an Instagram Reel that the company is developing open-source Artificial General Intelligence (AGI). To that end, Meta is bringing its two AI research teams, FAIR and GenAI, closer together to build comprehensive AGI and make it as open source as possible.
Enabling Responsible AI to Advance as Fast as AI
Zuckerberg wrote in a statement, "Our long-term vision is to build a universal intelligence that is responsibly open-sourced and widely available for everyone's benefit." In the video, he said, "It's clear that the next generation of services we need to build requires comprehensive AGI: the best AI assistants, AI for creators, AI for businesses, and so on, all of which require progress across many areas of AI, from reasoning and planning to coding, memory, and other cognitive abilities."
Zuckerberg also mentioned that the company is currently training Llama 3 and said it is building a "massive computing infrastructure" that will include 350,000 NVIDIA H100 GPUs by the end of this year.
He also touted Meta's Ray-Ban smart glasses and the metaverse. He said, "People will also need new AI devices, and over time, AI and the metaverse will gradually merge together." He added, "I think many of us will have very frequent conversations with AI throughout the day, and I think many of us will use glasses to do that. Glasses are the ideal form factor for AI to see what you see and hear what you hear, so it's always there to help."
Before Zuckerberg's announcement, OpenAI CEO Sam Altman made numerous comments about AGI at the World Economic Forum in Davos, Switzerland, softening his tone on the existential risks of AGI two months after his brief ouster in November 2023.
Meta's Chief AI Scientist Yann LeCun, for his part, often expresses skepticism about the imminent arrival of AGI, maintaining that it is certainly not coming within the next five years.
Finally, the news of Meta's future open-source AGI comes months after VentureBeat reported that Llama and open-source AI were "winning" in 2023. The announcement is sure to fuel further debate over open-source versus closed-source AI, especially after Anthropic published a paper suggesting that AI models can harbor deceptive "sleeper agent" behaviors.