Hugging Face, one of the giants in the field of machine learning, is investing $10 million in free shared GPUs to help developers create new AI technologies. The goal of this initiative is to assist small developers, scholars, and startups in countering the centralization of AI development.
"We are fortunate to be in a position to invest in the community," said Clem Delangue, CEO of Hugging Face. Delangue stated that this investment is possible because Hugging Face is "profitable or close to profitability" and recently raised $235 million in funding, bringing the company's valuation to $4.5 billion.
Delangue is concerned about AI startups' ability to compete with tech giants. Most significant advances in AI, such as GPT-4, the algorithms behind Google Search, and Tesla's Full Self-Driving system, remain locked inside large tech companies. These companies not only keep their models proprietary for financial reasons but also command billions of dollars in computing resources, allowing them to compound those advantages and pull further ahead, making it difficult for startups to keep up.
Hugging Face's goal is to make state-of-the-art AI technology accessible to everyone, not just tech giants. At Google I/O, the company's flagship conference, executives showcased numerous AI features for Google's proprietary products, alongside a family of open models called Gemma. For Delangue, a future dominated by proprietary technology is not the one he envisions.
How It Works
Obtaining computing power is a significant challenge in building large language models, one that typically favors companies like OpenAI and Anthropic, which have agreements with cloud providers for access to substantial computing resources. Hugging Face aims to level the playing field by donating shared GPUs to the community through a new project called ZeroGPU.
These shared GPUs can be used by multiple users or applications simultaneously, eliminating the need for each user or application to have a dedicated GPU. ZeroGPU will be available through Hugging Face's Spaces, a hosting platform for deploying applications. According to the company, over 300,000 AI demos have been created on Spaces so far using CPUs or paid GPUs.
"Obtaining sufficient GPUs from major cloud providers is very challenging."
Access to shared GPUs is allocated based on usage: whenever a portion of a GPU's capacity is not actively being used, it becomes available to others. This makes the devices cost-effective, energy-efficient, and well suited to community-wide use. ZeroGPU runs on Nvidia A100 GPUs.
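The usage-based sharing described above can be illustrated with a short Python sketch. This is a toy model only, with made-up names like `SharedGPUPool`; it is not Hugging Face's actual scheduler. The idea is that each app borrows a GPU just for the duration of a call and releases it immediately afterward, so a few devices can serve many intermittently active apps.

```python
from contextlib import contextmanager

class SharedGPUPool:
    """Toy model of usage-based sharing: a GPU is held only while
    an app is actively running, then returned to the pool."""

    def __init__(self, num_gpus):
        self.free = num_gpus

    @contextmanager
    def acquire(self):
        # Borrow one GPU for the duration of a single call.
        if self.free == 0:
            raise RuntimeError("no free GPU (a real system would queue the request)")
        self.free -= 1
        try:
            yield
        finally:
            self.free += 1  # idle capacity goes back to the pool

# Three shared GPUs serve ten apps that each run briefly in turn,
# where dedicated allocation would have required ten GPUs.
pool = SharedGPUPool(num_gpus=3)
served = 0
for app in range(10):
    with pool.acquire():
        served += 1  # stand-in for a short burst of inference
print(served, pool.free)  # 10 3
```

The contrast with dedicated allocation is the point: under ZeroGPU's model, the cost of a GPU is amortized across every app that uses it, rather than being paid per application.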
Open-source AI Catching Up
As AI progresses rapidly in closed environments, Hugging Face aims to enable people to build more AI technologies in open environments.
"If only a few organizations end up dominating, it will be even harder to compete with them in the future," said Delangue.
Hugging Face machine learning engineer Andrew Reed has built an application that uses LMSYS Chatbot Arena ratings to visualize the progress of proprietary and open-source LLMs (large language models) over time, showing the narrowing gap between the two.
Since Meta released the first version of Llama a year ago, over 35,000 variants of the open AI model have been shared on Hugging Face, including quantized and merged versions as well as specialized models for biology and Mandarin, according to the company.
"AI should not be controlled by a few. We are committed to supporting open-source developers and are excited to see what everyone will create next in the spirit of collaboration and transparency," stated Delangue in a press release.