OpenAI Works with Broadcom and TSMC to Develop Its First Custom AI Chip

2024-10-30

According to a Reuters report citing sources, the rapidly expanding OpenAI, the company behind ChatGPT, is working with Broadcom and TSMC to develop its first custom chip designed to support its artificial intelligence systems. In addition, OpenAI plans to add AMD chips alongside Nvidia's to meet growing infrastructure demands.

OpenAI has been exploring a range of options to diversify its chip supply and reduce costs. The company has weighed everything from building chips entirely in-house to the far more expensive route of raising funds to build its own network of chip factories, known as "fabs."

Following the news, Broadcom's stock price rose by over 4.5%, while AMD's shares continued their morning uptrend, closing the day with a 3.7% increase. OpenAI and AMD declined to comment on this report, and TSMC has not yet responded to requests for comments. Broadcom also did not provide an immediate response.

OpenAI rose to prominence by commercializing generative AI systems that produce human-like responses, and those systems require enormous computing power to train and run. The company is one of the largest buyers of Nvidia's graphics processing units (GPUs), which it uses both to train models (the process by which AI learns from data) and for inference (applying the trained models to make predictions or decisions on new information).

Broadcom has deep experience helping clients, including Alphabet's Google, fine-tune chip designs ahead of manufacturing, and in designing components that move information quickly across chips. That expertise is crucial in AI systems, where thousands of chips must work in concert.

Sources indicate that OpenAI is still deciding whether to develop or acquire other elements of its chip design and may bring in additional partners. The company has assembled a chip team of roughly 20 people, led by top engineers who previously built tensor processing units (TPUs) at Google, including Thomas Norrie and Richard Ho.

With Broadcom's help, OpenAI has secured manufacturing capacity with TSMC and plans to produce its first custom chip in 2026, though sources note that the timeline could change.

Nvidia's GPUs currently hold more than 80% of the AI chip market, but supply shortages and rising costs have pushed major customers such as Microsoft, Meta, and OpenAI to look for internal or external alternatives. OpenAI plans to use AMD chips through Microsoft's Azure, the first report of such an arrangement and a sign of AMD's push to win share in a market dominated by Nvidia. AMD's MI300X chips launched in the fourth quarter of 2023, and the company projects $4.5 billion in AI chip sales for 2024.

Training AI models and running services like ChatGPT is expensive. Sources say OpenAI expects a loss of about $5 billion this year on revenue of $3.7 billion. Computing costs, which cover processing large datasets, developing models, hardware, electricity, and cloud services, are the company's largest expense, which is why OpenAI is working to improve utilization and diversify its suppliers.

Despite actively exploring new chip supply solutions, OpenAI still aims to maintain good relationships with chip manufacturers like Nvidia, particularly to access their next-generation Blackwell chips. As a result, OpenAI has been cautious about poaching talent from Nvidia, sources add. Nvidia declined to comment on this report.