OpenAI and Broadcom team up on $10 billion custom AI chip deal
Will co-engineer new AI accelerators and high-speed Ethernet solutions for large-scale data centers
by Skye Jacobs · TechSpot
Forward-looking: OpenAI's push for in-house chip design is part of a broader trend among Big Tech companies, with firms like Google, Amazon, and Meta already developing custom hardware tailored to their AI workloads. However, OpenAI's approach stands out for the scale of its ambition, the speed of its implementation, and its close collaboration with hardware partners to develop not only chips but also rack-scale systems built on the latest networking technologies.
OpenAI and Broadcom will develop and roll out 10 gigawatts of custom AI chips and computing systems over the next four years. The move marks a significant stride in OpenAI's effort to expand its infrastructure, and it signals the company's ambition to directly shape the design of specialized hardware tied to its research and to the deployment of advanced models such as GPT and Sora. The rollout is part of the recently announced partnership with Broadcom aimed at co-developing and deploying custom AI chips at massive scale.
The collaboration follows months of negotiations and builds on an 18-month partnership in which OpenAI and Broadcom co-developed a new series of AI accelerators, focusing on chips tailored for AI inference.
These chips are expected to be manufactured using advanced nodes from TSMC and are likely to feature innovations such as systolic array architectures and high-bandwidth memory, critical for handling complex matrix and vector operations required by modern AI models. OpenAI plans to deploy these custom systems in its own data centers and in facilities managed by third parties, with a large-scale introduction scheduled to begin in the second half of next year.
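To illustrate why systolic arrays suit AI inference, here is a toy sketch of the idea: every output cell performs one multiply-accumulate per cycle as operands stream through, which is the pattern hardware accelerators exploit for dense matrix math. This is an illustrative simulation only, not a description of the actual Broadcom/OpenAI design; the function name and structure are hypothetical.

```python
# Toy, output-stationary systolic-style matrix multiply: each
# processing element (i, j) accumulates one multiply-accumulate
# per "cycle". Real accelerators pipeline this in hardware.

def systolic_matmul(A, B):
    """Compute C = A @ B via per-cycle MAC steps."""
    n, k = len(A), len(A[0])
    m = len(B[0])
    C = [[0] * m for _ in range(n)]
    # One cycle per step of the shared dimension: every output
    # cell consumes one operand pair and updates its accumulator.
    for cycle in range(k):
        for i in range(n):
            for j in range(m):
                C[i][j] += A[i][cycle] * B[cycle][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```

In silicon, the inner loops run in parallel across a grid of multiply-accumulate units, and high-bandwidth memory keeps the operand streams fed, which is why the pairing matters for large models.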
Although financial details of the agreement have not been made public, independent industry analyses estimate the contract to be worth at least $10 billion, with the potential to rise significantly depending on the scale and duration of the deployment.
OpenAI's commitment brings its total secured computing capacity – spanning agreements with Broadcom, Nvidia, and AMD – to more than 26 gigawatts. For context, this figure surpasses New York City's summer electricity consumption several times over and is roughly equivalent to the output of about 26 nuclear reactors. In addition, OpenAI recently signed contracts to deploy 6 gigawatts of AMD Instinct GPUs and secured multi-gigawatt deals with Nvidia.
Such a Herculean infrastructure buildout carries enormous financial implications. Industry observers estimate that OpenAI could ultimately spend hundreds of billions of dollars, or even more, to meet its ambitious capacity targets. Reports indicate that OpenAI expects to generate approximately $13 billion in revenue this year.
OpenAI and its leadership have emphasized that the current agreements are only the beginning. CEO Sam Altman has outlined plans for even more ambitious expansion, setting an internal goal of 250 gigawatts of new data center capacity by 2033. At current estimates, achieving this target could cost more than $10 trillion and would require building the equivalent of 250 nuclear power plants. This raises fundamental questions about funding, energy supply, and the feasibility of such rapid growth within a globally constrained supply chain.
As OpenAI continues its rise as the world's most valuable private tech startup, currently valued at around $500 billion, its aggressive hardware strategy is rapidly reshaping expectations across both the technology and energy sectors.
Consultancy estimates from Bain & Co. suggest that the scale of OpenAI's planned computing infrastructure could drive global AI revenue to about $2 trillion annually by 2030, surpassing the combined 2024 revenues of Amazon, Apple, Alphabet, Microsoft, Meta, and Nvidia.
While OpenAI's path presents significant operational and financial challenges, its executives remain convinced that continuously expanding computing resources is essential to advancing the next generation of artificial general intelligence. Whether these enormous investments will ultimately pay off remains uncertain, but for now, OpenAI and its partners are moving forward at a pace few in the industry can match.