OpenAI’s ambitious plan to build a semiconductor foundry has been shelved for now, according to reports. The ChatGPT maker is instead partnering with Broadcom and TSMC to develop its own AI chips to power its large AI models, sources told Reuters. In addition, OpenAI is procuring chips from NVIDIA and AMD to meet its growing infrastructure demands.
Why it matters: By diversifying chip suppliers and exploring manufacturing options, OpenAI aims to reduce costs and ensure scalability, critical for sustaining its rapid growth in the competitive AI landscape.
Details: OpenAI, the rapidly expanding company behind ChatGPT, has explored various ways to diversify its chip supply and lower costs.
- OpenAI has been working with Broadcom for several months to develop its first AI chip, which will focus on inference, according to Reuters. While demand in the market currently centers on AI training chips, analysts predict that as more AI applications are deployed, demand for inference chips will eventually surpass that for training chips.
- OpenAI has formed a chip team of around 20 members, led by top engineers such as Thomas Norrie and Richard Ho, who previously developed Tensor Processing Units (TPUs) for Google. Additionally, OpenAI has secured chip manufacturing capacity from TSMC through Broadcom, with the first custom-designed chip expected to be produced in 2026.
- OpenAI’s intention to use AMD chips through Microsoft’s Azure highlights AMD’s efforts to capture market share from NVIDIA with its new MI300X chips, the report said. AMD forecasts $4.5 billion in AI chip sales for 2024, following the chip’s release in the fourth quarter of 2023.
Context: Training AI models and operating services like ChatGPT are costly. Sources told Reuters that OpenAI projects a $5 billion loss this year against $3.7 billion in revenue. The company’s largest expense is compute: the hardware, electricity, and cloud services needed to process large datasets and develop its models.