OpenAI is finalising the design for its first in-house AI accelerator chip and plans to send it to Taiwan’s TSMC for an initial fabrication run, called tape-out, in the next few months, Reuters reported.
The report, which cited unnamed sources, indicates OpenAI is moving ahead rapidly with its plans for an in-house chip to reduce its dependence on Nvidia's GPUs, which control roughly 80 percent of the market.
The approaching tape-out, which refers to sending the final design for a chip to a manufacturer for production, indicates OpenAI’s plans are on track to bring its in-house chip into use next year.
It would normally take about six months for the finished design to be turned into a chip ready for mass production, at a cost of tens of millions of dollars, the report said.
AI infrastructure
If the initial tape-out does not produce a working chip, any issues would need to be diagnosed and corrected, adding more time to the process.
OpenAI is considering the development of the chip as key to negotiating leverage with other chip suppliers, the report said.
The initial chip is intended mainly for running AI models, a process known as inference, and is to be deployed on a limited scale with a limited role in the company's infrastructure, though future iterations are planned to add broader capabilities.
The chip has been developed relatively quickly by a team of about 40 people led by Richard Ho, who joined OpenAI from Google. The team has doubled in size since a previous report in October.
The team has been working with Broadcom on the chip, the report said.
Ho’s team is far smaller than in-house chip initiatives from Amazon or Google, which employ hundreds of engineers.
TSMC is to manufacture the chip on its advanced 3-nanometre process, with a commonly used systolic array architecture and high-bandwidth memory (HBM).
Demand for HBM has soared along with the demand for GPUs, leading memory producers such as Samsung and SK Hynix to ramp production.
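The systolic array design mentioned above is the same basic architecture used in accelerators such as Google's TPUs: a grid of processing elements through which operands stream, each element accumulating part of a matrix product. As an illustrative sketch only (not OpenAI's actual design, which has not been disclosed), the following simulates how an output-stationary systolic array computes a matrix multiplication, with each grid position (i, j) seeing its operands on a skewed schedule:

```python
def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    A is n x k, B is k x m. Each processing element (PE) at grid
    position (i, j) accumulates C[i][j] as row i of A streams in from
    the left and column j of B streams in from the top. Illustrative
    sketch only; real hardware does all PE updates in parallel each
    clock cycle.
    """
    n, k = len(A), len(A[0])
    m = len(B[0])
    C = [[0] * m for _ in range(n)]
    # Operands are skewed so PE (i, j) sees its first valid pair at
    # cycle i + j; the last PE finishes at cycle (n-1)+(m-1)+(k-1).
    for t in range(n + m + k - 2):
        for i in range(n):
            for j in range(m):
                step = t - i - j  # which of the k products this PE sees now
                if 0 <= step < k:
                    C[i][j] += A[i][step] * B[step][j]
    return C
```

Because each PE only ever reads its neighbours' streamed values and updates a local accumulator, the design needs no shared memory traffic during the computation, which is why the architecture pairs naturally with high-bandwidth memory feeding the array's edges.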
Diversification
OpenAI is also planning to deploy AI chips from AMD alongside Nvidia and its own chips, Reuters reported last October.
OpenAI initially considered building a network of its own foundries to produce chips in-house, but abandoned that plan due to the costs and time involved, the report said.
OpenAI’s talks with Broadcom were first reported last July.
The deal with Broadcom reportedly enabled OpenAI to secure manufacturing capacity at TSMC.
Companies involved in AI such as Microsoft and Meta Platforms are spending tens of billions of dollars a year on AI infrastructure, but wariness of dependence on Nvidia has led them to explore the use of in-house silicon.