Broadcom Upgrades Jericho Data Centre Chip For AI Age

Broadcom on Monday released its next-generation Jericho networking chip, which takes aim at artificial intelligence workloads by speeding up traffic over long distances, allowing sites up to 60 miles (100 km) apart to be linked together.

The Jericho4 also improves security by encrypting data, a critical concern when data is passing outside a facility’s physical walls.

The new chip, which uses TSMC’s 3 nanometre manufacturing process, incorporates high-bandwidth memory (HBM), which has also become the preferred memory type for AI accelerator chips from the likes of Nvidia and AMD.

Multiple sites

The increased memory is another factor enabling the network chips to transfer data over long distances, said Ram Velaga, senior vice president and general manager of Broadcom’s Core Switching Group.

Essentially the Jericho4 allows multiple, smaller data centres to be linked up into a single, more powerful system, Broadcom said.

This creates more flexibility in the age of AI, whose workloads require massive computing and electrical power.

Cloud infrastructure companies are looking to link together hundreds of thousands of power-hungry AI chips, but a network of 100,000 or 200,000 GPUs requires more power than is typically available in one physical building, Velaga said.

To make the cluster possible, companies can use Jericho4-based networks to link together clusters of server racks across multiple buildings, he said.

Products based on the chip can also help cloud companies move compute workloads closer to customers by creating data centre sites in congested urban areas, where it may be more practical to link together multiple, smaller sites.

The chip complements Broadcom’s Tomahawk line of chips, which connect racks within a data centre, typically at distances under one kilometre.

Broadcom said the Jericho4 can connect more than 1 million processors across multiple data centres and can handle about four times more data than the previous version.

The company said it began shipping the chip this week to early customers such as cloud providers and networking gear manufacturers, with products using it expected to appear in about nine months.

In-house AI chips

Broadcom also makes custom AI accelerator chips for the likes of Facebook parent Meta Platforms, which is working with Broadcom to build new Santa Barbara AI data centre servers.

Broadcom is supplying the custom processors for the servers, which are being manufactured by Taiwanese firm Quanta Computer, Economic Daily News reported.

Meta has ordered up to 6,000 racks of the Santa Barbara servers, which are to replace its existing Minerva servers, the news outlet reported.

OpenAI is reportedly working with Broadcom and TSMC on its first in-house AI accelerator chip, as part of an effort to diversify its supply of specialised processing power.


