
Broadcom Extends Leadership in Custom Accelerators and Merchant Networking Solutions for AI Infrastructure

Latest Offerings Advance Broadcom’s Portfolio of Open, Scalable and Power-Efficient Technologies for AI Solutions

PALO ALTO, Calif., March 20, 2024 (GLOBE NEWSWIRE) -- Cloud and data center providers are building AI systems at a pace that requires a new level of performance, scale and efficiency. Consumer AI use cases are increasingly driving the need for the lowest-power custom AI accelerators, while open, standards-based merchant networking solutions scale large AI clusters. Broadcom Inc. (NASDAQ: AVGO) is evolving a broad portfolio of technologies to extend its leadership in enabling next-generation AI infrastructure. This includes foundational technologies and advanced packaging capabilities aimed at building the highest-performance, lowest-power custom AI accelerators. In addition, its complete set of end-to-end merchant silicon connectivity solutions, ranging from best-in-class Ethernet and PCIe to optical interconnects with co-packaging capabilities, drives the scale-up, scale-out and front-end networks of AI clusters.

“For providers contending with the ever-increasing demand for generative AI clusters, the key to success will be a network-centric platform, based on open solutions, that scales at the lowest power,” said Charlie Kawwas, Ph.D., president of Broadcom’s Semiconductor Solutions Group. “The innovations we’ve introduced extend our leadership across the custom AI accelerator, Ethernet, PCI Express and optical interconnect portfolios. Built on our world-class foundational technologies like SerDes and DSP, they provide the best custom XPUs and merchant networking solutions for enabling AI infrastructure.”

Broadcom’s latest AI infrastructure innovations include:

  • Delivery of its industry-first 51.2T Bailly CPO Ethernet switch. Broadcom Bailly delivers unprecedented bandwidth density and economic efficiency, addressing connectivity challenges in data center switching and computing (the port and lane arithmetic behind such figures is sketched after this list).

  • An expanded portfolio of proven optical interconnect solutions supporting 200G/lane for AI and ML applications. Broadcom’s industry-leading VCSEL, EML and CW laser technologies enable high-speed interconnects for front-end and back-end networks of large-scale generative AI compute clusters.

  • The industry’s first end-to-end PCIe connectivity portfolio. Broadcom’s new PCIe Gen5/Gen6 retimers, together with its PEX series switches, offer the lowest-power solutions and unparalleled efficiency for interconnecting CPUs, accelerators, NICs and storage devices (see the PCIe bandwidth sketch after this list).

  • Trident 5-X12 chip with an on-chip neural network, NetGNT, marking a pioneering advancement in switching silicon by identifying traffic patterns typical of AI/ML workloads and helping to avert congestion.

  • Vision for AI acceleration and democratization outlined at OCP Global Summit 2023, spanning a combination of ubiquitous AI connectivity, innovative silicon, and open standards.

  • Sian™ BCM85822 800G PAM-4 DSP PHY for AI workloads at scale. The BCM85822 features 200G/lane serial optical interfaces, which enable the lowest-power, highest-performance 800G and 1.6T optical transceiver modules to address the growing bandwidth demands of hyperscale data centers and cloud networks.

  • High-performance Jericho3-AI fabric for AI networks. Networks based on Jericho3-AI will help handle the ever-expanding workloads that AI will demand.

  • Tomahawk® 5 provides a major performance boost for AI/ML infrastructure. The family of Ethernet switch/router chips is available for production deployments.
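
For context on the bandwidth figures quoted above, here is a minimal back-of-envelope sketch in Python. It is illustrative only and uses generic Ethernet and optical-module arithmetic assumed for this example, not Broadcom device specifications beyond the figures already cited in the list.

    # Back-of-envelope arithmetic only; generic Ethernet/optics math,
    # not Broadcom device specifications beyond the figures quoted above.

    def switch_port_count(switch_tbps: float, port_gbps: int) -> int:
        """How many ports of a given speed a switch ASIC can expose."""
        return round(switch_tbps * 1000 / port_gbps)

    def module_speed_gbps(lane_gbps: int, lanes: int) -> int:
        """Aggregate speed of an optical module built from serial lanes."""
        return lane_gbps * lanes

    # A 51.2 Tb/s switch can be broken out as, e.g., 64 x 800GbE or 128 x 400GbE.
    assert switch_port_count(51.2, 800) == 64
    assert switch_port_count(51.2, 400) == 128

    # 200G/lane serial optics: 4 lanes -> 800G modules, 8 lanes -> 1.6T modules.
    assert module_speed_gbps(200, 4) == 800
    assert module_speed_gbps(200, 8) == 1600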

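The PCIe figures can be put in similar perspective. The sketch below uses raw PCIe signaling rates (32 GT/s per lane for Gen5, 64 GT/s per lane for Gen6) before encoding and protocol overhead; these are generic specification numbers assumed for illustration, not data from the release.

    # Rough, generic PCIe arithmetic: raw per-lane signaling rates, before
    # encoding/FLIT and protocol overhead. Illustrative only, not device specs.

    RAW_GT_PER_SEC_PER_LANE = {"Gen5": 32, "Gen6": 64}

    def raw_link_gbytes_per_sec(gen: str, lanes: int = 16) -> float:
        """Raw unidirectional bandwidth (GB/s) of a PCIe link, 1 bit per transfer."""
        return RAW_GT_PER_SEC_PER_LANE[gen] * lanes / 8

    assert raw_link_gbytes_per_sec("Gen5") == 64.0   # Gen5 x16: ~64 GB/s raw each way
    assert raw_link_gbytes_per_sec("Gen6") == 128.0  # Gen6 x16: ~128 GB/s raw each way
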
Supporting Resources

