Why Is Ethernet Still the AI Interconnect of Choice?
Broadcom’s 102.4T switches and 800G NICs underscore Ethernet’s dominance in scaling next-generation AI fabrics. Debates over open ecosystems versus proprietary stacks will abound, but Ethernet remains the solid bet.
Key Highlights:
The new Tomahawk 6 series doubles switching capacity to 102.4 Terabits per second, setting a new bar for AI cluster performance.
Broadcom is prioritizing open standards by aligning its portfolio with the Scale-Up Ethernet (SUE) framework and the Ultra Ethernet Consortium (UEC).
The third-generation TH6-Davisson Co-packaged Optics (CPO) switch aims to deliver substantial power efficiency improvements in the data center.
The Thor Ultra 800G AI Ethernet NIC introduces advanced networking features like packet-level multipathing to the Ethernet AI ecosystem.
This portfolio is a clear strategic move to position Ethernet as the resilient, scalable, and economical choice over proprietary alternatives such as InfiniBand.
The News
Broadcom recently showcased major enhancements to its extensive portfolio of AI networking silicon at the 2025 OCP Global Summit. The announcements centered on products such as the Tomahawk 6 and Tomahawk Ultra Ethernet switches, the Jericho4 fabric router, and the Thor Ultra 800G AI Ethernet NIC. These innovations establish an end-to-end Ethernet-based infrastructure designed to meet the exponential bandwidth and scale requirements of large AI clusters. The new silicon solutions emphasize power efficiency and an open ecosystem approach, reinforcing Ethernet as a unifying technology for scale-up and scale-out AI workloads.
Analyst Take
When we look at the current infrastructure wars driving AI adoption, the core battle is not about who has the fastest chip, but who controls the fabric connecting those chips. Broadcom’s recent announcement regarding its end-to-end AI networking solutions, particularly the unveiling of the 102.4 Terabits per second (Tbps) class switching capacity, places a massive marker in the ground. This is fundamentally about reinforcing Ethernet's standing as the foundational technology for global hyperscalers and cloud builders.
For years, we watched InfiniBand carve out a specialized niche in high-performance computing (HPC) and early AI deployments due to its low-latency advantage. However, Broadcom is relentlessly architecting Ethernet to eliminate this gap while capitalizing on Ethernet’s inherent ubiquity, standardization, and resilience. The sheer breadth of this release, spanning core switches, routing fabric, specialized NICs, and advanced optical components, reflects a comprehensive strategy designed to challenge the reliance on proprietary, single-vendor solutions that dominate large AI environments today.
Our perspective is that the industry is entering a key phase where network scale is becoming the primary bottleneck for building trillion-parameter models. You need immense bandwidth. You also need low latency and the ability to operate across geographically dispersed data centers without proprietary lock-in. Broadcom's focus on the Scale-Up Ethernet (SUE) framework and UEC compliance aims to deliver this freedom of choice. It means customers can integrate components from a broad ecosystem of partners, which is crucial for mitigating supply chain risks and fostering competitive pricing. The sheer number of partner demos shown at OCP - covering everything from air-cooled designs to liquid-cooled racks - underscores the health of this open ecosystem approach. Ecosystem diversity matters.
The commitment to Co-packaged Optics (CPO) is another area we are closely watching. This isn’t simply a feature update; it is a fundamental shift in how the silicon and optics interact. By physically integrating the optical engines with the switch ASIC, Broadcom is seeking to significantly reduce the electrical path length, which translates directly into lower power consumption and improved signal integrity. Power efficiency is not a luxury anymore; it is the central operational expense determining AI profitability for cloud providers. Meta’s cited success with the previous-generation CPO switches suggests this technology is moving past the experimental phase and is ready for large-scale, mission-critical deployment. Broadcom is betting that CPO is the future of the AI data center.
Furthermore, the introduction of the Thor Ultra 800G NIC directly addresses the scale-up challenge. When you are running massive distributed training jobs, maximizing XPU utilization is paramount. Traditional networking protocols struggle with congestion at scale, leading to inefficient resource use. The Thor Ultra is designed to mitigate this using advanced features such as packet-level multipathing and selective retransmission. This ensures data flows efficiently to the GPU memory, maintaining high utilization even in hyper-scale cluster environments.
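To make the contrast concrete, here is a minimal, illustrative Python sketch of the load-balancing idea behind packet-level multipathing. It is not Broadcom's implementation; it simply compares classic flow-level ECMP hashing, where a few large flows can pile onto the same path, with packet spraying, where each packet independently picks a path and load evens out:

```python
import random

# Toy model: 8 equal-cost paths carrying 4 large "elephant" flows
# of 1000 packets each. Path indices and flow counts are illustrative.
PATHS, FLOWS, PKTS_PER_FLOW = 8, 4, 1000

def flow_level_ecmp():
    """Classic ECMP: every packet of a flow hashes to one fixed path."""
    load = [0] * PATHS
    for flow_id in range(FLOWS):
        path = hash(("flow", flow_id)) % PATHS
        load[path] += PKTS_PER_FLOW
    return load

def packet_level_multipath():
    """Packet spraying: each packet independently picks a path."""
    load = [0] * PATHS
    for _ in range(FLOWS * PKTS_PER_FLOW):
        load[random.randrange(PATHS)] += 1
    return load

print("flow-level  :", flow_level_ecmp())
print("packet-level:", packet_level_multipath())
```

With only four flows, flow-level hashing leaves at least half the paths idle (and may stack two flows on one path), while spraying keeps every path near the 500-packet average. The cost of spraying is out-of-order arrival, which is why the Thor Ultra pairs it with out-of-order delivery to XPU memory and selective retransmission.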
This entire rollout suggests a long-term vision where Broadcom provides the foundation for standardizing AI compute clusters using standard Ethernet infrastructure. This vision stands in direct contrast to closed, proprietary stacks. It is a smart strategic approach.
What was Announced
Broadcom showcased multiple highly detailed silicon innovations aimed at delivering an open, end-to-end Ethernet platform for AI infrastructure. The headliner is the Tomahawk 6 (TH6) switch silicon, which is the world’s first to deliver 102.4 Terabits per second (Tbps) of switching capacity in a single chip, doubling the maximum bandwidth currently available on the merchant market. The TH6 series is architected to power the next generation of scale-up and scale-out AI networks with support for 100G and 200G SerDes. The silicon supports a scale-up cluster size of up to 512 XPUs (or GPUs) and is compliant with Ultra Ethernet Consortium (UEC) specifications.
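The headline capacity figure can be sanity-checked with simple division; the short script below (an illustration, not a vendor datasheet) derives the lane and port counts implied by 102.4 Tbps:

```python
# Implied lane/port counts for a 102.4 Tbps switch ASIC.
# Uses integer Gbps to keep the arithmetic exact.
CAPACITY_GBPS = 102_400  # 102.4 Tbps

for lane_gbps in (100, 200):
    print(f"{lane_gbps}G SerDes -> {CAPACITY_GBPS // lane_gbps} lanes")
# 100G SerDes -> 1024 lanes
# 200G SerDes -> 512 lanes

print(f"800GbE ports -> {CAPACITY_GBPS // 800}")
# 800GbE ports -> 128
```

The 512-lane figure at 200G per lane lines up with the stated scale-up domain of up to 512 XPUs.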
The Tomahawk 6 – Davisson (TH6-Davisson) is the third-generation Co-Packaged Optics (CPO) Ethernet switch variant. This solution is designed to deliver the 102.4 Tbps capacity while migrating the core lane speed from 100G per lane to 200G per lane. It aims to deliver a power reduction of up to 70% compared to traditional pluggable optics solutions by placing high-density optical engines on the same substrate as the ASIC. The switch complements this with advanced power and thermal management features.
For ultra-low latency applications, the company introduced the Tomahawk Ultra, an Ethernet switch that achieves an impressive 250 nanoseconds (ns) latency while maintaining a full 51.2 Tbps throughput. This level of performance is designed to support the demanding requirements of scale-up domains and supports up to 77 billion packets per second, even at minimum packet sizes.
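The 77 billion packets-per-second figure can be checked with back-of-envelope arithmetic, assuming standard Ethernet framing (64 B minimum frame plus 8 B preamble and 12 B inter-packet gap, i.e. 84 B on the wire per minimum-size packet); the vendor figure may round or use slightly different framing assumptions:

```python
# Back-of-envelope check of the ~77 Bpps claim at 51.2 Tbps line rate.
LINE_RATE_BPS = 51.2e12
WIRE_BITS_PER_MIN_PKT = (64 + 8 + 12) * 8  # 672 bits on the wire

pps = LINE_RATE_BPS / WIRE_BITS_PER_MIN_PKT
print(f"{pps / 1e9:.1f} billion packets/second")
# 76.2 billion packets/second -- consistent with the quoted ~77B
```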
For fabric scaling, the Jericho4 Ethernet fabric router is featured, which has the ability to interconnect over one million XPUs across multiple data centers, forming vast, interconnected AI fabrics.
Finally, the Thor Ultra 800G AI Ethernet Network Interface Card (NIC) is positioned as the industry’s first 800G AI Ethernet NIC. The Thor Ultra is architected to support UEC specifications, enabling features such as packet-level multipathing for efficient load balancing and out-of-order packet delivery directly to XPU memory. It also includes selective retransmission and programmable congestion control algorithms. This design aims to deliver advanced RDMA capabilities within an open ecosystem, ensuring interoperability with various XPUs, optics, and switches.
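As a toy illustration of the selective-retransmission idea (not the actual UEC transport, and `selective_retransmit` is a hypothetical helper), the receiver tracks per-packet sequence numbers and requests only the gaps, rather than rolling back the whole window as go-back-N schemes do:

```python
def selective_retransmit(sent, received_ok):
    """Return only the sequence numbers that were lost, so the sender
    resends just those packets instead of the entire window."""
    return sorted(set(sent) - set(received_ok))

window = list(range(100, 110))                      # sequence numbers in flight
arrived = [100, 101, 103, 104, 106, 107, 108, 109]  # 102 and 105 were dropped
print(selective_retransmit(window, arrived))        # -> [102, 105]
```

Paired with out-of-order delivery directly to XPU memory, this keeps a single dropped packet from stalling an entire multipath window.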
Looking Ahead
Broadcom is not just competing in the AI infrastructure market; they are attempting to define the dominant standard. This torrent of highly advanced silicon - the 102.4 Tbps Tomahawk 6, the 800G Thor Ultra, and the CPO advancements - demonstrates a coordinated effort to address every single aspect of the AI networking stack simultaneously. This is about architectural completeness.
The key trend that we are going to be looking out for is the adoption rate of the CPO switches. If the TH6-Davisson delivers on the promise of 70% power savings compared to traditional pluggable optics at scale, it will quickly become a necessary technology, not just a niche high-end offering. Hyper-scale operators cannot ignore that level of efficiency when building 10+ gigawatt AI data centers. CPO is a tectonic shift.
When you look at the market as a whole, the announcement is a direct, formidable challenge to the incumbent specialized solutions, chiefly NVIDIA’s InfiniBand and Spectrum-X Ethernet portfolios. While NVIDIA retains a strong position due to its CUDA ecosystem and integrated stack, Broadcom is relentlessly driving performance and features into the open Ethernet standard that were once exclusive to proprietary fabrics.
Our perspective is that the success of the UEC is intrinsically linked to the deployment of silicon like Tomahawk 6 and Thor Ultra. The technical specifications released prove that Ethernet is now more than capable of handling the most demanding AI workloads. Going forward, we are going to be closely monitoring how the company performs on customer traction and volume deployments of CPO solutions in future quarters. The financial results from hyperscalers’ networking spend will tell the real story of adoption.
In our view, Broadcom's strategy for improving its competitiveness and ecosystem influence over the next 12 months must focus on rapidly driving CPO deployment and solidifying the UEC as the industry standard. The technical superiority of the Tomahawk 6 (TH6) and Thor Ultra, especially when combined with CPO's reported 70% power savings, can give Broadcom a decisive advantage in the energy-conscious hyperscale market. Broadcom needs to convert its high-profile customer traction (like the initial success with Meta's CPO testing) into massive volume deployments and publicly available financial proof-points.
This involves accelerating the CPO supply chain, ensuring hardware and software integration with major AI frameworks, and actively collaborating with UEC members to produce certified, interoperable products. By proving CPO's reliability, scale, and power-efficiency through concrete, large-scale results, Broadcom can force hyper-scalers to adopt its technology as an economic necessity, creating a powerful, open Ethernet alternative to NVIDIA's proprietary stacks.
To directly challenge NVIDIA's integrated ecosystem, we believe Broadcom must strategically leverage the UEC as the open-standard counter-narrative. Over the next year, Broadcom should push for the UEC specification to be rapidly adopted across the entire AI networking stack - not just in switches (Tomahawk 6), but in NICs (Thor Ultra) and third-party software, actively showcasing multi-vendor interoperability. This open approach provides customers with a critical supply chain and cost alternative to NVIDIA's lock-in, which Broadcom can aggressively market.
Furthermore, Broadcom should not only drive raw silicon performance but also aggressively build out the software and developer tools required to make their Ethernet-based fabrics as easy to deploy, manage, and optimize for AI workloads as NVIDIA's CUDA-backed solutions. The success of this strategy hinges on translating technical specifications into measurable customer value, effectively commoditizing high-performance AI networking under the open Ethernet banner.
Ron Westfall | VP and Practice Leader for Infrastructure and Networking
Ron Westfall is a prominent analyst figure in technology and business transformation. Recognized as a Top 20 Analyst by AR Insights and a Tech Target contributor, his insights are featured in major media such as CNBC, Schwab Network, and NMG Media.
His expertise covers transformative fields such as Hybrid Cloud, AI Networking, Security Infrastructure, Edge Cloud Computing, Wireline/Wireless Connectivity, and 5G-IoT. Ron bridges the gap between C-suite strategic goals and the practical needs of end users and partners, driving technology ROI for leading organizations.
Steven Dickens | CEO HyperFRAME Research
Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the CEO and Principal Analyst at HyperFRAME Research.
Ranked consistently among the Top 10 Analysts by AR Insights and a contributor to Forbes, Steven's expert perspectives are sought after by tier one media outlets such as The Wall Street Journal and CNBC, and he is a regular on TV networks including the Schwab Network and Bloomberg.