Research Notes

Cisco’s New Chip, Massive-scale AI Networking for the Enterprise?


Silicon One G300 targets gigawatt-scale AI clusters, liquid cooling delivers claimed 70% efficiency gains, Nexus One unifies operations as Cisco bets its full stack against Broadcom's approach

2/11/2026

Key Highlights

  • Cisco unveiled the Silicon One G300, a 102.4 Tbps switching ASIC that anchors the company's vertical-stack approach to massive AI clusters across training, inference, and agentic workloads. Its Intelligent Collective Networking capability is aimed at delivering 28% better GPU job completion time and up to 33% higher overall network utilization.

  • New G300-powered N8000 and N9000 systems feature 100% liquid-cooled designs that, combined with advanced optics, aim to improve energy efficiency by nearly 70% compared to equivalent air-cooled configurations.

  • Cisco introduced 1.6T OSFP optics and 800G Linear Pluggable Optics (LPO) designed to reduce optical module power consumption by 50%, which positions the company across the full silicon-to-optics stack.

  • The announcement signals Cisco's continued evolution from networking vendor to full-stack AI infrastructure provider, arriving just as Ethernet overtakes InfiniBand in AI back-end networking and as the datacenter Ethernet switch market grows 62% year-over-year (Q3 2025, per IDC).

  • Cisco's vertically integrated approach (proprietary silicon + systems + optics + software) represents a contrarian bet against the silicon + white box model that has powered Broadcom's Tomahawk dominance, setting up the race for captive silicon versus multi-vendor ecosystems at hyperscale.

The News

Cisco announced the Silicon One G300 at Cisco Live EMEA in Amsterdam: a 102.4 Tbps switching ASIC architected to power massive, distributed AI clusters across training, inference, and agentic workloads. The G300 powers the new Cisco N8000 and N9000 series systems, which feature 100% liquid-cooled designs and 1.6T OSFP optics, with the company claiming a nearly 70% energy efficiency improvement over equivalent prior-generation air-cooled configurations. Cisco also advanced its Nexus One platform with a unified management plane and AgenticOps capabilities, aiming to simplify AI fabric operations across hyperscaler, neocloud, sovereign cloud, and enterprise deployments. Full press release here.

Analyst Take

Cisco's G300 launch is more than a silicon refresh; it is a positioning statement and a bid to reset the market paradigm. With the G300, the company is declaring that in the AI era the network is not a commodity pipe connecting GPUs; it is the compute fabric itself. That framing matters because it challenges the prevailing hyperscaler assumption that merchant silicon from Broadcom, combined with white-box switches, provides the optimal and most economic path to AI cluster scaling. There is no 'easy button' in engineering at this scale, but Cisco is betting that enterprises and neoclouds would rather build a customer base than build network infrastructure.

Here is the contrarian observation worth considering: Cisco is making a vertically integrated bet in a market that has been sprinting toward disaggregation for a decade. While Broadcom's Tomahawk 6 already ships at 102.4 Tbps with broad ecosystem adoption across Arista, white-box OEMs, and hyperscaler custom builds, Cisco is arguing that owning the full stack (silicon, systems, optics, software, management) delivers a differentiated outcome worth the potential lock-in. This may be a tough sell against the existing paradigm, but it mirrors the full-stack arguments NVIDIA and AMD make, now applied to networking infrastructure. The question is whether enterprise and neocloud buyers will value that integration premium, and whether the hyperscalers, which design their own optimized silicon precisely to avoid a multi-vendor patchwork, will reconsider their stance.

What Was Announced

The Silicon One G300 is a 102.4 Tbps switching ASIC that introduces what Cisco calls Intelligent Collective Networking: a combination of a fully shared packet buffer, path-based load balancing, and proactive network telemetry. The fully shared buffer is architecturally significant because AI training workloads generate synchronized microbursts that can overwhelm traditional per-port buffer designs. The company claims the new chip achieves 2.5x greater burst absorption than prior architectures, a feature designed to prevent the packet drops that stall GPU training jobs and waste expensive compute cycles.

The new processor is the heart of both the N8000 (targeting service providers and hyperscalers running IOS XR) and N9000 (targeting enterprises running NX-OS and ACI) product families, both delivering 102.4 Tbps of switching capacity. Common to both is ground-up liquid cooling, standard rather than optional. Cisco has built a liquid-cooled chassis aimed at consolidating six prior-generation 51.2T air-cooled systems into a single platform, targeting a nearly 70% improvement in energy efficiency. That goal is laudable, but it assumes optimal deployment and will need to be proven out in field metrics after launch. The design philosophy aligns with the reality that the current push toward enormous GW-scale AI clusters faces power, not bandwidth, as its binding constraint.
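As a rough sanity check on what a "nearly 70%" efficiency gain would mean in practice, the sketch below walks through the arithmetic under a hypothetical per-system power figure (the 20 kW assumption is ours for illustration, not a Cisco specification):

```python
# Back-of-envelope only: the 20 kW per-system figure is a hypothetical
# assumption for illustration, not a published Cisco specification.
legacy_tbps = 6 * 51.2        # six prior-gen 51.2T air-cooled systems
legacy_kw = 6 * 20.0          # assumed 20 kW per air-cooled system
legacy_eff = legacy_tbps / legacy_kw   # throughput per kW (Tbps/kW)

# Reading "nearly 70% better efficiency" as ~1.7x the Tbps delivered per kW:
new_eff = legacy_eff * 1.7
new_kw = legacy_tbps / new_eff         # power to deliver the same throughput

print(f"legacy efficiency: {legacy_eff:.2f} Tbps/kW")
print(f"claimed efficiency: {new_eff:.2f} Tbps/kW")
print(f"power reduction at equal throughput: {1 - new_kw / legacy_kw:.0%}")
```

Note that the interpretation matters: a 70% gain in Tbps per kW corresponds to roughly a 41% reduction in power at equal throughput, not a 70% power cut, which is one more reason field measurements of the shipping systems will be worth watching.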

On the optics front, Cisco introduced 1.6T OSFP modules for scale-out connectivity and 800G Linear Pluggable Optics (LPO) that aim to reduce optical module power by 50% compared to retimed modules, for an overall switch power reduction of 30% when deployed with the new systems. The Nexus One management platform was enhanced with AgenticOps for data center networking through AI Canvas, delivering guided, human-in-the-loop troubleshooting and native Splunk integration (arriving March 2026, likely longer for full functional crossover) for in-place network telemetry analysis. The G300, systems, and optics are expected to ship in the second half of 2026.
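The headline numbers hang together in a simple way. The sketch below shows the port radix a 102.4 Tbps ASIC implies at these optics speeds, plus what the paired 50%-module / 30%-switch power claims would suggest about the optics' share of switch power (that share is our inference, not a Cisco-published figure):

```python
# Port radix implied by a 102.4 Tbps ASIC front-ended with the new optics.
asic_tbps = 102.4
ports_1p6t = round(asic_tbps / 1.6)   # count of 1.6T OSFP ports
ports_800g = round(asic_tbps / 0.8)   # count of 800G LPO ports

# If halving module power (the LPO claim) cuts total switch power by 30%,
# optics would account for roughly 0.30 / 0.50 = 60% of switch power in
# that configuration -- our inference, assuming all savings come from optics.
optics_share = 0.30 / 0.50

print(f"{ports_1p6t} x 1.6T ports or {ports_800g} x 800G ports")
print(f"implied optics share of switch power: {optics_share:.0%}")
```

If that inference holds, it underlines why LPO matters: at these speeds the pluggable optics, not the switching ASIC, dominate the power budget.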

Market Analysis

Cisco launches the G300 into a datacenter Ethernet switch market that is experiencing extraordinary momentum. The datacenter portion of the Ethernet switch market grew 62% year-over-year in Q3 2025, according to IDC, with 800GbE switch revenues surging 91.6% sequentially. More importantly, 2025 marked the year Ethernet overtook InfiniBand in AI back-end networking, a structural shift that validates the broader market Cisco is targeting. Dell'Oro Group projects 2026 will mark the first year of volume 1.6 Tbps switch deployments, with the ramp expected to be even faster than the 800 Gbps transition.

The competitive landscape, however, is intensely contested. Broadcom's Tomahawk 6 has been shipping since mid-2025 at the same 102.4 Tbps that Cisco and NVIDIA are claiming for 2H 2026. Industry estimates indicate Broadcom's share at 70-85% based on shipment data, but NVIDIA's Spectrum-X is gaining traction in Ethernet AI fabrics, potentially eroding that dominance faster than expected in 2026. Broadcom has also introduced Tomahawk Ultra for scale-up workloads and the Thor Ultra 800G NIC, building an increasingly comprehensive AI networking portfolio. NVIDIA's Spectrum-X platform continues to advance, with the Spectrum-X1600 at a claimed 102.4 Tbps expected in H2 2026. Arista's 7800R4 modular spine systems, powered by Broadcom Jericho3-AI, delivered 460 Tbps system throughput in late 2025.

What makes Cisco's positioning distinctive is the explicit targeting of the next wave of AI infrastructure buyers beyond the hyperscalers. As Nick Kucharewski, Cisco's SVP, noted, the company sees the next phase of investment coming from enterprises, neoclouds, and sovereign clouds that lack the engineering resources to assemble best-of-breed white-box solutions. The customer quotes from du Tech, Sharon AI, and Cirrascale Cloud Services reinforce this thesis, pointing to organizations that value validated, turnkey infrastructure over component-level optimization. This is a deliberate market segmentation play: concede the hyperscale merchant silicon market to Broadcom (for now) while owning the integrated solution for everyone else.

The ecosystem support from AMD, Intel, NVIDIA, DDN, NetApp, and VAST further reinforces Cisco's bet that the AI networking opportunity extends well beyond switching silicon into a broader infrastructure platform sale. In our opinion, back-end scale-out networks will be a key driver pushing the Ethernet datacenter switch market to greater heights, with the market set to more than double by decade's end. The total addressable market is expanding so rapidly that there is room for multiple architectural approaches to succeed.

Looking Ahead

Jeetu Patel, in his Cisco Live EMEA keynote, was keen to stress that Cisco is a full-stack company aligned with the AI opportunity. Positioning Cisco as a semiconductor company is consistent with the wider corporate narrative and aligns Cisco with the tailwinds powering the valuations of Broadcom, Qualcomm, and other semiconductor companies.

Based on what we are seeing in the broader market, the most consequential question for AI networking in 2026 is not who has the fastest silicon, since Cisco, Broadcom, and eventually NVIDIA all converge at 102.4 Tbps, but who owns the operational stack that makes these clusters actually run. Just as important is whether enterprise and neocloud buyers, who already have multiple choices, will opt for an integrated or a multi-vendor stack. Cisco is betting that the answer lies in vertical integration through Nexus One and AgenticOps, essentially arguing that managing a 100,000-GPU cluster is harder than wiring one.

We will be tracking whether Cisco's second-half 2026 shipping timeline allows it to capture the enterprise and neocloud wave before the Broadcom ecosystem becomes entrenched at those tiers. Any delay will give Broadcom more time to extend adoption and NVIDIA more time to press its full-stack position. The real test will come when sovereign cloud operators in the Middle East, Europe, and Asia, where Cisco's enterprise relationships run deep, begin making their AI fabric procurement decisions. Power efficiency will be key: if Cisco's 70% energy improvement claim proves out in the wild, it translates directly into the operational cost savings that enterprise CFOs prioritize during procurement, a different mindset from hyperscaler engineers.

Author Information

Steven Dickens | CEO HyperFRAME Research

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the CEO and Principal Analyst at HyperFRAME Research.
Ranked consistently among the Top 10 Analysts by AR Insights and a contributor to Forbes, Steven's expert perspectives are sought after by tier one media outlets such as The Wall Street Journal and CNBC, and he is a regular on TV networks including the Schwab Network and Bloomberg.

Author Information

Stephen Sopko | Analyst-in-Residence – Semiconductors & Deep Tech

Stephen Sopko is an Analyst-in-Residence specializing in semiconductors and the deep technologies powering today’s innovation ecosystem. With decades of executive experience spanning Fortune 100, government, and startups, he provides actionable insights by connecting market trends and cutting-edge technologies to business outcomes.

Stephen’s expertise in analyzing the entire buyer’s journey, from technology acquisition to implementation, was refined during his tenure as co-founder and COO of Palisade Compliance, where he helped Fortune 500 clients optimize technology investments. His ability to identify opportunities at the intersection of semiconductors, emerging technologies, and enterprise needs makes him a sought-after advisor to stakeholders navigating complex decisions.