Research Notes

NVIDIA’s AI Gold Rush: From Picks & Shovels to Owning the Oregon Trail?

Record Q3 revenue shows extraordinary demand, but the evaporation of China data center revenue and overreliance on a handful of mega-buyers make broad enterprise AI adoption the next critical test

22/11/2025

By the Numbers:

  • Revenue: $57.01B (+62% YoY, +22% QoQ)

  • EPS: $1.30 (beating consensus by roughly 4–5 cents)

  • Data Center Revenue: $51.2B (+66% YoY)

  • Gross Margin: 73.6% (non-GAAP)

  • Net Income: $31.91B (+65% YoY)

  • Q4 Guidance: $65B (+14% QoQ)
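
The growth figures above can be cross-checked with simple arithmetic. A minimal sketch, where the prior-period values are back-solved from the reported growth rates rather than taken from NVIDIA's filings:

```python
# Sanity-check the headline growth rates. Prior-period figures are
# illustrative: they are implied by the reported growth rates, not
# pulled from NVIDIA's actual filings.

def growth_pct(current: float, prior: float) -> float:
    """Percentage growth from the prior period to the current one."""
    return (current / prior - 1) * 100

q3_revenue = 57.01               # $B, reported Q3 FY26 revenue
q2_implied = q3_revenue / 1.22   # implied by the stated +22% QoQ
q4_guide = 65.0                  # $B, Q4 guidance

print(f"Implied Q2 revenue: ${q2_implied:.1f}B")
print(f"Guided Q4 QoQ growth: {growth_pct(q4_guide, q3_revenue):.1f}%")
```

Running this confirms the roughly 14% sequential growth implied by the $65 billion Q4 guide.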

Key Highlights:

  • Blackwell sales are "off the charts" and cloud GPUs are sold out, with unprecedented demand across all segments

  • GB300 now represents roughly two-thirds of Blackwell revenue, underscoring how quickly NVIDIA is transitioning customers to the latest rack-scale NVL72 systems

  • Sizable H20 purchase orders never materialized in the quarter due to geopolitical issues and the increasingly competitive market in China

  • Company has visibility into half a trillion dollars of Blackwell and Rubin revenue from the start of this year through the end of calendar 2026

  • Networking revenue more than doubled year-over-year and reached a new quarterly record of $8.2B, becoming a major growth driver

The News

NVIDIA reported record third-quarter fiscal 2026 results on 19 November 2025, with revenue of $57.0 billion beating consensus estimates and marking a continued acceleration in sequential growth. Revenue rose 22% from the previous quarter and 62% from a year ago, with data center revenue contributing the vast majority at $51.2 billion. Management guided Q4 revenue to $65 billion, well above Wall Street expectations, signaling continued momentum in the AI infrastructure buildout. Read more here: https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-third-quarter-fiscal-2026

Analyst Take

NVIDIA's third-quarter results show the company continuing to successfully navigate the tension between unprecedented demand and emerging structural challenges. The numbers themselves border on the absurd - a $10 billion sequential revenue increase that would represent the entire annual revenue of most Fortune 500 companies.

Yet beneath the triumphant business press headlines exists a more nuanced narrative. Jensen Huang described three fundamental platform shifts as core drivers of multi-year infrastructure investment: CPU to GPU acceleration, the mainstreaming of generative AI, and the emergence of agentic AI. This framework isn't simple marketing fluff; it's a sophisticated articulation of why AI infrastructure spending isn't a bubble but a fundamental computing architecture transition.

The evidence supporting this thesis is compelling. Analyst expectations for aggregate 2026 CapEx among the top CSPs and hyperscalers have continued to climb and now sit at roughly $600 billion, more than $200 billion higher than at the start of the year. Meta has disclosed that its GEM foundation model is delivering more than a 5% increase in ad conversions on Instagram and a 3% gain on Facebook feed, providing tangible ROI that justifies the massive spending.

What Was Announced

The technical achievements announced during the quarter reinforce NVIDIA's architectural advantage. In the latest MLPerf training results, Blackwell Ultra delivered a 5x faster time to train than Hopper. More impressively, on DeepSeek-R1, Blackwell delivered 10x higher performance per watt and 10x lower cost per token versus the H200.

The company's strategic partnerships expanded dramatically. Anthropic is adopting NVIDIA's platform and establishing a deep technology partnership to support its rapid growth, with an initial compute commitment of up to one gigawatt of capacity. Combined with the OpenAI partnership targeting at least 10 gigawatts of AI data centers, NVIDIA is positioned to capture value across competing AI platforms.

The networking business transformation deserves special attention. At $8.2 billion in quarterly revenue, it is no longer ancillary to the full-stack story; it is a major business in its own right. NVIDIA is winning in data center networking, as the majority of AI deployments now include its switches, with Ethernet GPU attach rates roughly on par with InfiniBand. This positions NVIDIA as the only vendor capable of delivering complete rack-scale AI systems from a single source.

But three critical challenges threaten to complicate the growth narrative.

First, the China situation has deteriorated from challenging to catastrophic. Management stated, "we are not assuming any data center compute revenue from China" in the Q4 outlook. This isn't just a temporary setback - it's the potential permanent loss of what was once NVIDIA's second-largest market. The geopolitical reality is forcing Chinese companies to accelerate domestic alternatives to NVIDIA, creating future competitors while eliminating current revenue.

Second, margin pressure is building. Colette Kress stated, "...input costs are on the rise but we are working to hold gross margins in the mid-seventies over fiscal 2027." The shift from selling chips to delivering complete rack-scale systems increases complexity and cost. While performance per dollar improves for customers, NVIDIA's margins face structural headwinds.

Third, customer concentration risk is extreme. This past quarter, the company announced AI factory and infrastructure projects amounting to an aggregate of 5 million GPUs, but these are concentrated among a handful of hyperscalers and sovereign wealth funds. Customers such as AWS and HUMAIN announced plans to deploy up to 150,000 AI accelerators, with xAI and HUMAIN co-developing a flagship 500 megawatt data center. When your growth depends on perhaps a dozen customers, any hesitation or hiccup creates outsized impact.

Jensen Huang's response to ongoing ASIC competition was particularly revealing. He outlined five strategic moats for the company and its broad product spectrum: acceleration across all computing transitions, excellence at every AI phase, the ability to run every AI model, presence in every cloud, and diverse offtake capabilities. This isn't arrogance - arrogance is an exaggerated sense of one's own capabilities, and the company keeps delivering. Rather, it is a realistic assessment that competing with NVIDIA requires not just better, faster, more efficient chips but an entire ecosystem.

The Rubin platform update provides confidence in continued innovation. NVIDIA has received silicon back from supply chain partners and teams across the world are executing the bring-up beautifully. This ability to maintain an annual cadence of x-factor performance improvements while ensuring backward compatibility represents a formidable competitive advantage.

Perhaps most intriguing was the discussion of inference scaling. Three scaling laws are now operating simultaneously: pre-training, which continues to be very effective; post-training; and inference-time scaling. This suggests the computational requirements for AI are still in early exponential growth phases.

Looking Ahead

Based on what we are observing, NVIDIA has successfully transitioned from possibility to inevitability in the AI infrastructure market. The $500 billion backlog provides exceptional visibility, but it's increasingly clear this is a floor, not a ceiling.

The key trend that we are tracking is the broadening of AI adoption beyond hyperscalers. Organizations broadly, including enterprises and operators, are leveraging AI to boost productivity, increase efficiency, and reduce cost. Examples range from RBC using agentic AI to drive significant analyst productivity, slashing report generation time from hours to minutes, to Salesforce's engineering team seeing at least a 30% productivity increase in new code development after adopting Cursor.

Going forward, we are watching how the company performs on geographic diversification post-China, whether the 162% networking growth is sustainable, and whether enterprise adoption can offset any moderation in hyperscaler spending.

Based on our analysis of the market, NVIDIA's true competitive advantage isn't hardware superiority but ecosystem lock-in. Thanks to CUDA, the A100 GPUs shipped six years ago are still running at full utilization today. This software-driven longevity creates switching costs that pure hardware competitors cannot overcome.

When you look at the market as a whole, the announced figures confirm we're witnessing a once-in-a-generation platform shift comparable to the internet's emergence. NVIDIA has moved way beyond selling picks and shovels for a gold rush - they're positioned to own the entire Oregon Trail.

HyperFRAME will be closely monitoring how the company navigates the US-China technology decoupling, whether Rubin can maintain performance leadership against increasingly sophisticated competition, and if the transition from chip vendor to infrastructure provider enhances or erodes long-term margins. The next two quarters will determine whether NVIDIA's massive backlog represents peak visibility or merely the beginning of a multi-trillion dollar opportunity.

We believe NVIDIA can augment its overall competitiveness and AI ecosystem influence over the next 12 months by executing a multi-pronged strategy focused on supply, software, and market expansion. Building on its record $51.2 billion data center performance, the company must aggressively maximize the production and supply of its latest AI GPUs (Blackwell today, Rubin next) and high-speed networking components to fully capitalize on the continued infrastructure buildout and prevent rivals or custom ASICs from capturing market share due to scarcity. Moreover, it must relentlessly invest in and deepen the CUDA software ecosystem and its AI frameworks, making it technically and economically prohibitive for developers to transition off the platform, thereby solidifying its AI ecosystem influence as the industry gold standard. Finally, to ensure diversified, long-term growth, NVIDIA should strategically segment its focus, aggressively targeting high-growth verticals such as sovereign AI clouds, enterprise AI adoption, telco 6G, and AI at the edge and in robotics, while maintaining a commanding performance-per-watt lead over all competitors.

Author Information

Stephen Sopko | Analyst-in-Residence – Semiconductors & Deep Tech

Stephen Sopko is an Analyst-in-Residence specializing in semiconductors and the deep technologies powering today’s innovation ecosystem. With decades of executive experience spanning Fortune 100, government, and startups, he provides actionable insights by connecting market trends and cutting-edge technologies to business outcomes.

Stephen’s expertise in analyzing the entire buyer’s journey, from technology acquisition to implementation, was refined during his tenure as co-founder and COO of Palisade Compliance, where he helped Fortune 500 clients optimize technology investments. His ability to identify opportunities at the intersection of semiconductors, emerging technologies, and enterprise needs makes him a sought-after advisor to stakeholders navigating complex decisions.

Author Information

Ron Westfall | VP and Practice Leader for Infrastructure and Networking

Ron Westfall is a prominent analyst figure in technology and business transformation. Recognized as a Top 20 Analyst by AR Insights and a Tech Target contributor, his insights are featured in major media such as CNBC, Schwab Network, and NMG Media.

His expertise covers transformative fields such as Hybrid Cloud, AI Networking, Security Infrastructure, Edge Cloud Computing, Wireline/Wireless Connectivity, and 5G-IoT. Ron bridges the gap between C-suite strategic goals and the practical needs of end users and partners, driving technology ROI for leading organizations.