Vultr and AMD: Is the Real AI Race About Megawatts, Not Petaflops?
Vultr commits $1B to a 50 MW Ohio data center housing 24,000 MI355X GPUs, giving AMD a key neocloud distribution channel outside the hyperscalers.
12/04/2025
Key Highlights:
- Vultr announced a $1 billion investment to deploy an AI supercluster with 24,000 AMD Instinct MI355X GPUs inside a new 50 MW data center in Springfield, Ohio.
- The expansion sustains Vultr's position as the largest privately held cloud infrastructure company, with service pricing that CEO J.J. Kardwell describes as extremely competitive versus hyperscalers.
- AMD's MI355X architecture delivers 288 GB of HBM3e memory per GPU, 50% more capacity than NVIDIA's B200, enabling deployment of the full Llama 3.1 405B model on a single 8-GPU node.
- Vultr has also committed to the next-generation AMD Instinct MI450 series and Helios rack-scale infrastructure, demonstrating long-term hardware roadmap alignment.
- This partnership looks to be AMD's most significant non-hyperscaler channel win, extending the search for alternatives driven by the compute scarcity that neoclouds have exploited since 2024.
The News
Vultr, the world's largest privately held cloud infrastructure company, today announced a $1 billion investment to expand its strategic collaboration with AMD, architected to meet accelerating global demand for AI compute capacity. The initiative involves deploying an AI supercluster featuring 24,000 AMD Instinct MI355X GPUs at a new 50 MW data center in Springfield, Ohio, designed to deliver what Vultr describes as unprecedented performance per dollar for AI training and inference workloads. The cluster is expected to go online by early 2026, with Vultr CEO J.J. Kardwell indicating the company expects capacity to be sold before it goes live.
Source: "Vultr and AMD Expand Collaboration to Drive Global AI Innovation and Scale"
Analyst Take
When I examine this announcement through the lens of a technology executive selling services at scale, I observe something more significant than a capacity expansion. This is AMD securing a high-volume distribution channel that operates entirely outside the traditional hyperscaler procurement framework. While the industry is evolving away from its legacy obsession with peak floating-point operations per second, the actual constraint facing enterprise AI deployment is increasingly the availability of reliable, affordable, and geographically dispersed power and cooling infrastructure. The 50 MW commitment in Ohio implicitly acknowledges this reality. Power is the bottleneck. Not silicon.
What Was Announced
Vultr is targeting a specific, underserved market segment: organizations requiring massive AI compute at a lower total cost of ownership than the primary hyperscalers can deliver. The core of this announcement is the integration of 24,000 AMD Instinct MI355X GPUs into Vultr's global platform. This hardware is architected to provide competitive performance for both training and, critically, inference workloads. The MI355X is designed around 288 GB of HBM3e memory capacity per accelerator, a specification I consider pivotal for large-model fine-tuning and expansive inference tasks, since it avoids the complexity of partitioning models across multiple nodes. Additionally, Vultr's commitment to adopting the next-generation AMD Instinct MI450 series may indicate the company's long-term alignment with AMD for hardware roadmap continuity. Integrating AMD's Helios rack-scale infrastructure should simplify deployment and management, accelerating time-to-market for proprietary models built by Vultr's enterprise customers. This expansion reinforces Vultr's position as a provider of full-stack AMD infrastructure, extending beyond GPUs to include AMD EPYC processors and its VX1 Cloud Compute offerings.
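The single-node claim is easy to sanity-check with back-of-envelope arithmetic. The sketch below is illustrative only: the 288 GB figure comes from the article, while the precision choices and the dense-model approximation (one parameter ≈ one byte at FP8, two at FP16, ignoring KV cache and activation overhead) are my assumptions, not vendor-published sizing guidance.

```python
# Back-of-envelope check: can an 8-GPU node with 288 GB of HBM3e per
# accelerator hold the Llama 3.1 405B weights without cross-node
# partitioning? All figures below are rough, illustrative assumptions.

def model_memory_gb(params_b: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB for a dense model
    (1B parameters at 1 byte/param is roughly 1 GB)."""
    return params_b * bytes_per_param

GPUS_PER_NODE = 8
HBM_PER_GPU_GB = 288                               # MI355X spec cited above
node_capacity = GPUS_PER_NODE * HBM_PER_GPU_GB     # 2304 GB aggregate

for precision, bytes_pp in [("FP16", 2.0), ("FP8", 1.0)]:
    weights = model_memory_gb(405, bytes_pp)       # Llama 3.1 405B
    headroom = node_capacity - weights             # for KV cache, activations
    print(f"{precision}: weights ~{weights:.0f} GB, "
          f"headroom ~{headroom:.0f} GB of {node_capacity} GB")
# FP16: weights ~810 GB, headroom ~1494 GB of 2304 GB
# FP8:  weights ~405 GB, headroom ~1899 GB of 2304 GB
```

Even at FP16, the 810 GB of weights fits comfortably within the node's 2,304 GB aggregate, which is why the 288 GB-per-GPU figure matters for inference economics: the remaining headroom is what serves KV cache at high concurrency.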
Market Analysis
The context for this partnership resides in explosive demand for high-performance computing capacity, driven by the shift from AI experimentation to operational scale. According to McKinsey's November 2025 analysis, more than 100 neoclouds now exist globally, with these specialized providers pricing GPUs as much as 85% less than hyperscalers do. Neoclouds emerged to fill the gap left when hyperscalers faced difficulties meeting rising GPU demand at the start of 2024, when wait times for premium accelerators stretched from weeks to months. Vultr CEO J.J. Kardwell stated that its cloud infrastructure services are typically priced at half of what hyperscalers charge. The MI355X's demonstrated strength in Llama inference tasks, where analyst testing showed meaningful throughput advantages over NVIDIA's B200 at high concurrency, makes this capacity immediately attractive to enterprises seeking alternatives to vendor lock-in. Companies like Vultr that prioritize AMD's open-source ROCm software stack are architecting a competitive moat by offering more open, composable infrastructure solutions. The investment in a 50 MW campus in Ohio reflects what McKinsey describes as an environment where training and inference workload demand will continue to accelerate through the end of the decade, with infrastructure supply presenting the main bottleneck. For an enterprise executive focused on capital efficiency and supply chain risk mitigation, the Vultr-AMD alignment offers a compelling alternative supply chain for AI compute, injecting competition into a heavily centralized landscape.
Looking Ahead
Based on what I am observing, the sustainability of the neocloud model will hinge on whether chip supply normalization erodes the pricing advantages these providers currently enjoy. Vultr benefits today from filling the compute scarcity left by hyperscalers, but its long-term success depends on the maturity of AMD's full-stack offering, particularly the ROCm software ecosystem, which must counter NVIDIA's CUDA moat. Going forward, I will be tracking whether ROCm adoption accelerates among enterprise developers and whether the MI450 series can deliver on its performance claims in real-world, large-scale deployments. The announced supercluster will be a litmus test of whether a competitive, open-source AI infrastructure ecosystem is viable. If Vultr can maintain its superior price-to-performance ratio while avoiding the traditional cloud rigidities that frustrate enterprise customers, it stands to capture meaningful market share among high-growth AI innovators and cost-sensitive organizations seeking alternatives to hyperscaler dependency.
Stephen Sopko | Analyst-in-Residence – Semiconductors & Deep Tech
Stephen Sopko is an Analyst-in-Residence specializing in semiconductors and the deep technologies powering today’s innovation ecosystem. With decades of executive experience spanning Fortune 100, government, and startups, he provides actionable insights by connecting market trends and cutting-edge technologies to business outcomes.
Stephen’s expertise in analyzing the entire buyer’s journey, from technology acquisition to implementation, was refined during his tenure as co-founder and COO of Palisade Compliance, where he helped Fortune 500 clients optimize technology investments. His ability to identify opportunities at the intersection of semiconductors, emerging technologies, and enterprise needs makes him a sought-after advisor to stakeholders navigating complex decisions.