Vultr and NetApp: What If AI Compute Could Use Your Existing Datasets With No Migration?
A validated design using ONTAP, SnapMirror, and AMD GPUs gives enterprises a low-disruption path to cloud AI by bringing compute to the data they already manage
12/11/2025
Key Highlights
- Enterprises can now use existing ONTAP datasets directly with cloud GPU compute through SnapMirror replication into ONTAP Select running in Vultr Cloud.
- The architecture provides a secure, governed, read-ready data environment for AI without migrating or restructuring datasets.
- AMD GPU clusters in Vultr gain immediate NFS/SMB access to replicated volumes, enabling AI and analytics pipelines to start faster and with less operational overhead.
- This approach extends the customer’s current data estate rather than replacing it, offering a practical path to cloud AI adoption.
The News
Vultr, NetApp, and AMD introduced a jointly validated hybrid-cloud architecture that allows enterprises to run AI workloads in the cloud using the governed, production-grade datasets they already manage on-premises. SnapMirror replicates data from existing ONTAP systems into ONTAP Select running inside Vultr Cloud, where AMD GPU nodes can access it immediately through standard NFS or SMB mounts. Connectivity is secured through an IPsec VPN and dedicated intercluster interfaces, and ONTAP Select preserves the same governance, snapshots, and access controls as on-prem deployments. This gives organizations a practical, low-disruption path to cloud AI by bringing compute to their data rather than forcing data migration or architectural change. For more information, read the Vultr blog.
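To ground what "immediate access" means in practice: once the replicated volume is mounted over NFS on a GPU node, training code reads it like any local filesystem. The sketch below is a minimal illustration assuming a PyTorch environment; the mount point, export path, and file layout are hypothetical placeholders, and the validated design does not prescribe any particular framework.

```python
# Minimal sketch: consume a replicated ONTAP volume from a GPU node.
# Assumes the NFS export has already been mounted, e.g.:
#   mount -t nfs <ontap-select-data-lif>:/ai_dataset /mnt/ai_dataset
# Paths and file layout below are hypothetical.
from pathlib import Path

import torch
from torch.utils.data import DataLoader, Dataset

DATA_ROOT = Path("/mnt/ai_dataset")  # read-ready SnapMirror destination


class ReplicatedVolumeDataset(Dataset):
    """Reads training samples directly from the NFS-mounted ONTAP volume."""

    def __init__(self, root: Path):
        # No copy or restructuring step: files are used in their existing layout.
        self.files = sorted(root.glob("**/*.pt"))

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int):
        # Each sample is pulled over NFS on demand.
        return torch.load(self.files[idx])


loader = DataLoader(ReplicatedVolumeDataset(DATA_ROOT), batch_size=32, num_workers=8)
for batch in loader:
    ...  # hand batches to the AMD GPU training loop (e.g., ROCm-built PyTorch)
```

The point of the sketch is that the data path is an ordinary filesystem: no object-storage rewrite, no format conversion, no new data model for practitioners to learn.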
Analyst Take
Our view, shaped by nearly every major briefing we have taken this year, is that the data layer, not GPUs or models, determines the success of AI projects. Compute availability will continue to expand across regions and providers, and models will increasingly be consumed rather than built. What does not change is the requirement for clean, consistent, policy-aligned data delivered to the right place at the right time. This is where most organizations struggle today, and it is where the architecture announced here offers meaningful relief.
The most important outcome of this announcement is how directly it addresses the enterprise data bottleneck. Organizations want to take advantage of cloud GPUs, yet their most valuable datasets remain governed, curated, and protected on-prem. Moving that data is slow, costly, and often unrealistic. By extending the ONTAP environment into Vultr using SnapMirror and ONTAP Select, enterprises gain a straightforward way to run cloud AI on trusted datasets without replatforming or changing familiar governance practices.
The real customer benefit is operational continuity. Teams do not need to learn new data models or convert workloads to object storage. Instead, the cloud-side presentation of the data behaves just like ONTAP on-prem, preserving access controls, retention rules, and semantics. This reduces friction for both IT and AI practitioners, giving organizations a predictable and low-risk path to begin accelerating AI initiatives.
In the broader market, we see multiple prevailing approaches to solving the AI data challenge. For example, a new global data platform designed for AI performance requires customers to adopt a completely new architecture. Another relies on global metadata virtualization to span heterogeneous systems, but this adds abstraction that may complicate governance. The strategic tension for enterprise architects is whether to extend a stable, governed data foundation into cloud GPUs or to migrate toward a next-generation, AI-first data platform that may require significant reinvention.
The Vultr and NetApp announcement strengthens the case for extension by offering a practical implementation that moves compute to the existing data estate with minimal disruption, preserving the maturity, lineage, and control enterprises already rely on. For organizations that prioritize continuity and compliance, this is a highly differentiated and pragmatic model.
There is also a competitive perspective here. The Vultr/NetApp architecture highlights a critical bifurcation in the enterprise AI market: the choice of GPU ecosystem. NetApp and Vultr maintain a parallel partnership with NVIDIA, which likewise leverages ONTAP and the SnapMirror mechanism to power AI workloads. NVIDIA’s competing solution, anchored by the NVIDIA AI Data Platform and the proprietary AI Enterprise (AIE) software suite, is deeply embedded in the enterprise space and offers its own validated workflow patterns (NVIDIA Blueprints).
This creates a direct competitive dynamic for enterprise architects. While the NetApp/Vultr architectural model provides a consistent data foundation, the decision becomes a trade-off between:
- NVIDIA, which could be seen as the software-priority path, leverages the mature and dominant CUDA software stack, extensive ecosystem support, and enterprise-grade AIE software. This path yields high performance and development speed, but potentially at higher cost and with vendor lock-in risk.
- AMD, the more hardware-focused path, embraces cost-competitive hardware with substantial future headroom (MI355X/MI450) and the platform freedom of open-source ROCm, reducing long-term TCO and avoiding proprietary constraints, but with less developer adoption and platform maturity.
Of note, Vultr and AMD’s commitment to scaling GPU infrastructure continues in parallel: the two recently announced a 50-megawatt AI supercluster being built at a new data center in Springfield, Ohio, with ~24,000 AMD Instinct MI355X GPUs scheduled to come online in early 2026. In addition to expanding raw GPU capacity, Vultr has rolled out its MI355X-based GPU cloud offerings more broadly, and AMD has reaffirmed its long-term backing of Vultr through prior financing rounds and infrastructure investments. This growing hardware and financial commitment strengthens the value proposition of the architecture described above: customers will not only be able to access their existing data via ONTAP + SnapMirror, but also tap into a rapidly expanding, globally distributed pool of high-performance GPU compute that supports both training and inference at scale.
What Was Announced
Vultr, NetApp, and AMD introduced a validated architecture that extends the customer’s ONTAP data estate directly into Vultr Cloud for use by GPU-accelerated AI workloads. The design uses SnapMirror to replicate datasets from multiple on-prem locations into ONTAP Select running on VMware ESXi inside Vultr. Once replication is active, GPU compute clusters mount the datasets as NFS or SMB shares, working with clean, point-in-time, read-ready copies of the data in its existing structure.
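Replication status can also be confirmed programmatically before workloads are pointed at the replica. The sketch below queries SnapMirror relationship state through the ONTAP REST API (available since ONTAP 9.6); the management address, credentials, and volume paths shown are hypothetical placeholders, not values from the validated design.

```python
# Minimal sketch: verify SnapMirror relationships are healthy and in the
# "snapmirrored" state before GPU jobs consume the destination volume.
# Host and credentials are hypothetical; vault credentials in practice.
import requests

ONTAP_SELECT_HOST = "https://ontap-select.example.internal"  # assumed mgmt LIF
AUTH = ("admin", "password")

resp = requests.get(
    f"{ONTAP_SELECT_HOST}/api/snapmirror/relationships",
    params={"fields": "state,healthy,lag_time,source.path,destination.path"},
    auth=AUTH,
    verify=False,  # self-signed lab certs; pin or verify certificates in production
)
resp.raise_for_status()

for rel in resp.json().get("records", []):
    ready = rel.get("state") == "snapmirrored" and rel.get("healthy", False)
    print(
        f"{rel['source']['path']} -> {rel['destination']['path']}: "
        f"state={rel.get('state')} healthy={rel.get('healthy')} "
        f"lag={rel.get('lag_time')} ready={ready}"
    )
```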
The architecture is secured by an IPsec VPN and uses dedicated intercluster interfaces to ensure encrypted, predictable transport. ONTAP Select preserves governance, snapshots, access controls, and lifecycle policies, so the cloud dataset behaves exactly like ONTAP on-prem. Teams can begin AI and analytics work immediately without reshaping files or rewriting applications. Optional SnapMirror break and resync steps allow controlled write testing without compromising the system of record. Validation procedures such as network path testing, replication health checks, and throughput sampling confirm readiness for downstream GPU pipelines.
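As one illustration of the throughput-sampling step, a short script such as the following can time sequential reads from the mounted share before a GPU pipeline is committed to it. The mount point and sample size are illustrative assumptions, not prescribed values from the validated architecture.

```python
# Minimal sketch: sample sequential read throughput from the NFS mount
# to confirm the path can feed downstream GPU pipelines.
import time
from pathlib import Path

MOUNT = Path("/mnt/ai_dataset")     # hypothetical mount point
SAMPLE_BYTES = 2 * 1024**3          # read ~2 GiB for a rough figure
CHUNK = 8 * 1024**2                 # 8 MiB per read call

read = 0
start = time.monotonic()
for f in MOUNT.glob("**/*"):
    if not f.is_file():
        continue
    with f.open("rb") as fh:
        while chunk := fh.read(CHUNK):
            read += len(chunk)
            if read >= SAMPLE_BYTES:
                break
    if read >= SAMPLE_BYTES:
        break

elapsed = time.monotonic() - start
mib = read / 1024**2
print(f"Sampled {mib:.0f} MiB in {elapsed:.1f}s ({mib / max(elapsed, 1e-9):.0f} MiB/s)")
```

A single sequential sample like this is only a floor check; representative validation would also exercise the concurrent, smaller-block access patterns typical of training dataloaders.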
In practical terms, this model gives enterprises a way to run cloud AI using the data foundation they already trust. It removes migration friction, simplifies operations, and provides a governed and operationally familiar data layer for modern workloads.
Looking Ahead
At HyperFRAME Research, we are watching the AI data layer closely, and we see that organizations making progress with AI have stabilized their data architecture early on. As we move into 2026, decisions about data consistency, semantics, governance, and recovery will matter more than choices about specific GPU models or cloud regions. The ability to deliver trustworthy, policy-aligned datasets to cloud compute environments will increasingly determine AI success.
The approach introduced here suggests that many enterprises will extend rather than replace their existing data foundations. Compute options will continue to proliferate, making the data layer the central architectural anchor. Hybrid approaches that preserve lineage and governance while enabling flexible access to cloud GPUs will appeal to organizations seeking impact without disruption.
As this Vultr + NetApp architecture rolls out, we will be watching several indicators to gauge its success. First, the ease with which customers can operationalize SnapMirror into ONTAP Select will determine time-to-value. If organizations can deploy this pattern with minimal assistance and begin running real workloads quickly, it will validate the low-disruption promise. Second, the performance characteristics of cloud-mounted datasets under AI load will matter; predictable throughput and low-latency access will be necessary for both training and inference pipelines. Third, we will watch how governance and compliance teams respond to ONTAP semantics extending cleanly into the cloud. Lineage and policy continuity could make this model especially attractive for regulated industries.
We will also monitor the expansion of AMD GPU availability within Vultr’s regions. The long-term viability of this architecture depends on customers having reliable access to competitively priced GPU capacity, including the MI355X expansion announced for 2026. Finally, we will observe whether this approach influences broader market behavior: if enterprises begin favoring data-layer stability over wholesale replatforming, this would signal that hybrid extension models are becoming a preferred path to AI adoption.
We conclude that companies that resolve their data layer architecture early can scale AI with more confidence and predictability. The evolution of this Vultr + NetApp model will offer an early look at how hybrid data foundations and cloud compute may converge into standard practice for enterprise AI.
Don Gentile | Analyst-in-Residence – Storage & Data Resiliency
Don Gentile brings three decades of experience turning complex enterprise technologies into clear, differentiated narratives that drive competitive relevance and market leadership. He has helped shape iconic infrastructure platforms including IBM z16 and z17 mainframes, HPE ProLiant servers, and HPE GreenLake — guiding strategies that connect technology innovation with customer needs and fast-moving market dynamics.
His current focus spans flash storage, storage area networking, hyperconverged infrastructure (HCI), software-defined storage (SDS), hybrid cloud storage, Ceph/open source, cyber resiliency, and emerging models for integrating AI workloads across storage and compute. By applying deep knowledge of infrastructure technologies with proven skills in positioning, content strategy, and thought leadership, Don helps vendors sharpen their story, differentiate their offerings, and achieve stronger competitive standing across business, media, and technical audiences.
Stephen Sopko | Analyst-in-Residence – Semiconductors & Deep Tech
Stephen Sopko is an Analyst-in-Residence specializing in semiconductors and the deep technologies powering today’s innovation ecosystem. With decades of executive experience spanning Fortune 100, government, and startups, he provides actionable insights by connecting market trends and cutting-edge technologies to business outcomes.
Stephen’s expertise in analyzing the entire buyer’s journey, from technology acquisition to implementation, was refined during his tenure as co-founder and COO of Palisade Compliance, where he helped Fortune 500 clients optimize technology investments. His ability to identify opportunities at the intersection of semiconductors, emerging technologies, and enterprise needs makes him a sought-after advisor to stakeholders navigating complex decisions.