Research Notes

From Storage to Intelligence: How Does Dell’s AI Data Platform Bridge the Gap to the Innovation Layer?


Dell’s AI Data Platform delivers an open, modular, and enterprise-governed data foundation for AI initiatives, integrating disaggregated architecture, advanced search, GPU acceleration, and agentic analytics

Key Highlights:

  • Dell is expanding its AI Data Platform to unify distributed enterprise data and accelerate AI workloads.
  • The platform’s disaggregated architecture separates storage, data services, and compute in a scalable, open design.
  • PowerScale F710 and ObjectScale deliver AI-optimized performance validated for NVIDIA GB200/GB300 systems.
  • New Elastic and NVIDIA cuVS integrations enable GPU-accelerated vector search and real-time data retrieval.
  • A Data Analytics Engine with Starburst adds AI-assisted querying and model-agnostic orchestration across structured data.
  • MetadataIQ enhances visibility across PowerScale and ObjectScale, improving governance and search.

The News

Dell Technologies introduced major advancements to its AI Data Platform, reinforcing its position as the data foundation of the Dell AI Factory. The company expanded its open architecture through deeper collaborations with NVIDIA, Elastic, and Starburst, while optimizing PowerScale and ObjectScale for next-generation GPU workloads. Its goals are to eliminate data bottlenecks, unify access across distributed environments, and help enterprises tap data value at scale for AI training, inference, and retrieval-augmented generation (RAG) pipelines. For more information, read the company’s press release.

Analyst Take

Dell’s AI Data Platform represents a meaningful evolution of the company’s role in the enterprise data stack. The company’s analyst and media pre-briefing underscored a fundamental shift occurring across the market: As enterprises scale AI from pilot to production, the bottleneck is no longer compute. It’s data mobility, governance, and interoperability. Customers are discovering that AI success depends less on GPU density and more on the ability to orchestrate clean, current, and compliant data across fragmented storage systems. Dell’s narrative and technology roadmap reflect this emerging truth.

Technically, the company is addressing these pain points through a disaggregated design that separates storage, data “engines,” and compute layers. This decoupling lets customers scale infrastructure incrementally, deploy new data services without forklift upgrades, and use open table formats such as Apache Iceberg and Delta Lake to help avoid lock-in.

In effect, Dell is assembling an open data layer that spans storage, discovery, and analytics. This is anchored by PowerScale and ObjectScale, and extended through Elastic, NVIDIA cuVS, and Starburst for active data use. The integration of GPU-accelerated vector search (via cuVS) and metadata-driven discovery (via Elastic + MetadataIQ) gives Dell’s architecture a practical edge in RAG and inference workflows, two of the fastest-growing enterprise AI patterns.
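
As a rough illustration of the retrieval step these RAG workflows depend on, the sketch below ranks documents by vector similarity to a query. It is a toy, self-contained model: the `embed()` function, corpus, and query are invented stand-ins, and a production system would use real embedding models with a GPU-accelerated index such as one built on NVIDIA cuVS.

```python
import zlib
from math import sqrt

def embed(text: str, dims: int = 16) -> list[float]:
    """Toy deterministic embedding: hash character trigrams into a fixed vector."""
    vec = [0.0] * dims
    for i in range(len(text) - 2):
        vec[zlib.crc32(text[i:i + 3].encode()) % dims] += 1.0
    norm = sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def top_k(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query; return the top k."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, embed(doc))), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]

# Hypothetical corpus; in a RAG pipeline the retrieved text would be
# injected into the LLM prompt as grounding context.
corpus = [
    "PowerScale is Dell's all-flash scale-out NAS platform.",
    "ObjectScale provides S3-compatible object storage.",
    "Quarterly sales figures for the EMEA region.",
]
context = top_k("Which product offers S3 object storage?", corpus, k=1)
```

In practice the similarity search runs against an index built ahead of time, and the retrieved `context` is prepended to the model prompt rather than returned to the caller.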

Market context matters here. Every major storage vendor is repositioning around “AI data platforms,” but we believe Dell’s execution is distinguished by its scope and realism. Competitors like Pure Storage and VAST Data focus on single-stack integration that’s highly tuned for GPU throughput and built on proprietary file systems. NetApp, conversely, leads with data management but depends heavily on hyperscaler alignment. Dell’s approach appears to sit between those extremes: open by design yet vertically optimized for enterprise control. By grounding its roadmap in measurable efficiency, for example, up to 72% lower power use and 5x less rack space, Dell is appealing to CIOs and infrastructure leaders who must balance AI demands with data center economics and sustainability targets.

Equally important is Dell’s recognition that AI pipelines must operate across heterogeneous environments. Enterprises aren’t moving data to a single “AI box.” They’re trying to unify data governance across on-premises, cloud, and edge systems. Dell’s integration with Starburst for SQL federation, MetadataIQ for discovery, and cuVS for GPU acceleration collectively help address that challenge. The result is a platform architecture that moves the center of gravity for AI from compute to data, acknowledging that success in this era depends not on owning models, but on activating data with trust, speed, and context.
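
Conceptually, SQL federation of the kind Starburst provides scans each source independently and joins the results centrally, so callers see one logical dataset across systems. The minimal sketch below imitates that flow with invented in-memory sources; it is not Starburst’s API, just an illustration of the join step a federation engine performs.

```python
# Source 1: rows as they might come from an on-prem relational database.
orders = [
    {"order_id": 1, "customer_id": 10, "amount": 250.0},
    {"order_id": 2, "customer_id": 11, "amount": 75.0},
]

# Source 2: rows as they might come from a cloud lakehouse table
# (e.g. Apache Iceberg or Delta Lake).
customers = [
    {"customer_id": 10, "region": "EMEA"},
    {"customer_id": 11, "region": "APAC"},
]

def federated_join(left, right, key):
    """Hash join across two already-scanned sources: the central step a
    federation engine performs after per-source scans (or pushdown)."""
    index = {row[key]: row for row in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

# One logical result set spanning both "systems".
result = federated_join(orders, customers, "customer_id")
```

A real engine would push filters and projections down to each source before joining, which is where most of the performance and governance value lies.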

If Dell can maintain performance parity with its storage peers while proving that this modular, multi-partner model simplifies enterprise operations, it will have staked out a differentiated position in the AI infrastructure market.

What Was Announced

Dell Technologies announced significant enhancements to its AI Data Platform, helping to advance the Dell AI Factory. The updates span both the storage and data layers and are designed to help customers securely transform distributed, siloed data into actionable AI insights.

At the infrastructure level, PowerScale (all-flash NAS) is now integrated with NVIDIA GB200 and GB300 NVL72 reference designs, promising reliable performance and simplified management. PowerScale F710 storage, which achieved NVIDIA Cloud Partner certification earlier this year, requires up to 5x less rack space, 88% fewer switches, and 72% less power than competing systems, according to company testing data. Software improvements enhance parallelism, backend messaging, and throughput for large-scale GPU workloads. Meanwhile, ObjectScale, available as both an appliance and a software-defined option on Dell PowerEdge, introduces S3 over RDMA (entering technical preview in December 2025). Dell reports this can achieve up to 230% higher throughput, 80% lower latency, and 98% lower CPU utilization than traditional S3. Small-object performance has also been improved, alongside new AWS S3 integration and compression for greater efficiency.

On the data side, Dell expanded its open-engine ecosystem through deeper collaborations with Elastic, NVIDIA, and Starburst. The Data Search Engine, powered by Elastic and integrated with MetadataIQ, enables metadata-driven search and discovery across billions of files on PowerScale and ObjectScale. It supports semantic and natural-language search, RAG, and LangChain-based AI pipelines, while NVIDIA cuVS provides GPU acceleration for hybrid keyword and vector search. The new Data Analytics Engine, co-developed with Starburst, enables unified querying across structured data sources such as databases, spreadsheets, and lakehouses, including the aforementioned Iceberg and Delta Lake. Its Agentic Layer uses LLMs to automate query creation and documentation, while the MCP Server provides a multi-agent orchestration framework for building AI applications.
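
To make the hybrid keyword-plus-vector idea concrete, the sketch below blends a lexical score and a similarity score with a tunable weight. Both scoring functions are toy stand-ins (a real system would use Elastic’s BM25 for the lexical signal and a cuVS-backed approximate-nearest-neighbor index for the vector signal), and the documents and weighting are invented for illustration.

```python
def keyword_score(query: str, doc: str) -> float:
    """Lexical relevance via token Jaccard overlap (stand-in for BM25)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def vector_score(query: str, doc: str) -> float:
    """Rough semantic relevance via character-bigram overlap (stand-in for
    embedding cosine similarity served by an ANN index)."""
    grams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
    q, d = grams(query.lower()), grams(doc.lower())
    return len(q & d) / len(q | d) if q | d else 0.0

def hybrid_rank(query: str, docs: list[str], alpha: float = 0.5) -> list[str]:
    """Blend both signals; alpha weights keyword vs. vector relevance."""
    scored = [(alpha * keyword_score(query, d)
               + (1 - alpha) * vector_score(query, d), d) for d in docs]
    return [d for _, d in sorted(scored, key=lambda p: p[0], reverse=True)]

docs = ["gpu accelerated vector search", "quarterly travel expense report"]
ranked = hybrid_rank("gpu accelerated vector search", docs)
```

Tuning `alpha` is the practical knob: keyword-heavy weighting favors exact terminology (part numbers, error codes), while vector-heavy weighting favors paraphrased or conversational queries.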

Underpinning all of this is Dell’s Professional Services portfolio to help enterprises plan, align, and operationalize data strategies across AI workloads. These services cover the full data lifecycle, from assessment and design to deployment and governance, helping customers move AI projects from proof of concept to production.

Dell’s updates reflect a clear strategy to move beyond static storage and into data activation, coupling open data standards with ecosystem depth. The company confirmed a multi-stage rollout schedule: PowerScale F710 is available now; ObjectScale S3 over RDMA and supporting updates arrive in December 2025; the Data Analytics Engine Agentic Layer and MCP Server are expected to launch in February 2026; and the Data Search Engine with NVIDIA cuVS integration follows in the first half of 2026.

Looking Ahead

Dell’s next phase will test how well it delivers on its promise of open integration at scale. The partnerships with Elastic, NVIDIA, and Starburst create a differentiated architecture but can also introduce operational complexity. At HyperFRAME Research, we will be looking for evidence of Dell’s ability to make these integrations seamless for enterprise IT teams deploying AI pipelines across mixed environments.

We believe that independent validation of the company’s power, rack, and throughput claims will further reinforce the credibility of its efficiency leadership. Meanwhile, Dell’s support for open data formats positions it well to serve enterprises that need interoperability across cloud, edge and on-prem systems.

The next milestones to watch are the December 2025 and February 2026 releases. Evidence of smooth deployment, strong governance, and cross-domain data correlation will indicate that Dell is moving from roadmap ambition to operational maturity.

If delivered as described, the AI Data Platform could become Dell’s defining bridge to the innovation layer: a disaggregated, AI-optimized data fabric built to activate, not just store, enterprise data.

To improve the competitiveness of its AI Data Platform over the next 12 months, Dell should focus on accelerating ecosystem integration and delivering measurable, practical AI outcomes for customers. That means deepening its collaborations with NVIDIA, Elastic, and Starburst to develop and promote fully integrated, validated “AI Factory” solutions that simplify deployment and management of the entire AI lifecycle, from data ingestion and preparation to inferencing.

Key actions include quantifiably boosting the performance and cost-efficiency of PowerScale and ObjectScale for next-generation, GPU-intensive workloads, with emphasis on features such as S3 over RDMA for high-throughput object storage and vector search integration with partners like Elastic to unlock real-time, high-quality data retrieval for generative AI applications. By clearly demonstrating superior total cost of ownership (TCO), providing comprehensive AI-specific managed services, and actively supporting an open, multi-cloud architecture, Dell can differentiate itself from competitors and move customers from initial AI experimentation to full-scale, ROI-driving enterprise deployments.

Author Information

Don Gentile | Analyst-in-Residence, Storage & Data Resiliency

Don Gentile brings three decades of experience turning complex enterprise technologies into clear, differentiated narratives that drive competitive relevance and market leadership. He has helped shape iconic infrastructure platforms including IBM z16 and z17 mainframes, HPE ProLiant servers, and HPE GreenLake — guiding strategies that connect technology innovation with customer needs and fast-moving market dynamics. 

His current focus spans flash storage, storage area networking, hyperconverged infrastructure (HCI), software-defined storage (SDS), hybrid cloud storage, Ceph/open source, cyber resiliency, and emerging models for integrating AI workloads across storage and compute. By applying deep knowledge of infrastructure technologies with proven skills in positioning, content strategy, and thought leadership, Don helps vendors sharpen their story, differentiate their offerings, and achieve stronger competitive standing across business, media, and technical audiences.