Research Notes

Is Wasabi Fire Key to Enabling the Next Major AI-Era Storage Platform?

New high-performance storage class and colocation in a Silicon Valley region optimized for GPU-adjacent workloads may catalyze Wasabi’s expansion from backup and archive to supporting active AI data pipelines

November 22, 2025

Key Highlights:

  • Wasabi has introduced Wasabi Fire, an SSD-based object storage class designed for performance-sensitive AI data pipelines

  • Fire launches first in a San Jose, California region located in an IBM Cloud data center, giving customers physical adjacency to GPU compute and eliminating egress charges

  • During the controlled introduction period, customers should expect improved read/write performance compared with the company’s HDD-based Hot Cloud Storage tier

  • Wasabi now operates sixteen storage regions worldwide and manages more than three exabytes of data, serving a global partner ecosystem of nearly eighteen thousand MSPs and channel resellers

The News

Wasabi Technologies has announced Wasabi Fire, its first SSD-based, high-performance object storage tier, alongside a new Silicon Valley storage region inside an IBM Cloud facility. Fire is priced at US$19.99 per terabyte per month, with no egress or API fees, and is entering a controlled introduction phase in late 2025, with general availability expected in mid-2026. Full details are available in the company’s press release.

Analyst Take

Wasabi says the mix across its more than 100,000 customers has shifted and now includes large-scale AI data lakes. In our recent conversations with company leadership, they shared examples including a computer vision provider storing 30 billion images and petabytes of training data, and a multimedia AI platform storing raw video footage and derived artifacts. In these cases, Wasabi serves as the persistent storage layer, allowing organizations to scale their data without being locked into a single hyperscaler's billing model. That scale bears underscoring: Wasabi now manages multiple exabytes of data across a global footprint spanning sixteen regions, placing it in the upper tier of cloud object storage platforms outside the major hyperscalers.

The timing of Fire’s arrival also lines up with a structural shift in AI infrastructure buying behavior. Enterprise teams are beginning to separate storage decisions from compute decisions as GPU scarcity forces them into multi-cloud and hybrid deployments. In this environment, object storage has to act as the durable backbone across increasingly distributed pipelines. Fire positions Wasabi to benefit directly from this decoupling trend, serving as a vendor-neutral performance tier that can follow the data wherever compute must land.

Seen through that lens, the Wasabi Fire service is less of a departure and more of an extension. Until now, Wasabi’s storage platform has primarily relied on HDD-based capacity, which works well for long-term retention and many production workloads, but is not always ideal for the front end of AI pipelines. The early stages of model development, especially for vision and multimodal systems, can be dominated by small-object reads, random access to frames inside video files, and repeated passes through raw and lightly processed datasets. Those patterns benefit from lower latency and higher throughput. Fire is designed to move Wasabi into those stages of the pipeline by placing data on SSD/NVMe, while preserving the S3 compatibility and flat-rate billing that customers already understand.
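
To make those access patterns concrete, here is a minimal sketch of the epoch-style read loop over many small objects, using boto3 against an S3-compatible endpoint. This is an illustration under stated assumptions, not Wasabi's documented configuration: the endpoint URL, bucket name and prefix are hypothetical placeholders.

    import boto3

    # Sketch only: Wasabi is S3-compatible, so the stock boto3 client works
    # once pointed at a region endpoint. The endpoint URL, bucket and prefix
    # below are hypothetical placeholders, not published Fire values.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.fire-example.wasabisys.com",  # placeholder
    )

    BUCKET = "cv-training-data"  # hypothetical bucket of small image objects

    # Epoch-style loop: repeated full passes over many small objects, the
    # access pattern described above for early-stage model development.
    for epoch in range(3):
        pages = s3.get_paginator("list_objects_v2").paginate(
            Bucket=BUCKET, Prefix="images/"
        )
        for page in pages:
            for obj in page.get("Contents", []):
                blob = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
                # ...decode, augment and hand off to the data loader here...

Each pass issues one GET per object, which is why per-request billing and per-request latency both dominate on HDD-backed tiers when the objects are small and numerous.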

In our view, the decision to launch Fire first in a San Jose region inside an IBM Cloud data center is strategically consistent with how Wasabi has approached the market. The company has never tried to build its own compute service. Instead, it colocates storage with partners that already run compute and GPU infrastructure and negotiates economics that remove friction. In San Jose, customers can place Wasabi Fire storage and IBM Cloud compute in the same facility and move data between them without egress fees, connected through cross-connects in the meet-me room. For AI teams that have grown weary of paying to move their own data into and out of hyperscale platforms, this is significant.

The company clearly expects a step-function performance improvement over its HDD-backed tier, but during the controlled introduction it is holding back from publishing throughput, IOPS or latency guarantees. High-performance object storage is a category where expectations are high and the workloads are varied. Taking time to tune behavior with a small set of early customers before translating those lessons into public commitments is a disciplined approach.

From our perspective, the Fire launch and the San Jose region indicate that Wasabi is stepping more directly into the AI infrastructure conversation in a way that fits its identity. The company is not trying to match the breadth of AWS, Azure or Google Cloud. Instead, it is sharpening its role as a storage foundation that can sit under AI data lakes, lakehouse pipelines and vector databases while keeping cost and complexity in check. Fire gives Wasabi a way to support more of the AI lifecycle, from long-term retention of raw datasets to performance-sensitive preparation and embedding generation.

What Was Announced

Wasabi has launched the Wasabi Fire storage class, built on SSD/NVMe infrastructure and aimed squarely at the unstructured data workloads that power AI and data-intensive applications. Fire is intended for use cases where HDD-based object storage can become a bottleneck: computer vision datasets made up of small images, video pipelines that require frequent frame access, log and telemetry streams, and embedding or feature stores that see high read and write activity.
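
Frame-level video access is a useful illustration of why an SSD-backed tier matters for these use cases. The sketch below uses the standard S3 Range request, which any S3-compatible store supports, to pull a single byte range out of a large video object rather than downloading the whole file; the endpoint, bucket, key and byte offsets are illustrative assumptions only.

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.fire-example.wasabisys.com",  # placeholder
    )

    def read_range(bucket: str, key: str, offset: int, length: int) -> bytes:
        """Fetch one byte range from a large object, e.g. a single frame or
        group-of-pictures inside a video file, without downloading it all."""
        resp = s3.get_object(
            Bucket=bucket,
            Key=key,
            Range=f"bytes={offset}-{offset + length - 1}",  # standard S3 range
        )
        return resp["Body"].read()

    # Hypothetical offsets; in practice a frame index comes from the container
    # metadata (e.g. MP4 sample tables) parsed once ahead of time.
    chunk = read_range("video-pipeline", "raw/cam01/shoot.mp4",
                       offset=10_485_760, length=1_048_576)

On HDD-backed object storage, each such ranged read pays a seek penalty; on an SSD/NVMe tier the random-access pattern is far closer to sequential-read performance, which is the gap Fire is designed to close.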

Pricing for Fire is set at US$19.99 per terabyte per month, in keeping with Wasabi’s flat-rate philosophy. As with its Hot Cloud Storage tier, there are no egress fees and no API request charges, which means customers pay only for capacity, not for API requests or for moving data out.
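
The arithmetic behind that model is deliberately simple. The sketch below works through a hypothetical monthly bill using only the announced US$19.99 per-terabyte price; the capacity, egress and request figures are invented for illustration, and the zero-fee terms are written out only to make explicit what the single line item removes.

    # Back-of-the-envelope monthly bill under the flat-rate model, using the
    # announced Fire price. The zero egress and request terms are written out
    # only to make explicit what the single line item removes.
    FIRE_USD_PER_TB_MONTH = 19.99  # from the announcement

    def monthly_cost(capacity_tb: float, egress_tb: float = 0.0,
                     api_requests: int = 0) -> float:
        egress_fee = 0.0 * egress_tb       # no egress charges
        request_fee = 0.0 * api_requests   # no per-request charges
        return capacity_tb * FIRE_USD_PER_TB_MONTH + egress_fee + request_fee

    # A hypothetical 250 TB dataset read heavily all month still bills as
    # 250 * $19.99 = $4,997.50, regardless of reads or data moved out.
    print(f"${monthly_cost(250, egress_tb=500, api_requests=2_000_000_000):,.2f}")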

The first Fire deployment is in a new San Jose region located in an IBM Cloud data center. In this configuration, Wasabi has arranged for customers to move data between Fire and IBM Cloud compute within the same facility without paying egress charges, and has provisioned high-speed cross-connects so GPU and CPU clusters can access Fire as a low-latency storage tier. The company has also highlighted partnerships with other compute and edge providers that can be colocated or interconnected in similar ways, with the goal of giving customers an alternative to storing performance-sensitive data inside a single hyperscaler domain.

Strategically, Fire gives Wasabi a competitive narrative that hyperscalers cannot easily mimic. AWS and Azure remain incentivized to meter API calls and data movement because those charges underpin their broader cloud revenue models. Wasabi’s single-line-item approach allows it to position Fire not only as a lower-cost performance tier, but as a structurally simpler one. That simplicity is increasingly attractive to AI teams that want to avoid the complexity of tier juggling and cost modeling across multiple storage classes.

Fire enters a controlled introduction phase in late 2025, during which Wasabi will work with a defined set of early customers to tune performance and operational behavior. General availability is expected in mid-2026, with a public waitlist available via the Wasabi website.

Looking Ahead

Having marked ten years in business this September, Wasabi enters the new year at a pivotal point in its development. The Fire storage class does not replace the company’s existing HDD-based Hot Cloud Storage tier, but it does broaden the set of problems Wasabi is aiming to solve. The company has already proven that predictable, S3-compatible object storage can win in backup, archive, video surveillance and public sector environments. The question for 2026 is how fully Wasabi can translate that success into the AI domain, where data volumes are larger, access patterns are more demanding, and customers are more sensitive to both performance and total cost of ownership.

A central theme to watch is how Fire changes the economics and behavior of AI data pipelines. Many AI projects are constrained less by the cost of storing bytes and more by the cost and friction of moving and accessing them. Egress fees, request-based billing, and the operational overhead of managing hot and cold tiers inside a single hyperscaler account have pushed some customers to separate their long-term storage layer from their compute provider. Fire allows Wasabi to participate more directly in earlier pipeline stages while keeping the commercial model grounded in flat-rate capacity pricing. Whether customers treat Fire as a primary performance tier, a staging ground between raw storage and GPU clusters, or simply a cost-stable alternative to hyperscaler premium tiers will become clearer with real-world deployments.

The ecosystem around Fire will also be an important measure of success. Wasabi already has meaningful integrations with most major providers in backup, archive and surveillance. AI-centric workloads introduce new software relationships and dependencies. Vector databases that sit on object storage, AI metadata and catalog services, lakehouse platforms like Snowflake that ingest from external stages, and data management tools that can tier or classify content for AI use will all influence how valuable Fire becomes in practice. We expect Wasabi to deepen these alliances during 2026, particularly as it extends Fire to additional regions and as customers demand more turnkey patterns for building AI data lakes on top of object storage.

Geography will matter as well. Launching Fire in San Jose makes sense as a test case, given the region’s position at the heart of Silicon Valley’s AI ecosystem and IBM Cloud’s presence there. Over time, however, global AI activity will require similar capabilities in Europe and Asia Pacific. Research institutions, media hubs and service providers in those regions are building region-specific AI data lakes with their own performance and data-sovereignty requirements.

Finally, the market will judge Fire not just on price but on how it compares in practice with services like AWS S3 Express One Zone and Azure Blob Premium. Hyperscalers still offer the deepest integration with their own compute platforms and higher-level AI services. If Wasabi can show that Fire delivers competitive performance for the most common AI data patterns, while keeping the billing model understandable and eliminating surprise line items, it will have a differentiated story in a space that has been dominated by incumbents.

It remains to be seen whether Wasabi Fire becomes a specialized tier for a subset of AI workloads or matures into a foundational capability in customers’ infrastructure. The building blocks are in place: a proprietary storage engine, a clear economic model, a global footprint and an installed base that already includes some of the most demanding AI data lake operators.

Author Information

Don Gentile | Analyst-in-Residence - Storage & Data Resiliency

Don Gentile brings three decades of experience turning complex enterprise technologies into clear, differentiated narratives that drive competitive relevance and market leadership. He has helped shape iconic infrastructure platforms including IBM z16 and z17 mainframes, HPE ProLiant servers, and HPE GreenLake — guiding strategies that connect technology innovation with customer needs and fast-moving market dynamics. 

His current focus spans flash storage, storage area networking, hyperconverged infrastructure (HCI), software-defined storage (SDS), hybrid cloud storage, Ceph/open source, cyber resiliency, and emerging models for integrating AI workloads across storage and compute. By applying deep knowledge of infrastructure technologies with proven skills in positioning, content strategy, and thought leadership, Don helps vendors sharpen their story, differentiate their offerings, and achieve stronger competitive standing across business, media, and technical audiences.

Author Information

Stephanie Walter | Practice Leader - AI Stack

Stephanie Walter is a results-driven technology executive and analyst in residence with over 20 years leading innovation in Cloud, SaaS, Middleware, Data, and AI. She has guided product life cycles from concept to go-to-market in both senior roles at IBM and fractional executive capacities, blending engineering expertise with business strategy and market insights. From software engineering and architecture to executive product management, Stephanie has driven large-scale transformations, developed technical talent, and solved complex challenges across startup, growth-stage, and enterprise environments.