Can AI agents finally tidy up the mess of modern enterprise infrastructure?
SUSE doubles down on agentic AI and unified virtualization to address the growing complexity of hybrid cloud management and the high cost of GPU resources.
3/25/2026
Key Highlights
The expansion of Liz introduces a context-aware AI agent ecosystem architected to coordinate specialized tasks across diverse environments.
Support for NVIDIA Multi-Instance GPU brings hardware partitioning to virtualized environments to improve efficiency for intensive AI workloads.
The integration of the Model Context Protocol aims to deliver standardized connectivity between AI agents and third-party enterprise tools.
New virtual cluster multi-tenancy provides isolated sandboxes designed to let developers experiment with AI models without impacting production systems.
Enhanced virtualization tools, including live storage migration and auto-balancing, offer a viable alternative to increasingly expensive proprietary hypervisors.
The News
SUSE announced a series of updates at KubeCon Europe designed to transform container management into automated infrastructure operations. The release centers on an open ecosystem for AI agents within Rancher Prime and the deeper unification of virtual machines and containers. These updates arrive as organizations struggle to move AI from experimental pilots into production-ready environments. Further details are available in the SUSE Newsroom.
Analyst Take
We see a clear shift in how infrastructure is being managed as the industry moves away from manual intervention toward autonomous oversight. The complexity of running modern stacks has reached a stage where human operators simply cannot keep pace with the volume of telemetry being generated. SUSE is positioning itself as the intermediary that makes this mess manageable. By leaning into the concept of agentic AI, it is essentially betting that the future of the system administrator is not a person with a keyboard but a person managing a fleet of digital agents. This is a gamble, but given the current shortage of specialized talent, it is one that makes a great deal of sense.
This pivot is supported by a harsh market reality: according to the HyperFRAME Research Lens (1H 2026), only 23% of AI/ML projects launched in the last year successfully reached production and met their original ROI objectives. This "Execution Gap" suggests that the "plumbing" of the operation—the underlying infrastructure—is the primary failure point. Furthermore, with 78% of organizations agreeing that AI is strategically important but only 37% utilizing a structured process for deployment, SUSE’s move to automate the operational "mess" addresses a critical lack of internal framework within most enterprises.
What Was Announced
The updates to Rancher Prime and SUSE Virtualization focus on three primary pillars of infrastructure modernization. First, the expansion of Liz, an AI agent, is architected to serve as a central coordinator for a wider ecosystem of specialized agents. The system is designed to provide site reliability teams with automated insights via the Model Context Protocol, which allows Liz to connect to third-party software and data sources without requiring teams to write bespoke integration code. The agentic ecosystem aims to let organizations retrieve and process data directly from external tools, essentially turning the AI agent into a functional member of the operations team.
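The plug-and-play quality of the Model Context Protocol comes from the fact that every message on the wire is plain JSON-RPC 2.0, so any compliant agent can call any compliant tool without a custom connector. As a rough illustration only (the tool name and arguments below are hypothetical, not taken from SUSE's announcement), a client-side tool invocation looks like this:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP `tools/call` request. MCP messages are JSON-RPC 2.0,
    so any agent or server that speaks the protocol can parse this."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool name and arguments -- the real tools an agent like
# Liz can reach would be discovered at runtime via a `tools/list` request.
request = make_tool_call(1, "get_node_metrics", {"node": "worker-03"})
print(request)
```

Because discovery (`tools/list`) and invocation (`tools/call`) are standardized, the integration cost of adding a new tool shifts from the agent developer to the tool vendor, which is precisely the interoperability bet discussed below.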
Second, SUSE has introduced significant updates to its virtualization stack to better handle high-performance computing. This includes support for NVIDIA Multi-Instance GPU (MIG), which is designed to allow enterprise-grade GPU partitioning. This feature aims to deliver higher hardware efficiency by letting multiple virtual machines share a single physical GPU while maintaining isolation. Additionally, the release introduces advanced operational tools such as VM Auto Balance for distributed workloads, Live Storage Migration for moving data without downtime, and more granular controls over system upgrades.
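The efficiency argument behind MIG is ultimately capacity arithmetic: a partitioned GPU exposes a fixed budget of compute slices, and each instance profile consumes a known share. A minimal sketch of that bookkeeping (the profile names and slice counts follow NVIDIA's published A100 40GB scheme; the helper itself is illustrative and is not part of SUSE's stack):

```python
# Illustrative MIG capacity check, not SUSE's implementation. On an
# A100 40GB, a MIG profile such as "3g.20gb" consumes 3 of the GPU's
# 7 available compute slices.
PROFILE_SLICES = {"1g.5gb": 1, "2g.10gb": 2, "3g.20gb": 3,
                  "4g.20gb": 4, "7g.40gb": 7}
TOTAL_SLICES = 7

def fits(requested: list[str]) -> bool:
    """Return True if the requested mix of MIG instances fits on one GPU."""
    return sum(PROFILE_SLICES[p] for p in requested) <= TOTAL_SLICES

# Three isolated VMs sharing one physical GPU:
print(fits(["3g.20gb", "2g.10gb", "2g.10gb"]))  # True: 3+2+2 = 7 slices
print(fits(["4g.20gb", "4g.20gb"]))             # False: 8 slices > 7
```

The first mix packs the card fully, which is the hardware-efficiency win the announcement is aiming at: three isolated workloads on one GPU instead of three idle-most-of-the-time cards.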
Third, the developer experience has been bolstered through Rancher Developer Access. This includes a curated catalog of over 140 hardened applications, such as Redis and Postgres, designed to provide developers with secure base images from the start. We also see the introduction of Virtual Cluster Multi-Tenancy; this is architected to give developers isolated, self-service Kubernetes control planes. This sandbox environment is designed to let teams experiment with complex AI models and heavy workloads without competing for shared resources or risking the stability of the broader organizational infrastructure.
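SUSE has not published the internals of its virtual cluster feature, but the underlying sandbox idea can be illustrated with plain Kubernetes primitives: each tenant gets a dedicated namespace with a hard resource quota, so experimental workloads cannot starve production. A simplified sketch, with team names and limits invented for illustration:

```python
import json

def tenant_sandbox(team: str, cpu_limit: str, mem_limit: str) -> list[dict]:
    """Return Kubernetes manifests for an isolated tenant sandbox:
    a dedicated Namespace plus a hard ResourceQuota capping what the
    team's experimental workloads can consume."""
    ns = f"sandbox-{team}"
    return [
        {"apiVersion": "v1", "kind": "Namespace",
         "metadata": {"name": ns}},
        {"apiVersion": "v1", "kind": "ResourceQuota",
         "metadata": {"name": "caps", "namespace": ns},
         "spec": {"hard": {"limits.cpu": cpu_limit,
                           "limits.memory": mem_limit}}},
    ]

# A data-science team gets its own capped playground:
manifests = tenant_sandbox("ml-research", cpu_limit="16", mem_limit="64Gi")
print(json.dumps(manifests, indent=2))
```

A virtual cluster goes a step further than a namespace, giving each tenant its own API server and control plane, but the quota-per-tenant pattern that protects shared resources is the same.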
The decision to adopt the Model Context Protocol is perhaps the most astute move here. In the early days of the cloud, we saw a fragmented landscape of proprietary APIs that made it a nightmare to move data around. By adopting a standard like MCP, SUSE is trying to avoid that same trap in the AI era. It is essentially the "USB-C" for AI agents: it allows for a plug-and-play approach to intelligence, where an agent can understand the instructions for a new tool without a developer spending a fortnight building a custom connector. This focus on interoperability is a classic open-source move, and it keeps the ecosystem flexible.
We also have to consider the current state of the virtualization market. Since the Broadcom acquisition of VMware, we have heard a lot of noise about "sticker shock" and licensing frustrations. SUSE is clearly trying to catch the eye of those looking for an exit strategy. By unifying the management of VMs and containers under one roof and adding features like live migration and auto-balancing, they are making a strong case for their stack as a drop-in replacement. It is about making the transition from legacy VMs to modern containers feel like a natural progression rather than a forced march.
Looking Ahead
Based on what we are observing, the convergence of virtualization and containerization is no longer a luxury but a fundamental requirement for the "AI factory" model. The market is currently navigating a structural supply crisis where GPU scarcity has made resource efficiency the top priority for every CIO.
The urgency for these solutions is underscored by HyperFRAME data showing that only 14% of enterprises currently classify their core data architecture as "fully modernized" for AI workloads, while 23% remain tethered to legacy on-premises systems. This suggests a massive migration wave is imminent. Additionally, as enterprises move toward a multi-model standard, with 79% of organizations anticipating the concurrent deployment of multiple foundation models, the need for a substrate-agnostic "control plane" like Rancher Prime becomes paramount to prevent vendor lock-in and operational collapse.
The key trend we will be watching is the actual efficacy of these agentic ecosystems in high-stakes production environments. While the promise of "Liz" and the specialized agents is compelling, the industry still harbors a healthy skepticism regarding the autonomy of AI in infrastructure. On a side note, I would personally have preferred a non-gendered name; Gecko would have been much cooler and would have signalled a break from the trend of giving AI assistants gender-specific names, but hey ho.
HyperFRAME will be tracking how the company delivers on its early access features, particularly VM Auto Balance and Live Storage Migration, which are critical for parity with established players like Red Hat and Broadcom.
Our perspective is that the success of this strategy hinges on the maturity of the Model Context Protocol and its adoption across the wider tech stack. If SUSE can successfully position Rancher Prime as the definitive "control plane" for both VMs and AI agents, it will have a significant advantage in the race for technological sovereignty. Going forward, we will be closely monitoring how they navigate the competitive pressure from hyperscalers who are also moving aggressively into the agentic operations space. Open standards usually win, but only if they are easier to use than the proprietary alternatives.
Steven Dickens | CEO HyperFRAME Research
Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the CEO and Principal Analyst at HyperFRAME Research.
Ranked consistently among the Top 10 Analysts by AR Insights and a contributor to Forbes, Steven's expert perspectives are sought after by tier one media outlets such as The Wall Street Journal and CNBC, and he is a regular on TV networks including the Schwab Network and Bloomberg.