AI Won’t Kill the Mainframe
As comments from Anthropic wreak havoc with IBM’s stock price, the focus turns to how AI can modernize the mainframe platform
02/24/2026
Key Highlights
- COBOL is only one part of an entire system architected for performance, availability, and scalability.
- Mainframe environments are now being architected to integrate generative AI directly into core systems through solutions like Spyre.
- Automated code transformation aims to deliver a low-risk path for updating decades of stable business logic.
- Data gravity remains a primary driver for keeping high-volume transactions on existing specialized hardware.
- New toolsets are designed to translate complex COBOL into modern languages without requiring platform migration.
- Availability is key for transactional systems and is often ignored in discussions of modernization.
Analyst Take
IBM has become a stock market darling again of late, with its share price recently hitting all-time highs, largely powered by the mainframe, which had a blockbuster quarter, delivering 67% growth in the first full quarter of shipments for the latest z17 model. Against this backdrop of resurgence, it is big news that a single, highly reductive Anthropic blog post, focused solely on refactoring COBOL, caused IBM's stock price to drop by 13% in one day, its worst performance in 25 years. However, when COBOL makes the headlines, the nuance is thrown out of the window, and the fundamental issues that have helped the mainframe endure for more than six decades, and counting, are ignored.
The conversation around the mainframe has been stuck in the same loop for years. People talk about it as if it were a relic. They assume every large bank or insurer is just one bad day away from moving everything to the public cloud. My analysis, and my conversations with clients, suggest that this view is fundamentally flawed. For some context, I have spent half my career working for mainframe vendors such as IBM and Broadcom, and nearly the last five years as a technology industry analyst tracking this space. I regularly get briefed by both sides of the modernization divide, with briefings from the heavily funded Mechanical Orchard, Astadia (part of Amdocs), Heirloom Computing, and, of course, AWS all being on my calendar in the last few months.
The simplistic and reductive nature of the Anthropic blog ignores the sheer physical and logical center of gravity of these systems. Moving them is not just hard; it is often illogical. What I am seeing now is a shift where the platform is not just surviving but is being architected to lead in the era of AI. I see AI as a tailwind for the mainframe, not the opposite.
My perspective is that we are moving past the era of the "rip and replace" strategy. For a long time, the only way to modernize was to leave the mainframe behind. That proved to be a recipe for massive budgets and even bigger failures. The annals of CIO tenure are littered with leaders who were booted out after failed mainframe migrations, only for their replacements to ditch the whole idea and double down on on-platform mainframe modernization. I recall chatting with the CIO of a UK bank that had spent £900m and five years trying to move off the mainframe, only for the incoming CIO, with whom I met, to cancel the project and buy two new mainframes from IBM.
Based on my analysis of the market, and I talk to vendors on both sides, the introduction of generative AI has changed the math entirely. We are seeing a new model where AI is brought to the data, rather than the other way around. This is a subtle but massive change in how enterprise computing works. The IBM Spyre accelerator is a key component in this discussion.
Our in-depth perspective highlights a critical trend in AI for mainframe modernization. It is not about using AI to simply refactor COBOL and dump it on the cloud. It is about using generative tools to understand the business logic that runs the global economy and then modernize it, often in situ, to leverage the underlying unique hardware and security benefits of the mainframe.
Many of these systems have been running since the seventies. But again, this story is more nuanced than the clickbait headlines would have you believe. For British audiences, Trigger’s Broom is a classic analogy, where the roadsweeper has had the same broom for 20 years, all while changing the broom head 17 times and the handle 14 times. For international audiences, the Porsche 911 was launched in the same year as the IBM Mainframe and has undergone continual evolution, all while retaining its leadership as a paragon of performance automobiles. Put another way, nobody calls a modern 911 legacy, nor should they call the latest IBM z17.
The people who wrote the original code are retired, creating a knowledge gap that was previously hard to bridge. Nuance is needed here too, though: research that I led at my previous research firm highlights that finding skilled mainframe talent is no harder than hiring security practitioners or AI developers, and that must be taken into account.
AI tools are now designed to read that code, document it, and even convert it into Java or other modern languages while keeping it on the same reliable hardware. This approach aims to deliver the best of both worlds. You get the speed and flexibility of modern code with the legendary uptime of the mainframe. When you look at the sheer volume of transactions these machines handle, it is clear why they persist. They are built for a level of input and output that would make a standard server rack sweat. My analysis suggests that by using AI to refactor code in place, companies can finally stop spending eighty percent of their budget just to keep the lights on. They can start innovating on the platform they already trust.
The logic here is centered on data gravity. Data is heavy. It is expensive to move and risky to expose. If you have petabytes of transaction history sitting on a mainframe, it is far more efficient to bring the AI processing power to that location. This is what the latest generation of hardware and software assistants aim to deliver. They are designed to make the mainframe act like a first-class citizen in a hybrid environment. It is no longer a silo; it is a hub. The IBM Spyre accelerator is a key component in this equation. I spoke with the lead architects for this technology back in 2024 when it was announced. Fundamentally, it brings AI acceleration inside the box to deliver what I call "in-transaction AI" to workloads on the mainframe. Coupled with Small Language Models (SLMs), this can drive huge advances in applications like fraud scoring of credit card transactions, moving the paradigm from sampling certain transactions to scoring every transaction, which can move the needle for many financial institutions. According to my conversations with IBM execs, IBM has more than 200 use cases for Spyre that it has been working on with clients.
Availability: Never Discussed Until Everything Breaks
I wrote about how the mainframe is the only hardware that is earthquake tested back in April 2024 to highlight how the attention to detail that IBM puts into system availability is crucial for the types of workloads that run on the mainframe. IBM will stand behind a 99.999999% availability claim for systems running Db2 data sharing. To put that in context, eight nines of availability translates to 315.58 milliseconds of downtime per year. Public cloud SLAs are typically 99.9%, which equates to 8.77 hours per year. Why does this matter? As Mastercard said on stage at an IBM event last year that I attended, for certain workloads (and that qualifier is key) only the mainframe can hit the SLAs needed.
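The gap between eight nines and three nines is easy to verify with a few lines of arithmetic. This sketch assumes a 365.25-day year, which is how the 315.58-millisecond figure above falls out:

```python
# Downtime-per-year arithmetic behind the "nines" comparison.
# Assumes a 365.25-day year (31,557,600 seconds).

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # 31,557,600 seconds

def downtime_seconds_per_year(availability_pct: float) -> float:
    """Annual downtime, in seconds, implied by an availability percentage."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

# Eight nines (99.999999%): roughly 315.58 milliseconds of downtime a year
eight_nines_ms = downtime_seconds_per_year(99.999999) * 1000

# A typical 99.9% public cloud SLA: roughly 8.77 hours a year
three_nines_hours = downtime_seconds_per_year(99.9) / 3600

print(f"Eight nines: {eight_nines_ms:.2f} ms/year")
print(f"Three nines: {three_nines_hours:.2f} hours/year")
```

The difference is five orders of magnitude, which is why the comparison matters for transaction-processing SLAs.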
For more details on nines and availability, check out this Wikipedia link. If you want to take a deep dive into the mechanics of a cloud outage, I have been writing about this topic for some years now. Check out this report for my deepest dive, and this report for the most recent coverage.
I find the resilience of this platform to be quite remarkable. Every decade, a new technology is supposed to kill it. First it was client-server, then it was the internet, then it was the cloud. Each time, the mainframe has adapted. My own analysis is that AI is the most significant adaptation yet. It solves the one thing the mainframe was bad at: accessibility. By using AI to bridge the gap between legacy code and modern developers, the platform becomes accessible to a whole new generation of engineers who have never even seen a terminal.
I am not saying that cloud is bad; in fact, quite the opposite. The analogy I use is that workloads are like forms of transport. While the car is great, and services that leverage it like Uber and Lyft are great for certain journeys, the car is not suited to oceanic transport of freight or intercontinental air travel. The cloud, on-premises infrastructure, and the mainframe are designed for different workloads, SLAs, and performance requirements. All have their place.
Complexity is the enemy of the modern enterprise. Trying to manage thousands of microservices in the cloud can be just as difficult as managing a mainframe. The difference is that the mainframe is a known quantity. It is stable. If you can use AI to remove the "legacy" baggage from the code, you are left with the most reliable computing platform ever built. My perspective is that the market is finally realizing that the goal isn't to get off the mainframe, but to make the mainframe move as fast as the rest of the business.
Looking Ahead
Based on what we are observing, the narrative around the mainframe is being rewritten by the necessity of AI. The key trend that we are going to be tracking is the rate of adoption for these AI-assisted modernization tools among Tier 1 financial institutions. Rob Thomas gave examples in his rebuttal; we need to see more. While the broader market focuses on high-level AI applications, the real structural change is happening in the "basement" of the enterprise where the core ledgers live.
Based on HyperFRAME's analysis of the market, my perspective is that the mainframe will remain the backbone of the enterprise for at least another twenty years because of this AI pivot. When you look at the market as a whole, the focus on moving everything to a public cloud is starting to cool as the costs, complexities, and crucially, sovereignty become apparent.
However, you do need to be able to hold two truths at the same time. Certain workloads are badly suited to the mainframe, and players like AWS with its Transform service are well-suited to moving these off the platform. All while IBM will report MIPS growth and continue to drive mainframe topline expansion. It's not a zero-sum game; in fact, it's quite the opposite.
Going forward, we are going to be tracking how the mainframe ecosystem evolves to support more open source AI frameworks directly on the hardware. HyperFRAME will be tracking how IBM continues to perform on the hardware refresh deployment of z17 in future quarters. We will also be tracking the other side of the debate, and a directional measure will be the success of the AWS Transform service, as it represents the most robust and holistic approach by any cloud vendor, in our opinion.
Interested in a deep dive on how AI is modernizing the mainframe? Then read the HyperFRAME Research report here.
Steven Dickens | CEO HyperFRAME Research
Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the CEO and Principal Analyst at HyperFRAME Research.
Ranked consistently among the Top 10 Analysts by AR Insights and a contributor to Forbes, Steven's expert perspectives are sought after by tier one media outlets such as The Wall Street Journal and CNBC, and he is a regular on TV networks including the Schwab Network and Bloomberg.