The Future of Hyperscaler Capital Expenditures: A Deep Dive into AI and Cloud Computing
Key Highlights
Hyperscaler capital expenditures (CapEx) are projected to reach $335 billion in 2025, a 16% increase from previous forecasts, driven by the demand for AI infrastructure.
Major hyperscalers like AWS, Google, and Microsoft are significantly increasing their CapEx, with AWS projecting $100 billion, Google $75 billion, and Microsoft $80 billion, to support their AI initiatives.
This surge in CapEx is benefiting semiconductor companies like NVIDIA, AMD, Broadcom, and Marvell, which provide the advanced chips required for AI computations.
The Jevons paradox suggests that increased efficiency in AI computing will lead to higher demand for computing power, further fueling the growth of the AI infrastructure market.
Investors should consider factors like diversification, technological innovation, supply chain risks, and regulatory changes when evaluating investments in the AI-driven semiconductor market.
The News:
Contrary to the DeepSeek narrative, particularly in regard to NVIDIA, the big four hyperscalers' aggregate 2025 CapEx forecasts rose 16% to $335 billion during earnings season, according to analysis by New Street. We are hearing a lot about the Jevons paradox, which, simply put, contends that improving compute efficiency increases demand. Against this backdrop, several of the big players reported earnings this past week, and those reports gave us a clearer picture of the CapEx these hyperscalers plan for the current fiscal year. Full details are in the latest earnings releases from Amazon, Google, Microsoft, and Oracle.
By the Numbers
CapEx Projections for the current fiscal year
AWS: $100 billion
Google: $75 billion
Microsoft: $80 billion
Oracle: No specific guidance in the latest earnings, but consensus estimates expect CapEx to roughly double the $6.87 billion spent in FY2024.
Cloud revenue growth (most recently reported quarter)
AWS: Revenue grew 19% year over year, reaching a $115 billion annualized run rate.
Google: Cloud revenue increased 30% year over year to $12 billion in the fourth quarter.
Microsoft: Microsoft Cloud surpassed $40 billion in quarterly revenue for the first time, up 21% year over year.
Oracle: Revenue increased 52% year over year to $2.4 billion in fiscal Q2 2025.
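The implied figures behind these numbers fall out of simple arithmetic. The sketch below is a back-of-the-envelope check: the inputs are the article's reported figures, and the derived values are rough implications of those figures, not independent estimates.

```python
# Back-of-the-envelope checks on the reported CapEx and revenue figures.
# Inputs are the article's numbers; outputs are rough implications only.

# Oracle gave no specific CapEx figure; consensus expects roughly double FY2024.
oracle_fy2024 = 6.87  # $ billions
oracle_estimate = 2 * oracle_fy2024
print(f"Oracle implied CapEx: ~${oracle_estimate:.1f}B")  # ~$13.7B

# New Street's $335B aggregate is a 16% rise, so the implied prior forecast:
prior_forecast = 335 / 1.16
print(f"Implied prior aggregate forecast: ~${prior_forecast:.0f}B")  # ~$289B

# AWS's $115B annualized run rate implies quarterly cloud revenue of:
aws_quarterly = 115 / 4
print(f"AWS implied quarterly cloud revenue: ~${aws_quarterly:.2f}B")  # ~$28.75B
```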
The convergence of hyperscaler capital expenditures (CapEx) and the rapid evolution of AI is dramatically reshaping the technology sector and significantly impacting the stock market, particularly for semiconductor companies. Major hyperscalers, including AWS, Google, Microsoft, and Oracle, are directing substantial investments toward AI infrastructure, driving growth for key players like AMD, NVIDIA, Marvell, Broadcom, and Intel. The recent earnings season revealed a notable surge in hyperscaler CapEx, with collective investment projected to reach an impressive $335 billion in 2025, a substantial 16% increase from previous estimates. This surge reflects the critical need to scale infrastructure capable of handling the demands of AI and machine learning workloads.
AWS, demonstrating its commitment to expanding its cloud infrastructure for AI-driven services, has projected a CapEx of $100 billion. With an annualized revenue run rate of $115 billion and 19% year-over-year growth, AWS is clearly prioritizing AI as a core growth driver. Google, investing $75 billion, is focusing on enhancing Search, YouTube, its Gemini range of AI services, and other platforms through AI innovations, contributing to a 30% revenue increase in the last reported quarter. Microsoft, with an $80 billion CapEx, is leveraging its cloud segment, which showed a 21% year-over-year revenue rise, to fuel its AI initiatives, particularly within Azure and AI-powered productivity tools. Oracle, while less specific with its figures, is expected to double its FY2024 CapEx of $6.87 billion, driven by a remarkable 52% revenue increase in its cloud sector, signaling a robust expansion into both AI and cloud computing.
What do the CapEx numbers mean for the AI trade?
This increased CapEx is a significant catalyst for semiconductor companies, as hyperscalers require advanced chips for AI computations. NVIDIA, a long-standing leader in the AI chip market with its GPUs, remains pivotal for training and deploying AI models. Despite the market skepticism triggered by developments like DeepSeek, the overall narrative around NVIDIA remains positive due to its established position in AI infrastructure. The Jevons paradox, which suggests that efficiency gains can lead to increased consumption, implies that NVIDIA's market could expand even further as hyperscalers demand more powerful computing solutions.
AMD is also benefiting from this trend, particularly with its MI300X chips gaining traction as viable alternatives to NVIDIA's offerings. Microsoft's strategic integration of AMD's technology into Azure exemplifies how hyperscalers are diversifying their AI compute sources, potentially impacting NVIDIA's market share.
Broadcom and Marvell, specializing in networking and specialized AI chips, are also key beneficiaries. Broadcom's collaboration with hyperscalers on next-generation chips and Marvell's focus on data infrastructure directly align with the expanding needs of AI capabilities at scale, suggesting potential stock appreciation. Intel, while facing challenges in the AI race, is not out of contention. Its aggressive investments in new technologies, such as its Ponte Vecchio GPUs for AI and HPC, could lead to a resurgence in its stock performance, especially if it can leverage its manufacturing expertise to produce competitive AI chips.
The Jevons paradox, applied to AI compute, suggests that as efficiency improves, the demand for computing power will likely rise, creating a continuous cycle of hardware investment. This dynamic not only supports the growth forecasts for hyperscalers but also underpins the positive sentiment surrounding semiconductor stocks involved in AI. Investors should, however, consider several crucial factors:
Diversification of risk: Hyperscalers are spreading their investments across multiple chip manufacturers. While this mitigates the risk for any single semiconductor stock, market leadership can still significantly influence share prices.
Technological innovation: Companies offering more power-efficient, performant, or cost-effective AI solutions are likely to experience stronger market performance.
Supply chain and geopolitical risks: Potential disruptions or shifts in technology export policies could impact these stocks.
Regulatory change: The evolving regulatory environment surrounding AI and tech giants could also shape the investment landscape.
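The Jevons-paradox argument can be made concrete with a toy demand model. The sketch below assumes a constant-elasticity demand curve, a standard textbook simplification, with purely illustrative parameter values: when demand for compute is elastic (elasticity greater than 1), efficiency gains that cut the effective price of compute actually increase total spend.

```python
# Toy illustration of the Jevons paradox for AI compute, assuming a
# constant-elasticity demand curve Q = scale * price**(-elasticity).
# All parameter values are hypothetical, for illustration only.

def compute_demand(price: float, elasticity: float, scale: float = 1.0) -> float:
    """Units of compute demanded at a given effective price per unit."""
    return scale * price ** (-elasticity)

def total_spend(price: float, elasticity: float, scale: float = 1.0) -> float:
    """Total spend on compute = price * quantity demanded."""
    return price * compute_demand(price, elasticity, scale)

# Suppose efficiency gains (better models, better chips) halve the price.
p_before, p_after = 1.0, 0.5

# Elastic demand (elasticity > 1): total spend RISES as price falls -- Jevons.
print(total_spend(p_before, 1.5), "->", total_spend(p_after, 1.5))

# Inelastic demand (elasticity < 1): total spend falls with price.
print(total_spend(p_before, 0.5), "->", total_spend(p_after, 0.5))
```

The hyperscaler CapEx forecasts rising alongside efficiency gains are consistent with the elastic case: cheaper compute unlocks new applications, and aggregate spend grows.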
What role does open-source licensing play?
The MIT License is a popular open-source license known for its simplicity and permissiveness. It allows users to freely use, copy, modify, merge, publish, distribute, sublicense, and sell the software. The sole requirement is including the license and copyright notice in all copies or substantial portions of the software. It's highly compatible with other licenses, facilitating integration into projects with diverse licensing schemes, and the software is provided "as is," without any warranty.
Compared to other open-source licenses:
Apache License 2.0: Shares the MIT License's permissiveness and its requirement to preserve copyright and license notices, and additionally grants patent rights, which the MIT License doesn't explicitly address. It includes extra clauses on contributions, such as stating significant changes, and a specific clause on trademark use. It is slightly more verbose but remains permissive.
GPL v3: Copyleft, meaning derivative works must also be distributed under the GPL. This contrasts sharply with the MIT License, which imposes no such requirement. While the GPL aims to keep software free, MIT allows proprietary use, potentially attracting commercial entities more readily.
3-Clause BSD: Very similar to MIT but adds a clause barring use of copyright holders' names for endorsement without permission.
2-Clause BSD: Even closer to MIT, removing the endorsement clause and making it almost identical in permissiveness.
Creative ML OpenRAIL-M: Focused on AI, it includes ethical usage guidelines to prevent misuse, unlike the MIT License's minimal restrictions.
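The license comparison can be condensed into a small lookup table. The sketch below is an informal summary of the distinctions discussed here, not legal advice, and it is deliberately simplified (patent terms, such as Apache 2.0's explicit grant, are not captured):

```python
# Informal summary of the license comparison in this section.
# Simplified on purpose -- not legal advice; patent terms are omitted.
licenses = {
    "MIT":          {"copyleft": False, "endorsement_clause": False, "ethical_use_terms": False},
    "Apache-2.0":   {"copyleft": False, "endorsement_clause": False, "ethical_use_terms": False},
    "GPL-3.0":      {"copyleft": True,  "endorsement_clause": False, "ethical_use_terms": False},
    "BSD-3-Clause": {"copyleft": False, "endorsement_clause": True,  "ethical_use_terms": False},
    "BSD-2-Clause": {"copyleft": False, "endorsement_clause": False, "ethical_use_terms": False},
    "OpenRAIL-M":   {"copyleft": False, "endorsement_clause": False, "ethical_use_terms": True},
}

# Licenses that allow incorporation into proprietary software without
# copyleft obligations -- the property the article highlights for MIT:
permissive = [name for name, terms in licenses.items() if not terms["copyleft"]]
print(permissive)
```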
In the AI context, models like GPT-2 were released under the MIT License, promoting wide accessibility for research and commercial use. However, some argue that not all permissively licensed AI models are "truly open-source" if they don't release training data or weights. Permissively licensed models like Mistral 7B (Apache 2.0) and the MIT-licensed Zephyr 7B have demonstrated competitive performance with fewer parameters than closed-source counterparts, offering a balance of capability and flexibility. A permissive license can foster a large community around a project, leading to more contributions, forks, and adaptations.
When comparing licenses in AI development, the MIT License doesn't mandate transparency in training data or model weights, which can be a limitation compared to licenses emphasizing this aspect for true open-source AI. Its permissiveness benefits commercial use, as companies can incorporate models into proprietary software easily. While the MIT License promotes innovation by reducing legal barriers, it may not directly address ethical concerns or misuse like some AI-specific licenses. The MIT License often results in broader community adoption but less control over how the technology is used or modified compared to stricter licenses.
In short, the MIT License is favored for its simplicity and permissiveness, making it attractive for developers and companies in AI who want to contribute to or use open-source models without stringent copyleft requirements or ethical stipulations. However, the lack of requirements for data transparency or ethical use means that while it fosters wide adoption and innovation, it might not align with all definitions or goals of open-source AI, particularly those emphasizing ethical deployment or data privacy.
We have already seen Sam Altman hint that OpenAI may rethink its stance on open-source model development, and the DeepSeek developments will certainly have created pressure within Meta to accelerate Llama's pace of innovation. TL;DR: watch this space.
Looking Ahead
The idea that the purported DeepSeek-fueled efficiency gains could somehow be detrimental to long-term growth overlooks the broader picture. Efficiency in AI doesn't mean a reduction in need; it means an increase in capability and application, which in turn fuels further demand for infrastructure.
For investors, analysts, and tech enthusiasts alike, the message is clear: the AI infrastructure boom is far from over. Companies like Amazon, Google, Microsoft, Meta, and the chip manufacturers directly tied to their success (like NVIDIA and AMD) are poised for continued growth, not just because they're spending more but because they're innovating at scale. And this bullish posture doesn't yet factor in the Stargate project, which could further fuel growth and provide another mega customer.
In conclusion, dismissing the potential of AI and cloud infrastructure investments based on short-term efficiency gains is shortsighted. The future of tech is not just about more data centers or chips; it's about what these enable. We will increasingly see a world where AI drives unprecedented levels of efficiency, personalization, and productivity across all facets of human activity. The hyperscaler capex figures for 2025 and beyond are not just numbers; they're a testament to the enduring, expanding horizon of AI.
However, investors must carefully navigate this landscape, considering innovation, market dynamics, and broader economic indicators to effectively capitalize on this trend. The AI stock trade, fueled by hyperscaler investments, is not just about the present but about anticipating the future trajectory of technological advancement.
Steven Dickens | CEO HyperFRAME Research
Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the CEO and Principal Analyst at HyperFRAME Research.
Ranked consistently among the Top 10 Analysts by AR Insights and a contributor to Forbes, Steven's expert perspectives are sought after by tier one media outlets such as The Wall Street Journal and CNBC, and he is a regular on TV networks including the Schwab Network and Bloomberg.