Research Notes

OpenAI and NVIDIA to Partner to Deliver 10 Gigawatts of NVIDIA Systems

Under the new strategic OpenAI NVIDIA partnership, OpenAI will build and deploy millions of NVIDIA's next-generation GPUs across at least 10 gigawatts of AI data center capacity, transforming the AI ecosystem.

Key Highlights:

  • NVIDIA will invest up to $100 billion in OpenAI to build at least 10 gigawatts of next-generation AI data centers.
  • The partnership names NVIDIA OpenAI's preferred strategic compute and networking partner for its AI factory growth, with the first phase of systems expected to be online in the second half of 2026.
  • This massive infrastructure is considered a fundamental requirement for OpenAI to develop artificial general intelligence (AGI) and its advanced safety mechanisms.
  • To power this initiative, the companies may invest in their own on-site clean energy sources, such as nuclear reactors and renewable power plants.
  • The alliance will also focus on fostering a developer ecosystem and demonstrating tangible progress on ethical AI to maintain a competitive edge and build public trust.

The News

OpenAI and NVIDIA announced a letter of intent for a landmark strategic partnership to deploy at least 10 gigawatts of NVIDIA systems for OpenAI's next-generation AI infrastructure, which will train and run its next generation of models on the path to deploying superintelligence. For more information, read the NVIDIA press release.

Analyst Take

OpenAI and NVIDIA have announced a major strategic partnership to build next-generation AI infrastructure. Under the agreement, NVIDIA will invest up to $100 billion to help deploy at least 10 gigawatts of its systems for OpenAI's new data centers. The initial phase of this deployment, using the NVIDIA Vera Rubin platform, is expected to be operational by the second half of 2026.

OpenAI has named NVIDIA its preferred partner for compute and networking, a move that will support OpenAI's AI factory growth plans. The two companies will collaborate to align their product roadmaps, ensuring that OpenAI's model and software infrastructure are co-optimized with NVIDIA's hardware and software. This partnership will also complement the ongoing work both companies are already doing with other key collaborators, including Microsoft, Oracle, and SoftBank, to build advanced AI infrastructure.

OpenAI's user base has expanded to over 700 million weekly active users, with significant adoption among global enterprises, small businesses, and developers. The strategic partnership with NVIDIA is intended to support OpenAI's mission to develop artificial general intelligence (AGI) for the benefit of all humanity. The final details of the partnership are expected to be completed in the coming weeks.

Why the OpenAI NVIDIA Alliance is Key to Safeguarded AGI Development

From our perspective, the OpenAI NVIDIA partnership is integral to the development of AGI primarily because of the immense computational scale it provides. The creation of AGI, a hypothetical AI whose intellectual capabilities match or exceed a human's across most domains, requires an unprecedented level of processing power. The deal to deploy at least 10 gigawatts of NVIDIA systems, representing millions of GPUs, is a direct response to this need.

This infrastructure can enable OpenAI to train and run its next-generation models, which are far more complex and resource-intensive than their predecessors. As OpenAI's CEO, Sam Altman, stated, "Everything starts with compute," highlighting that this massive-scale infrastructure is the fundamental bedrock upon which new AI breakthroughs will be built. Without this level of investment and co-optimization, the company's research and development of more powerful, capable, and complex AI systems would be severely constrained, slowing the entire trajectory toward AGI.

While the partnership itself is focused on compute, we find that it implicitly ties into the development of ethical safeguards. The scale of the collaboration allows for the creation of sophisticated, resource-intensive safety mechanisms. Training AGI is not just about building a more intelligent system; it also requires massive computing resources to rigorously test for unintended consequences, biases, and harmful behaviors.

The partnership gives OpenAI the capacity to run extensive red-teaming exercises and conduct large-scale evaluations on its models before they are deployed. This infrastructure also supports the development of new tools for monitoring and controlling AI systems at an unprecedented scale, which is essential for managing the risks associated with AGI. Therefore, the partnership provides not only the power to build AGI but also the computational resources needed to develop and implement the robust safety frameworks required to ensure that such powerful technology benefits all of humanity, aligning with OpenAI's core mission.

10 Gigawatts: A Staggering Yet Attainable Power Requirement for AI Data Centers

A 10-gigawatt power capacity is massive, equivalent to the output of several large-scale nuclear power plants. It is a huge number for AI data centers because these facilities consolidate enormous amounts of computing power in one location: estimates suggest a single frontier AI training run may approach 1 gigawatt, and a single high-density GPU server can consume 10,000 watts or more. The high-density, 24/7 nature of these workloads puts tremendous strain on local and national power grids and represents a massive, concentrated demand comparable to the power needs of a large city or a small country.
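To put the 10-gigawatt figure in perspective, a back-of-envelope calculation is useful. The per-server power draw and GPUs-per-server figures below are illustrative assumptions on our part, not disclosed deal terms:

```python
# Back-of-envelope sizing for a 10 GW AI data center build-out.
# Per-server power and GPU count are illustrative assumptions, not deal terms.

TOTAL_CAPACITY_W = 10e9   # 10 gigawatts, per the announcement
SERVER_POWER_W = 10_000   # assumed draw of one high-density GPU server
GPUS_PER_SERVER = 8       # typical GPU count for a dense AI server (assumed)

servers = TOTAL_CAPACITY_W / SERVER_POWER_W
gpus = servers * GPUS_PER_SERVER

print(f"Servers supported: {servers:,.0f}")    # 1,000,000
print(f"GPUs (at 8/server): {gpus:,.0f}")      # 8,000,000
```

Even under these rough assumptions, the result lands in the millions of GPUs, consistent with the scale described in the announcement.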

We see that attaining 10 gigawatts of power capacity requires a comprehensive strategy that extends far beyond traditional data center construction. This amount of energy, comparable to what several nuclear power plants produce or a large metropolitan area consumes, means that OpenAI and NVIDIA will have to actively engage in large-scale energy and infrastructure development.

To fulfill their ambitious objectives, we expect that OpenAI and NVIDIA will likely focus on securing vast amounts of clean and reliable energy, going beyond simply connecting to the public grid. To meet the immense 10-gigawatt demand, which is enough to power a major city, they will likely enter into large-scale, long-term power purchase agreements (PPAs) for renewable energy. This could involve direct investments in the construction of new solar, wind, and geothermal power plants, effectively influencing the energy market and ensuring their data centers have a dedicated and sustainable power supply.

In addition to securing renewable energy, the duo may choose to build their own power generation facilities directly next to their data centers. This approach would bypass the potential strain on existing public power grids and provide a more reliable energy source. One option is to use compact and continuous sources of power like small modular nuclear reactors (SMRs). Another is to deploy on-site natural gas-powered microgrids that can be scaled up quickly and potentially use carbon capture technology to reduce emissions. 

Regardless of the method, generating their own power would give them greater control over their energy supply and help them achieve the immense scale required for their AI operations. A crucial part of this strategy is a relentless focus on efficiency within the data centers. This includes adopting advanced cooling techniques like liquid cooling, which is far more effective for high-density GPU racks than traditional air conditioning. Furthermore, both companies will co-optimize their hardware and software roadmaps to ensure that every watt of power is used as efficiently as possible for both AI training and inference.
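The efficiency point above can be made concrete with power usage effectiveness (PUE), the ratio of total facility power to IT equipment power. The PUE values below are illustrative assumptions for different cooling approaches, not figures from either company:

```python
# Illustrative effect of data center efficiency (PUE) on usable compute power.
# PUE = total facility power / IT equipment power; values below are assumed.

TOTAL_CAPACITY_W = 10e9  # 10 GW of total facility capacity

# Assumed PUEs: legacy air cooling, optimized air, and liquid cooling.
for pue in (1.5, 1.2, 1.1):
    it_power_gw = TOTAL_CAPACITY_W / pue / 1e9
    print(f"PUE {pue}: ~{it_power_gw:.1f} GW available for IT load")
# Prints ~6.7, ~8.3, and ~9.1 GW respectively.
```

Under these assumptions, moving from a 1.5 to a 1.1 PUE frees roughly 2.4 gigawatts for actual compute, which is why cooling efficiency matters so much at this scale.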

Looking Ahead

We believe the new investment and infrastructure partnership represents the next major leap, as these titans of AI look to work together to deploy 10 gigawatts of NVIDIA systems to fuel the next era of AI. For over a decade, NVIDIA and OpenAI have challenged each other to innovate, from the initial DGX supercomputer to the breakthrough launch of ChatGPT.

Over the next 12 months, OpenAI and NVIDIA can improve the outcomes of their partnership and boost their competitiveness by focusing on three key areas: accelerating infrastructure deployment, fostering a robust developer ecosystem, and demonstrating tangible progress on ethical AI development.

To swiftly realize the benefits of their partnership, OpenAI and NVIDIA must prioritize the rapid construction and operationalization of their new AI data centers. A key move would be to co-locate their teams, embedding NVIDIA hardware and software engineers directly with OpenAI's researchers. This tight integration would enable them to co-optimize hardware and software in real time, identifying and resolving bottlenecks much faster. 

These companies, in our opinion, should look to establish a joint task force to manage the complex logistics of securing land, permits, and power sources. By streamlining these processes, the powerhouse players can bring the initial phases of the 10-gigawatt infrastructure online ahead of schedule, giving them a significant first-mover advantage and demonstrating a quick return on their investment.

Boosting competitiveness in the long term requires more than just powerful hardware; it demands a thriving ecosystem of developers and applications. In the next year, the partnership should focus on creating a specialized platform that gives developers early access to this next-generation AI infrastructure. This platform would offer a suite of integrated tools and APIs tailored for running complex AI models. They could also launch a global developer challenge or hackathon centered around leveraging this new infrastructure to solve real-world problems. By doing so, they can attract the best minds in the field, drive innovation, and cement their position as the leading platform for AI development.

As the leaders in AI development, OpenAI and NVIDIA face increasing scrutiny over the safety and ethics of their technology. Over the next 12 months, they should make their commitment to ethical AI highly visible. This can be achieved by publicly sharing their research on AI safety and alignment, detailing how they are using the new infrastructure to rigorously test and red-team their models for potential risks. They should also collaborate with external ethics organizations to conduct independent audits of their safety protocols and publish the results. By proactively addressing these concerns, these players can build trust with regulators, the public, and potential partners, setting a new industry standard and strengthening their long-term competitiveness.

Author Information

Ron Westfall | Analyst In Residence

Ron Westfall is a prominent analyst in technology and business transformation. Recognized as a Top 20 Analyst by AR Insights and a TechTarget contributor, his insights are featured in major media outlets such as CNBC, Schwab Network, and NMG Media.

His expertise covers transformative fields such as Hybrid Cloud, AI Networking, Security Infrastructure, Edge Cloud Computing, Wireline/Wireless Connectivity, and 5G-IoT. Ron bridges the gap between C-suite strategic goals and the practical needs of end users and partners, driving technology ROI for leading organizations.

Author Information

Steven Dickens | CEO HyperFRAME Research

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the CEO and Principal Analyst at HyperFRAME Research.
Consistently ranked among the Top 10 Analysts by AR Insights and a contributor to Forbes, Steven offers expert perspectives sought after by tier-one media outlets such as The Wall Street Journal and CNBC, and he is a regular on TV networks including the Schwab Network and Bloomberg.