
From "Stargate" to AWS's mega computing order: OpenAI races to sign AI computing contracts, and NVIDIA and the storage giants win big

Over the past month, OpenAI has signed AI computing power supply agreements worth hundreds of billions of dollars with several tech giants, including a $38 billion deal with Amazon's AWS. NVIDIA, the leader in AI chips, is the main beneficiary of these agreements, particularly through its AI GPU computing hardware. The deals mark OpenAI's large-scale investment in AI computing infrastructure and are expected to drive growth for related companies.
According to Zhitong Finance APP, OpenAI, the global AI application leader valued at as much as $500 billion, has reached AI computing power supply agreements with several tech giants over the past month, with a combined scale running into the hundreds of billions of dollars. That total is very close to the $1.4 trillion AI computing infrastructure spending plan envisioned by OpenAI CEO Sam Altman.
NVIDIA (NVDA.US), the "AI chip giant"; the three major HBM players SK Hynix, Samsung, and Micron; and data center enterprise storage giants such as Western Digital, SanDisk, and Seagate are the biggest beneficiaries of OpenAI's recent wave of large-scale AI computing infrastructure contracts, from the $500 billion "Stargate" AI super infrastructure project to the latest mega deal with Amazon AWS.
Across the many agreements OpenAI has signed for these immense AI computing resources, NVIDIA's widely adopted and increasingly powerful AI GPU computing clusters are the key beneficiaries, followed by the giants focused on high-performance data center storage products. Other leading AI chip makers and cloud service providers are also sharing in the "super big cake" that OpenAI is serving up.

The latest deal was announced on Monday morning Eastern Time with AWS (Amazon Web Services), Amazon's cloud unit and the world's largest cloud computing provider. Under this seven-year AI computing agreement worth up to $38 billion, OpenAI will gain access through AWS to hundreds of thousands of NVIDIA AI GPUs, including the GB200 and GB300, which will run in clusters and can scale to "tens of millions" of CPUs to rapidly expand its generative AI and agentic AI inference workloads.
Shortly before that, OpenAI updated its cloud AI computing supply agreement with Microsoft (MSFT.US), one of its main backers, adding a purchase of up to $250 billion in Azure cloud services. A key term of the updated agreement is that Microsoft no longer holds a right of first refusal as OpenAI's provider of high-performance computing resources.
Going back a bit further, last month OpenAI disclosed a long-term computing power supply collaboration with AI ASIC leader Broadcom (AVGO.US) to jointly develop and deploy custom AI ASIC computing clusters with a capacity of up to 10 gigawatts. According to media reports, chip design leader Arm (ARM.US) is also part of the Broadcom-OpenAI computing infrastructure agreement and will help OpenAI build a server-grade central processing unit (an Arm-architecture data center server CPU) for AI training and inference workloads, to pair with the AI ASIC computing clusters OpenAI is co-designing with Broadcom.
Shortly before the Broadcom AI ASIC deal was announced, OpenAI had signed an innovative "equity-for-compute" bet with AMD (AMD.US), NVIDIA's long-time rival in the data center and PC space. OpenAI will deploy roughly 6 gigawatts of AMD AI GPU computing clusters over the next few years, with the first 1-gigawatt AMD Instinct MI450 GPU cluster deployment expected to begin officially in the second half of 2026.
As part of the agreement, AMD granted OpenAI a warrant to purchase up to 160 million shares of AMD common stock in the future. Once OpenAI completes the GPU cluster deployments and AMD's stock price reaches specific milestone targets, OpenAI could acquire these shares at almost no cost, making it roughly a 10% shareholder in the $400 billion chip giant.
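The rough arithmetic behind the "roughly 10%" figure can be sanity-checked with a quick sketch. Both inputs are approximate figures quoted in this article, not exact market data:

```python
# Back-of-the-envelope check of the AMD warrant math.
# Inputs are the approximate figures cited in the article.
warrant_shares = 160_000_000   # shares OpenAI may purchase via the warrant
amd_market_cap = 400e9         # ~$400 billion market cap cited

# A ~10% stake from 160M shares implies roughly 1.6B shares outstanding.
implied_shares_outstanding = warrant_shares / 0.10
# Dividing the cited market cap by that share count gives an implied price.
implied_share_price = amd_market_cap / implied_shares_outstanding

print(f"Implied shares outstanding: {implied_shares_outstanding:,.0f}")
print(f"Implied share price: ${implied_share_price:,.0f}")
```

The implied figures (about 1.6 billion shares and a share price around $250) are broadly consistent with AMD's actual capitalization at a $400 billion valuation, which is why the 160-million-share warrant translates to an approximately 10% stake.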
Beyond its seemingly insatiable demand for AI computing infrastructure, OpenAI has been rapidly accumulating partnerships. These include an agreement reached last week with digital payments giant PayPal (PYPL.US) to embed its digital wallet into ChatGPT, followed by similar e-commerce ecosystem partnerships between the Sam Altman-led AI unicorn and Shopify (SHOP.US), Etsy (ETSY.US), and Walmart (WMT.US).
OpenAI is also collaborating with CRM cloud software giant Salesforce (CRM.US) to integrate ChatGPT directly into the Slack system, enabling CRM work teams to quickly extract insights, draft content, and summarize complex conversations. Last month, U.S. life sciences and medical testing leader Thermo Fisher Scientific announced that it would integrate OpenAI's API into its key business functions and operational processes, such as product development, service delivery, and customer interaction.
All of these closely spaced contracts follow the $500 billion "Stargate" project announced at the beginning of the year, an AI super infrastructure effort often likened to a modern Manhattan Project, in which OpenAI and Oracle are deeply involved. Core partners include Oracle (ORCL.US), SoftBank, Arm, Microsoft, and NVIDIA, and OpenAI and Oracle have signed a multi-billion-dollar AI computing infrastructure supply agreement primarily serving "Stargate"-related projects. OpenAI is also involved in the "Stargate Global Branch" programs in the UAE, Argentina, and Norway, collaborating closely with local data center operators, whose AI computing infrastructure is generally built around NVIDIA AI GPU computing clusters.
This series of major agreements appears to be paving the way for OpenAI's eventual initial public offering (IPO). Reports suggest the IPO could come at the end of 2026 or in early 2027, at which point Wall Street could value the creator of ChatGPT at $1 trillion. That would be the second-highest initial market value in global stock market history, behind Saudi energy giant Saudi Aramco, which debuted in December 2019 with an initial market value of $1.7 trillion after raising roughly $25.6 billion in its IPO. OpenAI's starting scale would far exceed the $81.3 billion initial market value of Meta Platforms (then known as "Facebook") at its 2012 IPO. OpenAI's latest valuation is roughly $500 billion.
NVIDIA AI GPUs and high-performance data center storage feature prominently in every AI infrastructure project
The nearly $1 trillion in AI computing infrastructure agreements OpenAI has signed makes clear that these super AI infrastructure projects are closely tied to NVIDIA AI GPU computing clusters and to enterprise-grade high-performance data center storage (centered on HBM storage systems, enterprise-grade SSDs/HDDs, and server-grade DDR5 memory).
In this unprecedented AI investment cycle, centered on the iteration of large AI models and the expansion and construction of AI data centers, core AI computing component makers like NVIDIA are undoubtedly the biggest winners. Close behind are the high-end memory suppliers represented by HBM (SK Hynix, Samsung, and Micron) and the enterprise-grade high-performance storage makers serving AI data centers (nearline HDDs and data center SSDs). Together these two chains form a dual-engine "AI compute x storage" investment cycle: HBM storage systems are the first tier of storage, sitting closest to the AI GPU/AI ASIC computing clusters, while the enterprise-grade HDDs/SSDs that absorb the "deluge" of AI data are the other major winners of the AI infrastructure build-out.
The "Stargate" project led by OpenAI is expected to consume up to 40% of global DRAM output, and OpenAI has signed a letter of intent with Samsung and SK Hynix for up to 900,000 DRAM wafers per month, focused on DDR5 and HBM.

AI demand is pushing SK Hynix, the global HBM leader, to record profits: it reported a record operating profit of 11.4 trillion won (roughly $8 billion) and revealed that next year's orders for HBM and enterprise-grade NAND across its entire storage chip lineup are already sold out. Its stock price has tripled this year, while Micron and Samsung have also posted triple-digit percentage gains amid the unprecedented bull market narrative of a "storage supercycle."

Goldman Sachs previously published a research report stating that exceptionally strong enterprise demand for generative AI (Gen AI) has driven a surge in AI server shipments and higher HBM density per AI GPU. The firm significantly raised its estimate of the total HBM market size, forecasting roughly 100% compound annual growth (CAGR) from 2023 to 2026, from just $2.3 billion in 2023 to $30.2 billion in 2026, and expects the HBM market's supply-demand imbalance to persist in the coming years, benefiting major players SK Hynix, Samsung, and Micron.
In the unprecedented global "AI computing power race" around AI training and inference infrastructure, Wall Street firms such as Morgan Stanley are proclaiming that the "storage supercycle" has arrived. Surging demand for enterprise-grade storage drives has propelled the stock prices of data storage giants Seagate (STX.US), SanDisk (SNDK.US), and Western Digital (WDC.US) to triple-digit percentage gains this year, far outperforming the U.S. and even global stock markets.
Morgan Stanley stated in its report that amid the unprecedented AI infrastructure frenzy, where large enterprises and various government departments are investing heavily in AI, the demand for core storage chips closely related to AI training/inference systems remains extremely strong, driving a surge in revenue for data center storage businesses, including HBM storage systems, server-grade DDR5, and enterprise-grade SSDs.

It is reported that Samsung has taken the lead in suspending October DDR5 DRAM contract quotes, prompting SK Hynix, Micron, and other storage makers to follow suit, creating a "supply chain starvation"; quoting is not expected to resume until mid-November. Industry insiders note that in the fourth quarter, upstream manufacturers are quoting only to technology leaders and first-tier cloud giants, releasing almost no DDR5 capacity to other general customers, a sign that storage products have fully entered a seller's market.
The tide of AI computing power is unstoppable: is a $5 trillion market cap far from NVIDIA's ceiling?
After Jensen Huang unveiled a series of major positive catalysts at the GTC conference, and Microsoft, Google, and Facebook parent Meta signaled in their latest earnings calls that they would keep investing heavily in AI computing infrastructure and building AI data centers at scale, the global AI chip supply chain has settled into a long-term "bullish frenzy." NVIDIA (NVDA.US), the "AI chip superpower," has surpassed and held a $5 trillion market cap, becoming the first company in the world to do so.

Meanwhile, prices of high-performance DRAM and NAND storage products have continued to climb sharply. Together with OpenAI's more than $1 trillion in AI computing infrastructure deals, and with "chip foundry king" TSMC and storage giants Samsung and SK Hynix reporting exceptionally strong, above-expectations earnings and raising their revenue growth forecasts for 2025 and 2026, these factors have significantly reinforced the "long-term bull market narrative" for AI computing infrastructure sectors such as AI GPUs, ASICs, HBM, data center SSD storage systems, liquid cooling systems, and core power equipment.
The AI computing demand driven by generative AI applications and inference-side AI agents is a vast, still-expanding frontier, and is expected to keep the AI computing infrastructure market growing exponentially. Jensen Huang has also called "AI inference systems" NVIDIA's largest source of future revenue.
The ongoing explosive expansion of global AI computing demand, the ever-larger AI infrastructure projects led by the U.S. government, and the tech giants' continued massive investments in large data centers largely mean that, for long-term investors fond of NVIDIA and the AI computing supply chain, the "AI faith" sweeping the globe has not yet exhausted its "super catalysis" of computing leaders' stock prices. They are betting that shares of NVIDIA, TSMC, Micron, SK Hynix, Seagate, Western Digital, and other computing leaders will keep tracing a "bull market curve."
The news that has most excited Wall Street analysts recently is undoubtedly Jensen Huang's projection of $500 billion in visible cumulative data center revenue for 2025-2026, specifically the cumulative data center revenue from the Blackwell and next-generation Rubin architecture AI GPU product lines over the next five quarters.
In revenue terms, Blackwell and Rubin alone are projected to bring in around $500 billion over those five quarters. Notably, this figure excludes NVIDIA's other important segments, such as high-performance networking, automotive chips, and HPC, and includes no expectations for the Chinese market.
Sell-side and buy-side institutions on Wall Street have begun incorporating this astonishing expectation into their investment models, which is why Loop Capital has set the Street-high target price of $350, implying an NVIDIA market value of about $8.5 trillion, roughly 70% above its latest closing price of $206.88. Shortly before that, the highest target on Wall Street was HSBC's $320, a strong rebuttal to the "AI bubble" thesis recently popular in the market.
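The implied-upside arithmetic behind the Loop Capital target can be verified with a quick sketch; all three inputs are figures quoted in this article:

```python
# Back-of-the-envelope check of the Loop Capital target price math.
# All inputs are figures quoted in the article.
target_price = 350.00        # Loop Capital's Street-high target price
last_close = 206.88          # NVIDIA's latest closing price cited
implied_market_cap = 8.5e12  # implied market value at the target (~$8.5T)

# Upside from the last close to the target price.
upside = target_price / last_close - 1

# Share count implied by dividing the market cap by the target price.
implied_shares = implied_market_cap / target_price

print(f"Implied upside: {upside:.1%}")                       # ~69%
print(f"Implied shares outstanding: {implied_shares/1e9:.1f}B")
```

The computed upside of just over 69% matches the article's "approximately 70%," and the implied share count of about 24.3 billion is consistent with a $350 price producing an $8.5 trillion market value.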
In the view of top Wall Street institutions such as Loop Capital, Cantor Fitzgerald, HSBC, Goldman Sachs, and Morgan Stanley, NVIDIA will remain the core beneficiary of the trillion-dollar wave of AI spending. These institutions believe NVIDIA's streak of record highs is far from over, and analysts have been steadily raising their 12-month price targets, with more and more now eyeing the $300 milestone.

