
NVIDIA conference call: Jensen Huang counters, "What we see is completely opposite to the AI bubble," with company order visibility reaching $500 billion, Rubin to launch in the second half of next year

Jensen Huang is trying to prove to Wall Street that the engine of this AI technology revolution is not only still running but is also extending into broader fields.
In the face of the rampant AI bubble theory and the situation where well-known investors are reducing their positions or even exiting the market, NVIDIA CEO Jensen Huang chose to respond directly.
After the U.S. stock market closed on Wednesday, during a key earnings call that determines industry confidence, Jensen Huang clearly stated, "What we see is completely different from the bubble theory," and boldly claimed that NVIDIA has penetrated "every cloud, every computer, and every robotic system."
Jensen Huang attempted to prove to Wall Street that the engine of this AI technology revolution has not only not stalled but is instead penetrating into broader fields. He also emphasized that the world is simultaneously undergoing three fundamental technological platform transformations, which provide a solid foundation for the continued growth of artificial intelligence, and NVIDIA is at the center of this transformation.
During the call, Jensen Huang elaborated on the three driving forces behind growth: the transition from CPU to GPU accelerated computing in the post-Moore's Law era, the transformation of existing applications by generative AI, and the new revolution brought by Agentic AI. Meanwhile, company executives revealed that the cumulative revenue visibility for its next-generation chip platforms Blackwell and Rubin has reached $500 billion, with demand continuing to exceed expectations.
After releasing a third-quarter earnings report that far exceeded expectations, NVIDIA provided a stronger fourth-quarter performance guidance, expecting revenue to reach $65 billion. This optimistic forecast was made under the assumption that "there are no data center computing revenues from China," highlighting the strong momentum of its core business.
In the context of a recent significant pullback in AI sector stocks and deepening market concerns about AI investment returns and "circular trading," NVIDIA management's strong statements and better-than-expected guidance undoubtedly injected a dose of confidence into anxious investors, aiming to reshape the market's confidence in AI's long-term growth.
Key points from the earnings call:
Direct rebuttal to the "AI bubble theory": NVIDIA CEO Jensen Huang believes that the market is not in an AI bubble but is experiencing three fundamental platform transformation waves: the transition from general computing to accelerated computing, the transition from classical machine learning to generative AI, and the rise of new Agentic AI and physical AI.
$500 billion order visibility and "cloud service providers are sold out": CFO Colette Kress revealed that from the beginning of this year to the end of 2026, the revenue visibility for the Blackwell and Rubin platforms has reached $500 billion, and this number may continue to grow. She emphasized that "cloud service providers are sold out" and that GPU utilization is saturated.
Performance guidance exceeds expectations, unaffected by the Chinese market: NVIDIA expects fourth-quarter revenue to reach $65 billion, far exceeding market expectations. Notably, this guidance was made under the assumption of "no data center computing revenue from China."
Winning key new client Anthropic: The company announced a deep technical partnership with important AI model company Anthropic, marking the first time Anthropic has adopted NVIDIA architecture, with an initial computing power commitment of up to 1 gigawatt.
Next-generation platform Rubin progressing smoothly: The next-generation Vera Rubin platform is scheduled to launch in the second half of 2026, achieving another "X-factor" level leap in performance.
Countering the "AI closed-loop economy": Jensen Huang explained that strategic investments in companies like OpenAI and Anthropic are aimed at deepening technical cooperation, expanding the CUDA ecosystem, and acquiring shares in "once-in-a-generation" companies, rather than the market's concerns about "circular trading."
OpenAI's significant deployment: It was disclosed that NVIDIA is assisting OpenAI in building at least 10 gigawatts of AI data centers and supporting its transition from relying solely on cloud vendors to "self-built infrastructure."
Supply bottlenecks are the biggest challenge: NVIDIA acknowledged that supply chain issues (especially CoWoS packaging) and energy are the main factors limiting growth, and it is alleviating this by securing production capacity and local manufacturing (such as Amkor).
Jensen Huang firmly counters the "AI bubble theory": Three major platform transformations are occurring
In his opening statement during the conference call, Jensen Huang stated, "From our perspective, we see very different things." He believes the world is simultaneously experiencing three major platform transformations, for the first time since the advent of Moore's Law.
First, as Moore's Law slows, the computing field is shifting from CPU general computing to GPU-accelerated computing.
Second, AI itself has reached an "explosion point," with generative AI replacing traditional machine learning and reshaping the core businesses of large-scale data centers such as search and recommendation systems.
Finally, a new wave is emerging—agentic AI capable of reasoning, planning, and using tools, along with physical AI, which will give rise to new applications, companies, and products.
Jensen Huang emphasized, "When you consider infrastructure investments, please think about these three fundamental dynamics. Each will contribute to infrastructure growth in the coming years." He pointed out that NVIDIA's single architecture can support all three transformations, which is a key reason for its market selection.
Demand skyrocketing: $500 billion order visibility and "cloud service providers are sold out"
NVIDIA's Chief Financial Officer Colette Kress substantiated the booming demand with a series of data points. She stated that the company currently has $500 billion in revenue visibility for the Blackwell and Rubin platforms from the beginning of this year until the end of the 2026 calendar year. Kress added that this number will continue to grow. She cited the new agreements reached with KSA (Saudi Arabia) and the new collaboration with Anthropic, which have not been fully accounted for, stating, "We definitely have the opportunity to secure more orders on top of the announced $500 billion."
According to the financial report, revenue from the data center business reached a record $51 billion in the third quarter, a year-on-year increase of 66%. Kress emphasized, "Cloud service providers' GPUs are sold out, and our GPU installed base, whether the new generation or previous generations (including Blackwell, Hopper, and Ampere), is fully utilized." This directly addresses market concerns about the sustainability of AI chip demand.
Supply Bottlenecks and Capacity Challenges: It's Not Just a Chip Shortage, But Also a Power Shortage
This is the core risk point that the market is most concerned about during this conference call. Despite the demand side being "off the charts," can the supply side keep up?
- Capacity Ramp-Up: Management admitted that the production and delivery of Blackwell chips are under extreme pressure, and the supply-demand imbalance will continue for several quarters. To alleviate this bottleneck, NVIDIA is operating its supply chain at full speed.
- Naming Amkor: To build a more resilient supply chain, the CFO specifically mentioned the collaboration with packaging giant Amkor and emphasized the production layout in Arizona. Following this news, Amkor's stock price surged by 8% in after-hours trading, indicating the market's high sensitivity to NVIDIA's supply chain partners.
- Power and Physical Limitations: When asked by analysts about "the biggest growth bottleneck," Jensen Huang did not shy away, stating that power, heat dissipation, and liquid cooling are all significant challenges. He acknowledged that building gigawatt-level data centers requires not only chips but also complex energy infrastructure.
- Money for Capacity: Huang boldly stated that NVIDIA's massive cash flow and balance sheet are core weapons in its supply chain management, saying, "Suppliers can take our orders to the bank to secure loans for capacity expansion."
In response to the market focus on "Can supply keep up with demand," management provided detailed answers. Huang stated that NVIDIA's supply chain "essentially includes all technology companies globally," and has engaged in "excellent joint planning" with TSMC, storage suppliers, and system ODM partners. He acknowledged that under the current growth rate and scale, no part of the process is easy, but emphasized that the issues are "solvable."
OpenAI Deployment Details: 10 Gigawatts and "Self-Build" Ambitions
This conference call revealed a significant strategic upgrade in NVIDIA's collaboration with OpenAI, which may be key incremental information underestimated by the market.
- 10 Gigawatt (10GW) Ambitious Plan: This is not just a simple buyer-seller relationship; NVIDIA announced that it is establishing a strategic partnership with OpenAI to assist in building and deploying at least 10 gigawatts of AI data centers. This is an astonishing figure (1 gigawatt typically corresponds to hundreds of thousands of GPUs), indicating an exponential leap in OpenAI's computing power scale; a back-of-the-envelope sketch of the scale follows this list.
- From Renting to Self-Building: NVIDIA's management mentioned that while they currently mainly serve OpenAI through cloud providers like Azure and OCI, NVIDIA is supporting OpenAI in increasing its "self-built infrastructure." This means OpenAI is trying to reduce its complete reliance on cloud providers and directly hold computing power assets, with NVIDIA being a direct driver of this transformation.
- Equity Investment: Additionally, NVIDIA confirmed investment opportunities in OpenAI, which are not only financial investments but also aimed at deepening technological collaboration.
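As a rough sense of the 10-gigawatt scale, here is a back-of-the-envelope sketch; the all-in watts-per-GPU figure is an assumption for illustration, not a number from the call:

```python
# Back-of-the-envelope: GPUs implied by a 10 GW buildout.
# Assumption (not from the call): ~1.5-2.0 kW of facility power per GPU
# once CPUs, networking, cooling, and power delivery are included.
FACILITY_POWER_W = 10e9  # 10 gigawatts

for watts_per_gpu in (1_500, 2_000):
    gpus = FACILITY_POWER_W / watts_per_gpu
    print(f"At {watts_per_gpu} W per GPU all-in: ~{gpus / 1e6:.1f} million GPUs")

# At 1500 W per GPU all-in: ~6.7 million GPUs
# At 2000 W per GPU all-in: ~5.0 million GPUs
```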
New Clients and New Products: Anthropic Embraces NVIDIA for the First Time, Rubin Platform on Track for 2026 Launch
While consolidating existing clients, NVIDIA has also successfully developed new strategic partnerships. The company announced a deep technical collaboration with leading AI company Anthropic, marking the first time Anthropic has adopted NVIDIA's architecture. Kress revealed that Anthropic's computing power commitment initially includes up to 1 gigawatt of capacity from the Grace Blackwell and Vera Rubin systems.
Winning Anthropic as a key client means that NVIDIA's platform now encompasses almost all leading foundational model developers. Jensen Huang proudly stated during the Q&A session, "We are the only platform in the world that can run all AI models... We run OpenAI, we run Anthropic, we run xAI, we run Gemini."
Looking ahead, the progress of the next-generation platform Vera Rubin is also highly anticipated. Kress confirmed that the Rubin platform is on track to launch in the second half of 2026 and will achieve another "X-factor" level performance leap compared to Blackwell. She stated that relevant chip samples have already returned from supply chain partners, and the ecosystem will be ready for the rapid scaling of Rubin.
Countering the "AI Closed Loop Economy": Expanding the CUDA Ecosystem, Not "Circular Trading"
In response to investors' concerns about NVIDIA investing in its clients (referred to by some market participants as "circular trading"), Jensen Huang provided a detailed explanation during the conference call. He stated that the core goal of these strategic investments is to "expand the coverage and ecosystem of CUDA."
Taking the investments in OpenAI and Anthropic as examples, Huang noted that this is not only to establish deeper technological partnerships to support the rapid growth of these companies but also to allow NVIDIA to acquire shares in these "once-in-a-generation" companies. He emphasized, "We invest in OpenAI for deep collaboration... We have acquired shares in their company. I fully expect this investment to translate into extraordinary returns."
He pointed out that through these investments and collaborations, NVIDIA's platform can be optimized to ensure efficient operation of all mainstream AI models, thereby consolidating its market position. This explanation aims to convey a message to the market: these investments are strategic ecosystem building rather than merely financial operations to boost short-term demand.
Challenges in the Chinese Market and Performance Guidance
Kress admitted during the conference call that due to geopolitical issues and the increasingly fierce competition in the Chinese market, sales of the H20 chip targeted at the Chinese market this quarter were only about $50 million, and "large purchase orders did not materialize this quarter." She expressed the company's "disappointment" at being unable to ship more competitive products to China.
However, this setback has not affected NVIDIA's overall optimistic outlook. The company provided a revenue guidance of $65 billion (±2%) for the fourth quarter, far exceeding the market's general expectation of about $62 billion. Crucially, Kress explicitly stated, "We are not assuming any data center computing revenue from China in the fourth quarter."
This statement implies that even if the impact of the Chinese market is completely stripped away, NVIDIA's growth engine remains strong, with its growth momentum primarily coming from large-scale data centers, sovereign AI projects, and enterprise customers in the U.S. and other international markets.
Full Transcript of the Earnings Call (translated by AI tools):
NVIDIA Q3 Fiscal Year 2026 Earnings Call
Event Date: November 19, 2025
Company Name: NVIDIA
Source: NVIDIA
Presentation Segment
Conference Operator:
Good afternoon. I’m Sarah, today’s conference operator. At this time, I would like to welcome everyone to NVIDIA's Q3 earnings call. All lines have been muted to prevent any background noise.
After the speakers' remarks, there will be a Q&A session. (Operator instructions) Thank you. Toshiya Hari, you may begin the meeting.
Toshiya Hari, Vice President of Investor Relations and Strategic Finance:
Thank you, everyone.
Good afternoon, and welcome to NVIDIA's Q3 earnings call for fiscal year 2026. Joining me today are NVIDIA President and CEO Jensen Huang, as well as Executive Vice President and CFO Colette Kress. I would like to remind everyone that our conference call is being broadcast live on NVIDIA's Investor Relations website. The webcast will provide a replay until we hold the Q4 earnings call for fiscal year 2026.
The content of today’s conference call is the property of NVIDIA. No part of it may be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These statements involve many significant risks and uncertainties, and our actual results may differ materially.
For a discussion of factors that may affect our future financial performance and business, please refer to today's earnings press release, our most recent 10-K and 10-Q forms, and disclosures in any 8-K reports we may file with the U.S. Securities and Exchange Commission. All of our statements are based on information as of November 19, 2025. Except as required by law, we have no obligation to update any such statements.
In this conference call, we will discuss non-GAAP financial metrics. You can find the comparison table of these non-GAAP financial metrics with GAAP financial metrics in the CFO commentary we published on our official website. Next, I would like to invite Colette to speak.
Colette Kress, Executive Vice President and Chief Financial Officer:
Thank you, Toshiya.
We delivered another outstanding quarter, with revenue reaching $57 billion, a year-over-year increase of 62%; revenue grew a record $10 billion sequentially, a 22% quarter-on-quarter increase.
Our customers continue to commit to the transformation of three platforms, driving exponential growth in accelerated computing, powerful AI models, and intelligent applications. However, we are still in the early stages of these transformations, which will impact our work across various industries. From the beginning of this year to the end of the 2026 calendar year, we currently have revenue visibility of $500 billion for the Blackwell and Rubin platforms. By executing our annual product cadence and expanding our performance leadership through full-stack design, we believe NVIDIA will be the preferred choice for the estimated $3 trillion to $4 trillion in annual AI infrastructure buildout by the end of the decade.
The demand for AI infrastructure continues to exceed our expectations. Cloud service providers' capacity is sold out, and our GPU installed base, including both new and older products (such as Blackwell, Hopper, and Ampere), is fully utilized. Data center revenue for the third quarter reached a record $51 billion, a year-over-year increase of 66%, which is a significant achievement at our scale. Computing business grew by 56% year-over-year, primarily due to increased production of GB300; while networking business more than doubled, driven by the start of NVLink expansion and strong double-digit growth in Spectrum-X Ethernet and Quantum-X InfiniBand.
The trillion-dollar global hyperscale cloud service providers are transitioning search, recommendations, and content understanding from classical machine learning to generative AI. NVIDIA CUDA excels in both areas and is the ideal platform for this transformation, driving hundreds of billions in infrastructure investment. At Meta, AI recommendation systems are delivering higher quality and more relevant content, resulting in increased user time spent on applications like Facebook and Threads. Analysts continue to raise their total capital expenditure expectations for top CSPs and hyperscale cloud service providers for 2026, currently around $600 billion, over $200 billion higher than at the beginning of the year.
We see that the current transition of hyperscale workloads towards accelerated computing and generative AI accounts for about half of our long-term opportunities. Another growth pillar is the continued expansion of computing scale driven by foundational model builders (such as Anthropic, Mistral, OpenAI, Reflection, Safe Superintelligence, Thinking Machines Lab, and xAI). They are all actively expanding computing scale to enhance intelligence. The three scaling laws—pre-training, post-training, and inference—remain effective. In fact, we are witnessing a virtuous cycle emerging, where the three scaling laws and the acquisition of computing resources generate better intelligence, which in turn increases adoption and profits.
OpenAI recently shared that its weekly user count has grown to 800 million. The number of enterprise customers has increased to 1 million. Moreover, its gross margin is healthy. Anthropic recently reported that as of last month, its annualized revenue has reached $7 billion, up from $1 billion at the beginning of the year.
We are also seeing a surge in agent AI across various industries and tasks. Companies like Cursor, Anthropic, OpenEvidence, Epic, and Abridge are experiencing a spike in user growth as they empower the existing workforce, providing clear ROI for coders and healthcare professionals. The world's most important enterprise software platforms, such as ServiceNow, CrowdStrike, and SAP, are integrating NVIDIA's accelerated computing and AI stack. Our new partner Palantir is leveraging NVIDIA's CUDA-X libraries and AI models for the first time to enhance its extremely popular Ontology platform.
Previously, like most enterprise software platforms, Ontology only ran on CPUs. Lowe's is utilizing the platform to build supply chain agility, reduce costs, and improve customer satisfaction. The business community is widely leveraging AI to enhance productivity, efficiency, and reduce costs. RBC is significantly improving analyst productivity with agent AI, reducing report generation time from hours to minutes.
AI and digital twins are helping Unilever double the speed of content creation and cut costs by 50%. Salesforce's engineering team has seen at least a 30% increase in productivity for new code development after adopting Cursor. In the past quarter, we announced AI factory and infrastructure projects totaling the equivalent of 5 million GPUs. This demand spans every market: CSPs, sovereign entities, model builders, enterprises, and supercomputing centers, including multiple flagship construction projects.
These include xAI's Colossus II, the world's first gigawatt-level data center, and Eli Lilly's AI factory for drug discovery, the most powerful data center in the pharmaceutical industry. Just today, AWS and Humain expanded their partnership, including the deployment of up to 150,000 AI accelerators, including our GB300. xAI and Humain also announced a partnership to jointly develop a world-class GPU data center network, centered around a flagship 500-megawatt facility.
Blackwell's momentum further strengthened in the third quarter, with GB300 revenue surpassing GB200, accounting for about two-thirds of Blackwell's total revenue. The transition to GB300 has been very smooth, with mass production shipments to most major cloud service providers, hyperscale cloud providers, and GPU clouds, driving their growth. The Hopper platform has entered its 13th quarter since launch, recording approximately $2 billion in revenue in the third quarter. H20 sales amounted to approximately $50 million. Due to geopolitical issues and increasingly fierce competition in the Chinese market, large procurement orders were not realized this quarter.
While we are disappointed with the current situation that prevents us from shipping more competitive data center computing products to China, we are committed to continuing our engagement with the U.S. and Chinese governments.
The Rubin platform is on track to begin ramping up production in the second half of 2026. The Vera Rubin platform, powered by seven chips, will again achieve X times performance improvement relative to Blackwell. We have received chips from our supply chain partners and are pleased to report that NVIDIA's global team is executing the launch work excellently. Rubin is our third-generation rack-scale system that significantly redefines manufacturability while maintaining compatibility with Grace Blackwell. Our supply chain, data center ecosystem, and cloud partners have now mastered the process of building to installation of NVIDIA's rack architecture. Our ecosystem will be ready for rapid Rubin production ramp-up.
Our annual X-times performance leaps improve performance per dollar while reducing computing costs for customers. The long lifespan of NVIDIA CUDA GPUs is a significant TCO advantage compared to other accelerators. The compatibility of CUDA and our large installed base extend the lifespan of NVIDIA systems far beyond their initially estimated lifespan. For over twenty years, we have continuously optimized the CUDA ecosystem, improving existing workloads, accelerating new workloads, and increasing throughput with each software release. Accelerators that lack CUDA and NVIDIA's proven, versatile architecture become obsolete within a few years as model technologies evolve. Thanks to CUDA and significant improvements in the software stack, the A100 GPUs we shipped six years ago are still running at full capacity today.
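To make the TCO argument concrete, here is a minimal illustrative sketch; all inputs are placeholder assumptions, not NVIDIA figures. Cost per token falls when a generational leap raises throughput faster than system price, and a longer useful life spreads the capex over more tokens:

```python
# Illustrative cost-per-token comparison (all inputs are placeholders).
def cost_per_token(system_price_usd, power_kw, usd_per_kwh,
                   tokens_per_sec, life_years):
    seconds = life_years * 365 * 24 * 3600
    energy_usd = power_kw * usd_per_kwh * seconds / 3600  # kWh used * price
    total_tokens = tokens_per_sec * seconds
    return (system_price_usd + energy_usd) / total_tokens

# An assumed generational leap: 10x throughput for 1.5x the price,
# amortized over the same 3-year life, cuts cost per token sharply.
old_gen = cost_per_token(30_000, 1.0, 0.08, 1_000, 3)
new_gen = cost_per_token(45_000, 1.2, 0.08, 10_000, 3)
print(f"old: ${old_gen:.2e}/token, new: ${new_gen:.2e}/token, "
      f"improvement: {old_gen / new_gen:.1f}x")
```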
We have evolved from a gaming GPU company into an AI data center infrastructure company over the past 25 years. Our innovation capabilities in CPU, GPU, networking, and software, as well as our ability to ultimately reduce cost per token, are unparalleled in the industry.
Our networking business, built specifically for AI, is now the largest in the world, generating $8.2 billion in revenue, a year-on-year increase of 162%, with NVLink, InfiniBand, and Spectrum-X Ethernet all contributing to the growth. We are winning in data center networking, as most AI deployments now include our switches, with the Ethernet GPU attach rate roughly comparable to InfiniBand's. Meta, Microsoft, Oracle, and xAI are using Spectrum-X Ethernet switches to build gigawatt-level AI factories, each running their chosen operating systems, highlighting the flexibility and openness of our platform. We recently launched Spectrum-XGS, a scale-across technology supporting gigawatt-level AI factories. NVIDIA is the only company with scale-up, scale-out, and scale-across capabilities in AI, which solidifies our unique position in the market as an AI infrastructure provider. Customer interest in NVLink Fusion continues to grow. In October, we announced a strategic partnership with Fujitsu to integrate Fujitsu's CPUs and NVIDIA's GPUs through NVLink Fusion, connecting our vast ecosystems. We also announced a collaboration with Intel to develop multiple generations of custom data center and PC products, using NVLink to connect the NVIDIA and Intel ecosystems.
This week at Supercomputing 25, Arm announced it will integrate NVLink IP so customers can build CPU SoCs that connect to NVIDIA. NVLink has now evolved to its fifth generation and is the only proven scale-up technology on the market today. In the latest MLPerf training results, Blackwell Ultra's training time was five times faster than Hopper's. NVIDIA swept every benchmark. Notably, NVIDIA was the only training platform to use FP4 while meeting MLPerf's stringent accuracy standards.
In the SemiAnalysis inference benchmarks, Blackwell achieved the highest performance and lowest total cost of ownership across every model and use case. Particularly important is Blackwell NVLink's performance on mixture-of-experts models, the architecture of the world's most popular inference models. On DeepSeek R1, Blackwell achieved a 10x improvement in performance per watt and a 10x reduction in cost per token compared to H200, a significant generational leap brought about by our extreme co-design approach. NVIDIA Dynamo, an open-source low-latency modular inference framework, has now been adopted by all major cloud service providers.
With the support of Dynamo and disaggregated inference, AWS, Google Cloud, Microsoft Azure, and OCI have improved inference performance for complex AI models (such as MoE models), enhancing AI inference for enterprise cloud customers. We are in discussions for a strategic partnership with OpenAI, focusing on helping them build and deploy at least 10 gigawatts of AI data centers. Additionally, we have the opportunity to invest in the company. We serve OpenAI through their cloud partners (Microsoft Azure, OCI, and CoreWeave).
In the foreseeable future, we will continue to do so. As they scale up, we are excited to support the company in increasing its self-built infrastructure, and we are working towards finalizing an agreement and look forward to supporting OpenAI's growth. Yesterday, we celebrated the announcement with Anthropic. This is the first time Anthropic has adopted the NVIDIA platform, and we are establishing a deep technical partnership to support Anthropic's rapid growth.
We will collaborate to optimize the Anthropic model for CUDA, providing the best performance, efficiency, and TCO. We will also optimize future NVIDIA architectures for Anthropic's workloads. Anthropic's computing commitment initially includes up to 1 gigawatt of computing capacity from the Grace Blackwell and Vera Rubin systems. Our strategic investments in Anthropic, Mistral, OpenAI, Reflection, Thinking Machines, and others represent partnerships aimed at expanding the NVIDIA CUDA AI ecosystem and enabling each model to achieve optimal performance on NVIDIA platforms. We will continue to maintain strategic investments while adhering to our rigorous cash flow management approach.
Physical AI is already a multi-billion-dollar business, corresponding to trillions of dollars in opportunities, and is NVIDIA's next growth pillar. Leading U.S. manufacturers and robotics innovators are leveraging NVIDIA's three-computer architecture: training on NVIDIA platforms, testing on Omniverse computers, and deploying real-world AI on Jetson robotics computers. PTC and Siemens have launched new services to bring Omniverse-driven digital twin workflows to their extensive customer installed bases. Companies including Belden, Caterpillar, Foxconn, Lucid Motors, Toyota, TSMC, and Wistron are building Omniverse digital twin factories to accelerate AI-driven manufacturing and automation. Agility Robotics, Amazon Robotics, Figure, and Skild AI are building on our platform, utilizing products such as NVIDIA Cosmos foundation models for development, Omniverse for simulation and validation, and Jetson to power the next generation of intelligent robots.
We continue to focus on building resilience and redundancy into the global supply chain. Last month, we celebrated the first Blackwell wafer produced in the U.S. in collaboration with TSMC. We will continue to work with Foxconn, Wistron, Amkor, SPIL, and others to expand our U.S. presence over the next four years.
Gaming revenue reached $4.3 billion, a 30% year-over-year increase, driven by sustained momentum from Blackwell and strong demand. End-market sales remain robust, and channel inventory levels are normal ahead of the holiday season. Steam recently broke its concurrent user record, reaching 42 million gamers, while thousands of fans celebrated the 25th anniversary of GeForce at the GeForce Gaming Carnival in South Korea.
NVIDIA's professional visualization has evolved into the computer for engineers and developers, whether for graphics or AI. Professional visualization revenue reached $760 million, a 56% year-over-year increase, setting a new record. Growth was driven by DGX Spark, the world's smallest AI supercomputer, built on a small-form-factor Grace Blackwell configuration.
Automotive revenue was $592 million, a year-on-year increase of 32%, mainly due to autonomous driving solutions. We are collaborating with Uber to expand the world's largest L4-ready autonomous vehicle fleet, which is built on the new NVIDIA Hyperion L4 Robotaxi reference architecture.
Moving on to other parts of the income statement. The GAAP gross margin was 73.4%, and the non-GAAP gross margin was 73.6%, exceeding our expectations. The quarter-on-quarter increase in gross margin was due to our data center business mix, improvements in cycle times, and cost structure.
GAAP operating expenses increased by 8% quarter-on-quarter, and on a non-GAAP basis, they increased by 11%. The growth was driven by infrastructure computing as well as higher compensation and benefits and engineering development costs. The non-GAAP effective tax rate for the third quarter was slightly above 17%, higher than our guidance of 16.5% due to strong performance in U.S. revenue.
On our balance sheet, inventory increased by 32% quarter-on-quarter, while supply commitments increased by 63%. We are preparing for significant growth in the future and are satisfied with our ability to execute on opportunities.
Alright, let me talk about the outlook for the fourth quarter. Total revenue is expected to be $65 billion, plus or minus 2%. At the midpoint, our outlook implies a quarter-on-quarter growth of 14%, driven by continued momentum from the Blackwell architecture. Consistent with the previous quarter, we have not assumed any data center computing revenue from China. GAAP and non-GAAP gross margins are expected to be 74.8% and 75%, respectively, plus or minus 50 basis points.
Looking ahead to fiscal year 2027, input costs are rising, but we are working to maintain gross margins in the mid-70% range. GAAP and non-GAAP operating expenses are expected to be approximately $6.7 billion and $5 billion, respectively. GAAP and non-GAAP other income and expenses are expected to yield approximately $500 million in gains, excluding gains and losses from non-listed and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items.
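As a quick arithmetic check of the fourth-quarter outlook above, using only figures stated in the call:

```python
# Consistency check of the Q4 outlook against reported Q3 revenue.
q3_revenue_b = 57.0    # reported Q3 revenue, $B
q4_midpoint_b = 65.0   # guided Q4 revenue midpoint, $B, +/- 2%

qoq = (q4_midpoint_b - q3_revenue_b) / q3_revenue_b
print(f"Implied QoQ growth: {qoq:.1%}")                       # ~14.0%
print(f"Range: ${q4_midpoint_b * 0.98:.1f}B-${q4_midpoint_b * 1.02:.1f}B")
```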
At this point, let me hand the call over to Jensen Huang to say a few words.
Jensen Huang, Founder, President, and CEO:
Thank you, Colette.
There has been a lot of discussion about an AI bubble. From our perspective, what we see is quite different. As a reminder, NVIDIA is unlike any other accelerator. We perform exceptionally well at every stage of AI, from pre-training and post-training to inference.
With our two decades of investment in the CUDA-X accelerated libraries, we are also excelling in scientific and engineering simulations, computer graphics, structured data processing, and classical machine learning. The world is simultaneously experiencing three massive platform transformations. This is the first time since the advent of Moore's Law. NVIDIA uniquely addresses each of these three transformations.
The first transformation is the shift from CPU general-purpose computing to GPU-accelerated computing, as Moore's Law slows down. The world has made significant investments in non-AI software, from data processing to scientific and engineering simulations, representing hundreds of billions of dollars in computing and cloud computing expenditures each year. Many applications that used to run solely on CPUs are now rapidly transitioning to CUDA GPUs. Accelerated computing has reached a critical point.
Secondly, AI has also reached a critical point, transforming existing applications while giving rise to entirely new ones. For existing applications, generative AI is replacing classical machine learning, applied to foundational areas of large-scale infrastructure such as search ranking, recommendation systems, ad targeting, click-through rate prediction, and content moderation. Meta's GEM, a foundational model for ad recommendations trained on large GPU clusters, is an example of this shift. In the second quarter, Meta reported that, thanks to the generative AI-based GEM, Instagram's ad conversion rate increased by over 5%, and Facebook's news feed ad conversion rate improved by 3%. The transition to generative AI represents a massive revenue growth opportunity for large-scale cloud service providers.
Now, a new wave is emerging: agentic AI systems that can reason, plan, and use tools. From coding assistants like Cursor and Claude Code, to radiology tools like iDoc, legal assistants like Harvey, and AI drivers like Tesla FSD and Waymo, these systems mark the next frontier of computing. The fastest-growing companies in the world, including OpenAI, Anthropic, xAI, Google, Cursor, Lovable, Replit, Cognition AI, OpenEvidence, Abridge, and Tesla, are pioneering agentic AI.
Thus, there are three large-scale platform transformations. The transition to accelerated computing is foundational and necessary, crucial in the post-Moore's Law era. The transition to generative AI is transformative and essential, empowering existing applications and business models. The transition to Agentic and physical AI will be revolutionary, giving rise to new applications, companies, products, and services.
When considering infrastructure investments, please take these three fundamental dynamics into account. Each will drive infrastructure growth in the coming years. NVIDIA has been selected because our singular architecture can achieve all three transformations and is applicable to any form and modality of AI, across all industries, spanning every stage of AI, addressing all diverse computing needs in the cloud, and extending from the cloud to enterprises and robotics.
One architecture.
Toshiya, over to you.
Toshiya Hari, Vice President of Investor Relations and Strategic Finance:
We will now open the call for questions. Operator, please collect the questions.
Q&A Session
Conference Operator:
Thank you.
(Operator Instructions) Thank you. Your first question comes from Joseph Moore of Morgan Stanley. Your line is open.
Joseph Moore:
Very good.
Thank you. I would like to ask you to provide an update. You mentioned at GTC that Blackwell plus Rubin would generate $500 billion in revenue in 2025 and 2026. At that time, you mentioned that $150 billion had already been shipped. As the quarter ends, are these overall parameters still valid, meaning there is still $350 billion in the next 14 months or so? I assume you haven't seen all the demand during this period.
Is there a possibility of upward adjustments to these numbers in the future?
Colette Kress, Executive Vice President and Chief Financial Officer:
Yes. Thank you, Joe. Let me respond to that first. Yes, that's correct.
We are working towards the $500 billion forecast. As several quarters have completed, we are on track. Now, we have several quarters ahead of us that will take us to the end of the 2026 calendar year. This number will grow.
I am confident that we will see additional compute demand that can be shipped within calendar year 2026. We shipped $50 billion this quarter, but the picture wouldn't be complete if we didn't say we expect to receive more orders. For example, just today, we announced with Saudi Arabia an agreement that alone adds 400,000 to 600,000 GPUs over three years. Anthropic's is also brand new.
So, there is absolutely an opportunity to gain more beyond the $500 billion we announced.
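The arithmetic behind this exchange, using the figures as stated in the question and answer:

```python
# The $500B visibility arithmetic as framed in the question.
total_visibility_b = 500.0   # Blackwell + Rubin, through end of CY2026
shipped_at_gtc_b = 150.0     # already shipped when the $500B was announced
months_left = 14             # Joseph Moore's "next 14 months or so"

remaining_b = total_visibility_b - shipped_at_gtc_b
print(f"Remaining: ${remaining_b:.0f}B over ~{months_left} months "
      f"(~${remaining_b / months_left:.0f}B/month average)")
# Remaining: $350B over ~14 months (~$25B/month average)
```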
Conference Operator:
The next question comes from CJ Muse of Cantor Fitzgerald. Your line is open.
CJ Muse:
Yes, good afternoon.
Thank you for taking my question. Clearly, there are significant concerns around the scale of AI infrastructure build-out and the ability to fund these plans, as well as the return on investment. However, at the same time, you talk about products being sold out, with every deployed GPU being utilized. The AI world has not yet seen huge returns from the B300, let alone Rubin, and Gemini 3 has just been announced, with Grok 5 coming soon. So the question is, given this context, do you see a realistic path for supply to catch up with demand in the next 12 to 18 months? Or do you think this might extend into a later timeframe?
Jensen Huang, Founder, President, and CEO:
Well, as you know, we have been doing very well in planning our supply chain. NVIDIA's supply chain essentially includes all the tech companies in the world. TSMC and its packaging, our memory suppliers and memory partners, as well as all our system ODMs have been doing great planning with us. We have been preparing for a big year.
For some time now, we have seen the three transformations I just mentioned: the shift from general computing to accelerated computing. It is very important to recognize that AI is not just agentic AI; generative AI is changing the way hyperscale cloud providers have done work on CPUs in the past. Generative AI enables them to shift search, recommendation systems, advertising recommendations, and targeting to generative AI. This transformation is still ongoing.
So, whether you are installing NVIDIA GPUs for data processing, or installing for recommendation systems for generative AI, or building infrastructure for agent chatbots and the type of AI most people think of when they think of AI, all these applications are accelerated by NVIDIA. Therefore, when you look at total spending, it is very important to consider every aspect. They are all growing. They are related but not the same. But the wonderful thing is that they all run on NVIDIA GPUs.
At the same time, as the quality of AI models improves at an astonishing rate, their adoption across different use cases is surging, whether in code assistance (which NVIDIA also uses extensively) or beyond us. I mean, the fastest-growing applications in history are the combination of Cursor, Claude Code, OpenAI's Codex, and GitHub Copilot. These applications are the fastest-growing in history. And they are not just being used by software engineers.
Due to natural language coding, it is being used by engineers, marketers, and supply chain planners across companies. So I think this is just one example, and there are many such examples, whether it's OpenEvidence and its work in healthcare, or Runway's work in digital video editing. I mean, many very exciting startups are leveraging generative AI and agentic AI, and they are growing quite rapidly. Not to mention, all of us are using it more.
So, all this exponential growth, not to mention that just today I read a message from Demis, who said that the pre-training and post-training laws are fully effective, and Gemini 3 leverages the scaling laws and achieves a huge quality improvement in model performance. So we see all this exponential growth happening simultaneously. Always going back to first principles, thinking about what is happening with every dynamic I mentioned earlier: from general computing to accelerated computing, generative AI replacing classical machine learning, and of course, agent AI, which is a whole new category.
Conference Operator:
The next question comes from Vivek Arya of Bank of America Securities. Your line is open.
Vivek Arya:
Thank you for taking my question. I am curious, in that $500 billion number, what is your assumption for NVIDIA content per gigawatt? Because we hear numbers as low as $25 billion per gigawatt and as high as $30 billion or $40 billion per gigawatt. So I am curious, as part of that $500 billion number, what power and dollars per gigawatt do you assume?
Then in the long term, Jensen, you mentioned a $3 trillion to $4 trillion data center market by 2030. How much of that do you think needs vendor financing, and how much can be supported by the cash flows of your large customers, governments, or enterprises? Thank you.
Jensen Huang, Founder, President, and CEO:
From Ampere to Hopper, from Hopper to Blackwell, and from Blackwell to Rubin, our share of the data center is increasing with each generation. The Hopper generation is around $20 to $25 billion per gigawatt. The Blackwell generation, especially Grace Blackwell, is about $30 billion—let's say $30 billion, plus or minus. Rubin might be even higher. In each generation the acceleration is X times, so the TCO for customers improves by X times. Most importantly, you still only have one gigawatt of power—a one-gigawatt data center has one gigawatt of electricity—so performance per watt and architectural efficiency are extremely important.
Architectural efficiency cannot be achieved through brute force; there is no brute force involved. That one gigawatt translates directly: your performance per watt directly and absolutely translates into your revenue. That's why choosing the right architecture is so important now; the world has no power to waste.
So we must collaborate on design across the entire stack, spanning frameworks and models, across the entire data center, even power and cooling, optimizing the entire supply chain and ecosystem. Therefore, with each generation, our economic contribution will be greater. The value we deliver will be greater, but most importantly, the energy efficiency of each generation will be outstanding.
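Huang's fixed-power argument reduces to a proportionality: under a fixed one-gigawatt budget, tokens served (and hence revenue) scale linearly with performance per watt. A minimal sketch, with the token price and efficiency figures as illustrative assumptions:

```python
# Fixed power budget: revenue scales with performance per watt.
POWER_BUDGET_W = 1e9        # one gigawatt, fixed
USD_PER_MTOKEN = 1.0        # assumed service price, $ per million tokens

def annual_revenue_usd(tokens_per_joule):
    tokens_per_sec = tokens_per_joule * POWER_BUDGET_W  # tok/J * J/s
    tokens_per_year = tokens_per_sec * 365 * 24 * 3600
    return tokens_per_year / 1e6 * USD_PER_MTOKEN

gen_a = annual_revenue_usd(0.5)  # assumed efficiency, prior generation
gen_b = annual_revenue_usd(5.0)  # assumed 10x perf/watt leap
print(f"Gen A: ${gen_a / 1e9:.0f}B/yr, Gen B: ${gen_b / 1e9:.0f}B/yr, "
      f"ratio: {gen_b / gen_a:.0f}x")  # revenue tracks perf/watt exactly
```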
Regarding sustained growth, the financing of our customers is determined by themselves. We see opportunities for continued growth for some time. Remember, most of the focus today is on hyperscale cloud service providers. One truly misunderstood area regarding hyperscale cloud service providers is that the investment in NVIDIA GPUs not only improves their scale, speed, and cost in general computing, which is the first point, but also because Moore's Law has indeed slowed down. Moore's Law is about driving costs down. It’s about the astonishing deflation of computing costs over time, but that has slowed down, so a new approach is needed to continue driving costs down, and turning to NVIDIA GPU computing is indeed the best way to do that.
The second is to enhance revenue in their current business models. Recommendation systems drive the world’s hyperscale cloud service providers, whether it’s watching short videos, recommending books, suggesting the next item in a shopping cart, recommending ads, or recommending news. The internet has trillions of content. How could they figure out what content to display on your small screen unless they have very sophisticated recommendation systems to do so? Well, this has now shifted to generative AI.
So the first two points I just mentioned will require hundreds of billions of dollars in capital expenditure. This is entirely funded by cash flow. On top of that is agent AI. This portion of revenue is entirely new, new consumption, but also new applications, some of which I mentioned earlier, but these new applications are also the fastest-growing applications in history.
Alright. So I think once people start to understand what's actually happening beneath the surface, from the simple perspective of capital expenditure investment, they will recognize these three dynamics. Finally, remember that we were just talking about CSPs in the United States. Every country will fund its own infrastructure. You have multiple countries, and you have multiple industries. Most industries in the world have not yet truly ventured into AI agents, but they are about to start—all the companies we collaborate with, whether they are autonomous vehicle companies, digital twins for physical AI in factories, the factories and warehouses being built around the world, or the digital biology startups receiving funding to accelerate drug discovery. All these different industries are now getting involved, and they will finance their own sectors.
So, don’t just look at hyperscale cloud service providers as the future way of building. You must look globally, and you must see all the different industries; enterprise computing will fund their own sectors.
Conference operator:
The next question comes from Ben Reitzes of Melius. Your line is open.
Ben Reitzes:
Hi, thank you very much.
Jensen, I want to ask you about cash. Speaking of $500 billion, you might generate about $500 billion in free cash flow over the next few years. What are your plans for this cash? How much will go towards stock buybacks, and how much will be invested in the ecosystem? How do you view investments in the ecosystem? I think there is a lot of confusion out there about how these deals work and the criteria you use for such deals, like investments in Anthropic, OpenAI, etc. Thank you very much.
Jensen Huang, Founder, President, and CEO:
Yes, thank you for the question. Of course, the first use of cash is to fund our growth. No company could grow at the scale we are talking about without supply chain connections of the depth and breadth NVIDIA has.
The reason all our customers can trust us is that we have built an extremely resilient supply chain, and we have a strong balance sheet to support them. When we make purchases, our suppliers can take it to the bank. When we make forecasts and plan together with suppliers, it is because of our balance sheet that they take us seriously... So, to achieve this, to support such a scale and speed of growth, it indeed requires an extremely strong balance sheet.
We do not fabricate our purchase commitments. We know what our commitments are. Because they have been planning with us for so many years, our purchasing power, our reputation, and credibility are incredible. So, it takes a very strong balance sheet to support this level of growth, speed of growth, and scale of development. That’s the first point.
The second point is, of course, we will continue to do stock buybacks. We will continue to do so. But regarding investments, this is very important work we do. All the investments we have made so far have always been related to expanding the coverage of CUDA and expanding the ecosystem. Look at the work we have done, the investment in OpenAI; of course, we have built this relationship since 2016.
I delivered the world's first AI supercomputer to OpenAI. Since then, we have maintained a close and positive relationship with OpenAI. Everything OpenAI does today runs on NVIDIA. So all the cloud they deploy, whether for training or inference, runs on NVIDIA. We love working with them. Our partnership with them is aimed at allowing us to collaborate more deeply from a technical perspective so that we can support their accelerated growth.
This is a company that is growing extremely fast. Don't just look at the media reports. Look at all the ecosystem partners connected to OpenAI and all the developers. They are all driving its consumption. And the quality of AI produced has made a huge leap since a year ago.
So the quality of the responses is outstanding. Therefore, our investment in OpenAI is to establish a deep partnership and co-development to expand our ecosystem and support their growth. And, of course, instead of giving up part of our company's shares, we obtained a part of their company shares. We invested in them, one of the most influential companies of a generation, and we own a stake in it. So I fully expect this investment to translate into extraordinary returns.
Now, in the case of Anthropic, this is the first time Anthropic has adopted NVIDIA's architecture. Anthropic is, by total users, the second most successful AI in the world. And in the enterprise space, they are doing very well. Claude Code is doing very well. Claude is doing very well, spreading across global enterprises. Now we have the opportunity to establish a deep partnership with them to bring Claude to the NVIDIA platform.
So what do we have now? Stepping back, NVIDIA's architecture, NVIDIA's platform, is the single platform running every AI model in the world. We run OpenAI. We run Anthropic. We run xAI. Due to our deep partnership with Elon and xAI, we are able to bring this opportunity to Saudi Arabia, to KSA, so that Humain can also provide hosting for xAI. We run Gemini, we run Thinking Machines. Let's see, what else do we run? We run everything. Not to mention, we run scientific models, biological models, DNA models, gene models, chemical models, across all different fields around the world.
Not just the cognitive AI used by the world, AI is impacting every industry. Through ecosystem investments, we have the ability to collaborate deeply on a technical basis with some of the best and most outstanding companies in the world. We are expanding the reach of our ecosystem, and we are making an investment in a company that will be very successful. Typically, a company that comes along only once in a generation.
This is our investment philosophy.
Conference Operator:
The next question comes from Goldman Sachs' Jim Schneider. Your line is open.
Jim Schneider:
Good afternoon.
Thank you for answering my question. In the past, you mentioned that about 40% of shipments are related to AI inference. I would like to know, looking ahead to next year, what percentage you expect that to reach a year from now? Additionally, could you talk about the Rubin CPX product you expect to launch next year and contextualize it? What share of the overall TAM do you expect it to capture, and perhaps discuss some target customer applications for that specific product? Thank you.
Jensen Huang, Founder, President, and CEO:
The CPX is designed for long-context workloads.
Long context essentially means that before you start generating answers, you have to read a lot of content. It could be a stack of PDFs, a set of videos to watch, or 3D images to study. You have to absorb the context. CPX's performance per dollar is outstanding. Its performance per watt is outstanding. Now I've forgotten the first part of the question.
Unnamed Speaker:
Inference.
Jensen Huang, Founder, President, and CEO:
Inference. Yes, there are three scaling laws that are scaling simultaneously. The first scaling law is called pre-training, which continues to be very effective. The second is post-training.
Post-training has essentially found amazing algorithms that can improve AI's ability to decompose problems and solve them step by step. Pre-training and post-training are scaling exponentially. Essentially, the more computation you apply to a model, the smarter it becomes, the higher its intelligence level. Then there is the third: inference.
Inference, due to chain-of-thought and reasoning capabilities, means AI is essentially reading, thinking, and then answering. Due to these three things, the amount of computation required has grown exponentially. I think it's hard to know exactly what the percentage is at any given point in time and for whom. But of course, our hope is that inference becomes a very large part of the market.
Because if inference scales significantly, it indicates that people are using it in more applications and more frequently. We should all hope that inference scales significantly. This is where Grace Blackwell is an order of magnitude ahead of any other platform in the world. The second-best platform is H200.
It is now very clear what GB200 and GB300, thanks to NVLink 72, have achieved. Colette mentioned the SemiAnalysis benchmarking, the largest independent inference benchmark in history: the performance of GB200 with NVLink 72 is 10 to 15 times higher. So this is a huge advancement. It will take a long time for anyone to catch up.
Our leadership position there has certainly been built over many years. I hope inference can become a big deal. Our leadership in inference is extraordinary.
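Jensen's long-context point behind CPX (discussed above) can be made concrete with a standard transformer FLOPs approximation; the model dimensions below are assumptions for illustration, not any specific NVIDIA or customer model:

```python
# Approximate transformer FLOPs per generated token:
#   ~2*N for the dense matmuls, plus ~2*n_layer*ctx*d_model for attention
#   over the context (Kaplan-style approximation). Model dims are assumed.
N_PARAMS = 70e9
N_LAYER = 80
D_MODEL = 8192

def flops_per_token(ctx_len):
    return 2 * N_PARAMS + 2 * N_LAYER * ctx_len * D_MODEL

for ctx_len in (1_000, 1_000_000):
    print(f"context {ctx_len:>9,}: ~{flops_per_token(ctx_len):.2e} FLOPs/token")
# At million-token contexts the attention term dwarfs the 2N term: absorbing
# the context dominates the work, which is the regime CPX targets.
```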
Conference Operator:
The next question comes from UBS's Timothy Arcuri. Your line is open.
Timothy Arcuri:
Thank you very much. Jensen, many of your customers are seeking to secure their own power supply, but what is the single biggest bottleneck that you are most concerned may limit your growth? Is it power, financing, or other factors like memory or even fabs? Thank you very much.
Jensen Huang, Founder, President, and CEO:
Well, these are all issues and constraints. The reason is that when you grow at the speed and scale that we do, how can anything be easy? What NVIDIA is doing is clearly unprecedented; we are creating an entirely new industry.
On one hand, we are transitioning computing from general and classic or traditional computing to accelerated computing and AI. That’s one aspect. On the other hand, we are creating an entirely new industry called AI factories. The idea is that to run software, you need these factories to generate it, generating every token, rather than retrieving pre-created information. So I think this entire transformation requires extraordinary scale.
Starting with the supply chain, of course, we have better visibility and control over the supply chain because we are clearly very good at managing it. We have excellent partners we have collaborated with for 33 years. So in terms of the supply chain, we are quite confident. Now looking at our downstream supply chain, we have established partnerships with many participants in land, power, shells, and of course, financing.
None of these things are easy, but they are all manageable and solvable. The most important thing we must do is to plan well. We plan the upstream supply chain and the downstream supply chain. We have built a lot of partnerships.
So, we have many pathways to market. It is very important that our architecture provides the best value for our customers. So at this point, I am very confident that NVIDIA's architecture has the best performance per TCO. It has the best performance per watt. Therefore, for any amount of energy provided, our architecture will drive the most revenue.
I believe our speed of success is accelerating, and I think we are more successful at this time this year than we were at this time last year. The number of customers and platforms coming to us after exploring other platforms is increasing, not decreasing. So I think, I think everything I have been telling you over the years is coming true and becoming evident.
Conference Operator:
The next question comes from Stacy Rasgon of Bernstein Research. Your line is open.
Stacy Rasgon:
Thank you for taking my question. Colette, I have a few questions about margins. You mentioned that you are working to maintain them in the mid-70s next year. So, first, what is the biggest cost increase? Is it just memory or other reasons? What measures are you taking to achieve this goal? How much of it involves cost optimization, pre-purchasing, or pricing? Additionally, considering that revenue seems likely to see substantial growth compared to now, how should we think about the growth of operating expenses next year?
Colette Kress, CFO:
Thank you, Stacy. Let me start with our current fiscal year. Remember, earlier this year we said that through cost improvements and product mix, we would reach gross margins in the mid-70% range by the end of the year. We have achieved that and are prepared to execute it in the fourth quarter as well.
So now is the time to communicate our current plans for next year. Next year, there are some well-known input price increases across the industry that we need to address. Our systems are by no means simple: there are a large number of components and many different parts that we need to account for.
We are taking all of these factors into account. But we do believe that by committing once again to cost improvements, cycle times, and product mix, we can strive to keep gross margins in the mid-70% range. That is our overall plan for gross margin. Your second question was about operating expenses.
Our goal for operating expenses is to make sure our engineering and business teams keep innovating and bring more and more systems to this market. As you know, we have a new architecture about to launch, which means those teams are very busy. So we will continue to invest in innovation across software, systems, and hardware.
If Jensen wants to add a few words, I will hand it over to him.
Jensen Huang, Founder, President, and CEO:
Yes, I think that was well said. The only point I would add is to remember that we forecast, plan, and negotiate with the supply chain long in advance, so our suppliers have known our demand for a long time, and we have been working and negotiating with them throughout. The recent surge is obviously quite significant, but remember, our supply chain has been working with us for years.
So in many cases we have already secured a large amount of supply for ourselves, and our suppliers are obviously also working with some of the largest companies in the world. We have likewise been working closely with them on financing, forecasting, and planning. So I think all of this is progressing well for us.
Conference Operator:
The last question comes from Aaron Rakers of Wells Fargo. Your line is open.
Aaron Rakers:
Yes, thank you for taking my question. Jensen, this question is for you. Given the announced Anthropic deal and the overall breadth of your customer base, I am curious whether your view on the role of AI ASICs, or dedicated XPUs, in these architecture build-outs has changed. I believe you have maintained in the past that some of these projects were never really deployed, but I am curious whether we have reached a point where builds may lean even more towards GPU architectures. Thank you.
Jensen Huang, Founder, President, and CEO:
Yes, thank you very much, I really appreciate this question.
First of all, you are not competing with a chip; you are competing with a team, and there are not that many teams in the world that are good at building these extremely complex things. Back in the days of Hopper and Ampere, we would build a GPU, and that one chip defined the accelerated AI system. But today, we have to build the entire rack, with three different types of switches: scale-up, scale-out, and scale-across switches.
Building a compute node no longer requires just a chip; it requires the entire computing system. And AI needs memory: AI did not have memory in the past, and now it must remember things. The amount of memory and context is enormous, and the impact on memory architecture is astonishing.
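To give a sense of the memory scale in question, here is a rough, illustrative sketch of how key-value cache size grows with context length in transformer inference; all model parameters below are hypothetical examples, not NVIDIA or call figures:

```python
# Illustrative estimate of the KV-cache memory one long-context request needs.
# Every parameter here is a hypothetical example, not an NVIDIA or call figure.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_elem: int = 2) -> int:
    """Keys and values: 2 tensors x layers x KV heads x head dim x tokens."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem

# Example: a hypothetical 80-layer model with 8 KV heads of dimension 128,
# serving a single request with a 1,000,000-token context in FP16 (2 bytes).
size = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, context_len=1_000_000)
print(f"{size / 2**30:.1f} GiB for one request")  # ~305.2 GiB
```

Even with grouped-query attention keeping the KV head count low, a single long-context request can consume hundreds of gigabytes, which is the kind of pressure on memory architecture being described here.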
The diversity of models has exploded in the past few years: from mixture-of-experts to dense models, diffusion models, and autoregressive models, not to mention biological models and models that obey physical laws. So the challenge is that the complexity of the problems is much higher, and the diversity of AI models is very, very large.
So, if I had to name five things that make us stand out: the first is that we accelerate every stage of that transition. CUDA, and the CUDA-X libraries built on it, enable the move from general-purpose computing to accelerated computing.
We are very good at generative AI, and we are very good at agentic AI. So at every stage of that transition, at every level, we excel. You can invest in one architecture that applies to all scenarios, without worrying about workload changes across those three stages. That is the first point.
The second point is that we excel at every stage of AI. Everyone has always known that we are very good at pre-training, and we are obviously very good at post-training. It turns out that we are also very good at inference, because inference is genuinely difficult. People assume inference is easy because it looks like a one-shot operation, and that anyone can enter the market that way. But it turns out to be the hardest part, because thinking, and producing good results from thinking, is quite difficult. We are great at every stage of AI, and that is the second point.
The third point is that we are now the only architecture in the world that can run every AI model, every cutting-edge AI model. We run open-source AI models exceptionally well. We run scientific models, biological models, robotic models. We run every model. We are the only architecture in the world that can claim this.
Whether a model is autoregressive or diffusion-based, we run it. As I mentioned earlier, we run it on every major platform. So we run every model.
And the fourth point is that we are in every cloud. The reason developers love us is that we are genuinely everywhere: in every cloud, on every computer, and we can even create a small personal cloud for you called DGX Spark. From cloud to on-premises deployment, to robotic systems, edge devices, and PCs, one architecture makes everything just work.
This is incredible.
And finally, the fifth point, which may be the most important: if you are a cloud service provider, or a newer company like HUMAIN, CoreWeave, Nscale, Nebius, or OCI, the reason NVIDIA is the best platform for you is that the offtake, the demand for our capacity, is so diverse. We can help you manage offtake. That is not true for random ASICs placed into data centers.
Where does the offtake come from? Where does the diversity come from? Where does the resilience come from? Where does the versatility of the architecture come from? NVIDIA's offtake is excellent because our ecosystem is so vast.
So those are the five points: we accelerate every stage of the transition, every stage of AI, every model, from cloud to on-premises deployment, and of course, in the end, all of this adds up to offtake.
Conference Operator:
Thank you.
I will now turn the call over to Toshiya Hari for closing remarks.
Toshiya Hari, Vice President of Investor Relations and Strategic Finance:
Finally, please note that we will be attending the UBS Global Technology and AI Conference on December 2, and that our earnings call to discuss Q4 FY2026 results is scheduled for February 25. Thank you all for participating today. Operator, please go ahead and close the call.
Conference Operator:
Thank you. Today's conference call has concluded. You may now disconnect. The event has ended.
---
This transcript may not be 100% accurate and may contain spelling errors and other inaccuracies