
AMD (Minutes): No new orders disclosed, MI450 focuses on rack-level solutions
The following are the minutes of AMD's Q3 2025 earnings call, organized by Dolphin Research. For an interpretation of the results, please refer to "AMD: Partnering with OpenAI? Not a 'Spare Tire' Lifeline".
I. Review of AMD's Core Financial Data
Overall Performance for the Third Quarter: Revenue increased 36% year-over-year to $9.2 billion, excluding revenue from MI308 sales to China; net profit grew 31%; free cash flow more than tripled; diluted earnings per share were $1.20, up 30% from $0.92 in the same period last year.
Revenue by Business Segment: Data center business revenue increased by 22% year-over-year to $4.3 billion (record high). Client and gaming business revenue increased by 73% year-over-year to $4 billion (record high), with client business revenue at $2.8 billion (record high) and gaming business revenue at $1.3 billion. Embedded business revenue decreased by 8% year-over-year to $857 million.
Other Financial Metrics: Free cash flow was $1.5 billion (record high); $89 million was returned to shareholders through stock repurchases; total stock repurchases for the first three quarters of 2025 amounted to $1.3 billion.
Fourth Quarter Outlook: Revenue is expected to be approximately $9.6 billion (plus or minus $300 million), representing year-over-year growth of about 25% at the midpoint.
Non-GAAP gross margin is expected to be approximately 54.5%, and non-GAAP operating expenses are expected to be approximately $2.8 billion. Diluted shares outstanding are expected to be approximately 1.65 billion.
II. AMD (AMD.US): Detailed Information from the Earnings Call
2.1 Key Information from Executive Statements
Strategy and Outlook: Growth is driven by broad demand for data center AI, server, and PC businesses. Record third-quarter performance reflects a significant improvement in growth trajectory, with computing and data center AI businesses jointly driving revenue and profit growth.
The AI business is entering a new growth phase, with a target of several billion dollars in annual revenue by 2027. The collaboration with OpenAI is expected to generate over $100 billion in revenue in the coming years.
Key Information on AI GPUs and Data Center GPUs:
Instinct GPU business: Revenue achieved year-over-year growth driven by a surge in MI350 series GPU sales and widespread deployment of the MI300 series. MI350 series GPU production is advancing, with deployment in collaboration with several large cloud service providers and AI suppliers; MI300 series deployment among AI developers is expanding for training and inference workloads.
Progress of the MI350/MI400 Series:
MI350 series: Oracle became the first hyperscale cloud service provider to publicly offer MI355X instances; emerging cloud service providers (such as Crusoe and DigitalOcean) are expanding MI350 series capacity.
MI400 series: To be launched in 2026, integrating new computing engines, leading memory capacity, and advanced networking capabilities, bringing performance leaps for AI training and inference workloads. Integrated with the Helios large-scale AI platform, optimized for next-generation AI infrastructure.
A multi-year agreement with OpenAI will deploy 6 gigawatts of Instinct GPUs, with the first batch of 1 gigawatt MI450 accelerators planned to be operational in the second half of 2026; Oracle will be the launch partner for the MI450 series, planning to deploy tens of thousands of MI450 GPUs in Oracle Cloud Infrastructure starting in 2026.
Software and Ecosystem: ROCm 7 was released, improving inference performance by 4.6x and training performance by 3x; the open software strategy has drawn a broad response from developers.
New Customer Orders and Partnerships: A multi-year comprehensive agreement with OpenAI to deploy 6 gigawatts of Instinct GPUs, establishing AMD as a core computing supplier for OpenAI.
Oracle plans to deploy tens of thousands of MI450 GPUs starting in 2026.
Sovereign AI projects: Cisco and the UAE's G42 are deploying a large AI cluster powered by MI350X GPUs; collaboration with the U.S. Department of Energy and Oak Ridge National Laboratory will build the Lux AI factory using MI350 series GPUs and EPYC CPUs.
Enterprise customers: Secured new orders from several Fortune 500 companies this quarter, covering technology, telecommunications, financial services, and other fields.
Next Quarter Business Growth Outlook:
Data Center Business: Expected to achieve double-digit sequential growth, driven by strong server business growth and continued production of MI350 series GPUs.
Client and Gaming Business: Expected to decline overall sequentially, with client revenue growth but gaming revenue declining by double digits.
Embedded Business: Expected to achieve double-digit sequential growth, benefiting from improved end-market demand.
Other Key Information:
Server CPU Revenue: Reached a record high this quarter, with the latest-generation processors accounting for nearly 50% of total EPYC processor revenue.
AI Revenue: Instinct platform accelerates adoption, driving data center AI business growth; collaboration with OpenAI is expected to generate over $100 billion in revenue in the coming years.
Product Roadmap: Next-generation 2nm Venice processor is scheduled to launch in 2026; MI400 series accelerators and Helios solutions will be launched in 2026.
Manufacturing and Collaboration: Completed the sale of ZT Systems' manufacturing business to Sanmina and established a strategic partnership to accelerate AI solution deployment.
2.2 Q&A Session
Q: Regarding the CPU and GPU mix for Q3 and Q4, and tactical management of the transition from MI355 to MI400. Can growth continue from Q4 levels in the first half of 2026, or will there be a pause or digestion period before customers adopt the MI400 series?
A: Q3 data center business was very strong, with excellent performance in both the server and data center AI businesses, and no MI308 sales. MI355 ramped up quickly in the third quarter, server CPU sales also strengthened, and we have visibility into customer demand for the next few quarters. Q4 data center business is guided to double-digit sequential growth, with growth in both the server CPU and AI businesses. For 2026, although guidance has not been provided, the demand environment is favorable: MI355 is expected to continue ramping in the first half of 2026, the MI450 series will go live in the second half of 2026, and the data center AI business is expected to accelerate in the second half of 2026.
Q: Regarding OpenAI's ability to collaborate with three suppliers simultaneously, and your visibility into this initial collaboration and its expansion into 2027. How should we model allocation or think about visibility for this important customer?
A: Our relationship with OpenAI is very important. Current AI computing demand is huge. We are planning multiple quarters with OpenAI to ensure power and supply chain readiness. The first 1 gigawatt deployment will begin in the second half of 2026, and related work is progressing smoothly. We are working closely with OpenAI and cloud service providers to ensure Helios technology can be deployed as planned. We have good visibility into the ramp-up of the MI450 series, and progress is very smooth.
Q: Regarding customer interactions after the OCP Summit, what is your outlook for component sales versus system sales next year? When is the crossover expected to occur? What initial feedback did customers provide after the event?
A: MI450 and Helios have generated great interest, and the response at the OCP Summit was very enthusiastic. Many customers brought engineering teams to gain a deeper understanding of the system and its construction. We are proud of the design of Helios for this rack-level system complexity, which has all the expected features, functionality, reliability, performance, and energy efficiency. With our announcements with OpenAI, OCI, and Meta's news at OCP, interest in MI450 and Helios has been expanding in recent weeks. Development work and customer interactions are progressing very well. For early MI450 customers, we expect the focus to be primarily on rack-level solutions, although the MI450 series will also have other form factor products, but currently, customers have a strong interest in complete rack solutions.
Q: Considering the huge power demand in early announcements for next year, and constraints around interconnect, memory, and other components, what do you think will be the limiting factors? Will it be component supply, or will data center infrastructure and power become bottlenecks for deployment?
A: The entire ecosystem must plan together, which is what we are doing. We are working with customers to plan power, wafers, memory, packaging, and component supply chains for the next two years, and with supply chain partners to ensure capacity. Based on our visibility, our strong supply chain is ready to support significant growth and large computing demand. The overall situation will be tight, but capital-expenditure plans show a willingness to deploy more computing, and the ecosystem is working closely to address the tight conditions. As more power and supply come online, the situation will improve. Overall, we are poised for significant growth as we move into the second half of 2026 and 2027 with MI450 and Helios.
Q: Regarding the data center CPU business, is the recent strong trend sustainable? Are there supply chain constraints? And should this business be viewed as seasonal in the first half of next year?
A: Positive signals for CPU demand have been observed over the past few quarters and have continued to expand in 2025. Currently, several large hyperscale customers are forecasting significant CPU builds in 2026. This is a positive demand environment because AI requires a lot of general-purpose computing. We have strong demand for Turin processors and strong demand for the Genoa product line. Looking ahead to 2026, the CPU demand environment is expected to remain positive, and we believe this is a sustainable phenomenon that will last for multiple quarters, rather than a short-term trend. On the supply side, we have sufficient supply chain to support growth, especially with capacity increases prepared for 2026.
Q: Regarding the progress of the ROCm software, where do you currently stand in terms of competitive position? How broad is the support you can provide to the developer community? And what work remains to be done to narrow the competitive gap?
A: We have made great progress on ROCm. ROCm 7 is a major advancement in performance and in support for all frameworks. It is very important for us to achieve same-day support for all the latest models and native support for all the latest frameworks. Most customers now starting to use AMD have had a very smooth experience migrating workloads to the AMD platform. Clearly, there is always more work to be done, and we are continuing to enhance our libraries and overall environment, especially for newer workloads such as training and inference combined with reinforcement learning. Overall, ROCm's progress is very strong, and we will continue to invest in this area because it is crucial to make the customer development experience as smooth as possible.
Q: In transitioning from MI355 to MI400 and moving to full rack scale, how should we construct an analytical framework for gross margin in 2026?
A: As we have said in the past, the gross margin of the data center GPU business will experience a transition period at the beginning of the ramp-up of new generation products, after which it will return to normal. We have not provided guidance for 2026, but the priority for this business is to expand revenue and gross profit amount. Of course, during this process, we will also continue to drive improvements in gross margin percentage.
Q: Regarding growth expectations for 2026 and beyond, you mentioned a revenue target of several billion dollars by 2027. Can you elaborate on your views on OpenAI and other large customers, and how should we understand AMD's customer penetration breadth in 2026 and 2027?
A: We are very excited about our roadmap and have gained significant traction among large customers. Our relationship with OpenAI is crucial, and being able to discuss multi-gigawatt cooperation reflects the scale we believe we can deliver to the market. Additionally, we are deeply collaborating with many other customers, such as OCI and several important system projects with the U.S. Department of Energy. It is foreseeable that with the MI450 generation of products, we will achieve very large-scale deployments with multiple customers. This is the breadth of customer cooperation we have established, and it is the layout we have made to ensure the supply chain can support cooperation with OpenAI and many other smoothly advancing partnerships.
Q: In this quarter's data center business, from dollar revenue and percentage, did server business or data center AI business grow more year-over-year?
A: Both areas of the data center business grew well year-over-year. Directionally, both grew similarly, but server business was slightly better.
Q: Regarding performance guidance, you mentioned double-digit sequential growth for the overall data center, and server business will achieve "strong double-digit growth." What does this specifically mean? Does it mean more than 20%? How should "strong double-digit growth" be understood?
A: The guidance we provided is that the data center business will achieve double-digit percentage growth sequentially, with growth in both the server and data center AI businesses. We are satisfied with the performance of both businesses. The "strong double-digit percentage" comment likely refers to year-over-year growth.
Q: After the announcement of cooperation with OpenAI, what impact has this had on your market position with other customers? For example, have you reached some customers that you might not have been able to reach otherwise? Additionally, some analysts believe OpenAI may account for half of your data center GPU revenue between 2027-2028. How do you view the risk posed by this single customer?
A: The OpenAI deal has been in preparation for a long time. Being able to broadly discuss this multi-year, multi-gigawatt cooperation, all aspects are very positive. Over the past period, multiple factors, including the OpenAI deal and the full-featured display of the Helios rack, have collectively increased market interest. We have seen increased and accelerated customer interest, and cooperation scale is generally larger, which is a good thing. Regarding customer concentration, a key foundation of our business is having a broad customer base. We have always worked with many customers. The supply chain planning we are doing will ensure sufficient supply to support multiple customers of similar scale in the 2027-2028 timeframe, which is certainly our goal.
Q: In the current server growth, how should we analyze the contribution of unit growth versus average selling price (ASP) expansion? How do you view future trends?
A: In server CPUs, Turin processors clearly carry higher content value (ASP), and as Turin ramps up, we see ASP growing. At the same time, the demand mix for Genoa processors remains very good, because hyperscale customers cannot immediately migrate all workloads to the latest-generation products. From our perspective, this is broad CPU demand spanning multiple workloads. It is partly a server refresh cycle, but based on communication with customers, the broad demand across these workloads comes from AI workloads driving more traditional computing, which in turn requires more buildout. Looking ahead, one trend we see is a stronger market desire for the latest-generation products. We are satisfied with Turin's ramp, and we also see strong demand traction and a lot of early engagement for Venice processors, which underscores the importance of purpose-built computing today.
Q: You have consistently cited a total addressable market (TAM) for AI chips of about $500 billion, and demand is clearly on track to surpass that level. Considering these large gigawatt-scale deployments, what is your updated view on the TAM for AI chips?
A: As you said, we don't want to reveal too much about what will be discussed next week. We will fully explain our view of the market next week, but it is certain that based on everything we see, we believe the TAM for AI computing is continuously growing. We will provide some updated numbers. Our view is that although it sounded like a lot when we first mentioned $500 billion, we believe there is a greater market opportunity in the coming years, which is very exciting.
Q: Does the evolving cooperation with OpenAI provide tailwinds for your software stack development? Can you talk about how this cooperation works in practice, and whether this partnership has contributed to making ROCm more robust?
A: Yes. All our large customers have contributed to the breadth and depth of our software stack. We plan to cooperate deeply with OpenAI on hardware, software, systems, and future roadmaps, and the work we are doing together on Triton is very valuable. Beyond that, our work with all of the largest customers is very helpful in strengthening the software stack. We have invested a lot of new resources, not only serving the largest customers but also working with many AI-native companies actively developing on the ROCm stack. We have received a lot of feedback and made significant progress in the training and inference stack, and we will continue to double down on investment. The more customers use AMD, the more ROCm is enhanced. We are also using AI to help us accelerate some ROCm kernel development and the progress of the entire ecosystem.
Q: Regarding the lifespan of GPUs, most cloud service providers (CSPs) depreciate over five to six years, but in your exchanges with them, have you seen or heard any early signs that they may actually plan to extend the usage time of these GPUs?
A: We have indeed seen some early signs. The key point is that when building new data center infrastructure, the market is clearly eager to adopt the latest, most powerful GPUs, such as the MI355 and MI450 series, which typically enter new liquid-cooled facilities. But we also see another trend, which is the demand for more AI computing capacity. From this perspective, some older generation products, such as the MI300X, still perform quite well in the deployments and usage we see, especially in inference. Therefore, I think both situations coexist.
Q: Regarding MI308, if there is a relief situation that allows shipment, what is your company's readiness? Can you explain how much of a change factor this might bring?
A: The situation with MI308 remains quite dynamic and variable, which is why we have not included any MI308 revenue in the fourth-quarter guidance. We have obtained some MI308 licenses, for which we are grateful for government support. We are still communicating with customers about the demand environment and overall opportunities and will provide more updates in the coming months.
Q: But if the market does open, do you have ready products to support it, or do you need to start rebuilding inventory for that market?
A: We have some work-in-progress inventory. We believe we will continue to maintain these work-in-progress items, but we need to observe how the demand environment will form.
Q: In the competitive environment where large customers like OpenAI sign multi-gigawatt orders with multiple GPU and ASIC suppliers, how does AMD differentiate itself to secure and maintain initial orders like 6 gigawatts and potentially more cooperation?
A: The world currently needs more AI computing capacity, and OpenAI is leading this demand, but it is not alone. Looking ahead to the next few years, all large customers have huge demand for AI computing. Our respective product positioning has different advantages. The MI450 series is an extremely powerful product and solution, and we believe it is ideally positioned for inference and training tasks in terms of computing performance and memory performance. The key is product launch timing, total cost of ownership, and deep partnership, and not only considering the MI450 series but also planning the subsequent product roadmap. We have had in-depth discussions about the MI500 and subsequent series. We believe we have the ability not only to participate but to meet the current demand environment in a very meaningful way. Over the past few years, through our AI roadmap, we have learned a lot and made significant progress in understanding the needs of the largest customers from a workload perspective. Therefore, I am quite optimistic about our ability to capture an important share of the market in the future.
Q: You adopted a unique structure of granting warrants in this transaction. Do you consider this a relatively unique agreement? Considering the global demand for computing power, is AMD willing to adopt similar creative equity cooperation methods with other customers in the future?
A: Given this unique period in the AI field, this is indeed a unique agreement. Our goal was to establish deep, multi-year, multi-generational, large-scale cooperation, and we have achieved that. This structure ensures strong alignment of incentives, creating a win-win for all parties, including us, OpenAI, and shareholders, all of which supports the overall roadmap. Looking ahead, we are developing many very interesting partnerships, whether with the largest AI users or sovereign AI opportunities. We view each as a unique opportunity, and we will bring AMD's technology and full capabilities into the cooperation. The cooperation with OpenAI is quite unique, but I believe we have many other opportunities to bring our capabilities into the ecosystem and make significant contributions.
Risk Disclosure and Statement for this Article: Dolphin Research Disclaimer and General Disclosure
