Microsoft upgrades its in-house AI chips to reduce dependence on NVIDIA, claiming the new silicon outperforms Amazon's Trainium and Google's TPU

Wallstreetcn
2026.01.26 21:40

Maia 200 is built on TSMC's 3-nanometer process and is Microsoft's most efficient inference system to date, described as "the highest-performing self-developed chip among all hyperscale cloud service providers," delivering 30% better performance per dollar than the latest hardware Microsoft currently deploys. At FP4 precision it offers three times the performance of Amazon's third-generation Trainium chip, and its FP8 performance exceeds that of Google's seventh-generation TPU. The chip will support OpenAI's GPT-5.2 model. Microsoft has opened a preview of the chip's software toolkit to developers, plans to offer the chip as a rentable cloud service to more customers, and is already designing the next-generation Maia 300.