
"AI Closed Loop" Holiday Goes Viral! Understand the North American Data Center Supply Chain in One Article

In this unprecedented gold rush ignited by AI, the supply chain giants that master core cooling and power supply technologies, capable of "cooling down" and "feeding" massive computing power, will undoubtedly be the real winners.
The essence of the artificial intelligence competition is a contest over physical infrastructure. Behind every smooth AI interaction on the screen are tens of thousands of servers running at high speed, all supported by a physical industry expanding toward the trillion-dollar scale at an astonishing rate: data centers.
According to Bank of America (BofA), global capital expenditure on data centers is expected to exceed $400 billion in 2024 and reach $506 billion in 2025, with IT equipment expenditure at $418 billion and infrastructure expenditure at $88 billion. Driven by AI demand, this market is projected to expand at an astonishing compound annual growth rate (CAGR) of up to 23% between 2024 and 2028, ultimately forming a massive market exceeding $900 billion by 2028.
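As a quick arithmetic check of these projections (assuming the 23% CAGR compounds on the 2024 base of roughly $400 billion over four years):

$$\$400\text{B} \times 1.23^{4} \approx \$915\text{B},$$

which is consistent with the "exceeding $900 billion by 2028" figure above.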

So, in this unprecedented construction boom, where does the real value chain lie? Who will become the biggest beneficiaries?
This article maps the panorama of the data center market ignited by AI, lays out the core logic of its technological transformation, and systematically breaks down its complex supply chain to present a complete picture of the North American data center ecosystem and identify the true "shovel sellers" in this gold rush.

1. $500 Billion Market Panorama
The growth of the data center market is no longer driven by traditional enterprises building and using their own facilities. Since 2017, the total capacity of cloud service providers and colocation companies has surpassed that of enterprise-built data centers for the first time, with almost all new capacity coming from two types of players: "hyperscale" cloud service providers represented by Amazon AWS and Microsoft Azure, and "colocation" companies that provide leasing services for them or other clients.

From the perspective of global capacity distribution, the Americas account for more than half of global power capacity, and Northern Virginia on the U.S. East Coast has become the undisputed largest single cluster worldwide, holding nearly 15% of global hyperscale data center capacity. Beijing, China follows with about 7%.

What keeps capital pouring in is the clear, substantial return model of the data center as a high-value infrastructure asset. Taking a typical new wholesale colocation project as an example, the unit investment economics are as follows:
- Initial Investment: Building 1 megawatt (MW) of capacity requires approximately $2 million in preliminary costs such as land and power access, plus about $11 million for the "powered shell," covering construction, electromechanical systems, and cooling facilities, for a total investment of approximately $13 million per megawatt.
- Revenue and Profitability: Each megawatt generates rental income of $2 million to $3 million per year. After deducting operating costs such as electricity (U.S. industrial power averages about $0.08 per kilowatt-hour), labor (about two full-time employees per megawatt), and property taxes (roughly 1% of property value), the EBITDA (earnings before interest, taxes, depreciation, and amortization) margin typically reaches a robust 40% to 50%.
- Return on Investment: In a typical 20-year holding period, with project financing (assuming a 46% loan-to-value ratio, a 6% debt interest rate, and a 10% cost of equity), the project's internal rate of return (IRR) reaches 11.0%, highly attractive for infrastructure investors seeking long-term, stable cash flows (see the sketch after this list).
It is this high-certainty business model that forms the financial cornerstone of the entire data center industry's expansion.
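To make those assumptions concrete, here is a minimal back-of-envelope sketch in Python of the per-megawatt economics above. The simple 20-year cash-flow model, the $2.5 million midpoint rent, the terminal-value treatment, and the bisection IRR solver are all illustrative simplifications, not the underlying model behind the article's figures.

```python
# A minimal sketch of the wholesale colocation unit economics described
# above. Figures are the article's per-MW assumptions; the 20-year
# cash-flow model and terminal value are illustrative simplifications.

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Solve for the discount rate that zeroes the NPV (bisection)."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

capex = 13.0          # $M per MW (land/power ~2 + powered shell ~11)
rent = 2.5            # $M per MW per year (midpoint of the 2-3 range)
ebitda_margin = 0.45  # midpoint of the 40-50% range
ltv, rate = 0.46, 0.06

debt = capex * ltv
equity = capex - debt
ebitda = rent * ebitda_margin
interest = debt * rate

# Levered equity cash flows: 20 years of (EBITDA - interest), with the
# asset assumed sold at original cost and debt repaid in year 20.
flows = [-equity] + [ebitda - interest] * 19 \
        + [ebitda - interest + capex - debt]
print(f"EBITDA: ${ebitda:.2f}M/yr, levered IRR ~ {irr(flows):.1%}")
```

Run as-is, this lands close to the 11% IRR quoted above, suggesting the article's inputs are internally consistent.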

2. The Arrival of the Technological Singularity: The "Density Revolution" from Chips to Racks
Every infrastructure transformation inside the data center starts with the AI chip, and the core logic of the chip's evolution can be summarized precisely as a "density revolution."
The root of the revolution is the exponential surge in the power consumption of a single chip. From NVIDIA's first-generation Volta architecture to the current Blackwell architecture, single-GPU power has roughly quadrupled in just a few years. The physical law behind this is simple but brutal: packing more transistors onto a chip and running them at higher clock frequencies inevitably drives power consumption up roughly linearly.
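In standard CMOS terms (a textbook approximation, not a formula from the article), a chip's dynamic power scales as

$$P_{\text{dyn}} \approx \alpha \, C \, V^{2} f,$$

where $\alpha$ is the switching activity factor, $C$ the total switched capacitance (which grows with transistor count), $V$ the supply voltage, and $f$ the clock frequency. With supply-voltage scaling largely stalled, adding transistors and raising clocks both push power up roughly linearly, which is exactly the trajectory from Volta to Blackwell.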

The direct chain reaction is a sharp increase in server rack power density. In AI training clusters, network latency is a fatal bottleneck for performance. To minimize data-transfer latency between GPUs, engineers pack as many GPUs as possible into the same server rack, communicating over high-speed internal interconnects (such as NVLink). The inevitable result of this architectural optimization is explosive growth in rack power density. In 2021, average rack density in data centers was still below 10 kilowatts (kW); today, a standard NVIDIA Hopper (H200) rack draws 35kW, while the latest Blackwell (B200) rack reaches as high as 120kW. According to NVIDIA's published roadmap, the Rubin Ultra platform, scheduled for release in the second half of 2027, will push single-rack power to an unprecedented 600kW. AMD's MI350 and the future MI400, as well as Intel's Gaudi series, are following the same trajectory.
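A rough illustration of the arithmetic behind those rack figures (the GPU counts and per-GPU board powers below are approximate public figures, not numbers from the article):

```python
# Rough arithmetic behind the rack-density figures above. GPU counts
# and per-GPU board power are approximate public figures; treating
# everything that is not a GPU (CPUs, memory, NICs, fans) as the
# remainder is a simplification.

racks = {
    # label: (gpus_per_rack, watts_per_gpu, rack_kw_total)
    "Hopper H200 (4 x 8-GPU nodes)": (32, 700, 35),
    "Blackwell GB200 NVL72":         (72, 1200, 120),
}

for label, (n_gpus, w_gpu, rack_kw) in racks.items():
    gpu_kw = n_gpus * w_gpu / 1000
    share = gpu_kw / rack_kw
    print(f"{label}: {gpu_kw:.0f} kW of GPUs in a ~{rack_kw} kW rack "
          f"({share:.0%}); the rest is CPUs, memory, network, cooling")
```

The point of the sketch: the GPUs themselves account for the large majority of rack power, so every jump in per-GPU wattage translates almost directly into rack density.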

At the same time, the world's existing data center stock lags far behind. According to a 2024 survey by the Uptime Institute, only 5% of existing data centers worldwide have an average rack density above 30kW. In other words, 95% of data centers cannot support even NVIDIA's previous-generation Hopper chips, let alone the more power-hungry Blackwell. Deploying AI computing power therefore depends on large-scale retrofits of existing facilities and a massive wave of new construction.

It is worth emphasizing that this is not a GPU-only path. Cloud giants including Google (TPU), Microsoft (Maia 100), and Amazon (Trainium) have likewise moved their latest generations of in-house ASICs to liquid cooling in pursuit of peak performance, even though those chips are more energy-efficient for specific tasks. This confirms, from another angle, that the cooling challenge posed by high-density computing is likely an irreversible trend across the entire AI hardware industry.

3. Reshaping Infrastructure: A Transformation of "Water and Electricity"
The "density revolution" triggered by chips is launching an impact on data center infrastructure from the bottom up, with the core battleground concentrated in two areas: Cooling Systems (Water) and Power Supply Systems (Electricity).
(1) The First Battleground: Cooling - The Migration from Air Cooling to Liquid Cooling
Traditional data centers have long relied on air for cooling. However, even the most highly optimized air cooling systems hit a physical ceiling at roughly 60-70kW per rack. Faced with AI racks already at 120kW and headed for several hundred kilowatts, air cooling is no longer sufficient. Liquid cooling, a technology that last saw broad use in the mainframe era, is staging an undeniable comeback.
Among the many liquid cooling technology routes, the current mainstream choice in the industry is "Direct-to-Chip" (D2C).
This technology places a metal "cold plate" laced with microchannels directly on the main heat-generating chips such as GPUs and CPUs, carrying heat away efficiently in the coolant (usually a water-glycol mixture) flowing inside. The system's core device is the Coolant Distribution Unit (CDU), which pumps coolant around the "secondary loop" serving the servers and exchanges that heat into the facility's "primary loop."
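For a sense of what a CDU must actually do, here is a minimal sizing sketch under assumed conditions. Only the 120kW load comes from the rack figures above; the specific heat of the water-glycol mix and the 10 K loop temperature rise are assumptions chosen as typical round numbers.

```python
# A minimal direct-to-chip sizing sketch: the coolant mass flow a CDU
# must drive for a given rack heat load follows from Q = m_dot * c_p * dT.

Q_KW = 120.0  # rack heat load to remove, kW (Blackwell rack figure above)
CP = 3.8      # specific heat of ~30% glycol mix, kJ/(kg*K) (assumed)
DT = 10.0     # coolant temperature rise across the cold plates, K (assumed)
RHO = 1.04    # density of the mixture, kg/L (assumed)

m_dot = Q_KW / (CP * DT)   # required mass flow, kg/s
lpm = m_dot / RHO * 60     # volumetric flow, liters per minute
print(f"~{m_dot:.1f} kg/s ~ {lpm:.0f} L/min of coolant per {Q_KW:.0f} kW rack")
```

Under these assumptions a single 120kW rack needs on the order of 180 L/min of coolant in continuous circulation, which is why the CDU, not the cold plate, is the system's critical pumping and heat-exchange component.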

Although the CDU market is expected to reach only about $1.2 billion in 2024, it is growing explosively.
There are currently over 30 suppliers in the market. However, data center operators' extreme pursuit of uptime makes them deeply conservative: they prefer suppliers with mature technology and dependable service, which gives established manufacturers with proven product lines and global service networks a natural moat. Vertiv (which strengthened its hand by acquiring CoolTera in 2023), Schneider Electric (which entered the space through its acquisition of Motivair, completed in early 2025), Delta Electronics, and nVent are regarded as the first tier in this field.

(2) The Second Battleground: Power Supply - A Structural Revolution from AC to High-Voltage DC
The power chain of a traditional data center is long and lossy: medium- and high-voltage alternating current (AC) from the grid is stepped down by transformers, distributed through switchgear, passed through an uninterruptible power supply (UPS) that performs an "AC-DC-AC" double conversion for backup, carried to the rack via power distribution units (PDUs) or busways, and finally converted from AC to DC inside the server by its power supply unit (PSU).
As total AI rack power climbs past 100kW, the drawbacks of the traditional low-voltage AC architecture become obvious: the enormous currents require very thick copper cables, which are costly, occupy valuable rack space, and impede heat dissipation. A revolution toward "high-voltage DC" architecture has therefore already begun.
- The 400V DC solution was first proposed by companies such as Microsoft and Meta in the "Open Compute Project."
- NVIDIA has announced an 800V DC solution to support future megawatt-class server racks, with deployment planned for 2027.
The core advantage of high-voltage DC follows from the basic physics (Power = Voltage × Current): to deliver the same power, raising the voltage by an order of magnitude cuts the current by an order of magnitude. Power can then be delivered over thinner, cheaper cables, sharply reducing the expensive, bulky copper inside the rack. By Schneider Electric's calculations, a 400V system cuts copper wire weight by 52% compared with a traditional 208V AC system.
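A quick illustration of the copper arithmetic (the 5 A/mm² design current density is an assumed round number, and the 208V AC case is treated as single-phase for simplicity):

```python
# Why higher voltage saves copper: for fixed power P = V * I, current
# falls as 1/V, and at a fixed design current density the required
# conductor cross-section falls with it. The 5 A/mm^2 density is an
# assumed round number; 208 V AC is treated as single-phase here.

P_WATTS = 100_000  # power delivered to one AI rack
J = 5.0            # assumed design current density, A/mm^2

for volts, label in [(208, "208 V AC (traditional)"),
                     (400, "400 V DC (OCP proposal)"),
                     (800, "800 V DC (NVIDIA plan)")]:
    amps = P_WATTS / volts   # I = P / V
    area_mm2 = amps / J      # required conductor cross-section
    print(f"{label}: {amps:,.0f} A -> ~{area_mm2:.0f} mm^2 of copper")
```

Moving from 208V to 800V cuts the current, and with it the conductor cross-section, by roughly a factor of four under these assumptions, which is the intuition behind the 52% copper-weight saving quoted above.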
This transformation will profoundly reshape power supply systems:
- UPS Simplification: In a DC architecture, the UPS no longer needs an "inverter" stage to convert the battery's DC back into AC, theoretically cutting costs by 10-20% (though early on this may be offset by the cost of high-voltage safety equipment).
- Power Supply Relocation: The PSUs that once sat inside each server, consuming considerable space, move out of the server rack into an independent power "sidecar," freeing up room for more compute.
NVIDIA has stated explicitly that its 800V DC architecture will be developed in collaboration with industry leaders such as Vertiv and Eaton, further confirming incumbents' central role as industry standards shift.

4. Breakdown of Core Supply Chain Links: Who are the "Shovel Sellers" in the Gold Rush?
The rise of AI is significantly driving up the unit construction costs of data centers. The total cost (All-in Cost) of a traditional data center is approximately $39 million per megawatt, while a data center using next-generation AI architecture (assuming chip-level liquid cooling and high-voltage DC) will see its cost jump by 33% to $52 million per megawatt. The increase is primarily due to more expensive AI servers, but infrastructure upgrades also contribute significantly to the cost increment.
In this vast and intricate supply chain, the "shovel sellers" at various stages are sharing in the dividends of the era:
- Thermal Systems: This is a market worth approximately $10 billion (2024), with Vertiv recognized as the market share leader. Core products include chillers, cooling towers, and precision computer room air handlers (CRAHs). Traditional HVAC giants such as Johnson Controls (via its acquisition of Silent-Aire), Carrier, and Trane are also important participants.

- Electrical: This is a market of approximately $18 billion (2024), with Schneider Electric in the leading position. The product line spans uninterruptible power supplies (UPS, a new-installation market of roughly $7 billion), switchgear (roughly $5-5.5 billion), busways, and other distribution equipment (roughly $4.2-4.7 billion). Industrial electrical giants such as Eaton, ABB, and Siemens are also core players in this field.

- Backup Power: Diesel generators are the last line of defense for the highest reliability tiers. The market for generator equipment alone is expected to reach approximately $7.2 billion in 2024, led globally by Cummins.

- IT Equipment: This is the largest segment of data center investment. In 2024, the global server market is approximately $280 billion, with AI servers accounting for a significant share of revenue. The network equipment market is approximately $36 billion, dominated by Cisco and Arista.

- Construction & Services: Turning data centers from blueprints into reality relies on professional engineering design and construction. The engineering design market (4.5-6.5% of infrastructure costs) is approximately $4 billion, with major players including Jacobs Solutions and Fluor. The construction market is far larger (approximately $65-80 billion, including substantial pass-through of material and equipment costs), with participants including international construction giants such as Balfour Beatty and Skanska.

Conclusion
The "AI closed loop" we see on screen, with its astonishing generative capabilities, rests not only on breakthroughs in AI technology but on a physical-world infrastructure contest over concrete, copper cable, and coolant.
A striking fact: in the coming months, global spending on data center construction will, for the first time in history, exceed total spending on the construction of all general office buildings.

In this unprecedented gold rush ignited by AI, the supply chain giants that master core cooling and power supply technologies, the companies able to "cool down" and "feed" massive computing power, will be the silent yet true winners of this era.

