
Morgan Stanley: The market has underestimated the potential "major benefits of AI" next year, but key uncertainties remain

Morgan Stanley's report points out that AI training computing power is expected to grow roughly tenfold by the end of 2025, which may trigger a "non-linear" improvement in model capabilities in the first half of 2026. This is a potential catalyst that the market has not fully recognized. The biggest risk, however, is that AI development may hit a "scaling wall," where additional computing power yields diminishing returns, leaving the final outcome highly uncertain.
Author: Long Yue
Source: Hard AI
A leap in AI capabilities driven by computing power may be brewing.
According to Hard AI, Morgan Stanley stated in a recent report that the market may be severely underestimating a significant boon expected to emerge in artificial intelligence in 2026: a "non-linear" leap in model capabilities driven by exponential growth in computing power.
According to the report, written by analysts including Stephen C. Byrd, several major U.S. developers of large language models (LLMs) plan to increase the computing power used to train cutting-edge models roughly tenfold by the end of 2025. This unprecedented investment in computing power is expected to bear fruit in the first half of 2026, constituting an "underappreciated catalyst."
The report cites Tesla CEO Elon Musk's view that a tenfold increase in computing power could roughly double a model's "intelligence." The report notes that if the current "scaling laws" continue to hold, the consequences could be "seismic," broadly impacting asset valuations across sectors from AI infrastructure to global supply chains.
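As a rough illustration (ours, not the report's), this rule of thumb pins down a power-law exponent. Writing capability as I and training compute as C (symbols assumed here purely for illustration):

```latex
\[
  I \propto C^{\alpha}, \qquad
  2 = 10^{\alpha}
  \;\Longrightarrow\;
  \alpha = \log_{10} 2 \approx 0.30 .
\]
```

On this reading, every further tenfold increase in compute would again roughly double capability; whether that exponent holds at the next scale is precisely the "scaling wall" question discussed below.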
However, this optimistic outlook is far from guaranteed. The report emphasizes that the core uncertainty is whether AI development will hit a "scaling wall": the disappointing scenario in which model capabilities show rapidly diminishing returns despite massive additional investment in computing power.
Tenfold Growth in Computing Power May Foster a Leap in AI Capabilities
The report suggests that investors need to prepare for a potential step-change in AI capabilities in 2026.
The report illustrates the coming scale of computing power: a 1,000-megawatt data center built from Blackwell GPUs would have computing power exceeding 5,000 exaFLOPS (five sextillion floating-point operations per second). By contrast, "Frontier," a U.S. government supercomputer, runs at slightly above 1 exaFLOPS. This magnitude of growth in computing power is the core basis for the expectation of a non-linear enhancement in AI capabilities.
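As a back-of-the-envelope check (a minimal sketch of our own, not from the report; the per-GPU power and throughput figures are assumptions), the stated numbers are roughly self-consistent:

```python
# Sanity check of the 1,000 MW / 5,000 exaFLOPS data center figures.
# Assumptions (illustrative, not from the report): each Blackwell-class GPU
# delivers on the order of 5 petaFLOPS of low-precision training compute and
# draws roughly 1 kW all-in (including its share of cooling and networking).

SITE_POWER_W = 1_000e6       # 1,000 MW facility
POWER_PER_GPU_W = 1_000.0    # ~1 kW per GPU, all-in (assumption)
FLOPS_PER_GPU = 5e15         # ~5 petaFLOPS per GPU (assumption)
FRONTIER_FLOPS = 1e18        # Frontier: slightly above 1 exaFLOPS

num_gpus = SITE_POWER_W / POWER_PER_GPU_W
total_flops = num_gpus * FLOPS_PER_GPU

print(f"GPUs powered:      {num_gpus:,.0f}")                     # ~1,000,000
print(f"Aggregate compute: {total_flops / 1e18:,.0f} exaFLOPS")  # ~5,000
print(f"Frontier multiple: {total_flops / FRONTIER_FLOPS:,.0f}x")
```

Under these assumptions, such a site hosts on the order of a million GPUs and delivers roughly 5,000 exaFLOPS, several thousand times Frontier's throughput, which matches the report's framing.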
The report states that while many LLM developers generally agree that increased computing power will lead to enhanced capabilities, there are also skeptics who believe that the intelligence, creativity, and problem-solving abilities of cutting-edge models may have limits.
The Debate on the "Scaling Wall": A Key Uncertainty in AI Progress
Despite the exciting prospects, the report also clearly identifies a key risk: the possibility of a "scaling wall."
This concept refers to the phenomenon in which, beyond a certain threshold of computing power investment, the gains in models' intelligence, creativity, and problem-solving ability diminish rapidly, potentially to a disappointing degree. This is currently the greatest uncertainty in the AI field. Many skeptics believe that simply adding computing power may not sustain further leaps in intelligence.
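One common way to formalize the debate (an illustrative sketch of ours, not the report's model) uses the empirical scaling-law form, in which loss L falls as a power law in training compute C:

```latex
\[
  L(C) \;=\; L_{\infty} \;+\; a\,C^{-b}, \qquad a, b > 0 .
\]
```

Here L_∞ is an irreducible loss floor, and the reducible term a·C^{-b} shrinks by a constant factor 10^{-b} for every tenfold increase in C. Smooth scaling means b stays roughly constant; a "scaling wall" would show up as the effective exponent collapsing toward zero beyond some threshold, so that extra compute stops buying capability.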
However, the report also mentions some positive signals. A recent research paper titled "Unveiling Synthetic Data in LLM Pre-training," co-authored by teams from Meta, Virginia Tech, and Cerebras Systems, found no observable signs of "model collapse," the feared performance degradation from training on synthetic data, at large scale within the range studied.
This finding is encouraging: it suggests that model capabilities still have room to improve as computing power increases sharply, and that the risk of hitting a "scaling wall" may be lower than feared.
The report also lists other key risks, including financing challenges for AI infrastructure, regulatory pressure in regions such as the EU, power bottlenecks facing data centers, and the potential for LLMs to be misused or weaponized.
How Will Global Asset Valuation Be Reshaped?
If AI capabilities do achieve a non-linear leap, how will asset values be reshaped? The report suggests that investors should begin assessing the multifaceted impact on asset valuations and points to four core directions:
First, AI infrastructure stocks, particularly companies that can alleviate data center growth bottlenecks. The report argues that if AI can solve more of the problems embedded in global GDP at lower cost and higher performance, the infrastructure supporting that value creation will appreciate significantly.
Second, the China-U.S. supply chain. Intensifying AI competition may prompt the U.S. to accelerate "decoupling" in critical minerals and other areas.
Third, AI-adopter stocks with pricing power. The report estimates that AI applications will create approximately $13 trillion to $16 trillion in market value for the S&P 500 index, but not all companies will benefit equally: those with strong pricing power can convert AI-driven efficiency gains and cost savings into tangible profits, retaining most of the upside.
Finally, from a longer-term perspective, hard assets that AI cannot "cheaply replicate," such as land, energy, and certain infrastructure, may see their relative value rise. The report highlights several categories:
Physically scarce assets: waterfront real estate, land in specific geographic locations, energy and power assets (especially power plants that can support data centers), transportation infrastructure (airports, ports), minerals, and water resources.
Regulatory scarce assets: protected licenses and franchises of various kinds.
Proprietary data and brands: strong IP portfolios and distinctive brand identities.
Unique luxury goods and human experiences: sports events, music performances, and the like.
This article is from the WeChat Official Account "Hard AI".


