
Google AI Chief: Beyond Chips, the US-China AI Competition is About Energy

Omar Shams, head of Google's AI division, believes that while chips are important, energy supply is the key constraint on the long-term development of AI. He argues that expansion of the U.S. power grid is slow, while China's new electricity generation capacity each year exceeds that of the UK and France combined. He has even proposed deploying solar power stations on the Moon or in space to supply energy for AI computing.
On July 3rd, Omar Shams, head of Google's AI division, shared insights on the forefront of the AI industry in an exclusive interview with Steve Hsu, a professor of computational mathematics at Michigan State University.
Here are the key points:
The scarcity value of theoretical physicists. Omar Shams illustrated the importance of physical intuition in AI research through his own transition from string theory research to AI entrepreneurship: the optimization of a loss function resembles a ball rolling on an energy manifold, and the KL divergence of information theory has a direct correspondence with the Hamiltonian in physics.
While chips are important, energy supply is the key constraint for the long-term development of AI. He believes that the expansion of the U.S. power grid is slow, while China's annual increase in power generation capacity exceeds that of the UK and France combined, highlighting the growing energy gap in the competition for AI infrastructure between China and the U.S. He even proposed the idea of deploying solar power stations on the moon or in space to provide energy support for AI computing power.
There are no secrets in the AI field, but there is invaluable tacit knowledge. Through social interaction and job-hopping, the research directions and methodologies of the major companies have largely converged. But when building and training large-scale models, the hard-to-quantify intuition, experience, and judgment in the face of countless variables and subtle issues are the core competitive advantage of top AI talent.
The talent demand of the software industry is being restructured. AI tools could drive programmer layoffs toward 30% within two years (a prediction from Anthropic CEO Dario Amodei that Shams considers high), with entry-level engineering positions at risk of being replaced by agents, fundamentally changing companies' hiring logic.
Commercial breakthroughs in AI agent technology. Current AI agent technology is moving from the proof-of-concept stage to practical application. In the software development field, AI agents are already capable of autonomously completing complex multi-step tasks. Similar breakthroughs have also emerged in the legal services sector, with AI legal assistant companies like Harvey generating substantial revenue.
Here are the highlights:
Beyond chips, the competition in AI between China and the U.S. is about energy
Host: Speaking of AI development, the competition is very fierce today. The stories of DeepMind and OpenAI are examples. What do you think is the biggest bottleneck in the current AI race?
Omar Shams: There are two. The first is the well-known issue of chips. The second, which is becoming increasingly prominent, is energy. The power supply for data centers is becoming a hard constraint.
Host: When I talk about the AI competition between China and the U.S., two questions arise. One is the showdown between NVIDIA and Huawei, but the other is how you will power these data centers. In the U.S., it is very difficult to increase the power supply of the grid, while China's annual increase in power generation is equivalent to the combined total of the UK and France.
Omar Shams: And for the U.S., that same increase takes about seven years.
Host: Exactly. And their capacity is now twice ours. So if power ultimately becomes the foundational input for intelligence, how will you compete with them?
Omar Shams: I don't want to digress too much, but I wrote a speculative article called "The Moon Should Be a Computer." If energy consumption on Earth increases by two orders of magnitude, the waste heat will start to affect the atmosphere. The more immediate issue is that baseload capacity on the U.S. grid is growing too slowly, possibly because of regulatory constraints or insufficient construction capacity.
Omar Shams: So I speculated, can we solve this problem in space or even on the Moon? This idea came up during a conversation with friends. Although it sounds crazy, some smart people think it's a good idea. I also found out that Eric Schmidt is now the CEO of Relativity Space, and one of the reasons he wants to put computers in space is also due to limited energy on Earth.
Host: Is his space project powered by solar panels or nuclear reactors in orbit?
Omar Shams: I guess solar, because nuclear power in space would violate too many treaties, and if a launch failed, the rocket would effectively be a dirty bomb. I calculated that obtaining 1,000 megawatts might require a square kilometer or more of solar panels.
Host: Sending so much stuff into orbit requires enormous carrying capacity, and it can't be placed in low orbit; otherwise, debris after disintegration would be very dangerous.
Omar Shams: Right, it would have to go at a Lagrange point. Fortunately, there is plenty of room in space. (Lagrange points are positions in a two-body system, such as the Earth and the Sun, where a small object can maintain a stable position relative to the two larger bodies.)
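The "square kilometer or more" figure can be sanity-checked with a back-of-envelope calculation. The numbers below are assumptions, not from the interview: the solar constant above the atmosphere (~1361 W/m²) and a panel efficiency of roughly 20%.

```python
# Back-of-envelope check of the solar array area needed for 1 GW in space.
# Assumed inputs (not stated in the interview): solar constant ~1361 W/m^2
# near Earth orbit, ~20% photovoltaic efficiency.

SOLAR_CONSTANT_W_PER_M2 = 1361.0   # irradiance in space near Earth
PANEL_EFFICIENCY = 0.20            # assumed space-grade panel efficiency
TARGET_POWER_W = 1e9               # 1,000 MW = 1 GW

usable_w_per_m2 = SOLAR_CONSTANT_W_PER_M2 * PANEL_EFFICIENCY
area_m2 = TARGET_POWER_W / usable_w_per_m2
area_km2 = area_m2 / 1e6

print(f"Usable power density: {usable_w_per_m2:.0f} W/m^2")
print(f"Area for 1 GW: {area_km2:.1f} km^2")  # roughly 3.7 km^2
```

Even at a hypothetical 100% efficiency the array would need about 0.73 km², so the "one square kilometer or even more" estimate is consistent with this range.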
The Talent Competition in AI
Host: This raises an interesting question: Since everyone's technological routes and information are relatively transparent, why is Meta willing to spend a fortune to poach a top talent? If "there are no secrets," what is this money buying?
Omar Shams: The value of a top talent lies in their precise judgment based on deep experience, which can save a lot of trial and error costs and win valuable time in the race towards AGI (Artificial General Intelligence). It's like a team can be without wings, but it cannot be without an engine. Top talent is that engine.
Host: So the normal distribution of individual ability, amplified by the industry, ultimately shows up as a power-law distribution of company output.
Omar Shams: Exactly.
Host: But did Zuckerberg also rely on intuition when forming the super-intelligent team?
Omar Shams: I wouldn't dare to comment on that, but it must be acknowledged that Zuckerberg is an outstanding founder. As for his decision, I think it is a very bold gamble, one that only founder-CEOs with super-voting rights like him can make. After all, Meta has ample cash flow, and compared with some other money-burning projects, investing in AGI (Artificial General Intelligence) is a relatively wise bet. It's too early to judge now; we can wait a while and see the results.
Host: If I were Mark Zuckerberg and had his resources, I would also wonder: why not use our idle cash flow to build the best team we can assemble? Why not put them here? So I'm not questioning that strategic decision. What I question is whether spending $100 million to get the supposedly "best talent" is the right strategy. Maybe you have to, you could argue, because only so many people really understand the field. But the counterargument is: no, there are many people who understand the field.
Omar Shams: That's a good question. I think what they are buying is not "secrets," but "tacit knowledge" and "taste." When building large-scale AI systems, there are countless subtle engineering and theoretical choices to be made.
The value of these talents lies in their judgment and intuition accumulated through practical work, helping the company avoid common mistakes and take fewer detours. For example, Zuckerberg may have learned lessons from Meta's Llama project. Developing AI is like building an airplane; even if you master all the theories, you need someone to guide you on "which screw to tighten first."
The era of AGI is approaching, and Zuckerberg would rather spend more money than miss the opportunity. After all, Meta has the capacity to bear the costs, and the potential returns could be enormous.
The Reality and Future of AI Agents, Layoff Waves Are Coming
Host: As the head of AI agents, how do you view the current "hype versus reality" in this field? It's clear that AI tools have brought productivity improvements. But I want to distinguish between AI tools and agents. I consider sending queries to ChatGPT or having ChatGPT modify or write a draft as "tools."
I don't think that's an agent. I believe an agent is something more autonomous, capable of taking multiple steps independently without human supervision, rather than a tool that requires humans to carefully check each output in single or few interactions. So, is there an example, like if I want to write a function in my codebase, I let the agent do it freely, and it does a bunch of non-trivial things and then returns the result? Is that something that exists now?
Omar Shams: In the field of software development, agents have become a reality. For example, in projects I am involved in, tools like Cursor and GitHub Copilot have completely changed the way programmers work. Nowadays, even startups have significantly raised the standards for software quality, and low-quality code can no longer easily pass. This pressure has driven progress across the entire industry.
In the legal field, AI companies like Harvey have already begun to generate substantial revenue. Progress in other industries may be slower, but the introduction of AI assistants into white-collar work has become inevitable. I can't predict the exact impact on the job market, but workflows will certainly change significantly: AI assistants will either augment human work or directly replace some jobs.
This has also raised the bar across the software industry. Entry-level software engineering positions face real pressure, since AI can already handle most basic tasks. The engineer of the future will act more like a "technical supervisor" managing teams of AI agents.
Host: This is not good news for computer science graduates.
Omar Shams: This is indeed a structural change. A few years ago, almost anyone with a little programming knowledge could get an offer, but this bubble is clearly unsustainable.
From a more fundamental perspective, the disconnect between the computer education system and AI development is also a major issue. Most university courses still focus on traditional content like discrete mathematics and algorithm theory, neglecting the cultivation of practical software development skills. I believe this will force education and personal development to pay more attention to "proactivity" and "practical ability." A person with rich project experience who can actively solve real problems will be more valuable than a graduate with just a degree.
Omar Shams: On the impact of AI on employment, I also want to mention the prediction by Anthropic CEO Dario Amodei, who believes that as AI develops there will be large-scale layoffs within the next two years, with the layoff rate possibly reaching 30%.
He thinks that even companies as streamlined as Tesla may face layoffs. Personally, I think a 30% layoff rate is probably too high, but even so, the fact that insiders like Amodei expect it suggests the impact of AI will be far greater than most people anticipate.
From String Theory to AI Entrepreneurship: Physics Intuition as the Key Driving Force
Host: Omar, you transitioned from a physics/mathematics major at Carnegie Mellon to studying string theory, and finally to AI. What initially ignited your passion for physics?
Omar Shams: My first love was physics. At 15, I saw the "twin paradox" in a physics textbook—one twin travels in space and returns younger than his brother. I thought it must be made up, but the teacher told me it was real. At that moment, it felt like discovering that "magic" really exists. I decided then that I had to learn this magic. For the next ten years, I immersed myself in the world of physics, studying deep theories like the holographic principle and non-commutative geometry.
Host: I can relate to that story. The beauty of physics lies in the fact that with just simple algebra knowledge, one can derive revolutionary conclusions like the Lorentz transformation. I've never understood why not every smart person is passionate about physics. The physical intuition you mentioned, that feeling of playing a movie in your mind, is precisely the key difference between physicists and pure mathematicians.
Omar Shams: Absolutely right. For me, physics problems are not cold formulas but an action movie playing in my mind. That visual, intuitive way of thinking has had a profound impact on my later AI research.
Host: When did you start seriously considering the shift from physics to AI?
Omar Shams: It was in the later stages of my graduate studies. I began to delve into genomics, and at that time, I tried to use Principal Component Analysis (PCA) to process human mitochondrial DNA data. For the first time, I intuitively felt the powerful ability of machine learning techniques to transform data and reveal patterns. This, combined with my previous summer project—calculating Lattice Quantum Chromodynamics (Lattice QCD)—opened my eyes to a whole new field full of possibilities. My first formal job was building a music recommendation engine.
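The PCA analysis he describes can be sketched in a few lines. The data below is synthetic (a random binary "variant" matrix with two made-up populations), standing in for the real mitochondrial DNA data, which the interview does not reproduce; the method (center, then SVD) is the standard PCA recipe.

```python
# Minimal PCA sketch of the kind of analysis described: projecting
# high-dimensional genetic variant data onto its top principal components.
# The data is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_variants = 200, 50
# Two synthetic "populations" with different variant frequencies.
pop_a = rng.random((n_samples // 2, n_variants)) < 0.3
pop_b = rng.random((n_samples // 2, n_variants)) < 0.5
X = np.vstack([pop_a, pop_b]).astype(float)

# Center the data, then take the top directions of variance via SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:2]               # top-2 principal directions
projected = Xc @ components.T     # each sample as a 2-D point

print(projected.shape)  # (200, 2)
```

On data like this, the first principal component separates the two populations, which is exactly the "transform data and reveal patterns" effect he mentions.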
Host: So, you didn't completely abandon the way of thinking in physics, but rather brought it into a new arena.
Omar Shams: Yes, especially with my startup Mutable. We developed a tool called "Auto-Wiki" that can automatically generate Wikipedia-style documentation for a large codebase. The inspiration for this idea actually partly comes from the "renormalization group" in physics—extracting macro, key structures and information from microscopic details through continuous "coarse-graining" operations. This process not only helps humans understand code but also provides excellent context for large language models (LLM), greatly enhancing the efficiency of code question-and-answer systems.
Host: There are many physicists in the AI field, from Hinton to Karpathy. What "superpowers" do you think a physics background gives you?
Omar Shams: I think there are three points. First is physical intuition; we are accustomed to visualizing and systematizing abstract problems. The optimization process of AI's loss function is like a small ball rolling on an energy manifold, and physicists can intuitively "see" and understand this process.
Second is mastery of continuous mathematics. Physics training makes us proficient in handling the mathematical tools for continuous systems, approximations, and probability distributions, such as path integrals and partition functions, which align closely with the mathematical nature of large-scale neural networks.
Finally, it's the experience of dealing with "emergent" phenomena. Physics is full of examples where complex phenomena emerge from simple rules, such as phase transitions. The "emergent capabilities" of AI are similar. We are used to looking for patterns at different scales and have a deep understanding of the phenomenon of "qualitative change caused by quantitative change."
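The "ball rolling on an energy manifold" picture from the first point maps directly onto plain gradient descent. The loss function below, f(x, y) = x² + 4y², is an invented bowl-shaped example chosen only to make the rolling visible.

```python
# Minimal illustration of the "ball rolling on an energy surface" picture:
# gradient descent on a bowl-shaped loss f(x, y) = x^2 + 4*y^2.

def grad(p):
    """Gradient of f(x, y) = x^2 + 4*y^2 at point p."""
    x, y = p
    return (2 * x, 8 * y)

point = (3.0, 2.0)   # starting position of the "ball"
lr = 0.1             # step size

for _ in range(100):
    gx, gy = grad(point)
    # Each update moves the ball a small step downhill.
    point = (point[0] - lr * gx, point[1] - lr * gy)

print(point)  # close to the minimum at (0, 0)
```

The physicist's intuition shows up in details like the step size: the steeper y-direction (curvature 8) tolerates a smaller step than the shallow x-direction, exactly the kind of thing one "sees" on the energy surface before doing any algebra.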
Host: What about the weaknesses of physicists?
Omar Shams: It might be a lack of sensitivity to discrete algorithms and engineering details. But overall, when the scale of the problem becomes large enough, continuous physical thinking often becomes more effective.
Host: So, if you were to give Zuckerberg a piece of advice, it would be to hire more theoretical physicists?
Omar Shams: (laughs) I think that would be a very wise investment.

