Altman Deep Interview: Aggressive Bets on Infrastructure, Aiming for Vertical Integration of the AI Industry Chain

Wallstreetcn
2025.10.09 04:14

OpenAI may be transitioning towards a vertically integrated "AI empire." Altman revealed that the company is making "aggressive infrastructure bets," the scale of which requires the participation of the entire industry. After recent collaborations with giants like NVIDIA, Oracle, and AMD, he hinted that more deals will be announced, aimed at leveraging the entire AI industry chain. Altman emphasized that the development of AI is closely linked to energy, and the AI competition may already be a comprehensive contest of computing power, capital, and energy.

OpenAI is transitioning from a research lab to a vertically integrated "AI empire."

On October 8, OpenAI CEO Sam Altman revealed in a recent conversation with Ben Horowitz, co-founder of the well-known venture capital firm a16z, that OpenAI has decided to make a "very aggressive infrastructure bet," the scale of which requires the entire industry to participate.

He explained that this decision is based on strong confidence in the model capabilities expected in the next one to two years, as they foresee that the upcoming models will create significant economic value, while the current pace of expansion cannot meet future demand.

This strategy directly explains the series of collaborations OpenAI has recently established with tech giants such as NVIDIA, Oracle, and AMD. Altman hinted that more such collaborations will be announced in the coming months, indicating that they are trying to leverage the entire industry chain "from electronics to model distribution."

This may also mean that the AI race is shifting from algorithms to an all-encompassing struggle concerning computing power, capital, and energy.

Altman also directly linked the future of AI with the future of energy, pointing out that the exponential growth of AI will require cheaper and more abundant energy. He predicted that the long-term solution will be a combination of solar energy with storage and advanced nuclear energy, asserting that the cost of nuclear energy will be a key variable in determining whether it can be rapidly adopted and subsequently support AI development.

When discussing the company's vision, Altman stated that OpenAI is not just a research lab, but a complex that integrates consumer AI subscription services, large-scale infrastructure operations, and cutting-edge AI research, dedicated to building artificial general intelligence (AGI) and making it beneficial to humanity.

Key points from Sam Altman's interview:

  • Aggressive infrastructure bet: Altman revealed that the company is making a "very aggressive infrastructure bet," the scale of which requires collaborative support from the entire industry. This massive investment is based on strong confidence in the model capabilities expected in the next one to two years, rather than the current models, and he forecasted that more industry collaborations will be announced in the coming months.

  • Energy future blueprint: Altman pointed out that AI and energy have "merged into one," and the exponential growth of AI will depend on cheaper and more abundant energy. He predicts that future energy will be dominated by "solar energy + storage" and nuclear energy. He believes that once nuclear energy demonstrates "overwhelming economic advantages," its development will be extremely rapid, and he called the past limitations on nuclear energy "an extremely foolish decision."

  • Strategic position of Sora: Sora is not just a video generation tool, but also a strategic tool for building a "world model" to advance AGI and help society adapt to AI development.

  • "AI scientists" are coming: Altman predicts that AI models will be able to make significant scientific discoveries within the next two years, which he sees as a true sign of AI changing the world. He revealed that GPT-5 has begun to demonstrate the ability to make small, novel scientific discoveries.

  • Strategic Shift to Vertical Integration: Altman admitted that his past views on vertical integration were incorrect and now believes it is a necessary path for OpenAI to achieve its mission, analogous to the success of Apple's iPhone.

  • New Copyright Model: He foresees that in the future, AI training may be considered fair use, but generating content using specific IP will give rise to new business models. Some copyright holders are more concerned about their IP being underused by AI than about it being overused.

  • Commercialization and Trust: Regarding commercialization, Altman holds an open but cautious attitude towards advertising, emphasizing that user trust in ChatGPT must never be compromised. He believes that recommending paid products instead of the best products would destroy this trust relationship.

“A Very Aggressive Bet”: Infrastructure Expansion Based on Future Demand

Amid current doubts about whether AI is in a bubble, Altman's statements undoubtedly add fuel to the market's enthusiasm. He bluntly stated, "We have decided that it is time to make a very aggressive infrastructure bet."

This decision is not based on the current demand for products like ChatGPT but stems from a strong optimism about the future. Altman revealed: “The reason we are so aggressive is not because of the existing models we have... We can see (the future model capabilities) one to two years in advance.” He believes that the economic value created by the upcoming models will far exceed expectations, thus necessitating early positioning.

The scale of this bet is enormous, requiring support from the entire industry. Altman stated, "To bet at this scale, we somewhat need the entire industry... to support it." This encompasses "everything from electronics to model distribution and everything in between."

This also explains why OpenAI is actively establishing partnerships with companies like AMD, Oracle, and NVIDIA. He further hinted that this is just the beginning, "In the coming months, you should see us making more moves."

The Lifeblood of AI: The Future of Nuclear and Solar Energy

Altman admitted that the two areas he is most concerned about in his career—AI and energy—have now "merged into one." He believes that looking back in history, the factors that have most significantly improved the quality of human life are cheaper and more abundant energy. The enormous computational demands of AI are pushing energy issues to the forefront.

Regarding how to address future energy demands, Altman provided a clear roadmap. He anticipates that in the short term, the new base load energy in the U.S. will mainly come from natural gas. However, in the long run, he believes that "the two dominant energy sources will be solar energy with storage and nuclear energy." The nuclear energy he refers to includes the entire advanced nuclear technology stack, including small modular reactors (SMRs) and nuclear fusion.

On the development of nuclear energy, Altman presented a key economic perspective. He believes that the speed of nuclear energy adoption entirely depends on its cost. "If it has an overwhelming economic advantage over everything else, then I expect it to happen very quickly." He added that the enormous political pressure at that time would drive regulators to act quickly. Conversely, if its costs are similar to other energy sources, anti-nuclear sentiment could make its development process extremely lengthy. He did not hesitate to call the past decisions of many regions to ban nuclear energy "an extremely foolish decision."

The Strategic Value of Sora: More Than Just Video Generation

Regarding the recently released text-to-video model Sora, Altman also elaborated on its multiple roles in the company's strategy. He believes that Sora may seem unrelated to Artificial General Intelligence (AGI) on the surface, but he is convinced that building robust "world models" is far more crucial for achieving AGI than people realize.

Moreover, Sora is also an important tool for OpenAI to guide the "co-evolution" of society and technology. Altman stated that just as ChatGPT made the world take large language models seriously, Sora can help society anticipate the impacts and opportunities that powerful video models will soon bring. "It is very important to let the world understand the direction of video technology development," he said.

Of course, Sora also brings new commercialization challenges. Altman observed that users not only use it for professional creation but also to make fun memes and share them. The contradiction between high generation costs and frequent entertainment use means that OpenAI needs to explore a business model for Sora that is completely different from ChatGPT.

The Catalyst for AGI: The "AI Scientist" is Coming

Among all potential applications of AI, Altman stated that he is "most excited" about the "AI scientist." He believes that when AI can independently make scientific discoveries, the world will undergo real change.

"We have seen small examples of this happening for the first time in GPT-5," he revealed. He noted that the model has already been able to make some novel mathematical discoveries or achieve minor progress in physics and biological research, and he believes this trend will go further.

He predicts that within the next two years, the model will be able to undertake larger chunks of scientific work and make significant discoveries, which will have a "major impact" on the world.

From Investor to Operator: A Cognitive Shift Towards Vertical Integration

Altman also reflected on his transition from investor to company operator during the interview, as well as its impact on OpenAI's strategy. He admitted that he had always opposed vertical integration, but "now I think I was wrong at that time."

He attributed this cognitive shift to the practical experience of operating a company. As an investor, he was more inclined towards the theoretical market efficiency, where each company only does one thing. But as CEO, he realized that to achieve the company's mission, he must personally do more than expected. The tremendous success of Apple's iPhone is an extreme example of vertical integration, which Altman referred to as "the most incredible product in the history of the tech industry."

This shift in thinking from "advising" to "executing" explains why OpenAI has gradually expanded from a purely research lab to building its own large-scale infrastructure, striving to control the full-stack capability from underlying computing power to upper-level applications, laying the foundation for its grand AI empire.

The following is the full text of the interview (translated by AI tools):

Sam Altman

At that time, we felt like we had stumbled upon a huge secret, which was that we had discovered the scaling laws of language models. It felt like an incredible victory. I thought to myself, we might never be this lucky again. And deep learning is like a miracle that keeps bringing surprises. We keep achieving breakthrough after breakthrough. Similarly, when we made breakthroughs in reasoning models, I also felt that we would never have such breakthroughs again. This technology is so effective that it seems too incredible. But perhaps, when you make significant scientific breakthroughs, it always feels this way. If it is truly significant, it is quite foundational and will continue to be effective.

a16z partner Erik Torenberg

Sam, welcome to the podcast. Thank you.

Sam Altman

Thank you for the invitation. Alright.

Erik

In another interview, you described OpenAI as: a company competing with companies, a consumer technology company, a hyperscale infrastructure operator, a research lab, and all new businesses, including planned hardware devices. What is the purpose of all these layouts, from hardware to application integration, to the job market and business? What is OpenAI's vision?

Sam Altman

Well, maybe it counts as three things, or four by our own reckoning, on top of what is, in the traditional sense, a research lab of this scale. But there are three core directions: We want to be people's personal AI subscription service. I think most people will have one, some will have several; you will use it in some of our self-operated consumer products, but you will also log into many other services, and you might even use it through dedicated devices. At some point, you will have an AI that understands you and is very useful to you, and that is what we want to do. It turns out that to support this goal, we also have to build a lot of infrastructure. The goal there, the real mission, is to build this AGI (Artificial General Intelligence) and make it very useful for people.

a16z co-founder Ben Horowitz

So the infrastructure, do you think it will ultimately... yes, it is necessary for the main goal. But will it also independently become another business? Or is it really just to serve personal AI? Or is it currently unknown?

Sam Altman

Do you mean, for example, will we sell it to other companies? Referring to the infrastructure aspect?

Ben

Will you sell to other companies? Or, you know, this is such a huge thing, will it do other things?

Sam Altman

For me, it feels like there will be some other things to do in the future. But I don't know. We currently have no plans; right now it's just to support the services and research we want to provide. No, there isn't.

Ben

That makes sense.

Erik

The scale is...

Sam Altman

The predictions are scary enough that you have to be open to doing other things.

Ben

If you're building the largest data center in human history...

Sam Altman

The largest infrastructure project.

Erik

Many years ago, before ChatGPT came out, in a StrictlyVC interview, you talked about early OpenAI, and they asked, hey, what's the business model? You said, oh, we'll ask the AI, and it will come up with ways for everyone. (laughs) But...

Sam Altman

There have been many times, and recently again, we asked the latest model at the time, you know, "What should we do?" It gave us profound answers that we had overlooked before. So... I think when we say those things, people don't take us seriously or literally. But maybe the answer is you should do both...

Ben

Well, no, as someone in an operating organization, I often ask AI what I should do. Sometimes it gives some quite interesting answers. Sometimes it really does. You know, you have to give it enough background information. But...

Erik

What is the core argument for connecting these layouts (besides more distribution, more computation)? How do we...

Sam Altman

I mean, research enables us to create great products, and infrastructure enables us to conduct research. So it's kind of like a vertical tech stack... You can use ChatGPT or other services to get advice about operating organizations. But to do that requires excellent research and a lot of infrastructure. So it's really one and the same; it is...

Ben

Do you think there will be a point where all of this becomes completely horizontal? Or will it remain vertically integrated in the foreseeable future?

Sam Altman

I used to be against vertical integration, and now I think I was just wrong in the past.

Ben

Yes, interesting. There's a kind of...

Sam Altman

Because you would think the market is theoretically efficient, companies should only do one thing, and then that should work. I would like to think so, yes, but at least in our case, that's not entirely true. I mean, in some ways it is. For example, companies like NVIDIA make great chips that many people can use. But OpenAI's story is definitely evolving in the direction of "we must do more than we initially imagined to fulfill our mission," right?

Ben

You know, the history of the computer industry is somewhat a tug-of-war, you know, with the Wang word processor, then personal computers, and the BlackBerry before smartphones appeared. So, you know, there has indeed been this situation of vertical integration and then disintegration, but the iPhone is also an example, and the degree of vertical integration...

Sam Altman

The iPhone, I believe, is the most incredible product ever produced in the tech industry. And it is extremely vertically integrated.

Ben

It's remarkably integrated.

Erik

Interesting. Which layouts would you say are the drivers of AGI, and which are hedges against uncertainty?

Sam Altman

Well, you might say that on the surface, for example, Sora seems unrelated to AGI. But I bet if we can build a truly powerful world model, its importance to AGI will far exceed people's imagination. Many people once thought ChatGPT was unrelated to AGI. But it has been very helpful to us, not only in building better models and understanding how society hopes to use this technology, but also in keeping society up to speed and truly realizing "we now have to take this thing seriously." For a long time before ChatGPT, when we talked about AGI, people's attitude was "this won't happen, we don't care." Then suddenly, they really cared. I believe, not to mention the benefits brought by research, that society and technology must evolve together. Yes, you can't just throw things out at the end; that won't work. It's a continuous, back-and-forth interactive process.

Erik

Talk more about how Sora fits into your strategy. Because there has been some chatter on X, like, "Hey, why allocate precious GPU resources to Sora?" But is this a trade-off between short-term and long-term? Or do we...

Ben

Then the new version also added social networking features, which is a very interesting twist. I’m curious about your thoughts on this, and did Meta call you to express dissatisfaction? Or what was the reaction you expected?

Sam Altman

I think if either of our two companies feels that the other is more targeting them, it shouldn't be them calling us.

Ben

Well, I'm not clear on the history, but...

Sam Altman

You see, we won't... First of all, I think making great products is cool, and people love the new Sora. At the same time, I also believe that in terms of "co-evolution," it's important for society to experience what's coming. So soon, the world will have to deal with amazing video models that can deepfake anyone and show any content you want, which overall will be a good thing, but society also needs to go through some adjustments. Just like when ChatGPT first emerged, we feel that the world needs to understand where this technology has developed. I believe it is crucial for the world to quickly grasp the direction of video technology development. Because video can evoke emotional resonance more than text. Soon we will find ourselves in a world where this technology will be ubiquitous. So I think there are some things worth pondering here. As I mentioned, I believe this will aid our research agenda, which is on the path to AGI. Yes, you know, it can't all be about making people coldly efficient, or AI solving all our problems. There must be some fun, joy, and surprises in the process. But we won't devote massive computational resources to this; it's a small fraction of our existing resources.

Ben

It is an absolute "massive" amount, but relatively speaking, it is not.

Erik

I want to talk about the future of AI-human interaction. Because you mentioned in August that the model has saturated the application scenario of chatting. So what will the future AI human-computer interaction interface look like, whether in hardware or software? Is this vision aimed at creating a super app like WeChat?

Sam Altman

The model has very narrowly solved "the chatting thing," meaning that if you want to have the most basic, standard conversation, it is already very good. But what a chat interface can do for you is far from saturated, because you can ask a chat interface, "Please cure cancer." The model certainly can't do that yet. I think the text interaction style still has a long way to go. Even for casual chat use cases, the model is already excellent, but there are certainly better interaction methods to achieve...

Sam Altman

In fact, there is something really cool about Sora. You can imagine a world where the interaction interface is just continuously rendered video in real-time, what possibilities that would bring. That’s really cool. You can imagine new types of hardware devices that can always sense the surrounding environment. Rather than bombarding you with text notifications like your phone, it truly understands your context and knows when to show you what information. We still have a long way to go in this regard.

Erik

What things will models be able to do in the next few years that they can't do today? Will it be deeper white-collar job replacements? AI scientists? Humanoid robots? I...

Sam Altman

Many things, but you mentioned the one I am most excited about, which is AI scientists. It’s crazy that we are seriously discussing this here. I know there is controversy over the literal definition of the Turing Test, but the general concept of the Turing Test has quietly passed.

Ben

Yes, that was so fast.

Sam Altman

You know, we have long viewed it as the most important test for AI. It once seemed far away, and then suddenly it was passed. The world panicked for a week, two weeks, and then it was like, "Well, it seems computers can do it now." And then everything went back to normal. I think the same thing is happening in the field of science. In my personal view, the equivalent of the Turing Test has always been: when AI can do science. That is the true change for the world. And for the first time, with the emergence of GPT-5, we are starting to see some small examples indicating that this is happening.

Sam Altman

You see these messages on Twitter: it made this new mathematical discovery, completed this small task. And, you know, in physics research, in biology research, everything we see indicates that this will go further. So I believe that within two years, models will be able to complete larger chunks of scientific research and make significant discoveries. This is a crazy thing, and it will have a major impact on the world.

Sam Altman

I firmly believe that fundamentally, scientific progress is the reason why the world becomes better over time. If we are about to have more scientific progress, that will be...

Ben

A huge change. Interestingly, this is a positive change that people rarely talk about. The discussion has been too much trapped in the realm of negative changes, such as AI becoming extremely intelligent...

Sam Altman

But at the same time, it can also spread all diseases. Or like...

Ben

We can leverage more science. Yes, that's right. I think Alan Turing said this. Someone asked him, do you really think computers will be smarter than those clever minds? He said, they don't have to be smarter than clever minds, just smarter than mediocre minds... like the principal of a certain school. We might also want to leverage this more.

Erik

We just witnessed the launch of Periodic last week, you know, started by people who came out of OpenAI. Yes, speaking of that, seeing the innovations you all are doing, and those teams coming out of OpenAI, it feels like they are also creating extraordinary things, which is amazing.

Sam Altman

We certainly hope so.

Erik

I want to ask you some broader reflections, what has surprised you about the diffusion or development of AI by 2025, or what has changed your worldview since the release of ChatGPT?

Sam Altman

Similarly, many things, but perhaps the most interesting point is how many new things we have discovered. We thought we had stumbled upon this huge secret, that we found the scaling laws of language models, which felt like an incredible victory to the extent that I thought we might never be this lucky again. And deep learning is like a miracle that keeps bringing surprises. We keep achieving breakthrough after breakthrough. Similarly, when we made breakthroughs in reasoning models, I also felt that we would never have such breakthroughs again. This technology is so effective, it seems too incredible. But perhaps, when you make significant scientific breakthroughs, it always feels this way. If it is truly significant, it is quite fundamental and will continue to have an impact.

Sam Altman

The degree of progress... If you go back and use GPT-3.5 when ChatGPT was released, you would think, I can't believe anyone used this thing. And now we are in a world where the accumulation of capabilities is so immense, and most people in the world are still only considering what ChatGPT can do. Then there are some people in Silicon Valley using Codex, and they would think, wow, those people have no idea what's going on. Then there are some scientists who would say that those using Codex have no idea what's going on. But the accumulation of capabilities has arrived, and now it is so great that we have made significant progress in what the models can do.

Erik

Regarding further development, how far can we go with LLMs (large language models)? When do we need new architectures? What breakthroughs do you think are needed? I...

Sam Altman

I think we can go far enough to create something that can identify the next breakthrough with existing technology. This is a very self-referential answer. However, if LLM-based technology can develop to the point of conducting better research than all of OpenAI combined...

Ben

That might be good enough. That would be a huge breakthrough, a very big breakthrough. So, on a more mundane level, you know, one thing people have started to complain about (I think South Park even did a whole episode on it) is ChatGPT being overly agreeable, the sycophancy problem. How difficult is it to address this issue? Is it not too difficult, or is it a fundamental problem?

Sam Altman

It's not difficult to handle at all. Many users actually want that. Well, if you look at people's reviews of ChatGPT online, yes, many people really want that (polite) feeling back. And, you know, yes. So technically, it's not difficult to handle at all. One thing that is not surprising is that the range of user expectations is extremely broad. Yes, regarding how they want the chatbot to behave, whether in big ways or small details.

Ben

So ultimately, does it have to be configured with personality? Do you think that's the solution? I think...

Sam Altman

So, I mean, ideally, if you chat with ChatGPT for a while, it would somewhat "interview" you while also observing what you like and dislike.

Ben

And then ChatGPT figures it out on its own.

Sam Altman

Figures it out. But in the short term, you might just choose a (preset personality).

Ben

Got it. Yes, no, that makes sense. Very interesting. Actually, there's one thing I want to ask, yes, like...

Sam Altman

I think we had a very naive idea before, you know, like... thinking that you could create something that could converse with billions of people, and that everyone would want to talk to the same "person," which is itself quite strange. However, that was our implicit assumption for a long time.

Ben

Because people have...

Sam Altman

Very different friends. So we are trying to solve that problem now.

Ben

And there are also different friends, different interests, different levels of intelligence. So you don't always want to talk to the same thing. One of the great things about AI is that you can say, "Explain it to me like I'm five." Maybe I don't even want to specify that in advance. Maybe I always want you to talk that way, especially when teaching me things.

Ben

Interesting. I want to ask you a question that's a bit like one between CEOs, observing you is interesting for me, which is that you just made this deal with AMD. Of course, the company is in a different position now, and you have more leverage in these matters. But how has your thinking changed, if at all, since you made that initial deal?

Sam Altman

At that time, I had almost no operational experience. I had almost no management experience. I'm not naturally someone who enjoys managing accounts. I'm well-suited to being an investor. That's what I did before. I thought that would be my career.

Ben

Although you were also a CEO before. I...

Sam Altman

I wasn't an experienced CEO back then. And so I felt my mindset at that time was more like an investor giving advice to the company. Now, I understand what it feels like to actually operate a company. Yes, that's right. So they are very different. I've learned a lot about how... yes, you know, for example, what operational capabilities you need, how... what it takes to execute deals over time, and...

Ben

The implications of agreements, not just, "Oh, we'll gain distribution channels and funding." Yes, that makes sense. No, because I really, I'm just saying, I'm very impressed by the improvements in deal structure.

Erik

More broadly, you know, just in the past few weeks, you've mentioned AMD, as well as Oracle and NVIDIA, and you've chosen to establish deals and partnerships with these companies, where you collaborate in certain areas but may also compete in others. How do you decide when to collaborate and when not to? Or how do you view...

Sam Altman

We have decided that it is time to make a very aggressive infrastructure bet. And I... have never been so confident about the research roadmap ahead and the economic value that using these models will bring. However, to make a bet of this scale, we need more or less the entire industry, or a large part of the industry, to support it.

This includes, you know, everything from the electronic level to model distribution and everything in between, which involves many aspects. So we will collaborate with many people. In fact, you should expect more (such collaborations) to be announced in the coming months.

Ben

Elaborate on this. Because when you talk about scale, it feels like in your mind, its limits are infinite, as if you would keep investing until...

Sam Altman

There is definitely a limit. For example, the total amount of global GDP is finite. Well, you know, a portion of that is knowledge work, and we are (currently) not doing robotics.

Ben

Yes. But the limit is still a long way off.

Sam Altman

It feels like the limit is very far from where we are today, if our judgment is correct. So I shouldn't say it's far from us... If our judgment about the model's capabilities developing in the direction we expect is correct, then the economic value that exists there can extend very far, right?

Ben

So you wouldn't make that kind of scale investment with today's models. But no, it's a combination.

Sam Altman

I mean, we will still expand because we can see how much demand we cannot meet even with today's models. But if we only had today's models, we wouldn't make such aggressive investments, right? We can see those... yes, like interesting chats... a year in advance...

Erik

ChatGPT has about 800 million weekly active users, accounting for about 10% of the world's population. It seems to be the fastest-growing consumer product in history. How do you...

Ben

Faster than any product I've ever seen.

Erik

How do you balance, on one hand optimizing the number of active users, while also being a research... both a product company and a research company, how do you further...

Sam Altman

When resource constraints arise? This almost always happens, and we almost always prioritize allocating GPUs to research rather than supporting products. One reason we build such large capacity is to avoid making such painful decisions. There are also strange times, you know, like when a new feature is released and becomes very popular, the research department will temporarily sacrifice some GPUs. But overall, we are here to build AGI. Yes, research has priority.

Erik

You mentioned in your interview with your brother Jack that other companies might try to imitate the product, or buy their way in, you know, or poach...

Sam Altman

Poach your intellectual property.

Erik

Things like that. But they can't buy the culture, or replicate that kind of repeatable, you know, innovation machine, so to speak. How do you do that, or what are you doing? Can you talk about this culture of innovation?

Sam Altman

I think coming from an investor background is actually very useful here. A truly excellent research culture looks more like running a great seed-stage investment firm and betting on founders, you know, it has that feel, rather than running a product company. So I think having that kind of experience is very helpful for the culture we are building.

Erik

It's a bit like the inverse of, you know, Ben Horowitz, who in a way went from CEO to investor, running a portfolio with an investor mindset, right? And you went in the opposite direction. Yes, one is a CEO who became an investor; the other is an investor who became a CEO.

Sam Altman

That direction is indeed unusual.

Ben

Well, almost never. Um, I think you are the only one I've seen who has taken this path and...

Sam Altman

Workday is like that, right?

Ben

Yes, but Aneel was an operator before becoming an investor. I mean, he really was an operator. I mean, back at PeopleSoft...

Erik

Why is that? Is it because once people become investors, they don't want to operate anymore?

Ben

No, I think with investors generally, if you're good at investing, you're not necessarily good at organizational dynamics, conflict resolution, or, you know, all the deep psychology of strange interpersonal relationships, how politics forms, all of those intricate tasks. The job of being an operator or CEO is so complex, and it's not as intellectually stimulating as investing; you can never talk about the specifics of operations at a cocktail party. So when you're an investor, you feel like, oh, everyone thinks I'm smart, you know, because you understand everything, you see all the companies, and so on. That's a nice feeling. Being a CEO often feels bad. So going from a good feeling to a bad feeling, I can really only say that it's hard.

Sam Altman

I'm shocked by how different the two jobs are, and I'm also shocked by how much one is a good job and the other a bad one. Yes, it's like...

Ben

Yes, yes, yes. You know, it's hard. It's tough. I mean, I can't even believe I'm managing this company. It's like, I know better (not to be CEO). And he can't believe he's managing OpenAI. He knows (not to be CEO).

Erik

Back to today's progress, in a world where benchmark scores are gradually saturated and manipulated, are they still useful? What is now the best way to measure model capabilities?

Sam Altman

Well, we talk about scientific discovery. I think that will be an effective metric for a long time. Revenue is an interesting metric, but I feel that static benchmark evals are less interesting. And those have also been gamed like crazy.

Erik

More broadly, it seems...

Ben

For us, this is all that remains. I can tell you more.

Erik

More broadly, the culture seems... on Twitter/X, the AGI hype seems weaker than when AI 2027 came out about a year ago. Some people point out that, you know, GPT-5 hasn't shown the kind of obvious leap... Clearly, in many ways, there are advancements beneath the surface that are not as apparent as people expected. But should people lower their expectations for AGI? Or is this just Twitter sentiment?

Sam Altman

Well, in terms of timelines... I mean, just like the Turing test, AGI will come and pass by quietly, and the changes in the world won't be as dramatic as you imagine or think they should be.

Ben

It won't actually be a singularity, it won't.

Sam Altman

Even if it's doing some kind of crazy AI research, the pace of societal learning will keep up. A somewhat retrospective observation is that humans and societies are much more adaptable than we imagine. You know, it's like, the idea of AGI required a significant cognitive update. You roughly went through that process: you had to think about new things, and you made your peace with it. It turns out it will be more continuous than we imagine. It's fine. It's really fine.

Ben

I don't think it will be a "big bang."

Erik

Well, but in that regard, how have your thoughts evolved? You mentioned a change in your view on vertical integration. What are your latest thoughts on AI governance and safety? What are your latest thoughts in that area?

Sam Altman

I still do think there will be some very strange or scary moments. So far, this technology hasn't produced a truly terrifying, huge risk, but that doesn't mean it never will. At the same time, just as we talked about, having billions of people talking to the same "brain" is strange in itself. There may already have been some strange society-scale effects that, while not frightening in the big picture, just feel different. But I expect... I expect some very bad things to happen because of this technology, which has also happened with previous technologies, and...

Ben

It can be traced back to fire.

Sam Altman

And I think we as a society will build some guardrails around it.

Erik

What are your latest thoughts on the right mental models we should hold, or the regulatory frameworks we should think about, or the frameworks we shouldn't consider? I think...

Sam Altman

For the most part, I think most regulation would have a lot of negative impact. One thing I do hope for is that when models become truly, extremely beyond human capabilities, those models, and only those models, may warrant some very cautious safety testing as the technology frontier advances. I also don't want a "big bang"; you can see many ways that could go very seriously wrong. I hope we focus the regulatory burden only on those things, rather than on all the wonderful things that weaker models can do; otherwise you might completely stifle innovation like in Europe, which would be very bad.

Ben

Yes, there seems to be this thought experiment, well, in the future there will be a model of superhuman intelligence that can, you know, do some sort of takeoff or something. Do we really need to wait until we get there, or at least close to that scale? Because nothing is going to jump out of your lab next week to do that. I think that's where our industry confuses regulators.

Sam Altman

It's also for the world. Extremely interesting, extremely...

Ben

Much more dangerous than regulating something we don't even know how to regulate yet.

Erik

Do you want to talk about copyright as well?

Ben

Yes, so, um, this is a topic shift, but when you think about, um, how do you see copyright issues evolving? Because you've done some very interesting things, like opt-out mechanisms, and, you know, when you see people selling rights, do you think they will sell them exclusively? Or is it more like, I'll sell to anyone who wants to contact me? How do you think things will develop?

Sam Altman

This is my current guess. When it comes to the co-evolution of society and technology, as the technology evolves in different directions we see different examples: the responses from copyright holders to video models are very different from their responses to image generation models. Yes, so you'll see this continue to evolve. Forced to guess from where we are today, I would say society will recognize training (using data) as fair use. But for generating content in a certain style, or using a certain IP, etc., there will be a new model. So, you know, anyone can read, just like a human author can; anyone can read a novel and take some inspiration from it, but you can't replicate that novel in your own work.

Ben

And you can talk about Harry Potter, but you can't just spit it back out verbatim.

Sam Altman

Yes. That's right. Although I think there's one more thing that will change. In the case of Sora, we've heard a lot of concerned voices from copyright holders, and there are quite a few well-known...

Ben

And like...

Sam Altman

Many copyright holders say, what I'm worried about is that you don't use my characters often enough. Yes, I certainly want some limits, but, you know, whoever I am, I have this character, and I don't want it saying something crazy and offensive, but I do want people to interact with it, because that's how they develop relationships, and that's how my franchise becomes more valuable. If you always pick his character over my character, I don't like that. So I can completely imagine a world where, under copyright holders' own decision framework, they would be more upset with us for not generating their characters often enough than for generating them too much. This is not an obvious thing; yes, I only recently realized things could develop this way. But.

Ben

Yes, this is very interesting in Hollywood. We've seen, for example, in the music industry, there's one thing I never quite understood: if you play this song in a restaurant, or at a game, etc., you have to pay them, and they enforce this very aggressively, yet obviously playing your song at a game is the biggest advertisement in the world for them, good for everything you do, your concerts, your...

Sam Altman

Yes, that really feels unreasonable. It's like...

Ben

This, I would just say: the industry is entirely capable of doing irrational things simply because of how those industries are organized, or at least the traditional creative industries are. In the music industry, I think it comes from the structure: you have publishers whose whole job, you know, is basically to chase everyone down. Yes, their whole job is to stop you from playing music. Yes, and every artist wants you to play it.

Sam Altman

So I think, if I had to guess... some people would say no, it's the other way around. But (in the video/character space) it's not like the music industry, where rights are concentrated in the hands of a few people. So people will try many different setups here to see what...

Ben

Works. Yes, maybe this is a way for new creators to bring new characters to life, and you can never use Daffy Duck...

Erik

Speaking of which. Yes, I want to talk about open source, because there has been some evolution in this area as well. GPT-3 did not open weights, but you released a very powerful open model earlier this year. What are your latest thoughts, what is this evolution process like?

Sam Altman

I think open source is good. I mean, I'm glad to see that people really like gpt-oss; that makes me happy.

Ben

So what do you think, strategically, what are the dangers of DeepSeek becoming the dominant open source model?

Sam Altman

I mean, who knows what people will put in these open source models over time.

Ben

Like what will actually be in the weights. Yes, that's the hard part. Yes, it's really hard to know. So you hand the power to interpret everything over to someone else. And by the way, I mean, you know, let me tell you, we are really grateful that you released a very good open source model, because what we are seeing now is that all the universities are using Chinese models.

Erik

You mentioned that the things you cared most about professionally were AI...

Sam Altman

And energy. I didn't realize at the time that they would ultimately become the same thing. Well, I mean, they were once two separate interests. They really merged.

Erik

Yes, talk more about how your interest in energy began and how you chose to get involved. Then we can talk about...

Ben

Your career, right? Because your career started with studying physics.

Sam Altman

Yes, I studied physics. Well, I never really had a career. I studied physics, and my first job was related to computer science. Yes, this is an oversimplification, but broadly speaking, I think if you look back through history, the most impactful thing for improving people's quality of life is cheaper and more abundant energy. So pushing that further seemed like a good idea. I don't know, I just... people have different lenses on the world, but I see energy everywhere.

Ben

So let's dive deeper, because here in the West, I feel like we've kind of backed ourselves into a corner on energy issues, on one hand long-term rejecting nuclear energy...

Sam Altman

That was a very stupid decision.

Ben

And then, you know, there are a lot of policy restrictions on energy. You know, it's worse in Europe than in the U.S., but there are dangers here too. Now that AI is here, it feels like we need energy from all possible sources. How do you see its development in terms of policy and technology? Which will become the main sources? How will these curves intersect? And then what is the right policy stance around drilling, fracking, all these things?

Sam Altman

I expect that in the short term, in the U.S., most net new energy will come from natural gas, at least for baseload. In the long term, I expect, I don't know the ratio, but the two main sources will be solar plus storage and nuclear. I think some combination of those two is the future, like the far future...

Ben

Not long-term, right?

Sam Altman

And advanced nuclear, meaning small modular reactors (SMRs), fusion, the whole technology stack.

Ben

How fast do you think nuclear will develop, and when can we reach true scale? Because, you know, obviously many plants are being built. But we have to actually get it permitted and so on. I...

Sam Altman

I think it largely depends on the price. If it economically crushes all other energy sources, then I expect it to develop quite quickly. Yes, again, if you study the history of energy, when a much cheaper source arrives and a major transition happens, the world switches over quite fast. Yes, energy costs matter that much. So if nuclear becomes much cheaper than anything else we can do, I expect significant political pressure for the Nuclear Regulatory Commission (NRC) to move quickly, and we will find ways to build fast. If its price is only similar to other sources, I expect anti-nuclear sentiment will prevail and it will take a very long time.

Ben

It should be cheaper.

Sam Altman

It should be the cheapest form of energy on Earth. In any case.

Ben

Yes, cheap and clean. What’s not to like? Obviously, there are many (who don’t like it).

Erik

Regarding OpenAI, what are the latest thoughts on monetization? Whether it’s certain experiments or different models where you see yourself spending more or less time, what excites you?

Sam Altman

The primary thing on my mind right now, just because it has just launched and usage is very high, is what we are doing with Sora. One thing you learn after launching something like this is the difference between how people actually use it and how you thought they would. Yes, people are certainly using Sora in the ways we anticipated, but they are also using it in very different ways. For example, people are generating funny memes of themselves and their friends and sharing them in group chats. This will require a very different... Sora videos are expensive to produce. So for people doing this hundreds of times a day, it will require a very different monetization approach.

Sam Altman

Those are the things we are considering. I think the core argument for Sora being cool is that people actually want to create a lot of content. It's not, you know, the traditional naive rule that 1% of users create content, 10% comment, and 100% watch; maybe when creating content becomes easier, more people will participate. I think that's a very cool shift, but it does mean we have to come up with a very different monetization model for it. Then we think, if people want to create this much... I assume it's some version of: when the costs are this high, you have to charge people per generation, which is something we've never really had to consider before.

Erik

What’s your view on long-tail users using ads?

Sam Altman

I'm open to that. Like many people, I find ads a bit off-putting, but not unacceptable. And there are some ads I like. I'll give Meta a lot of credit here: the ads on Instagram add net value for me. I like the ads on Instagram. Whereas on Google, I feel like I know what I'm looking for, the first result might be better, and the ads are a nuisance to me. On Instagram, it's like, I didn't know I wanted this thing. It's cool. I've never heard of it. I never thought to search for it. I want this thing. So there's that kind of thing. But people have a very high level of trust in ChatGPT, even when it messes up, even when it hallucinates, even when it makes mistakes. People feel it's trying to help them, trying to do the right thing. If we break that trust, say you ask "What coffee machine should I buy?" and we recommend one that's not the best recommendation we could make but the one we get paid for, that trust disappears. So that kind of advertising doesn't work. I can imagine some other ads being perfectly fine, but we would need to be very careful to avoid the obvious pitfalls.

Ben

Then how big a problem is it, you know, extending the Google example: false content gets absorbed by the model, and then it recommends the wrong coffee machine just because someone fabricated 1,000 positive reviews of that coffee machine.

Sam Altman

So all of this is changing very quickly for us. This is one of those examples where people are doing these crazy things, maybe not even fake reviews, but hiring a bunch of people who look like real people, yes, really trying to figure out...

Ben

Writing good reviews with ChatGPT. Give me a review that ChatGPT would like.

Sam Altman

So this is... indeed. This is a very sudden shift. Six months ago, twelve months ago, we had never heard of this. Yes, definitely not. Now it feels like a real cottage industry has sprung up overnight doing exactly this.

Ben

No, they are very clever.

Sam Altman

So I still don't know how we will respond, but people will come up with ways.

Ben

This ties into another thing we've been worried about. You know, we're trying to figure out potential solutions, like blockchain and so on. But there's a problem: the motivation for creating content on the internet used to be that, you know, people would come see my content. If I wrote a blog, people would read it, and so on. With ChatGPT, if I just ask ChatGPT without browsing the internet, then who will create content, and why? Is there a theory of incentives? Or do you need something so as not to break the contract of the internet, which is: I create something, and I get rewarded for it, either with attention or money or something like that?

Sam Altman

Our theory leans more towards the idea that if we make content creation easier and don't break the fundamental way you can get some reward for it, then (content creation) will happen more.

Sam Altman

Taking Sora as the simplest example, since we've been talking about it, making an interesting video is much easier than before. Maybe at some point, doing this will earn you revenue sharing. Right now, what you get is... like internet likes, which is still very motivating for some people. Yes, people are creating much more content now than they ever did in any other type of video application. So

Ben

But does this mean the end of text?

Sam Altman

I don't think so. Just like people also...

Ben

Is it humans who are creating?

Sam Altman

Human-generated... you'd have to define what counts. Is it completely handmade? Is it tool-assisted?

Ben

Yes, I get it. Yes, probably nothing is completely without (AI) tool involvement.

Erik

Interesting. We just praised Meta. So I think I can ask you this question: the great talent war of 2025 has occurred, and the OpenAI team remains intact, as strong as ever, continuously launching incredible products. What can you say about everything that has happened this year, especially in this regard? I mean...

Sam Altman

Every year is exhausting. Yes, I remember the first few years of running OpenAI were the happiest years of my career, far surpassing any other time. It felt unbelievable.

Sam Altman

Running a research lab. Yes, doing this amazing, historically significant work with the smartest people. I got to watch that, which was very cool. Then we released ChatGPT, and everyone was congratulating me, and I knew my life was about to be completely upended. And it indeed was. The whole process has felt crazy ever since. Now it's been almost three years. I feel like it has actually gotten a bit crazier over time, but I'm more used to it. So it feels about the same.

Erik

We've talked a lot about OpenAI, but you have several other companies, like Retro Biosciences (in the longevity field), and energy companies like Helion and Oklo. Did you have an overall plan ten years ago to make some significant investments in these major areas? Or how should we view Sam Altman's approach to this layout?

Sam Altman

No, I just wanted to use my capital to fund interesting things I believe in. It's like I don't feel... it feels... yes, it feels like a good use of capital, and it's more interesting and meaningful to me than buying a bunch of art or something, and of course, the returns are better.

Erik

What about the so-called "human algorithm"? What do you think future AI will be most fascinated by?

Sam Altman

I mean, almost everything, I bet it's everything, my intuition is that AI will be fascinated by everything else that can be researched and observed. And you know, just like...

Erik

Finally, I love an insight you've shared when you talk about the next OpenAI... A common mistake investors make is pattern matching on previous breakthroughs, just trying to figure out, oh, what is the next Facebook, or what is the next OpenAI. And the next, you know, potential trillion-dollar company won't look exactly like OpenAI. It will be built on the breakthroughs OpenAI helped achieve, namely near-free AGI at scale, just as OpenAI was built on previous breakthroughs. So for founders and investors who are listening and trying to gain insight into the future: in a world where OpenAI has fulfilled its mission and near-free AGI exists, what types of company-building or investment opportunities do you think might arise that could excite you, when you put on the investor's hat instead of the company builder's hat?

Sam Altman

I don't know. I mean, I have some guesses, but they are like... I've learned...

Ben

You always...

Sam Altman

Wrong. You've learned that you're always wrong. I've learned profound humility on this point. I think if you try to reason it out on paper, you'll just say things that sound smart but that almost everyone is already saying. And it's hard to get that kind of conviction right. The only way I know to do this is to get deep into the front lines, explore ideas, talk to a lot of people. And I don't have time to do that anymore. Yes, I can only think about one thing right now. So I'd just be repeating what others say or stating the obvious.

Sam Altman

But I think it's a very important question if you're an investor. Or a founder; I think it's the most important question. And the way you figure it out is by building things, playing with the technology, talking to people, and immersing yourself in the world. I've always been very disappointed by how few investors are willing to support this kind of thing, even though it has always been a proven method. You (the interviewers) have done a lot of it, but most firms just chase whatever is currently hot. Most founders are the same. So I hope people will try.

Erik

We talked about how silly things like five-year plans can be in a constantly changing world. It feels like when I ask about your overall planning, you know, your career trajectory has always been to follow your curiosity, stay, you know, closely connected with the smartest people, closely connected with technology, and identify opportunities in an organic and incremental way.

Sam Altman

Yes, but AI has always been what I really wanted to do. I went to college to... I studied AI, and worked in an AI lab during the summer between my freshman and sophomore years. At that time, it didn't work at all. So I figured... I didn't want to work on something that was completely unworkable, and it was very clear to me that AI was completely unworkable then. But I've been an AI enthusiast since I was a kid, it's just like that.

Ben

It’s amazing, you know, with enough GPUs, enough data, and then suddenly it just clicks.

Sam Altman

And how hated it was at the time. When we started to figure it out, people just absolutely didn't believe it; the field very much hated it back then. Yes, I've looked into this. It's somehow not an attractive answer to the question.

Ben

"The bitter lesson."

Erik

The rest is history; perhaps we should end here. We are fortunate to have been partners on this journey. Sam, thank you very much for coming to the podcast. Thank you very much.

Sam Altman

Yes, thank you. Thank you.