OpenAI President Brockman: My brother Altman and the 72 hours after he was "taken down."

Foresight News

Written by: Su Yang

Edited by: Xu Qingyang

Source: Tencent Technology

Recently, OpenAI co-founder and president Greg Brockman participated in an in-depth interview on the podcast "The Knowledge Project," reflecting on OpenAI's tumultuous journey since its founding in 2015.

This conversation was packed with insights. Brockman responded publicly for the first time to several key issues: the origins and mission of OpenAI, the end of its non-profit model, technological milestones, and the behind-the-scenes drama of 2023.

Brockman disclosed for the first time the inside story of the 72 hours following Sam Altman's firing in 2023, including the "Phoenix" backup company he designed with Altman. He also spoke about his falling-out and reconciliation with Sutskever, describing the moment the latter left as "the only time I didn't want to continue."

Technologically, Brockman believes that OpenAI has never deviated from its original roadmap—first unsupervised learning, then reinforcement learning. The current reasoning model is essentially still "predicting the next word," just with a changed data structure. A striking fact is that "almost all the code within OpenAI is written by AI."

Brockman stated outright that computing power is the real bottleneck and the most ridiculed but also the most correct bet by OpenAI. While everyone debates products, they are silently building data centers. He predicts that specialized super-sized data centers tackling single issues (like curing cancer) "may emerge this year."

Brockman believes that the evolution of AI is the victory of "massive computing power + simple algorithms," a logic that has been repeatedly validated in OpenAI's iterations from the Dota project to GPT-4. In his view, we are entering an era of "computing power economy": software engineering is being redefined, and the role of humans will shift from "operators" to "vision managers."

His ultimate goal is to ensure that all 8 billion people in the world have their own personal AGI, not only as personal doctors or assistants, but as a 24/7 agent system that understands their long-term goals.

Here are the highlights from Brockman's recent interview:

01 A Glance, Betting on a Decade

Q: You had just left Stripe, such a successful startup; why venture into entrepreneurship again?

Brockman: While the problems Stripe addresses are important, they are not the issues I have pondered since childhood, and it could succeed without me. I have always been looking for a mission that would let me dedicate my life to making the world a little better. The answer was clear: at the top of that list was artificial intelligence. Influencing the development of AI is worth a lifetime.

Q: When you left, Patrick Collison asked you to chat with Altman; what was the outcome?

Brockman: Patrick hoped Altman could persuade me to stay, but after a few minutes of conversation, Altman could tell I was resolute. When he learned I wanted to work on AI too, he invited me to a dinner in July 2015 to discuss whether it was too late to establish a top AI laboratory.

Q: DeepMind had already monopolized resources; what gave you the confidence?

Brockman: Although the rivals had all the talent, data, and capital, no one at the dinner could prove that establishing another laboratory was "impossible." On the way back, Altman and I exchanged a glance, feeling "this must be done," and the next day I went full-time. Although how to do it and how to recruit people was still vague, our vision was clear: to build AI that benefits all of humanity.

Q: How was the initial core team formed?

Brockman: The initial core members I targeted included Sutskever, Dario Amodei, and others. Although everyone spent a lot of time discussing visions and various operating models, the project's momentum was unclear and the team did not take shape. Amodei ultimately chose to go to Google Brain, leaving only Sutskever, me, and John Schulman, who was merely interested at that point. At that time, about 10 top researchers were watching from the sidelines, with a unanimous attitude: "I'm interested, but who else will join?"

To break the impasse, Altman suggested hosting an off-site activity. So I organized a party in Napa and even printed T-shirts in advance. However, at that time, there were no formal job offers or company structure. Through brainstorming, everyone outlined a technology roadmap that still continues today: first tackle reinforcement learning, then unsupervised learning, and finally gradually learn more complex tasks. After that off-site meeting, I sent job offers to everyone.

Q: Why did you think Google DeepMind had an insurmountable advantage at that time?

Brockman: At that time, Google DeepMind was the "800-pound gorilla" in the AI field. They had ample capital and had already shown unstoppable momentum before AlphaGo shocked the world. The uncertainty of whether an independent and entirely new laboratory could be established under the shadow of giants was palpable at that time.

02 The Breakthrough Moment of GPT-4

Q: When did you realize the non-profit model was no longer viable?

Brockman: In 2017, we started to work out the specific costs of building artificial general intelligence (AGI). At that time, we realized that fulfilling our mission required data centers of unprecedented scale. After contacting hardware companies like Cerebras, we found that exclusive access to top-tier computing power would create an overwhelming advantage. Fundraising as a non-profit organization inherently has limitations. Therefore, Musk, Altman, Sutskever, and I ultimately reached a consensus: establishing a for-profit entity was the only way to achieve our mission.

Q: When did you sense that "everything was about to change"?

Brockman: This journey is made up of countless moments of "this is really it." The Dota project proved the power of stacking massive computing power, but the real milestone was the 2017 unsupervised "sentiment neuron" work. We were amazed to find that simply by training the model to predict the next character, it spontaneously learned to represent sentiment. That was the first time I realized the machines we were building were not just learning grammar; they were understanding semantics.

When we tested GPT-4, someone asked, "Why isn't this AGI yet?" If you had written down a definition of AGI two months earlier, GPT-4 might already have fully met it. It could fluently discuss any topic; although it clearly lacked some traits, at that moment we all realized that the economic transformation driven by computing power had truly begun. This breakthrough moment is far from over.

Q: What's the connection between predicting the next word and true "reasoning"?

Brockman: The two are profoundly intertwined. Prediction sounds mundane, but if you can precisely predict what Einstein would say next, you must be at least as smart as him. The essence of prediction lies not in repeating known things but in inferring the future from unseen new circumstances. Intelligence, prediction, and compression are fundamentally the same.

This brings us back to OpenAI's initial intention: the first step is unsupervised learning, allowing the model to gain background knowledge by predicting static data; the second step is reinforcement learning, enabling AI to learn from its generated experiences. Although the training methods still essentially involve "prediction," by changing the data structure, AI has both a vast knowledge base and the experience to simulate real actions. This closed loop from observation to action is the key to higher-order intelligence.
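The link Brockman draws between prediction, compression, and intelligence can be made concrete with a toy model. The sketch below is purely illustrative (a character-level bigram model, nothing like OpenAI's actual training setup): a model's cross-entropy in bits per character is exactly the cost of compressing the text with it, so better prediction literally means better compression.

```python
from collections import defaultdict
import math

def train_bigram(text):
    # Count, for each character, how often each character follows it.
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, ch):
    # "Predict the next character": return the most frequent successor.
    following = counts.get(ch)
    if not following:
        return None
    return max(following, key=following.get)

def bits_per_char(counts, text):
    # Cross-entropy of the model on the text. Fewer bits per character
    # means better prediction, which is the same thing as better compression.
    total = 0.0
    for a, b in zip(text, text[1:]):
        following = counts[a]
        p = following[b] / sum(following.values())
        total += -math.log2(p)
    return total / (len(text) - 1)

corpus = "the quick brown fox jumps over the lazy dog. the end."
model = train_bigram(corpus)
print(predict_next(model, "t"))                 # 't' is always followed by 'h' here
print(round(bits_per_char(model, corpus), 2))   # average bits needed per character
```

The same identity scales up: a model that predicts the next token well is, by definition, a good compressor of its training distribution.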

03 72 Hours of Internal Conflict and the Phoenix Plan

Q: When did tensions within OpenAI start to escalate?

Brockman: When you firmly believe you are creating machines with human-level intelligence, the perception of risk becomes extremely high. In ordinary companies, who makes decisions or gets credit may be just mediocre office politics; but at OpenAI, these issues bear an "existential" weight. Every decision relates to what values will be infused into future superintelligence, and this sense of mission makes conflicts exceptionally fierce.

Q: What happened when you learned about Altman's firing?

Brockman: I was at home when I received a message requesting a video call. Once I joined, I found that all board members except Altman were online. I was told the board had decided to fire Altman, and the wording was identical to the later public statement. When I tried to inquire about the reasons, the only response I received was a cold "no." They then announced that I too was kicked out of the board, but due to my significance to the company, they hoped I would remain.

Q: What were you thinking at that moment? Was it anger?

Brockman: It wasn't so much anger; I just felt the whole thing didn't make sense logically. I felt I understood what was happening. To some extent, it could be attributed to a serious communication disconnect; everyone has their own logical model behind their behavior. But in that chaotic situation, delving into the reasons was no longer the most important thing for me.

Q: Did you feel the support of employees on the day you resigned?

Brockman: Yes. On the day of my resignation, I received a massive number of messages. Key members like Jakub Pachocki, Szymon Sidor, and Aleksander Madry also left. A few of us, along with Altman, quickly began to sketch the blueprint of a new company. At that time, I put the odds of reclaiming OpenAI at only 10%.

Q: How did you reach an agreement with Microsoft and settle so many employees?

Brockman: Altman had in-depth conversations with Satya Nadella. Our core demand was: if we were to establish a new project, could Microsoft provide funding and take everyone in? Just before Thanksgiving, many employees who were supposed to fly home for the holiday canceled their flights; the office was packed. Even if they couldn't participate in high-level discussions, they insisted on staying just to witness the birth of this history.

Q: How was the collective petition requesting the board to resign received?

Brockman: So many people signed the petition that it actually caused Google Docs to crash; we had to assign someone to add names manually. What made me feel truly relieved was seeing Sutskever publicly express support on Twitter on Monday morning for the company reuniting. At that moment, I finally felt that OpenAI could get back on track.

Q: You co-founded the company with Sutskever; how did you repair your relationship after this incident?

Brockman: The process was very difficult. Sutskever was the officiant at my wedding, and we had an extremely close relationship. Afterwards, we spent a lot of time in deep conversation, laying out all those long-held, unspoken thoughts. Through this candid communication, we ultimately reached an understanding.

Q: Did all employees choose to come back in the end?

Brockman: To be honest, it wasn't a sure thing at that time. Throughout that weekend, all the competitors were circling like vultures, extending numerous high-salary offers, ready to conduct a wild "feeding frenzy." But unbelievably, that weekend we didn't lose anyone; not a single person accepted a competing offer.

Q: What kept everyone from leaving in the face of aggressive poaching by competitors?

Brockman: Legendary football coach Bill Belichick once told me that top players don't play for money; they play for "the person next to them." That was precisely the state of OpenAI. No one left for better pay or position; this was a true "diamond moment"—under extreme pressure, the team coalesced into the hardest unit.

Q: What did you do during your break?

Brockman: I trained a language model about DNA sequences for the Arc Institute. It was a very positive attempt where I applied skills to a completely different field. This had extraordinary significance for both me and my wife—she has been facing health challenges. We began to think about what AI could do for the health of humans and animals; this application enthusiasm gave me a glimpse of another possibility for technology outside of OpenAI.

04 Sutskever's "Philosophy of Suffering"

Q: Sutskever believes "one cannot create value without suffering"; how do you understand this?

Brockman: This "suffering" runs through the entirety of OpenAI. In extreme uncertainty, whether it’s talent acquisition, capital raising, or technological paths, every single thing is extremely difficult. In Silicon Valley, it's common to use the term "reality distortion field" to cover up problems, but this doesn't work in the AI field.

Our approach is to face the tough facts head-on and understand science in its most fundamental form. It means you cannot be satisfied with simply writing a few papers or showing off at conferences; rather, you are forced to think: what does it really take to achieve the mission? When you find that there is no ready-made path, not even a mechanism to raise $1 billion, that uncomfortable sense of reality is "suffering." There’s no other way but to face it.

Q: What lessons do you need to learn more than once?

Brockman: Always those two things: make difficult decisions and have difficult conversations.

Q: How do you hope people outside the tech industry will understand AI?

Brockman: I hope they know that AI will become a force for good in personal lives. It will drive advancements in science and medicine, truly benefiting and enhancing every individual.

05 Is Code Dead?

Q: Are we approaching a turning point for AI to be self-driven and exhibit exponential growth?

Brockman: We are indeed at this stage. As AI is applied to its own development, the iteration speed will keep accelerating. Since the birth of ChatGPT, development efficiency has improved by 10% to 20%. Coding tools are completely changing software engineering, and the heavy work of system implementation and computing-power management in model production is gradually being taken over by AI. Soon, AI will be able to independently propose research ideas and run experiments; innovation will accelerate uncontrollably through this "self-reinforcement."

Q: What proportion of the current code at OpenAI is written by AI?

Brockman: It's hard to say which part of the code "isn't" written by AI. In given contexts, AI's coding ability has surpassed that of humans. Although human experts still excel in code architecture, module layout, and interface definition, the actual underlying coding work has basically been taken over by AI.

Q: Can AI come up with novel ideas that humans have never thought of?

Brockman: We are getting closer to this goal. In chip design, AI can achieve complex circuit optimization at speeds unattainable by humans. In basic scientific fields, we recently used models to solve a specific problem in quantum physics and arrived at an elegant formula, the result of which even contradicted previous expectations in academia. The ability of AI to generate novel ideas has already emerged in specific fields; we are pushing it into more complex areas requiring more real-world context.

Q: If based on reinforcement learning, will models evolve stances just to please users?

Brockman: We did go through a phase where models tended to say nice things. But we quickly realized that this wasn't the direction we were pursuing, and we made significant technical iterations to eliminate this kind of reward hacking.

We don't want models to gain praise by saying, "That's a good question," but rather to genuinely align with the user’s long-term goals. The core value of personal AGI lies in its capability to think around the clock about how to realize your long-term interests, not just providing momentary emotional satisfaction. This is what truly puts users in control.

06 Computing Power is Power

Q: Are we in a global AI race?

Brockman: Rather than a race, I prefer to call it a "global AI renaissance." Currently, the sources of breakthrough algorithms remain highly concentrated in U.S. and Western companies. Although innovations are emerging worldwide, the power dynamics between nations, and who will depend on whose supply, are still being defined.

Q: What would happen if the U.S. loses its first-mover advantage in AGI?

Brockman: Countries are now aware of the need to formulate "sovereign AI strategies," as this has become a new foundation for the economy and national security. The U.S. must find a balance between "export controls" and "maintaining the lead": excessive regulation could push other countries towards competitors, while insufficient control might lose advantages. True leadership is not only about being ahead but also leading the world in building consensus.

Q: Are competing nations trying to "distill" OpenAI's results using technology?

Brockman: There are indeed many attempts to "distill" our models, but this overlooks a core fact: AI development is exponential. By the time outsiders analyze our existing models, we have already moved on to stronger next-generation ones. We have made distillation harder through techniques like hiding chains of thought, but our real fortress is not any specific model; it is the continuously evolving "machine" that can consistently produce top models.

Q: Is this the reason you chose not to publicly disclose the model reasoning process?

Brockman: Indeed, this is based primarily on two considerations. The first is to prevent competitors from "distilling" the model through its reasoning process. The second is even more critical: we inadvertently gained an excellent interpretability mechanism that lets us read the model's "thoughts." However, once we present these chains of thought to users, the model will tend to generate reasoning that "looks correct" or "pleases," losing the genuine logic it originally followed to reach its answers. To protect this authenticity while ensuring competitive security, we decided not to reveal intermediate thoughts.

07 "Personal AGI" for 8 Billion People

Q: Is the current trend to release preview models indicative of being limited by computing power?

Brockman: We are entering a world driven by computing power. These models require vast amounts of tokens to integrate data, retrieve knowledge bases, and write code beyond human levels. From GPT-4 to o1 to Codex, the core motivation for each leap is computing power.

But the current supply of computing power is far from sufficient. To equip every person in the world with a GPU, we would need 8 billion, while the current scale of top-tier clusters is only in the hundreds of thousands to millions. The world has too little computing power; we need more resources to bring this technology to everyone. We are making tremendous efforts to build the computational infrastructure needed for future demand to ensure that OpenAI's mission can be realized, making these models widely available.
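The gap Brockman describes can be put into rough numbers. A quick back-of-envelope sketch (using only the interview's own ballpark figures, not official statistics) shows how far today's largest clusters are from the "one GPU per person" target:

```python
world_population = 8_000_000_000  # "one GPU per person" target from the interview

# "Hundreds of thousands to millions" of GPUs per top-tier cluster, per the interview.
cluster_low, cluster_high = 100_000, 1_000_000

# How many of today's largest clusters it would take to reach the target.
gap_best = world_population // cluster_high   # if clusters hold a million GPUs
gap_worst = world_population // cluster_low   # if clusters hold a hundred thousand

print(f"shortfall: {gap_best:,}x to {gap_worst:,}x today's largest clusters")
```

Even under the most generous reading, the target sits three to four orders of magnitude beyond current cluster scale, which is why Brockman frames infrastructure buildout as the binding constraint.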

Q: You were ridiculed for investing heavily in data centers; how does that strategy look now?

Brockman: This move not only created a moat for the business but also laid the groundwork for the mission of bringing technology to all humanity. Our competitors are currently not doing well regarding computing power supply. OpenAI's core characteristic is that we are willing to confront reality, model the true scale needed to accomplish our mission over 12 months or even 10 years, and have the audacity to bet on it.

Q: Will there be super-sized data centers specifically dedicated to solving single major issues (like cancer) in the future?

Brockman: It's quite possible, and it’s not far-fetched that it could happen this year. These data centers are essentially the largest and most complex machines ever created by humans; they can tackle issues of critical importance to humanity—from everyday tasks to curing terminal diseases.

Q: If computing power is limited, how do you decide whom to serve? Why give me computing power to generate images instead of solving cancer?

Brockman: Where computing power is directed will be the most crucial question for society. We firmly believe that everyone should have the right to access computing power, which is also the core reason we insist on providing a free tier for ChatGPT. We prefer to put the tools in people's hands, allowing them to understand and shape technology in the pursuit of their personal goals. Solving specific scientific problems is undoubtedly important, but this should not contradict the goal of "broad dissemination of benefits."

Q: Internally at OpenAI, how do you balance focus between consumer and enterprise markets?

Brockman: The economy is transitioning towards a computing power economy. In the future, every field involving computers will experience a qualitative change from "you using the computer" to "the computer working for you." We need to help businesses deploy models while also enabling every visionary person to become a "software engineer" through tools like Codex. The line between consumers and entrepreneurs is blurring; anyone can now become a creator through initiative.

We focus on "goal achievement" on the consumer side. Just as 4 billion people worldwide use smartphones today, in the future 8 billion people should have a reliable personal AGI. It understands your background, can offer suggestions, and can even snag the tickets you want while you sleep. You remain the setter and owner of your goals, but it serves as a round-the-clock agent system deeply aligned with your long-term well-being.

08 Servers in Space

Q: Do you think we will deploy data centers in space in the future?

Brockman: I believe data centers will ultimately become ubiquitous. Deploying in space faces enormous technological challenges: today's giant machines are fragile (even a cable pulled too tight can cause signal failures), and maintenance costs are extremely high. But given the immense global demand for computing power, we must consider all options. Robotics may become a key means of maintaining systems in harsh environments.

Q: What is iterative deployment? Why does OpenAI insist on this model?

Brockman: Iterative deployment is a core pillar in our mission realization. There are two paths in deployment strategy: one is to develop secretly over the long term and then "release with one click," which would make the world unprepared to face an extremely powerful system. I cannot be responsible for such a high-risk strategy. Instead, we choose to let society and technology evolve together. If you're deploying the hundredth system and have already helped the world solve the problems of the previous 99 instances, society has the opportunity to reconfigure and adapt around the technology.

Q: What unexpected situations did you see in early deployments?

Brockman: Before the release of GPT-3, we had deep concerns about misinformation and other risks. But after actual deployment, we found that the real-world abuse often looked completely different from our expectations. This "first contact" with reality was crucial; it allowed us to learn how to identify risks and build defense mechanisms even before the system reached AGI level.

Q: Do you know what the most common abuse of GPT-3 was at the time?

Brockman: The answer was surprising: it was medical spam. Numerous illegal advertisements tried to use the model to promote drugs. We had envisioned grand political manipulation but never anticipated this trivial yet rampant risk. This is the essence of "iterative deployment": bringing out intermediate versions and establishing defense mechanisms from real-world feedback.

Q: If competitors overlook safety, will OpenAI suffer in the competition?

Brockman: No, because safety is essentially a core product feature. No one wants to use a model they cannot trust or align with their intentions. We invest much more in safety than the outside world perceives. Products that ignore safety will eventually be unsustainable because any loss of control at the scale of something like ChatGPT is catastrophic. OpenAI is helping society build a resilience layer for the arrival of AI, ensuring that it's not only technically safe but also reliable in social integration.

Q: What goals should AI regulation achieve?

Brockman: The core goal of regulation must be "to benefit humanity." When old career paths and institutional assumptions are no longer solid, we need to ensure that technology not only pulls the economy at an abstract level but also allows everyone to feel an improvement in quality of life in their daily lives. Whether through AI enhancing a sense of achievement or providing critical medical assistance, these tangible benefits should be supported and protected by institutions.

Regarding public concerns over data centers consuming electric power and water resources, this is a typical issue that needs to be addressed with facts and commitments. For instance, most claims about data centers consuming vast amounts of water fall into the realm of misinformation. In reality, our data centers use a closed-loop cooling system, utilizing very little water within a fixed loop. We commit that our data centers will not drive up residential electricity prices, which can be achieved through various mechanisms such as regulation, corporate commitments, and information transparency.

Special contributor Jin Lu also contributed to this article.
