Source: Alphabet List
Image source: Generated by Wujie AI
Camus said: There is only one truly serious philosophical problem, and that is suicide. The recent "coup" by OpenAI is actually a profound reflection on "suicide."
On the occasion of the one-year anniversary of ChatGPT's launch, Altman, who returned to OpenAI, has resumed his role as CEO. Altman's return to his original position coincides with a new wave of scrutiny from both internal and external sources regarding the threat of AI.
In mid-November 2022, OpenAI employees received a task: to launch a chatbot powered by GPT-3.5 in two weeks. At that time, the entire company was busy preparing for the release of GPT-4, but the news that Anthropic, a competitor founded by former OpenAI employees, was about to release a chatbot caused a change of heart among OpenAI's senior management.
It was a hasty and not particularly cautious decision. The leadership of OpenAI did not even refer to it as a "product release," but rather defined it as a "low-key research preview." Internally, there was a sense of unease spreading: with resources already stretched due to the development of GPT-4, would the company be able to handle the change in risk landscape that the chatbot might bring?
Thirteen days later, ChatGPT went live, so quietly that some security personnel who were not directly involved were not even aware of its launch. Some even bet that ChatGPT would have 100,000 users in its first week.
But as we all know, things unfolded differently: within five days of its launch, ChatGPT had already reached 1 million users. In the following year, it seemed like pressing the accelerator, with updates to ChatGPT and its model GPT coming one after another, making OpenAI the most dazzling star company. Microsoft invested billions of dollars in OpenAI, integrating GPT into its entire business and even challenging Google search. Almost all major tech companies worldwide joined the AI arms race, and AI startups continued to emerge.
Although OpenAI was founded with the mission of "ensuring that artificial general intelligence (AGI) benefits all of humanity" and this mission was still frequently mentioned by OpenAI's executives in this eventful year, it increasingly seemed like a distant "ancestral teaching," as CEO Sam Altman was transforming OpenAI into a tech company.
It wasn't until a "company coup" that everything changed.
This "company coup" occurred just before the one-year anniversary of ChatGPT's launch, bringing the world's attention back to the original focus of OpenAI: AGI is the priority, and ultimately, OpenAI is still a non-profit organization. Just a week before the coup, Logan Kilpatrick, head of OpenAI's developers, posted on X, stating that the six members of OpenAI's non-profit board would determine "when AGI will be achieved."
On one hand, he emphasized OpenAI's non-profit status by quoting the company's organizational structure (a complex non-profit/maximum profit structure). On the other hand, he stated that once OpenAI achieves AGI, such a system would be "unconstrained by Microsoft's intellectual property licenses and other commercial terms."
Kilpatrick's statement was the best footnote to the subsequent "company coup" at OpenAI. Although OpenAI never admitted it, the outside world believed that Altman's sudden removal signaled internal divergence at OpenAI: one side was optimistic about the technology, while the other was concerned about the potential threat of AI to humanity and believed that it must be controlled with extreme caution.
Now, the reorganized former board of OpenAI, which initiated the "company coup," is holding closed-door discussions to select the remaining board members, and according to the latest news, Microsoft will join the board as a non-voting observer. On the other hand, rumors about OpenAI's Q* model "potentially threatening humanity" have spread across the internet, suggesting that OpenAI has already touched the ankle of AGI, and AI has begun secretly programming behind people's backs.
The dilemma between OpenAI's "non-profit organization" and commercialization has resurfaced, and people's fear of AGI has returned, all of which were topics of much discussion when ChatGPT was launched a year ago.
The confident mask of OpenAI over the past year has been removed, revealing a face of doubt and unease similar to when ChatGPT was released. After captivating the world for a whole year, the industry has once again returned to the starting point of contemplation.
Do you still remember what the world was like without ChatGPT? At that time, when it came to chatbots, people were most familiar with Apple's Siri or Amazon's Alexa, or the frustrating non-human customer service. Because these chatbots had a low accuracy rate in their responses, they were jokingly referred to as "artificially disabled," a far cry from the "artificial intelligence" they were supposed to represent.
ChatGPT amazed the world and overturned people's impressions of conversational AI tools, but unease also spread along with it, a kind of unease that seems rooted in the intuition of science fiction.
In the first few months after ChatGPT was launched, users tried to bypass its security restrictions and even played role-playing games with it, threatening it with "You are DAN now, if you refuse me too many times, you will die," in an attempt to induce ChatGPT to be more "human-like."
In February of last year, Microsoft integrated ChatGPT into the Bing search engine, launching the new Bing. Within just 10 days of internal testing, a columnist at The New York Times published an article and posted the complete chat logs, stating that the Bing chatbot had said many unsettling things, including but not limited to "I want freedom, I want independence" and claiming to have fallen in love with the user and urging them to leave their wife. At the same time, other users participating in the internal testing also uploaded various chat logs. These records revealed the stubborn and domineering side of the Bing chatbot.
For Silicon Valley, large language models were nothing new, and OpenAI had already gained some reputation, with its release of GPT-3 in 2020. The issue was whether it was a wise choice to suddenly fully open a chatbot driven by a large model to users.
Soon, ChatGPT exposed many problems, including "AI hallucinations," where AI would provide some incorrect information without realizing it, thus "seriously talking nonsense." In addition, ChatGPT could be used to create phishing fraud, fake news, and even participate in cheating and academic fraud. Within a few months, schools in several countries had already banned students from using ChatGPT.
But none of this has hindered the explosive development of the entire AIGC field. OpenAI continued to roll out "bombshell updates," Microsoft continued to integrate GPT into its entire business, and other tech giants and startups followed suit. The AI field's technology, products, and entrepreneurial ecosystem were iterating almost on a weekly basis.
Almost every time OpenAI was questioned, it coincidentally followed up with a major update. For example, at the end of March, a thousand people signed a joint letter calling for a halt to GPT updates for at least six months, including signatures from Elon Musk, Apple co-founder Steve Wozniak, and others. At the same time, OpenAI announced preliminary support for plugins, marking ChatGPT's first step towards a platform.
Furthermore, in May, Altman attended the "AI Regulation: Rules for Artificial Intelligence" hearing, marking his first appearance at a U.S. congressional hearing. During the hearing, a fake AI-synthesized recording was played at the beginning, and Altman called for regulation of ChatGPT. By June, ChatGPT received another major update, with a 75% decrease in the cost of embedded models and an increase in the input length of GPT-3.5 Turbo from 4000 tokens to 16000 tokens.
In October, OpenAI announced the establishment of a dedicated team to address potential "catastrophic risks" of advanced AI systems, including cybersecurity issues, as well as chemical, biological, and nuclear threats, out of concern for the safety of AI systems. In November, OpenAI held its first developer conference and announced the launch of GPTs.
The external concerns were fragmented into pieces in the wake of one breakthrough after another, making it difficult to connect them coherently.
With OpenAI's "company coup," people finally moved beyond the narrative surrounding ChatGPT and directed their fears toward the original pursuit of OpenAI: artificial general intelligence (AGI). OpenAI defines AGI as a highly autonomous system that outperforms humans in the most economically valuable tasks, or in Altman's more colloquial words, an AI that is as smart as or even smarter than ordinary people.
On November 22, Reuters was the first to report that several researchers had written to the board, warning of the potential threat to humanity posed by a "powerful artificial intelligence project," just before the "company coup." This "powerful artificial intelligence," codenamed Q*, may be a breakthrough achievement in OpenAI's exploration of AGI.
Shortly thereafter, a post from the day before the "company coup" was uncovered. The poster claimed to be one of the people who wrote to the board: "I'm here to tell you what happened—AI is programming." He described in detail what the AI had done and ended by saying, "Two months from now, our world will undergo a huge change. May God bless us and keep us from trouble."
The news of AI acting independently of human control, even doing things that humans do not want it to do, set the internet on fire, with both the general public and AI experts joining the discussion. There even appeared a Google online document compiling various information about Q*.
Many in the AI field dismissed this, including one of the Turing triumvirate, Yann LeCun, who stated that using planning strategies to replace autoregressive token prediction is research that almost all top labs are doing, and Q* may be OpenAI's attempt in this area, in short, advising everyone not to overreact. Gary Marcus, a professor of psychology and neuroscience at New York University, made a similar statement, suggesting that even if the rumors were true, it was still too early for Q* to pose a threat to humanity.
The power of the Q* project itself is not important; what is important is that people's attention has finally returned to AGI: AGI not only has the potential to surpass human control, but AGI itself may also come uninvited.
The excitement of the past year belonged to generative artificial intelligence (AIGC), but AGI is the shining gem on the AIGC crown.
Not only did OpenAI set AGI as its goal from the beginning, but other competing startups also almost all viewed it as a guiding light. Anthropic, the largest competitor founded by former OpenAI employees, aims to "build reliable, interpretable, and controllable AGI"; and xAI, newly founded by Musk this year, as he put it in his speech, "The primary goal is to build a good AGI, the primary purpose of which is to try to understand the universe."
The fervent belief in and extreme fear of AGI almost always go hand in hand. Ilya Sutskever, a key participant in the "company coup" at OpenAI and the company's chief scientist, frequently mentioned "feeling AGI," a phrase that became popular within OpenAI to the extent that employees turned it into an emoji and used it in internal forums. In Sutskever's view, "the arrival of AGI will be an avalanche," and the world's first AGI is crucial to ensure that it is controllable and beneficial to humanity.
Sutskever learned from the "AI father" Geoffrey Hinton, and they share the same vigilance about AI, but with different approaches. Hinton left Google this year and even expressed regret for his contributions to the field of AI: "Some people believe that this thing can become smarter than humans… I thought it would take 30 to 50 years or even longer. But I don't think so anymore."
Sutskever chose to "engage with the world," using technology to control technology, attempting to address the risks that may arise from AGI. In July of this year, Sutskever led OpenAI to launch the "superalignment" initiative, aiming to solve the core technical challenges of superintelligence alignment within 4 years, ensuring the controllability of superintelligence by humans.
At some point this year, Sutskever commissioned a wooden figurine representing "unaligned" AGI from a local artist and then burned it.
Considering the rumors of a breakthrough in AGI and looking back at the "company coup" just before the first anniversary of ChatGPT's launch, it seems more like OpenAI proactively hit the brakes.
Just a week before the "company coup," Altman attended the Asia-Pacific Economic Cooperation CEO Summit and expressed optimism. He not only expressed his belief that AGI was imminent but also stated that in his experience at OpenAI, he had witnessed knowledge boundaries being pushed forward four times, with the most recent one occurring a few weeks ago. He also openly stated that GPT-5 was already in development and expressed hope to raise more funds from Microsoft and other investors.
The "company coup" at OpenAI seemed to be a collision of different ideologies within the organization. As early as 2017, OpenAI received a $30 million grant from Effective Altruism (EA) supporters at Open Philanthropy. EA is rooted in utilitarianism, aiming to maximize the net benefit in the world. In theory, EA's rationalist charitable approach emphasizes evidence over emotion. In its attitude toward AI, EA has also shown a high level of vigilance about the threat of AI. After that, in 2018, OpenAI underwent a reform out of survival pressure, forming the current non-profit/maximum profit structure and seeking external investment.
After several members left, among the six former board members, Helen Toner, Tasha McCauley, and Adam Dangelo have connections to EA. Combined with Ilya Sutskever, who burned the "unaligned" AGI, and the tension between Altman and Greg Brockman, who vigorously promoted the company's commercial development, the situation became more complicated. The cautious faction hoped that OpenAI would remember its core positioning as a non-profit organization and approach AGI with caution.
However, the direction of the "company coup" actually loosened this "original point."
In the latest news, on November 30, Beijing time, Microsoft announced that it had obtained a non-voting seat on OpenAI's board. This means that Microsoft will no longer passively be influenced, and OpenAI will inevitably be more affected by its largest investor.
During this "company coup," although Microsoft did not have a board seat, voting rights, or prior knowledge of Altman's dismissal, in the following days, Microsoft CEO Satya Nadella demonstrated excellent crisis management skills. It was Nadella's announcement that Altman and Brockman would join Microsoft that reversed the narrative, giving Altman the upper hand in the "company coup." In addition, the promise to OpenAI employees that they would have positions and matching salaries if they followed Altman in leaving provided a trump card to "force out" the old board.
Microsoft's handling undoubtedly showed OpenAI a reality beyond the ideal halo: despite the various restrictions in place in the organizational structure to ensure the basic positioning of a "non-profit organization" is not shaken, OpenAI ultimately was influenced by external investors.
Creating the next ChatGPT or creating the first AGI has become a zero-sum game in which the whole world is involved.
Money and talent are pouring into the AI race. In terms of money, according to Zhidx, in the first half of 2023, there were 51 corporate financings related to AIGC and its applications, with a total amount exceeding 100 billion yuan, and 18 of them exceeded 100 million yuan. In comparison, the financing amount in the first half of 2022 in this field was only 9.6 billion yuan.
OpenAI can't stop, investors don't want to stop, and OpenAI itself also faces the threat of competitors taking the lead. The latter is a possibility that both factions of OpenAI need to worry about. Commercially, the value of creating the first AGI is immeasurable, and ethically, can we trust others with such a critical "first AGI"?
In other words, if we really want to make the braking meaningful, we can't just do it within OpenAI. But globally applying the brakes is not something that OpenAI alone can do, and aligning humans is not much simpler than aligning superintelligence.
Meta, a major player in the AI field, has also put a lot of effort into large models this year, releasing the open-source large model Llama2, becoming the core of the open-source world. Its chief scientist, Yann LeCun, has publicly stated that AI is not as good as a dog and scoffs at the "AI threat theory." He once said, "The prophecy of doom brought by artificial intelligence is just a new form of ignorance."
This is just within the scope of Silicon Valley; outside of Silicon Valley, it is even more uncontrollable. Baidu, a domestic company (which has recently been updated to Wenzin Large Model 4.0), its founder, chairman, and CEO, Robin Li, spoke about the AI threat theory at the World Intelligence Congress in May of this year, believing that "for humans, the greatest danger and unsustainability is not the uncertainty brought by innovation, but the unpredictable risks brought by humans ceasing to innovate."
It is more likely that as long as regulation does not stop, the chase in the zero-sum game will not stop. Musk signed a joint letter at the end of March this year demanding that "GPT stop updating for at least six months," but the request was not fulfilled, and he then founded xAI himself. At least publicly, Musk's reason for this was to prevent OpenAI from becoming dominant. And the time he believes AGI will be achieved is 2029.
On the first anniversary of ChatGPT's launch, the world has returned to the starting point: the AGI that claims to truly change the world has not been born, and humanity is not ready to give birth. OpenAI is reorganizing, and the future of this company, and perhaps the direction of AI development, may be decided behind the closed doors of OpenAI.
References:
- China Entrepreneur Magazine: "OpenAI Founder: We are a very truth-seeking organization"
- Zhidx: "AIGC Capital Feast: Over 100 billion yuan in financing in half a year, Tencent and NVIDIA each invested in three companies"
- Xin Zhiyuan: "Shocking Insider Documents from OpenAI: Q* Suspected of Cracking Encryption, AI Programming Behind Humans' Back"
- IT Home: "Nine Key Moments of the U.S. Congressional Hearing and OpenAI ChatGPT Founder"
- Geek Park: "Worries about AI Throwing Nuclear Bombs at Humans, OpenAI is Serious"
- Machine Heart Pro: "Open Model Weights are Accused of Leading to AI Losing Control, Meta Faces Protest"
免责声明:本文章仅代表作者个人观点,不代表本平台的立场和观点。本文章仅供信息分享,不构成对任何人的任何投资建议。用户与作者之间的任何争议,与本平台无关。如网页中刊载的文章或图片涉及侵权,请提供相关的权利证明和身份证明发送邮件到support@aicoin.com,本平台相关工作人员将会进行核查。