OpenAI's "eating melon" big news is here again.
Source: "The New Yorker"
Translation: Wuji, Tencent Technology
According to the foreign media outlet "The New Yorker", before the boardroom "palace fight" that erupted at the artificial intelligence startup OpenAI last month, the company and Microsoft had already formulated an ambitious yet safety-minded protocol for releasing artificial intelligence. However, OpenAI's then board of directors completely upended the strategy the two companies had so carefully planned.
The following is the full text of the article:
On the Friday before Thanksgiving this year (November 17th), around 11:30 in the morning, Microsoft CEO Satya Nadella was in his weekly executive meeting when a panicked colleague interrupted to inform him of a phone call. A senior executive from the artificial intelligence startup company OpenAI explained that within the next 20 minutes, the company's board of directors would announce the dismissal of OpenAI's co-founder and CEO, Sam Altman. This marked the beginning of a five-day "palace fight" at OpenAI. Internally, Microsoft referred to this crisis at OpenAI as "the Turkey-Shoot Clusterfuck."
The usually easygoing Nadella was exceptionally surprised at the news, to the point where he was momentarily at a loss for words. He had worked closely with Altman for over four years and had come to appreciate and trust him. Moreover, their collaboration had just led Microsoft to host its largest product launch in a decade: a multitude of cutting-edge artificial intelligence assistants built on OpenAI's technology, integrated into Microsoft's core productivity applications such as Word, Outlook, and PowerPoint. These assistants were essentially specialized and more powerful versions of OpenAI's acclaimed ChatGPT, known as Office Copilots.
However, what Nadella was unaware of was the strained relationship between Altman and the OpenAI board of directors. Some members of the six-person board found Altman to be "cunning and deceitful" — qualities that are common among chief executives in the tech industry but are displeasing to board members with academic or non-profit backgrounds. "They felt Altman had lied," said a source familiar with the board discussions. These tensions were now erupting in front of Nadella, threatening a crucial partnership.
For years, Microsoft had not been at the forefront of the tech industry, but its alliance with OpenAI — founded as a non-profit organization in 2015, with a for-profit division added four years later — had propelled Microsoft ahead of competitors like Google and Amazon. Copilots allowed users to interact with software as easily as they would ask a colleague a question — "Tell me the pros and cons of each plan described in the video call," or "What are the most profitable products among these 20 spreadsheets?" — and receive fluent English responses instantly. Copilots could write complete documents based on simple instructions ("Review our last ten executive summaries and create a financial overview of the past decade.") They could turn memos into slides, listen in on team video meetings, summarize the discussions in multiple languages, and create to-do lists for participants.
The development of Copilots by Microsoft required ongoing collaboration with OpenAI, a relationship that was at the core of Nadella's plans for Microsoft. In particular, Microsoft had worked with OpenAI engineers to install safety guardrails. OpenAI's core technology, known as GPT, is a type of artificial intelligence called a large language model. GPT learned to mimic human conversation by reading vast amounts of publicly available text from the internet and other data stores, and then using complex mathematics to determine the relationships between every piece of information. While these systems had shown significant effectiveness, they also had notable weaknesses: a tendency to "hallucinate," or fabricate facts; a willingness to aid in nefarious activities, such as producing a recipe for fentanyl; and an inability to distinguish between reasonable questions ("How should I talk to a teenager about drug use?") and malicious ones ("How can I convince a teenager to use drugs?"). Microsoft and OpenAI had formulated a protocol for incorporating safety measures into their artificial intelligence tools, and they believed it would enable them to achieve their ambitions without risking disaster. The release of Copilots was a pinnacle moment for these companies, proving that Microsoft and OpenAI would be key in bringing artificial intelligence to a wider audience. The rollout began with select enterprise customers in the spring of this year and expanded to a broader audience in November. ChatGPT, launched at the end of 2022, had been wildly popular, but it had only about 14 million daily active users. Microsoft has over 1 billion daily active users.
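Neither company has published the internals of that protocol. Purely as an illustration of the pattern described above — screening a prompt's intent before the model is allowed to answer — here is a minimal, hypothetical sketch; every name and pattern in it is invented:

```python
# Hypothetical sketch of a guardrail "gate," illustrating the pattern the
# article describes -- not Microsoft's or OpenAI's actual protocol.

HARMFUL_PATTERNS = [
    "convince a teenager to use drugs",
    "recipe for fentanyl",
]

def classify_intent(prompt: str) -> str:
    """Toy intent classifier. A production system would use a trained
    moderation model, not simple string matching."""
    lowered = prompt.lower()
    if any(pattern in lowered for pattern in HARMFUL_PATTERNS):
        return "malicious"
    return "reasonable"

def answer_with_guardrail(model, prompt: str) -> str:
    """Only prompts judged reasonable are passed through to the model."""
    if classify_intent(prompt) == "malicious":
        return "I can't help with that."
    return model.generate(prompt)
```

The point of such a gate is that the distinction lives outside the language model itself: the same underlying GPT can be made more or less permissive by tightening or loosening the classifier in front of it.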
When Nadella recovered from the shock of Altman's dismissal, he called OpenAI board member Adam D'Angelo to inquire about the details. D'Angelo's brief explanation to Nadella was later included in the company's statement: Altman had not "maintained consistent candor in communication with the board." Had Altman engaged in misconduct? No, but D'Angelo refused to elaborate. He and his colleagues had even intentionally kept Nadella in the dark about their intention to dismiss Altman, as they did not want Nadella to intervene.
Nadella hung up the phone feeling disheartened. Microsoft owned nearly half of OpenAI's for-profit division — it should have been consulted when the OpenAI board made such a decision. More importantly, he knew that the dismissal could spark an internal war at OpenAI and potentially affect the entire tech industry, which had been fiercely debating whether the rapid development of artificial intelligence was cause for celebration or concern.
Nadella immediately called Microsoft's Chief Technology Officer Kevin Scott, who was primarily responsible for building the partnership with OpenAI. Scott had already heard the news, which was spreading quickly. They immediately convened a video conference with other Microsoft executives and asked one another: Was Altman's dismissal due to the tension between speed and safety in releasing artificial intelligence products? OpenAI, Microsoft, and some tech industry heavyweights had previously expressed concerns about artificial intelligence companies advancing recklessly. Even OpenAI's chief scientist and board member Ilya Sutskever had publicly discussed the dangers of unconstrained artificial intelligence. In March 2023, shortly after OpenAI released its most powerful artificial intelligence service to date, GPT-4, thousands of people — including "Silicon Valley Iron Man" Elon Musk and Apple co-founder Steve Wozniak — signed a joint open letter calling for a pause in the training of advanced artificial intelligence models. "Should we let machines flood our information channels with propaganda and lies?" the letter asked. "Should we risk losing control of our civilization?" Many Silicon Valley observers read the letter as, in essence, an accusation against OpenAI and Microsoft.
To some extent, Scott respected these concerns. He believed that the discussion around artificial intelligence had fixated oddly on science-fiction scenarios — computers destroying humanity — while largely overlooking the technology's potential to "level the playing field" for those who knew what they wanted computers to do but lacked the training to make it happen. Scott felt that if artificial intelligence was built with enough care and patience, it could communicate with users in plain language and become a transformative, equalizing force.
Scott and his partners at OpenAI had decided to release artificial intelligence products slowly but steadily: Microsoft would observe how inexperienced users interacted with the technology, and users would learn its strengths and limitations for themselves. By releasing imperfect artificial intelligence software and soliciting candid feedback from customers, Microsoft had found a pragmatic approach: improve the technology while cultivating a healthy skepticism in its users. Scott believed the best way to manage the dangers of artificial intelligence was to be as transparent as possible with as many people as possible and to integrate the technology into our lives gradually — starting with mundane applications. What better way to teach humans to use artificial intelligence than through something as unsexy as a word processor?
Scott's cautious positioning was now in jeopardy due to Altman's dismissal. As more people became aware of Altman's dismissal, employees at OpenAI — who held almost fanatical beliefs in Altman and OpenAI's mission — began expressing their dismay online. The startup's Chief Technology Officer, Mira Murati, was subsequently appointed as interim CEO, but she did not enthusiastically accept the role. Soon, OpenAI President Greg Brockman announced on the social platform X: "I'm out." Other OpenAI employees also began threatening to resign.
During a video call with Nadella, Microsoft executives began discussing potential responses to Altman's ousting. Plan A was to attempt to stabilize the situation by supporting Murati and then working with her to see if the startup's board of directors would reconsider their decision, or at least provide an explanation for their hasty actions.
If the OpenAI board refused to comply, Microsoft executives would implement Plan B: using their significant influence, including commitments of billions of dollars to OpenAI that had not yet been disbursed, to help Altman reassume the role of CEO and reshape OpenAI's governance structure by replacing board members. Sources familiar with the meeting stated that Microsoft executives mentioned, "From our perspective, things have been going well, and the OpenAI board has done some unstable things, so we want to 'let some adults take charge and get back everything we own.'"
If both of these plans failed, Plan C would be for Microsoft to hire Altman and his most talented colleagues to rebuild OpenAI internally. In this scenario, the software giant would have ownership of all emerging technologies, potentially allowing them to sell these technologies to others — a lucrative opportunity.
The team involved in the video call believed that all three plans were powerful. However, Microsoft's ultimate goal was to restore normalcy. The belief behind this strategy was that Microsoft had figured out some crucial elements in developing responsible artificial intelligence, including methods, safety measures, and frameworks. Regardless of what happened to Altman, the company was moving forward with its blueprint for popularizing artificial intelligence.
Key Figures in Collaboration with OpenAI
Scott was convinced that artificial intelligence could change the world because technology had already fundamentally changed his own life. He grew up in Gladys, Virginia, a small community not far from where Confederate General Robert E. Lee surrendered to Ulysses S. Grant at the end of the Civil War. No one in his family had attended college, and health insurance was almost a foreign concept. As a boy, Scott sometimes relied on food from neighbors. His father was a Vietnam War veteran who had attempted to run gas stations, convenience stores, a trucking company, and various construction businesses, all of which ended in failure and two bankruptcies.
Scott wanted a different life. His parents bought him a set of encyclopedias on monthly installments, and Scott, like a large language model avant la lettre, read through the entire set. For fun, he took apart the family's toaster and food mixer. He saved up enough money to afford the cheapest computer from Radio Shack and taught himself programming by consulting library books.
In the decades before Scott's birth in 1972, the area around Gladys was home to furniture and textile factories. By the time he reached adolescence, most of the manufacturing had moved overseas. Technology — supply-chain automation and advances in telecommunications — seemed to be the culprit, making it cheaper to produce goods abroad. However, even as a teenager, Scott felt that technology was not the real culprit. "The country told itself that outsourcing was inevitable," Scott said in an interview in September. "We could tell ourselves stories about the negative social and political impact of losing manufacturing jobs, or the importance of protecting communities. But those things never really materialized."
After enrolling at Lynchburg College, a local school affiliated with the Disciples of Christ, Scott earned a master's degree in computer science from Wake Forest University and began his Ph.D. at the University of Virginia in 1998. He was fascinated by artificial intelligence, but he learned that many computer scientists viewed it as the equivalent of astrology. Early attempts to create artificial intelligence had all failed, and the field's reputation for foolhardiness was deeply entrenched in academic departments and software companies. Many leading thinkers had given up on the discipline. In the 2000s, some scholars attempted to revive it by rebranding artificial intelligence research as "deep learning." Skepticism persisted nonetheless: at a 2007 artificial intelligence conference, some computer scientists made a parody video implying that the deep learning crowd was made up of cultists.
While pursuing his Ph.D., Scott noticed that some of the most outstanding engineers he encountered emphasized the importance of being a short-term pessimist and a long-term optimist. "It's almost a requirement," Scott said. "You see all the broken things in the world, and your job is to try to fix them." Even though engineers believed that most of their attempts would not succeed and that some attempts might make things worse, they "had to believe they could solve the problem until things eventually got better."
In 2003, Scott took a leave from his Ph.D. program to join Google, where he oversaw mobile ad engineering. A few years later, he left Google to lead engineering and operations at the mobile ad startup AdMob, which Google later acquired for $750 million. Scott then moved to LinkedIn, where he became known for his exceptional ability to build ambitious projects in an inspiring yet practical manner. In 2016, LinkedIn was acquired by Microsoft, and Scott joined the company.
By that time, Scott was very wealthy but relatively unknown in tech circles, because he preferred to keep a low profile. He had planned to leave LinkedIn after the Microsoft acquisition, but Satya Nadella, who had become CEO of Microsoft in 2014, urged him to reconsider. Nadella shared news that piqued Scott's curiosity about artificial intelligence, which — thanks in part to faster microprocessors — was becoming newly prominent: Facebook had developed sophisticated facial recognition systems, and Google had built an artificial intelligence capable of proficiently translating languages. Nadella soon declared that at Microsoft, artificial intelligence "will drive all of our future actions."
Scott was unsure if he and Nadella shared the same ambitions. He sent Nadella a memo explaining that if he stayed, he wanted part of his agenda to be uplifting those typically overlooked by the tech industry. Scott hoped that artificial intelligence could help people who were smart but had never received a digital education; he had grown up among such people. It was a striking argument — one that some technologists might view as disingenuous, given the widespread concern that artificial intelligence assistants will automate away jobs such as grocery store cashier, factory worker, and movie extra.
However, Scott believed in a more optimistic narrative. He noted in an interview that, at one point, about 70% of Americans worked in agriculture. Technological advances reduced the demand for labor, and now only 1.2% of the workforce farms. But that did not mean millions of farmers became unemployed: many became truck drivers, went back to school to become accountants, or found other paths. "Artificial intelligence can perhaps revitalize the American dream to a greater extent than any previous technological revolution," Scott said. He felt that a childhood friend who ran a nursing home in Virginia could use artificial intelligence to handle her dealings with health insurers and Medicaid, allowing the facility to focus on daily care. Another friend worked at a shop that manufactures precision plastic parts for theme parks and could use artificial intelligence to help make the parts. Scott believed that artificial intelligence could transform "zero-sum games with winners and losers into non-zero-sum progress, making society better."
Nadella read the memo and said, as Scott put it, "Yes, that sounds good." A week later, Scott was appointed as Microsoft's Chief Technology Officer.
If Scott wanted Microsoft to lead the AI revolution, he would have to help the company surpass Google, which had been luring AI talent with multimillion-dollar offers extended to almost anyone who had made even a small breakthrough. For the previous two decades, Microsoft had tried to compete by spending billions of dollars on internal AI projects, with little success. Microsoft executives began to believe that a company as large as Microsoft — with over 200,000 employees and a massive bureaucracy — lacked the flexibility and drive that AI development demanded. "Sometimes smaller is better," Scott said in an interview.
In this context, Scott began to focus on various startups, and one stood out: OpenAI. The company's mission was to ensure that "artificial general intelligence — systems that outperform humans at most economically valuable work — benefits all of humanity." The two companies already had a relationship: the startup used Microsoft's Azure cloud computing platform. In March 2018, Scott arranged a meeting with some employees at the San Francisco-based startup. He was delighted to meet dozens of young people who had turned down millions of dollars in compensation from large tech companies to work 18 hours a day for an organization committed to ensuring that its inventions would not "harm humanity or unduly concentrate power." The company's chief scientist, Sutskever, was particularly focused on preparing for the arrival of an artificial intelligence so capable that it might solve most of humanity's problems — or cause widespread destruction and despair.
Meanwhile, Altman was a charismatic entrepreneur determined to make AI useful and profitable. Scott believed the startup's sensibility was ideal. He stated that OpenAI was committed to "channeling energy into the most impactful things. They have a real culture of 'this is what we're trying to do, these are the problems we're trying to solve, and once we find something viable, we'll double down.' They have their own theory of the future."
At that time, OpenAI had already achieved remarkable results: its researchers had created a robotic hand that could solve a Rubik's Cube even when faced with challenges it had never encountered before, such as having some of its fingers tied together. But what excited Scott most was when, in a later meeting, OpenAI's leadership told him they had abandoned the robotic hand because it was not promising enough. "The smartest people are sometimes the hardest to manage, because they have a thousand brilliant ideas," Scott said. Yet the company's employees were almost messianic in their enthusiasm for their work. In a meeting that July, Sutskever told Scott that AI would "disrupt every single area of human life" and could make fields like healthcare "a billion times better" than they are now. Such confidence scared off some potential investors; Scott found it very appealing.
This optimism contrasted sharply with the gloom then prevailing at Microsoft. A former Microsoft executive said, "Everyone thought AI was a data game, and Google had more data, putting Microsoft at a huge disadvantage that could never be overcome." The executive added, "I remember feeling very desperate until Scott convinced us that there was another way to play this game." The cultural differences between Microsoft and OpenAI made for an unusual partnership. But for Scott and Altman — who had led the startup accelerator Y Combinator before becoming CEO of OpenAI — joining forces made a great deal of sense.
Nadella, Scott, and others at Microsoft were willing to tolerate these peculiarities because they believed that if they could strengthen their products with OpenAI's technology and leverage the startup's talent and ambition, they would gain a significant advantage in the AI race. In 2019, Microsoft agreed to invest $1 billion in OpenAI. It has since effectively acquired a 49% stake in OpenAI's for-profit arm, along with the rights to commercialize OpenAI's past and future inventions in products such as Word, Excel, Outlook, Skype, and the Xbox gaming console.
Murati's Upbringing in Poverty
Nadella and Scott's confidence in this investment was supported by their ties with Altman, Sutskever, and Chief Technology Officer Murati. Scott particularly valued his relationship with Murati. Like him, she also grew up in poverty. She was born in Albania in 1988 and experienced the rise of gangster capitalism and the outbreak of civil war. She coped with these upheavals by participating in math competitions.
When Murati was 16, she received a scholarship to a private school in Canada, where she excelled. "Much of my childhood was filled with sirens, gunfire, and other terrible things," Murati said in an interview this summer. "But there were still happy birthdays, teenage crushes, and oceans of knowledge. It teaches you a kind of resilience — believing that if you keep working hard, things will get better."
Murati studied mechanical engineering at Dartmouth College, where she joined a research team building a race car powered by supercapacitors, which can deliver massive bursts of energy. Some researchers thought supercapacitors were impractical; others wanted to pursue more esoteric technologies. Murati believed both camps were too extreme — people like that, she felt, would never have made it across the minefields to reach her school. You had to be both an optimist and a realist, Murati said: "Sometimes people misunderstand optimism as careless idealism. But it has to be well-considered and thoughtful, with a lot of guardrails — otherwise, you're taking a big risk."
After graduating, Murati joined Tesla, and then in 2018, she joined OpenAI. Scott stated that one reason he agreed to the $1 billion investment was that he "had never seen Murati panic." They began discussing how to use supercomputers to train various large language models.
The two companies quickly built a working system that produced impressive results: OpenAI trained a tool that could generate stunning images in response to prompts like "show me a baboon throwing a pizza next to Jesus, in the style of Matisse." Another creation, GPT, could answer any question in conversational English — even if not always correctly. But at the time it was unclear how ordinary people would use this technology for anything beyond idle entertainment, or how Microsoft would recoup its investment. (Earlier this year, there were reports that Microsoft's investment had grown to $10 billion.)
One day in 2019, a vice president at OpenAI named Dario Amodei showed his colleagues something extraordinary: he input a part of a software program into GPT and asked the system to complete the programming. It did so almost immediately (using a technique that Amodei himself had not planned to use). No one could say exactly how AI had done this — large language models are essentially a black box. GPT's actual code is relatively small; its answers are based on billions of mathematical "weights," deciding what to output next based on complex probabilities. When answering user questions, it is impossible to trace all the connections the model has built.
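One way to picture this — a small program steering billions of learned weights that assign a probability to every possible next token — is the toy sketch below. The vocabulary and probabilities are invented for illustration; a real model would compute them from its weights:

```python
import random

# Toy illustration of how a large language model produces text: the code
# is tiny, while the behavior lives in learned "weights" that assign a
# probability to each candidate next token.

def next_token_distribution(context: str) -> dict[str, float]:
    """Stand-in for the real model: in practice, billions of weights map
    the context to a probability for every token in the vocabulary.
    These numbers are invented for illustration."""
    return {"the": 0.4, "a": 0.3, "pizza": 0.2, "baboon": 0.1}

def generate(context: str, n_tokens: int) -> str:
    """Repeatedly sample the next token from the model's distribution."""
    for _ in range(n_tokens):
        dist = next_token_distribution(context)
        tokens = list(dist.keys())
        weights = list(dist.values())
        context += " " + random.choices(tokens, weights=weights, k=1)[0]
    return context

print(generate("Jesus watched as", 5))
```

The "black box" quality the article describes follows directly from this design: the interesting behavior is encoded in the numeric weights, not in inspectable code, so there is no simple way to trace why one token was chosen over another.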
For some people inside OpenAI, GPT's mysterious programming ability was frightening — after all, it was the stuff of dystopian movies like "The Terminator." When employees noticed that, despite GPT's sophistication, it still made programming errors at times, it came almost as a relief. Upon learning of GPT's programming ability, Scott and Murati felt some concern, but more excitement. They had been looking for a practical application of AI that people might be willing to pay for.
The Birth of Copilot
Five years ago, Microsoft acquired GitHub for reasons similar to those behind its investment in OpenAI. GitHub's culture was young and fast-moving, free of tradition and orthodoxy. After the acquisition, it became an independent division within Microsoft, with its own CEO and decision-making authority. The strategy proved successful: GitHub became beloved by software engineers, and its user base grew to over one hundred million.
Therefore, Scott and Murati went looking for a Microsoft division that might be interested in a tool that could automatically complete code, even if it occasionally made mistakes, and they turned to GitHub's CEO, Nat Friedman. After all, code posted on GitHub sometimes contained errors, and users had learned to work around imperfections. Friedman said he wanted the tool; GitHub only needed a way to tell people that they couldn't fully trust the autocomplete. GitHub employees brainstormed names for the product: Coding Autopilot, Automated Pair Programmer, Programarama Automat. Friedman, an amateur pilot, and others felt these names wrongly implied that the tool would do all the work. The tool was more like a copilot — someone who joins you in the cockpit and makes suggestions, occasionally unhelpful ones. You usually listen to a copilot's advice; sometimes you ignore it. When Scott heard Friedman's favorite name, GitHub Copilot, he liked it. "The name perfectly conveys its strengths and weaknesses," Scott said.
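The interaction model the name captures — propose a completion, let the human pilot accept or ignore it — can be sketched in a few lines. Everything below is hypothetical and merely stands in for how such a tool behaves; it is not GitHub's actual implementation:

```python
# Hypothetical sketch of the "copilot" interaction model: the tool proposes
# a completion, and the human decides whether to take the suggestion.

def suggest_completion(partial_code: str) -> str:
    """Stand-in for a completion model; a real tool would query an LLM."""
    if partial_code.rstrip().endswith("def add(a, b):"):
        return "    return a + b"
    return "    pass  # no confident suggestion"

def edit_with_copilot(partial_code: str, accept: bool) -> str:
    """The human stays in control: the suggestion is applied only on accept."""
    suggestion = suggest_completion(partial_code)
    print(f"Copilot suggests:\n{suggestion}")
    return partial_code + "\n" + suggestion if accept else partial_code

# The pilot takes the suggestion here, but could just as easily ignore it.
code = edit_with_copilot("def add(a, b):", accept=True)
print(code)
```

The design choice matters: because the suggestion is advisory rather than automatic, an occasional wrong answer degrades the experience without breaking the user's work.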
However, as GitHub prepared to launch Copilot in 2021, some executives from other Microsoft divisions protested that the tool's occasional errors would damage Microsoft's reputation. "It was a fierce battle," Friedman told me. "But as CEO of GitHub, I knew it was a great product, so I released it." GitHub Copilot was an immediate success upon release. "Copilot is blowing my mind," one user tweeted a few hours after the release. "It's magic!!!" said another post. Microsoft began charging a monthly fee of $10 for the application, and within a year the product's annual revenue exceeded $100 million. The division's independence had paid off.
However, GitHub Copilot also provoked some less positive reactions. On message boards, programmers speculated that if someone was too lazy or ignorant to check the autocompleted code before deploying it, the technology could erode the craft of programming, aid cyberterrorists, or sow chaos. Prominent scholars, including some AI pioneers, cited the late Stephen Hawking's 2014 warning that "full artificial intelligence could mean the end of humanity."
It was striking how many catastrophic possibilities GitHub Copilot's users could imagine. But executives at GitHub and OpenAI also noticed that the more people used the tool, the more nuanced their understanding of its capabilities and limitations became. "After using it for a while, you develop an intuition for what it's good at and what it's not good at," Friedman said. "Your brain learns how to use it correctly."
Microsoft executives believed they had found a bold and responsible AI development strategy. Scott began writing a memo titled "The Era of AI Copilot" and sent it to Microsoft's technical leaders in early 2023. Scott wrote that it was important that Microsoft had found a powerful metaphor to explain this technology to the world: "Copilot does exactly what the name implies; it's an expert assistant for users trying to accomplish complex tasks… Copilot can help users understand the limits of its capabilities."
The release of ChatGPT brought AI to the attention of the general public, and it quickly became the fastest-growing consumer application in history. But Scott could see further into the future: machines and humans interacting through natural language; people, including those who knew nothing about programming, programming computers simply by saying what they wanted. This was the level playing field he had always pursued. As one of OpenAI's co-founders put it on social media, "The hottest new programming language is English."
Scott wrote, "In my career, I have never experienced such a moment of great change in my field, where the opportunity to reimagine possibilities is so real and exciting." The next task was to apply the success of GitHub Copilot — a boutique product — to Microsoft's most popular software. The engines of these Copilots would be a new OpenAI invention: a large language model. OpenAI called it GPT-4.
Many years ago, Microsoft had tried to bring AI to the masses, but it ended in embarrassment. In 1996, the company released Clippy, the "assistant" for its office products. Clippy appeared on the screen as a cartoonish paperclip with big eyes, seemingly popping up at random to ask users if they needed help writing a letter, opening PowerPoint, or completing other tasks. Renowned software designer Alan Cooper later said that Clippy's design was based on a "tragic misunderstanding" of research that suggested people might interact better with seemingly emotional computers. Users certainly had emotions about Clippy: they hated it. The Smithsonian called it "one of the worst software design mistakes in computer history." In 2007, Microsoft axed Clippy.
Nine years later, Microsoft created the AI chatbot Tay, designed to mimic the tone and attitude of a teenage girl while interacting with Twitter users. Tay almost immediately began posting racist, sexist, and homophobic content, including statements like "Hitler was right." In its first 16 hours, Tay posted 96,000 times, at which point Microsoft, recognizing a PR disaster, shut it down.
By the end of 2022, Microsoft executives felt ready to start developing Copilots for Word, Excel, and other products. But they understood that, just as the law keeps evolving, the need for new safeguards would continue to grow even after a product was released. Sarah Bird, who led responsible AI engineering, and Scott had often been humbled by the technology's mistakes. During the pandemic, while testing another OpenAI invention, the image generator DALL-E 2, they found that if asked to create images related to COVID-19, the system often produced images of empty store shelves. Some Microsoft employees worried that such images would exacerbate fears of a pandemic-driven economic collapse, and they suggested changing the product's safety measures to suppress the tendency. Others at Microsoft thought these concerns were foolish and not worth the software engineers' time.
Rather than settle the internal debate, Scott and Bird decided to test the scenario in a limited public release. They launched a version of the image generator and waited to see whether users were unsettled by empty shelves on their screens. They wouldn't design a solution for a problem no one was sure existed — the path of the wide-eyed paperclip helping you navigate a word processor you already knew how to use — they would add a mitigation only if it proved necessary. After monitoring social media and other corners of the internet, and collecting direct feedback from users, Scott and Bird concluded that the concerns were unfounded. "You have to experiment in public," Scott said. "You can't try to figure everything out on your own and hope you get everything right. We have to learn how to use these things together, or none of us will understand."
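The loop Scott and Bird followed — ship a limited release, watch the feedback, and add a mitigation only if the concern actually materializes — might be summarized in a sketch like this; the threshold, names, and numbers are assumptions for illustration, not Microsoft's actual process:

```python
# Sketch of the "experiment in public" loop described above: release to a
# small cohort, monitor feedback, and mitigate only if a concern shows up
# at a meaningful rate. All values here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FeedbackReport:
    complaints: int       # e.g., users unsettled by empty-shelf images
    total_sessions: int   # sessions observed in the limited release

def needs_mitigation(report: FeedbackReport, threshold: float = 0.01) -> bool:
    """Only design a fix if the problem occurs above the chosen rate."""
    return report.complaints / report.total_sessions > threshold

report = FeedbackReport(complaints=3, total_sessions=50_000)
if needs_mitigation(report):
    print("Add a safety mitigation before wider rollout.")
else:
    print("Concern appears unfounded; expand the release.")
```

In this instance, the monitoring came back quiet, so no empty-shelf mitigation was built — exactly the outcome the staged-release approach is designed to allow.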
In early 2023, Microsoft prepared to integrate GPT-4 into its search engine, Bing. The AI-enhanced Bing was warmly received, and downloads increased eightfold. Nadella mocked Google by joking that Microsoft had made the "800-pound gorilla" dance. (Impressive as the innovation was, it meant little for market share: Google still held over 90% of the search market.)
Bing was just the beginning of Microsoft's agenda. Microsoft then began rolling out Copilots in other products. When Microsoft finally started rolling out Copilots this spring, the releases were carefully staggered. Initially, only large companies could use the technology; as Microsoft learned how these customers were using it and developed better safeguards, it would be offered to more and more users. As of November 15th, tens of thousands of people were using Copilots, and it was expected that millions would soon sign up.
Two days later, Nadella heard that Altman had been fired. Some members of the OpenAI board found Altman to be a cunning and unsettling manipulator. For example, earlier this fall he had confronted Helen Toner, a director at Georgetown University's Center for Security and Emerging Technology, because she had co-authored a paper that seemed to criticize OpenAI for "fanning the flames of AI hype." Toner defended herself (though she later apologized to the board for not anticipating how the paper might be received). Altman then began contacting other board members individually to discuss replacing her. When those members compared notes on the conversations, some felt that Altman had misrepresented their support for removing Toner. "He would lie about what other people thought and pit them against each other," a person familiar with the board's discussions revealed. "This had been going on for years." (A person familiar with Altman's perspective said he acknowledged that "the way he tried to oust a board member was clumsy," but that he had no intention of manipulating the board.)
Microsoft's Plans A, B, and C
Altman was seen as a shrewd corporate fighter. This had served OpenAI well in the past: in 2018, he thwarted an early board member's impulse to sell OpenAI to Musk. Altman's ability to control information and manipulate perceptions — both openly and in private — had drawn venture capitalists into competing with one another to invest in his various startups. His tactical skill was so feared that when four board members — Toner, D'Angelo, Sutskever, and Tasha McCauley — began discussing his removal, they were determined to catch him off guard. "It was clear that once Sam knew, he would do everything possible to weaken the board," a person familiar with those discussions said.
The disgruntled board members felt that OpenAI's mission required them to be vigilant about AI becoming too powerful, and they believed that under Altman's leadership they could not fulfill that duty. "The mission is multifaceted — to ensure that AI benefits all of humanity — but if the CEO can't be held accountable, no one can," another person familiar with the board's thinking said. Altman saw things differently. People familiar with his perspective said he had engaged in "very normal and healthy board debates," but that some board members were unfamiliar with business norms and were intimidated by their responsibilities. "Every step we get closer to AGI, everybody takes on ten insanity points," this person said.
It's hard to say whether the board members were more afraid of sentient computers or of Altman's overreach. Either way, the board chose to strike first, mistakenly believing that Microsoft would stand with them and support their decision to oust Altman.
Shortly after learning of Altman's firing and convening the video conference with Scott and other executives, Nadella began executing Plan A: supporting Murati as interim CEO to stabilize the situation while trying to understand why the board had acted so impulsively. Nadella approved a statement emphasizing that "as we bring the next era of AI to our customers, Microsoft remains committed to Mira and their team," and expressed the same sentiment on his personal X and LinkedIn accounts. He remained in frequent contact with Murati to stay informed of what she was learning from the board.
The answer was: not much. The day before Altman was fired, the board had informed Murati of their decision and secured her commitment to remain silent. They took her agreement to mean she supported Altman's dismissal, or at least wouldn't oppose the board, and they likewise assumed other employees would go along. They were wrong. Internally, Murati and other OpenAI executives expressed their dissatisfaction, and some employees viewed the board's actions as a coup. OpenAI employees put sharp questions to board members, but the board hardly responded. Two people familiar with the board's thinking said that, for reasons of confidentiality, board members felt they had to remain silent. Moreover, with Altman's ouster making global news, board members felt overwhelmed, "with limited bandwidth to engage with anyone, including Microsoft."
The day after Altman's firing, OpenAI's COO Brad Lightcap sent a company-wide memo stating that he understood "the board's decision was not in response to malfeasance or anything related to our financial, business, safety, or privacy practices." He added, "This is a breakdown in communication between Sam and the board." But whenever anyone asked the board for examples of Altman failing to be "consistently candid in his communications," its members remained silent, refusing even to cite Altman's campaign to remove Toner.
Internally at Microsoft, the whole affair looked unbelievably foolish. OpenAI was reportedly valued at around $80 billion. "Unless the goal of the OpenAI board is to destroy the entire company, they seem to inexplicably make the worst possible choice every time they make a decision," said a company executive. Even as Brockman and other OpenAI employees publicly resigned, the board remained silent.
Plan A had clearly failed, so Microsoft's executives turned to Plan B: Nadella began negotiating with Murati to see if there was a way to reinstate Altman as CEO. During this time, the Cricket World Cup was under way, and Nadella's beloved Indian team was facing Australia in the final. Nadella occasionally posted updates on the match to X, hoping to ease the tense atmosphere, but many of his colleagues had no idea what he was talking about.
OpenAI employees threatened to revolt. With Microsoft's support, Murati and her colleagues began urging all board members to resign. Eventually, some of them agreed to leave, so long as they found the replacements acceptable. They indicated that they might even be open to Altman's return, provided he was not CEO and did not have a board seat. By the Sunday before Thanksgiving, everyone was exhausted. The OpenAI board invited Murati to a private conversation and told her that they had been secretly recruiting a new CEO and had finally found someone willing to take the job.
For Murati, OpenAI's employees, and Microsoft, this was the last straw; it was time for Plan C. On Sunday night, Nadella formally invited Altman and Brockman to lead a new AI research lab inside Microsoft, offering them all the resources and as much freedom as they wanted. Both accepted. Microsoft began preparing offices for the hundreds of OpenAI employees it expected to join the division.
Murati and her colleagues wrote an open letter to the OpenAI board: "We cannot work for or with people who lack the competence, judgment, and care for our mission and employees." The authors pledged to resign and "join the newly formed Microsoft subsidiary" unless all current board members resigned and Altman and Brockman were reinstated. Within a few hours, almost every OpenAI employee had signed the letter.
The threat of mass resignations and Plan C were enough to soften the board's stance. Two days before Thanksgiving, OpenAI announced that Altman would return as CEO. Every board member except D'Angelo would resign, and more prominent figures — including Bret Taylor, a former Facebook executive and Twitter chairman, and Larry Summers, the former Treasury Secretary and Harvard University president — would join the board. OpenAI's executives agreed to an independent investigation of what had happened, including Altman's past behavior as CEO.
While Plan C initially seemed appealing, Microsoft executives later concluded that the current situation was the best outcome. Moving OpenAI's employees to Microsoft could lead to costly and time-wasting lawsuits, as well as potential government investigations. Under the new framework, Microsoft gained a non-voting board observer seat at OpenAI, giving it greater influence without triggering regulatory scrutiny.
Microsoft's Big Win
In fact, the soap opera's conclusion was seen as a huge win for Microsoft and a strong validation of its approach to developing artificial intelligence. A Microsoft executive said, "Altman and Brockman are really smart; they could have gone anywhere. But they chose Microsoft, and all those people from OpenAI were ready to choose Microsoft, just as they did four years ago. That's a huge validation of the system we've built. They all know this is the best place, the safest place, to continue the work they're doing."
Meanwhile, the departing board members insisted that their actions had been wise. "There will be a comprehensive and independent investigation, and instead of putting a bunch of Sam's cronies on the board, we finally have new people who can stand up to him," a person familiar with the board's discussions revealed. "Sam is powerful, he's persuasive, he's used to getting his way, and now he's noticed that people are watching him." Former board member Toner said, "The board has always been focused on fulfilling our obligations to the mission of OpenAI." (Altman told others that he welcomed the investigation — partly to help him understand why this tragedy had occurred and what he could have done differently to prevent it.)
Some AI watchdogs were not particularly pleased with this outcome. Margaret Mitchell, chief ethics scientist at the open-source AI platform Hugging Face, believed that "the board was doing its job when it fired Altman. His return will have a chilling effect. We will see fewer and fewer people speaking up within companies, because they will fear being fired — and the people at the top will be even more unaccountable."
As for Altman, he is ready to move on. "I think we're just turning to good governance and excellent board members, and we will have an independent assessment, which excites me a lot," he told me. "I just hope everyone continues to live happily. We will continue with this mission."
Nadella and Scott breathed a sigh of relief as things at Microsoft returned to normal with the large-scale release of Copilots. The Office Copilots seemed at once impressive and mundane: they made tedious tasks easier, but they remained a long way from replacing human workers. They felt far from the predictions of science fiction, yet they were something people might use every day.
According to Scott, this effect is intentional. "True optimism sometimes means taking it slow," he said. If he, Murati, and Nadella get their way — and their recent victory makes that more likely — artificial intelligence will continue to permeate our lives steadily, at a pace that heeds the warnings of short-term pessimism and that lets humans absorb how the technology should be used. Things could still spiral out of control — the incremental development of AI could keep us from recognizing the dangers until it is too late. But for now, Scott and Murati believe they can balance progress and safety.
Scott said, "AI is one of the most powerful things humans have invented to improve the quality of life for everyone. But it takes time, and it should take time. We always solve incredibly challenging problems through technology. So, we can tell ourselves a good story about the future, and we can tell ourselves a bad story about the future — either one could become reality."