Just now, Sam Altman was attacked again, and this time it was gunfire.

链捕手
4 hours ago

Sam Altman was attacked again.

If the Molotov cocktail attack two days ago could be viewed as an extreme, isolated act with personal undertones, then the second incident, which has just occurred, is of a completely different nature.

In the early hours of Sunday local time, a car pulled up outside OpenAI CEO Sam Altman's residence and its occupants fired a shot toward the house. The San Francisco Police Department subsequently arrested two suspects, 25-year-old Amanda Tom and 23-year-old Muhamad Tarik Hussein, both currently booked for negligent discharge of a firearm.

Suspects captured by security cameras outside Sam Altman's home

This is the second attack on Sam Altman's residence in San Francisco since last Friday. Neither incident has resulted in any significant injuries, but they have pushed an issue that was previously limited to public discourse right to the edge of real violence.

The reason Sam Altman has become the focus of such emotions is not just that he is the head of OpenAI, but that what he represents has long transcended the role of a tech company CEO. He is the face of cutting-edge AI products and stands at the intersection of computing power, capital, policy, public opinion, and state machinery.

The true meaning of these two attacks is not simply that the public has begun to oppose technological progress, but that an increasing number of people are viewing AI companies as a quasi-political force. In the past, discussions surrounding tech companies revolved more around product experience, monopolies, privacy, and platform governance; now, OpenAI touches on employment, taxation, wealth redistribution, national security, infrastructure, geopolitical dynamics, and even the use of models in warfare. In other words, Altman is becoming less an ordinary business figure and more someone situated between entrepreneur, policy player, and quasi-public power. Once perceived in this light, it is easy for him to shift from a businessperson to a lightning rod for political sentiment.

This is precisely what makes the situation so dangerous. The public's fear of AI is not entirely fictional; even Altman himself acknowledges that such fears are reasonable. He wrote after the first attack that fears and anxieties about AI are justified, stating that “we are experiencing one of the largest societal changes in a long time, perhaps in history.”

Just last week, OpenAI released a policy document discussing a new social contract for the age of superintelligence, centered on a series of human-focused ideas and proposed initiatives such as a public wealth fund, a robot tax, and a four-day work week.

Not long ago, OpenAI unexpectedly acquired the Silicon Valley tech talk show TBPN, and also announced plans to set up an office in Washington and create a space called OpenAI Workshop for non-profit organizations and policymakers to understand and discuss the company's technology. OpenAI's competitor Anthropic has likewise announced the establishment of its own think tank, the Anthropic Institute, focusing on how AI growth affects society.

As the impacts of AI become increasingly concrete, calls for intensified scrutiny of tech giants are also on the rise. The industry clearly recognizes that social discontent is spreading, and thus it simultaneously acknowledges the existence of this sentiment while trying to redefine the debate and rewrite external understanding of the entire sector.

Last month, Sam Altman attended a meeting held by BlackRock in Washington, where he spoke about the public perception challenges facing AI companies. He noted that the industry currently faces many headwinds: AI is unpopular in the United States, rising electricity prices are blamed on data centers, and almost every company laying off workers attributes the cuts to AI, regardless of whether AI is actually the cause.

Polls also confirm that public distrust of AI is deepening. This distrust is directed not only at changes in the labor market but at AI as a social force in itself. A Pew Research Center survey released last year showed that only 16% of Americans believe AI will help people become more creative, and only 5% believe AI will help people build more meaningful interpersonal relationships. An NBC News poll last month showed that only 26% of voters view AI positively, with a net negative rating 2 percentage points worse than that of U.S. Immigration and Customs Enforcement...

It is difficult to explain why people are so averse to AI in a single sentence. It may be because the industry has packaged its technology as capable of destroying the world from the very beginning, or perhaps it stems from economic anxieties surrounding job displacement, or a broader, long-standing animosity towards big tech companies. As opposition movements against data centers, proposals for limiting AI, and clear public animosity increase, the entire industry is beginning to feel uneasy.

This unease has sparked a wave of public relations efforts. Writing policy documents, discussing new social contracts, proposing public wealth funds, robot taxes, and four-day work weeks; acquiring more friendly content channels, establishing offices and communication spaces in Washington; founding research institutes to shift discussions from model performance towards employment, welfare, education, democracy, and national competitiveness.

However, this is precisely where the problem lies. If a company is merely releasing products, the public's judgment of it mostly revolves around whether it is usable, expensive, or infringing on privacy; but once it begins discussing how to rewrite labor systems, how to distribute technological benefits, and how to arrange social safety nets in the age of superintelligence, it is no longer just a market entity but is reaching into the public domain.

This new narrative itself carries a stark contrast. On one side, there are terms like human-centric, equitable dividends, and shared benefits; on the other side, there are increasingly towering data centers, more centralized computing power and capital, increasingly complex regulatory-business relationships, and increasingly skilled policy lobbying. What people feel is no longer just uncertainty stemming from technological advancements but a more indescribable tension: those who claim to be designing buffering mechanisms for society are often also the most capable of accelerating the onset of disruption. The more they speak in the name of public interest, the more they will be asked to demonstrate corresponding restraint, transparency, and self-limiting behavior.

This is also why the controversy surrounding Sam Altman is particularly sensitive. He is simultaneously a hero, prophet, opportunist, and source of risk, and has also become a target of attack. What is perhaps most unsettling about him is not just his ambition but his ability to articulate nearly valid statements in different contexts. He speaks of growth and scale to investors, responsibility and regulation to policymakers, risks and bottom lines to safety advocates, and how technology will benefit everyone to the public. Each of these statements makes sense individually, with its own logic and basis in reality; but when these statements overlap and even pull at each other in reality, it is hard for the outside world not to develop deeper questions: which layer is the most authentic one?

And this questioning did not just arise recently. Even within the company, there have been repeated worries that initial commitments regarding non-profit missions, safety priority, and avoiding power imbalances are being pushed further back by product pressures, income targets, and expansion impulses. The resources that the safety team once proudly showcased have dwindled compared to what was promised; the principles originally meant to restrain themselves have repeatedly yielded to more pragmatic goals when they truly need to take effect. The starting point may have been to set an exception, but the endpoint increasingly resembles those large companies of the past that, in the name of changing the world, ultimately pushed the world further towards centralization.

Therefore, the current feelings of discontent surrounding OpenAI cannot simply be understood as technological pessimism or merely AI taking human jobs. It resembles a result of the overlap of several emotions: anxiety over altered personal destinies, resentment towards highly concentrated power, disappointment at regulation lagging behind reality, and vigilance towards big companies that both demand understanding and seek greater discretion. These feelings might have existed separately, but when society lacks sufficiently clear institutional outlets, they instinctively seek the most distinct, concrete, and easily identifiable figure to carry them.

Thus, an issue of an abstract system ultimately fell onto a specific individual. In a highly mediated age, complex powers always tend to coalesce into a personified symbol. Those who resemble the spokesperson for the future are the most likely to become the focus of emotional resonance. This mechanism itself is not novel; it has simply fallen so completely onto the AI industry for the first time today.

Exterior view of Sam Altman's mansion

Therefore, the most urgent answer right now cannot simply be to raise walls, increase security, and keep risks outside a particular residence. Today it's Sam Altman; tomorrow it may not even be him, and the problem will not automatically disappear.

What truly needs to be addressed are clearer boundaries, more trustworthy external oversight, more honest disclosures of interests, and governance mechanisms capable of penetrating corporate narratives. Otherwise, technology will continue to advance, capital will continue to increase, and policy discussions will continue to broaden, but the societal doubts will only accumulate, not dissipate. What people truly fear is not just how powerful a certain model is, but rather how such a force is rapidly shaping reality without a commensurate system of checks and balances.

Of course, any violence must be unequivocally rejected. Discontent towards a company, doubts about a founder, and concerns regarding AI's direction should not cross that line. The real pressure test of the AI era is no longer just about model capabilities but whether society can establish sufficiently solid trust and constraints to cope with this change.

Disclaimer: This article represents only the author's personal views and does not represent the position or views of this platform. It is provided for information sharing only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send the relevant proof of rights and proof of identity to support@aicoin.com, and the platform's staff will verify it.

