Information is a weapon: AI did not kill the truth, it just made the truth irrelevant.

Techub News
8 hours ago

Author: Rust, who doesn't understand classics

On Sunday, April 5, 2026, U.S. President Trump posted a message on his social platform Truth Social that sounded like an ultimatum.

"Tuesday, 8 PM Eastern Time!"

If Iran does not reopen the Strait of Hormuz, the U.S. will bomb its power plants and bridges. Two days later, he escalated: "The entire civilization of Iran will die tonight." Hours later, he suddenly announced a two-week ceasefire.

Who responded the fastest to such posts?

It was the Iranian embassy in Zimbabwe. They wrote on X: "8 PM is not good. Can we change it to 1 to 2 PM, or 1 to 2 AM? Thank you for your attention to this important matter."

The phrase at the end, "Thank you for your attention to this important matter," imitates Trump's signature style.

A country under bombardment responded to a war threat with jokes.

This is not an isolated case. The Iranian embassy in Thailand responded to Trump's comment about "blowing Iran back to the Stone Age" with an AI-generated image: Trump wearing animal skins sitting in a cave. The South African embassy posted a set of photos of U.S. military generals with crosses over them, captioned: "Regime change successfully completed." The crossed-out figures were not Iranian leaders but American generals recently dismissed by U.S. Secretary of Defense Pete Hegseth.

Meanwhile, a team called "Explosive Media" was mass-producing AI Lego animations. Trump, the U.S. military, and the Iranian military all turned into yellow plastic figures. One rap diss video called Trump a "loser" and described Israeli Prime Minister Netanyahu as a "puppet," racking up millions of views on global social media.

The White House was not idle either. The official X account posted an AI video depicting the Iranian regime as bowling pins, all knocked down by the U.S. military. A senior White House official encapsulated the entire situation with a statement to Politico:

"Dude, we are just constantly cranking out viral memes."

Another official added that the White House's Iran war video had already garnered "over 3 billion views" online.

3 billion.

Stop and think for a second. This is a real war. There are real bombs, real civilian casualties, real disruptions in oil supply. But in the space of public information, it looks like a meme contest. A superpower and a bombed country are attacking each other on the same platform, using the same language. That language is called memes.

This is the information landscape of 2026. Information is no longer a description of reality. It is, in itself, the battlefield.

The essence of the information gap is not the information itself


The true goal of information flow is not to trick you

Most people's understanding of "misinformation" is still stuck in a simplistic framework: someone told a lie, someone believed it, harm followed. The countermeasures are equally intuitive: fact-checking, debunking lies, improving public media literacy.

But that framework is outdated.

The ultimate goal of modern information warfare has never been to make you believe a specific lie. Its goal is to make the distinction between "true" and "false" a dimension you no longer care about.

Hannah Arendt (one of the most important political philosophers of the 20th century, author of "The Origins of Totalitarianism") wrote the following in 1951:

"The ideal subject of totalitarian rule is not the staunch Nazi or the staunch communist, but those for whom the distinction between fact and fiction, truth and falsehood, no longer exists."

She was describing the totalitarian propaganda machine of the 20th century. Seventy-five years later, AI is turning this prophecy into the everyday experience of every mobile phone user with an efficiency she could never have imagined.

George Orwell, even earlier, wrote in his 1943 essay "Looking Back on the Spanish War":

"The very concept of objective truth is fading out of the world. Lies will pass into history."

He was talking about Nazi Germany. But today, this process no longer requires a totalitarian state. An algorithm is enough.

Garry Kasparov, the former chess world champion and later a prominent political activist, put it even more precisely:

"The purpose of modern propaganda is not merely to mislead you or push a certain agenda. Its purpose is to exhaust your critical thinking, to eliminate the truth itself."

Exhaust. Not defeat, but exhaust. This word is chosen precisely.

AI has already technologically dismantled the basis of "discerning true from false"

The above statements may still sound like philosophical deductions. Let's look at the facts that have already occurred.

The signals of the AI era have already inverted.

In the past, a photo without digital editing traces meant it was original and unaltered. In 2026, the absence of digital traces may mean precisely that it was never captured by a camera at all, because it was AI-generated from the start. The signaling system of reality has completely inverted, like a film negative with black and white reversed.

Even more deadly is the "hybrid."

Recently, WIRED published an in-depth report, "The Internet Has Destroyed Everyone's Lie Radar," quoting Dutch investigative-journalism trainer Henk van Ess. He pointed out that the hardest content to identify is not imagery generated entirely by AI but hybrids that are 95% real and 5% altered.

A photo with real metadata, real sensor noise, real lighting physics. The alteration exists in only one detail: an extra insignia on a uniform, a weapon placed in a hand, a subtly swapped face. Pixel-level detection tools will call it authentic, because along the vast majority of dimensions it genuinely is. The fake part may be only a square inch.

"All past verification methods were based on one premise: the image is a record of something," van Ess said. "Generative media has shattered this premise fundamentally."

Deepfake researcher Henry Ajder (who has advised companies such as Adobe on AI) took it further. AI, he said, is no longer something easily seen through; it has embedded itself into our everyday content. The era of six-fingered hands and garbled gibberish text is over. The new AI content looks completely credible.

What about detection tools? Ajder's exact words: "Detection tools should never be treated as the only signal for making judgment calls." They are not truth engines. Even the best tools fail frequently, and most yield only an inscrutable "confidence score." 85% real? 62% fake? Those numbers tell you nothing.


Meanwhile, the door to verification is being closed.

On April 4, 2026, Planet Labs announced an indefinite halt to releasing satellite imagery of Iran and Middle East conflict zones. Planet Labs is one of the commercial satellite-imagery providers that conflict reporting worldwide relies on most. The halt was made at the request of the U.S. government, effective from March 9.

U.S. Secretary of Defense Hegseth's response was candid: "Open source intelligence is not the place to ascertain the truth."

Translated, it means: You do not need to see anything for yourself; we will tell you what happened.

At the same time, according to the 2026 AI Traffic and Cyber Threat Benchmark Report, automated traffic now accounts for 51% of all internet traffic and is growing eight times faster than human traffic. These bots do not merely distribute content; they preferentially amplify low-quality viral content. Synthetic content is already halfway around the world while verification is still putting on its shoes.

On one side, a forgery engine is running at full speed. On the other, the door to verification is closing. This is not a fair competition. One side is accelerating, while the other is dismantling its engine.

The ghost of McLuhan: why "atmosphere" is more powerful than "facts"

At this point, many may feel that the problem is already serious enough. It is not. The collapse of verification at the technical level is just the tip of the iceberg. What lies beneath the surface is larger and harder to deal with.

Marshall McLuhan, the digital prophet this Uncle has never quite understood, proposed a phrase in his 1964 book "Understanding Media" that changed the entire field of communication studies: "The medium is the message."

Most people interpret this statement as "the communication channel is important." This is a serious misreading and underestimation.

McLuhan's true meaning is: the medium has already reshaped you on a deeper perceptual level before you even begin to consciously evaluate the content.

Television does not need to broadcast a specific program to change you. The form of television itself has changed the way humans understand the world. Printing does not need to print a specific book to create nationalism. The printing press made mass dissemination of a unified language possible, and that fact itself gave birth to national identity.

Applied to today: AI-generated content does not need to fool you just once. It only needs to exist en masse to shift your default psychological state from "what I see is probably true" to "everything I see could be fake."

This shift in cognitive state is, in itself, the effect of information weaponization. It affects everyone. No distinction in IQ, education, or stance.

I recently saw an article about the fusion of media and machines that took McLuhan a step further, proposing a new equation:

In the age of large language models, the medium is the message, the medium is the machine.

Natural language now serves as both the interface for human-computer interaction and the underlying infrastructure. Writing is constructing; constructing is writing. Code and culture flow from the same source. This fusion of media and machines forms a Möbius loop, where the medium creates the machine, and the machine in turn creates the medium, linking head to tail, looping infinitely.

On the battlefield of the Iranian meme war, this equation needs one more push.

Why were the Iranian embassy's AI Lego videos effective? Not because of their content; as content, they are merely crude satirical propaganda with an informational value of approximately zero. They are effective because the form of the medium itself is an attack.

AI-generated, platform-native, optimized for sharing. The same goes for the White House's war memes. The act of the "president posting memes" is information in itself.

What it conveys is not any specific policy content but a meta-information: the rules have disappeared, seriousness has vanished, the "order" you once thought existed is gone.

Professor Gregory Daddis of Texas A&M University, who served in the U.S. military for over 20 years, said plainly in an interview that Trump's social media style primarily serves his domestic political base: "That audience that thinks it is cool for rock musician Kid Rock and Health Secretary Kennedy to work out in a sauna. This is not a serious diplomatic approach."

But Iran has clearly been learning. Phillip Smyth, an expert on Iranian proxy organizations, pointed out that AI tools have helped Iran and countries like China bridge cultural gaps, enabling them to craft propaganda that resonates with Western audiences even when the creators themselves are unfamiliar with Western culture.

A Chinese state media AI propaganda video depicted Americans as bald eagles and Iranians as Persian cats. This video was translated, reposted, and reported by global mainstream media.

The medium is the message, the medium is the machine. And in the context of war, one more equation can be added: the medium is the weapon.


What do ordinary people receive from these weaponized media?

A recent article in the Financial Times presented a counterintuitive view: the impact of social media on people's opinions has been greatly overestimated. "Not everyone who reads the Bible becomes a Christian, and not every Guardian reader becomes a leftist." People are more skeptical of random information on social media than of traditional authoritative media.

On the surface, this view seems to defend social media.

But it actually reveals something deeper. What people obtain from social media is never "information," but rather an "atmosphere." You don't need to believe a specific false message. You only need to absorb a certain atmosphere from hundreds and thousands of messages. Angry, anxious, nihilistic.

This precisely validates McLuhan's core insight in 2026: the influence of media does not occur at the "content" level but at the "perception structure" level. You think you are reading, judging, rationally thinking. But media has already changed the way you perceive the world before your judgment. As McLuhan said, the medium is the massage.

You think you are choosing what to believe. In reality, you are merely absorbing the atmosphere.

Atmosphere is domination

What does Kasparov mean by "exhaustion"?

Imagine your daily information consumption. Every day you open your phone and come across a shocking image. You are not sure whether it is real. You want to verify it, but with what? Reverse image searches on Google Lens, Yandex, and TinEye yield three different results. "No match" no longer means "this is original"; it may simply mean the scene was never photographed at all. A detection tool gives a confidence score of 72%. What does 72% indicate? You close the page and keep scrolling.

The next day, another one. The third day, another one.

A month later, you won’t become a "more cautious person." You will become a person who has given up judgment. Not because of stupidity or laziness, but because cognitive resources are limited. This war is designed to consume all your cognitive bandwidth. Every piece of information requiring you to judge "true or false" is imposing a tax on your attention. The tax rate is getting higher while your tax base (the total amount of attention available each day) remains unchanged.

Healthy skepticism sounds like "Let me verify." But when the cost of verification inflates endlessly, skepticism will slide into cynicism, turning into "nothing can be trusted anyway."

From "everything could be fake" to "everything is fake," there is only one step. And that step you won’t even realize you’ve taken.

Arendt's prophecy completes its loop here. When most people in a society reach the state of "nothing can be trusted anyway," they do not become critical thinkers. They become the "ideal subjects." Not subjects who were persuaded, but subjects who were exhausted. Not citizens who believed lies, but citizens who no longer care about true or false.

In the comments section of that Financial Times article, someone wrote: "Hate, division, and anger generate clicks. Zuckerberg, Musk, none of them earned billions by encouraging us to respect each other."

Algorithms are optimized for transmission, not for truth. When emotions become the fuel for algorithms, atmospheres become the true rulers of this era.

Truth has not been defeated. It has merely been submerged beneath a power it can never compete with: cost-free, infinitely fast, instinctively emotional atmospheres.


Anchoring yourself in a tide of counterfeit atmosphere

Reading this far, if your feeling is "So what can I do? I can't fight the algorithm," that is itself proof the atmosphere is working on you.

The following is not a universal cure. There is no universal antidote. But there are some structural strategies that can at least make you less easily swept away.

The first thing: Stop asking "Is this true?" Start asking "Where does this come from?"

In that WIRED report, verification expert van Ess provided a five-step method, with the core logic shifting from "verifying content" to "tracing origins." He refers to a key step as "finding patient zero," tracing where information first appeared.

Real material usually has a traceable line to a specific person: a witness, a photographer, a locatable coordinate. Synthetic content has a telltale profile: anonymous, polished, born to be shared. It is like an orphan. No parents, no origin, only a perfect appearance.

Van Ess also offered a rule of thumb, simple but effective: "If a picture makes you feel too much like a movie, the lighting is too perfect, the composition is too symmetrical, that is your first warning. Real disasters are rarely symmetrical."

The era of verifying information is ending; the era of tracing its sources is beginning. It is, admittedly, resource-intensive work.

The second thing: Levy a "tariff" on your attention.

This is not the correct-but-useless advice to "spend less time on your phone." It means consciously placing a slowdown checkpoint before information enters your brain.

Specific practices should be:

When you see any content that triggers strong emotion, give yourself a 24-hour cooling-off period before deciding whether to share it. Most likely, by the next day you will no longer want to: the atmosphere will have dissipated, leaving only the information itself, and the information itself is rarely very shareable.

For any significant event you are prepared to believe, find at least three independent sources for cross-verification. Not three accounts on the same platform (they may be sharing the same source) but three different information channels.

Return to long-form text. Books and in-depth reports inherently provide a functionality that algorithms cannot: slowing down. Algorithms compress everything into 3-second judgments. Books force you to immerse yourself in the same thought process for 3 hours. This is not a loss of efficiency; it is a recalibration of thought. This is also a very resource-intensive task.
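The "three independent sources" rule above can be given a mechanical first pass. The sketch below is a toy illustration, not a real verification tool; the function names and example URLs are invented for this example. It uses distinct hosts as a crude proxy for independent channels, so three reposts on the same platform collapse into a single source:

```python
from urllib.parse import urlparse

def independent_source_count(urls):
    """Count distinct hosts among the given source URLs.

    Posts on the same platform share a host, so they collapse
    into one "source" here. This is only a crude proxy for
    independence: it cannot detect two outlets republishing
    the same wire report.
    """
    return len({urlparse(u).netloc.lower() for u in urls})

def passes_three_source_rule(urls, threshold=3):
    """Apply the 'three independent channels' rule of thumb."""
    return independent_source_count(urls) >= threshold

# Three accounts on the same platform are one channel, not three.
same_platform = [
    "https://x.com/a/status/1",
    "https://x.com/b/status/2",
    "https://x.com/c/status/3",
]
# Three genuinely different channels count separately.
mixed = [
    "https://x.com/a/status/1",
    "https://apnews.com/article/xyz",
    "https://www.reuters.com/world/abc",
]
print(passes_three_source_rule(same_platform))  # False
print(passes_three_source_rule(mixed))          # True
```

Even this toy filter captures the point in the text: counting accounts is not counting sources. Its obvious blind spot, outlets syndicating the same upstream report, is exactly why the final judgment of independence still requires a human.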

The third thing: Rebuild a small-scale, trust network based on real relationships.

If the large-scale public information space has irreversibly been "atmospheric," then the most important thing a person can do is not to discern true from false more smartly within that space but to find anchor points outside that space.

That article on the fusion of media and machines made a judgment: when everything visible is consumed, the only remaining "excess return" is those things that refuse digitalization. The experience of going offline, intuition, the silence between old friends that needs no words.

McLuhan is likely right again: electronic media will return us to a "tribal" state. You should seriously rebuild your tribe. Not the tribe an algorithm recommends, but the one you genuinely know, share memories with, and can question face to face.

In a world where everything can be faked, the most difficult thing to fake is a long-term relationship.


Being a slow person in an accelerating era

Returning to the image at the beginning of this article.

A superpower's president posts on social media in the early morning, threatening to destroy a civilization. The threatened country retaliates with AI Lego animations. The White House celebrates the bombing with an AI bowling video. The Iranian embassy jokes about the timing of the ultimatum. 3 billion views.

Every participant is accelerating. AI is accelerating in generation, algorithms in distribution, and emotions in contagion.

As WIRED concludes:

"In a system where synthetic content spreads faster than verification, the only real defense may be at the behavioral level: hesitation. A pause before sharing. In a system designed for zero thought, taking a few minutes to reflect."

In a world where all systems are accelerating, the most disruptive thing an individual can do may be to slow down. Not because slowing down will necessarily help you find the truth. Many times you might never find it. But because slowing down itself is a form of refusal.

Refusing to become what Arendt calls the "ideal subject." Refusing to let "true or false" become your default setting.

Orwell said lies would be recorded in history.

Maybe. But as long as there are people willing to pause for three seconds before clicking "share," history is not yet finished being written.

