You used AI to track 5000 talented individuals, but your world is becoming narrower.

Techub News
3 hours ago

Written by: Uncle Rust, Who Doesn't Understand the Classics

A friend recently excitedly showed me his new tool.

He follows about 5,000 people on Twitter: researchers, founders, investors, developers, media. After years of accumulation, his feed long ago became an endless waterfall. He had tried "read later" apps before, saving thousands of articles and ultimately reading just 5 of them. Like most people.

Now he uses an AI Agent to read, in full, the tweets of all 5,000 accounts and condense them into 20 essential summaries a day. In 10 days it produced 54 structured briefings. Content that used to take 2 hours to get through now takes 5 minutes. A noise-filtering rate of 95%.

He said, "The essence of information anxiety is the cost of filtering. When filtering is handed over to an Agent, anxiety disappears."

He is right. But only half right.

The anxiety has indeed disappeared. But so has something else: the knowledge you didn't know you needed to know. 5,000 tweets condensed into 20. Among the 4,980 discarded, there might be one from a field you've never followed, argued with a logic you've never heard, answering a question you thought you had already figured out.

You will never see it again. Because your Agent decided for you: this is not relevant to you.

In "Understanding Media," Marshall McLuhan made a claim that has been quoted countless times: the medium is an extension of man. Most people selectively ignore its other half: every extension is accompanied by an amputation.

The AI Agent has extended your information-processing ability. What it cut off is your capacity for serendipity. That ability to accidentally stumble upon an unexpected idea, to be struck by information completely outside your range of vision, forcing a reorganization of your entire cognitive framework.

The information cocoon is chosen by the algorithm. This time, it is built by your own hands.

Filtering is also a form of amputation

In 2011, Eli Pariser ran an experiment that is now almost forgotten. He had two friends search Google for "Egypt" at the same time. Their results were completely different: one saw pyramids and travel information; the other saw the crowds and protests of Tahrir Square.

He called the phenomenon the "filter bubble" and wrote a book of the same name. Pariser is an American internet activist and was executive director of the progressive organization MoveOn.org. In his account, the culprits were the algorithms: platforms quietly deciding what was worth seeing and what was not. The book was published eleven years before the birth of ChatGPT.

The situation hasn’t improved now; it has only become worse in a different way. The cocoon is no longer secretly constructed by the platform but is handed over for AI Agents to build. You signed a mandate, and you still think this is an efficiency tool.

This is a story about efficiency, but not the kind of ending you expect.

McLuhan presented an uncomfortable idea in "Understanding Media": every medium is an extension of the human body, and every extension must be accompanied by an amputation. The car extended the legs, so people stopped walking and the legs atrophied. The camera extended the eye, so memory was outsourced and the perception of visual detail degraded.

Now the AI Agent has extended our information-processing capacity. That in itself is not a problem. It can help you digest a hundred newsletters in three seconds, distill from your timeline what you "really need to pay attention to," and compress English originals into summaries that are easy for you to read. It certainly is an efficiency tool.

But McLuhan's question is: what is it amputating?

The answer is not time, not attention. What it amputates is cognitive wandering.

You haven't really gotten lost in the information flow for a long time

In 1973, the sociologist Mark Granovetter published a paper titled "The Strength of Weak Ties." He studied the job searches of hundreds of people and reached a counterintuitive conclusion: the information that truly brings opportunity rarely comes from your closest friends and colleagues. It comes from old classmates you haven't spoken to in years, distant relatives you see occasionally, strangers whose business cards you collected at a conference.

The reason is this: your closest relationships overlap heavily with your information. You see similar content, know similar people, and have similar judgments about the world. You have strong ties, but there is no new information in strong ties. Truly unfamiliar information can only come from weak ties, from those you hardly know.

The optimization logic of AI Agents is to continually strengthen strong ties. It understands you better and increasingly accurately pushes the content you are "interested in," while presenting fewer things that stray far from your taste. Weak ties in this logic are noise, impurities that need to be filtered out.

But that noise is the only place cognitive wandering exists.

In 2012, technology critic Evgeny Morozov wrote an article titled "The Death of the Cyberflâneur." He borrowed the figure of the flâneur from Walter Benjamin: the aimless wanderer of 19th-century Paris, turning down street corners, daydreaming at shop windows, watching the crowds, discovering the world in unexpected corners. For Benjamin, this wandering was the essential spiritual state of modern urban culture; it kept people open to the unexpected.

Morozov said that the early internet had this spirit of wandering. You wandered through hyperlinks, aimlessly, unpredictably; one article led to another, one name led to a span of history, and you never knew where you would land.

Now, such wanderers are almost non-existent. AI Agents are the ultimate guides; they plan the optimal route, know your destination, and ensure you never get lost. But because of that, you will only ever reach places you already know you want to go.

AI is exposing the truth of education; American universities are beginning to revive an old tradition.

Stop being friends with time; in the AI era, "space" is your friend to wealth.

This is not a new problem, but this time it is different

The issue of "too much information to read" is not an anxiety invented by our generation.

In 1545, the Swiss naturalist Conrad Gessner published the "Bibliotheca Universalis," an attempt to catalog every book then known. In the preface he complained that the number of books was "vast and harmful." That was roughly a hundred years after Gutenberg invented the printing press.

In 1685, French scholar Adrien Baillet directly warned that the increasing number of books "might plunge future generations into a state of barbarism."

It was against this background that Bacon made his famous remark: "Some books are to be tasted, others to be swallowed, and some few to be chewed and digested."

Every generation facing an information flood invents new "shortcuts": indexes, summaries, commonplace books, anthologies, curated reference works. Harvard professor Ann Blair studied the information-management practices of scholars from the 16th to the 18th century and found an ironic cycle: anxiety over the surplus of books spawned ever more books meant to help you "read less," and those books in turn worsened the surplus.

From this perspective, AI Agents are merely the latest generation of reading shortcuts. Nothing new.

But this time there is a fundamental difference.

All past information filtering tools—indexes, summaries, encyclopedias, editors, curators—share a common characteristic: they do not know who you are. Bacon's division of "tasting/swallowing/chewing thoroughly" is a public standard. An encyclopedia presents the same entry to all readers. A newspaper editor lays out content based on news value without changing headlines based on your reading history.

Agents know who you are. Their filtering criteria are a function of your behavioral data: what you have read, what you have highlighted, how long you lingered, what you skipped. The "essence" each person receives is different. Not because the world shows each of us a different face, but because each Agent curates the world according to its owner's cognitive model.

Past information shortcuts were "publicly narrow": you and everyone else read the same abstract and might stumble on the same surprises. Agents create a "private narrowness": each person is locked in a reading room of their own, stocked only with the books they already want to read.

There is one person whose existence illustrates precisely why this distinction matters.

Robert Cottrell, founder of The Browser newsletter, reads 1,000 articles a day. Not just scanning headlines; actually reading them. He subscribes to 700 RSS feeds and has read between 3 and 5 million articles over 10 years. Each day he picks 5 of the 1,000 to send to readers.

He also tried to use machine learning to replace himself. He trained a model with all past selections as training data. Result: the model selected 50 articles from 1,000, and about half were false positives.

His conclusion: "The more I read, the more convinced I am that the true guarantee of an article's quality is the author, not the publishing platform."

Cottrell's value lies not in the quantity he reads. Agents can also read a lot. His value is that his judgment criteria do not come from your behavioral data. What he recommends may confuse you, challenge your preconceptions, and come from a field you have never followed.

Agents do matching; Cottrell does triage. Matching makes you comfortable, triage stimulates your growth.

Algorithms are designed to squeeze, and squeezing gets you stuck

In computer science there is a classic dilemma known as the "explore/exploit tradeoff," usually framed as the multi-armed bandit problem. Imagine you are standing in front of a row of slot machines, not knowing which one pays out most often. You can "exploit": keep pulling the one machine that currently seems to give steady returns. Or you can "explore": try other machines at random, even if you win less in the short term.

A system that over-exploits gets stuck in a local optimum. You find a reasonably good machine and keep pulling it, never learning that a better one sits around the corner. The instinct of AI Agents is to exploit: to keep refining within the range of your known preferences. That is what makes them efficient, and also what makes it ever harder for you to encounter information that could change your worldview.
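The tradeoff can be sketched as a toy multi-armed bandit simulation (a hypothetical illustration; the arm probabilities, the epsilon value, and every name here are assumptions, not from the article). A purely exploiting agent locks onto the first machine it tries, while an epsilon-greedy agent that spends 10% of its pulls exploring discovers the better machine:

```python
import random

def run(eps, probs, steps=5000, seed=0):
    """Epsilon-greedy bandit: with probability eps pull a random arm
    (explore), otherwise pull the arm with the best observed average
    (exploit). eps=0 is pure exploitation."""
    rng = random.Random(seed)
    counts = [0] * len(probs)    # pulls per arm
    values = [0.0] * len(probs)  # estimated win rate per arm
    total = 0
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(probs))                       # explore
        else:
            arm = max(range(len(probs)), key=values.__getitem__)  # exploit
        reward = 1 if rng.random() < probs[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]       # running mean
        total += reward
    return total

probs = [0.3, 0.5, 0.8]             # arm 2 is the best machine
greedy = run(eps=0.0, probs=probs)  # never explores, stays on arm 0
curious = run(eps=0.1, probs=probs) # occasionally wanders, finds arm 2
print(greedy, curious)
```

With eps=0 the agent never re-evaluates its first choice: the other arms' estimates stay at zero, so it pulls arm 0 for all 5,000 steps and earns roughly 0.3 per pull, never discovering the 0.8 machine around the corner.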

There’s a story worth mentioning here.

At Cornell, Richard Feynman went through a professional crisis: he was exhausted and found no joy in physics. So he made a decision: abandon every "important" and "promising" research direction and, purely for fun, work out the wobble of a plate someone had tossed into the air in the cafeteria. It had nothing to do with any frontier research. No efficiency, no meaning. Just fun.

The motion of that plate eventually became the starting point of his theory of quantum electrodynamics, earning him the Nobel Prize in Physics in 1965.

If Feynman had an efficient enough AI Agent to help him plan his research path, it would surely filter out the option of "calculating the plate trajectory" because that was low priority, a waste of time, and didn’t match any keywords.

Smoothing is not neutral; it's a form of deprivation

The contemporary philosopher Byung-Chul Han argued in "The Expulsion of the Other" that contemporary society is eliminating every "other" that carries negativity, heterogeneity, and resistance.

Our world is becoming ever smoother, ever more seamless, free of friction. Algorithms always push what you like; your information flow no longer contains real disagreement, bewildering questions, or viewpoints you never expected to meet.

This ultimate smoothness is a state of cognitive vacuum.

Neuroscience has an intriguing finding: when humans stop performing goal-oriented tasks and enter a state of "daydreaming" or "spacing out," a system in the brain called the Default Mode Network is activated. In this state, the brain begins to connect distant and seemingly unrelated memories and concepts, which is the neurological premise for insight and creativity.

AI Agents keep our brains in task-execution mode, with extreme efficiency, for extended stretches. Information flows in continuously, is processed, categorized, prioritized: efficient, tidy, gapless. The very gaps that drop the brain into default mode are eradicated. Not because you lack time, but because your information flow is too full, too full to leave any space in which the brain can run aimlessly.

McLuhan said the message of the lightbulb is not what it illuminates but the fact that it makes night disappear. Likewise, the biggest impact of AI Agents is not how much content they help you process but that they fill every blank moment in the information flow. That bored state in which you click a strange link and end up lingering somewhere unexpected is disappearing.

Do not use AI logic to track your life

The world Pariser described in 2011 has one premise: you are a passive victim. It is the platform manipulating behind the scenes, it is the algorithm deciding what you can see without your knowledge. You can be angry, switch to a more "open" platform, and feel deceived.

Now the problem is harder to deal with, because you are complicit. You opened the AI Agent, let it help you process the information flow, let it decide what is worth your attention, let it condense 5,000 follows into a brief. You think you are saving time; in reality, you are signing a cognitive mandate.

You have relinquished the right to cognitive wandering. That filter understands you better and better, pleases you more and more, and its accuracy is exactly the degree to which your world shrinks. Of those 5,000 follows, what actually enters your consciousness each day is the small handful the algorithm deems most suitable for you, further refined by AI into the forms easiest for you to digest, and laid out before you.

How many spinning plates are being filtered out among those 5,000?

The ultimate efficiency is a person sitting deeper and deeper in their own echo, mistaking it for grasping the dynamics of the world.

AI is trained to look for patterns and consensus. When you use it to filter information, it pulls you toward the center of mass of all users.

Emily DeJeu, a professor at Carnegie Mellon University, said something worth reflecting on: "Humans are not creativity machines. Sometimes, we are most creative when we are least efficient."

Those seemingly time-wasting accidental readings: being drawn to a completely unrelated piece in the stream, jumping through a link into a field you've never heard of, discovering, in an article you would never otherwise have opened, a metaphor that changes your thinking. This is off-road training for the cognitive system.

Agents have paved a straight highway for you. The cost is that you can no longer walk the side roads covered with wildflowers.

Disclaimer: This article represents the author's personal views only and does not reflect the position or views of this platform. It is shared for informational purposes and does not constitute investment advice of any kind. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please email the relevant proof of rights and identity to support@aicoin.com, and the platform's staff will investigate.
