The Era of Pseudo-Intelligence: AI does not make you stupid; it makes you lose the opportunity to become smart.

Source: Techub News

Written by: Rust, who does not understand scriptures

There is a startup in San Francisco called Cluely that builds tools for getting real-time AI answers during meetings, interviews, and coding sessions. It has two slogans: "Stop thinking" and "The end of thinking."

My first reaction on seeing these two statements was not shock. I found them very honest, and they capture the spirit of our times.

Where is the honesty? It lies in Silicon Valley's precise judgment that what can actually be sold in this era is no longer "making you smarter." That pitch sounds lofty, but products cannot deliver on it.

What can actually be sold is "making you not need to be smart." These are not different degrees of the same thing; they point in opposite directions. The former requires friction, while the latter promises to help you bypass it.

What I want to discuss in today’s article is something a bit darker than "AI makes people stupid":

AI may not have made you dumb; it has merely made you lose the process of becoming smarter.

I will use one term repeatedly for this state: pseudo-intelligence. It is worth clarifying up front that the emphasis of the term is not on AI but on people.

Pseudo-intelligence is not about the content output by AI being false. Most of the time, AI’s output is not false, and it is becoming less false. Pseudo-intelligence is a state in which a person uses AI to appear to be smart, yet has not undergone the necessary training behind that intelligence.

The essays students submit, the market analyses CEOs read, the diagnostic recommendations doctors receive, the policy reports governments commission: all of them "look correct." Between "looks correct" and "is correct" lies an entire missing thought process. Between "looks smart" and "is truly smart" lies an entire course of training that no one is willing to undertake.

Being smart has never been a state, but a habit. A person does not become smart simply by knowing the answer; they become capable of finding the answers again in the future by repeatedly going through the process of "finding the answer themselves." After AI eliminates this process, the results remain, but the capability is lost.

This issue unfolds on three levels at once. The first is the cognitive surrender of individuals; the second is the algorithmic monoculture of organizations and education; the third is governments that do not understand technology outsourcing civilizational judgment to a system that is wrong 33% to 90% of the time.

Let’s examine each level one at a time.

1. Cognitive Surrender: When You Are No Longer the Source of Judgment

In 2015, Professor Esko Penttinen of the Aalto University School of Business noticed something peculiar. At a top Finnish financial firm, a key piece of software designed to accelerate accountants' work was facing abandonment. The problem wasn't that it worked poorly; the problem was that it worked too well.

The software automated the accounting for fixed-asset management and almost never made a mistake. Yet one of the accountants told Penttinen: "Once this job is automated, you stop thinking deeply about the essence of things." A key skill was turning into a lost ability. The executives were shaken enough that the final decision was to shut down the perfectly functioning software and retrain employees in the principles of fixed-asset accounting.

For the company to survive, the software had to die.

This story was later written into a research paper Penttinen co-authored, titled "The Vicious Circles of Skill Erosion: A Case Study of Cognitive Automation." The paper is an early warning: when machines take over tasks that humans used to think through, people do not automatically move on to thinking about more important things; they stop thinking altogether.

This judgment was precisely confirmed by a set of experimental data in 2024.

Steven Shaw and Gideon Nave of the Wharton School at the University of Pennsylvania designed a clever experiment. They had volunteers answer challenging questions with AI assistance, deliberately inserting errors into some of the AI's answers. The results split into two scenarios:

When the AI provided a correct answer, the volunteers using AI performed better than the control group that relied entirely on themselves. This was an expected result.

When the AI gave an incorrect answer, the performance of those using AI was far worse than the control group. In other words, they failed to recognize that the AI was wrong and simply adopted its answer. Shaw and Nave coined a term for this state: cognitive surrender.

The name fits. It is very different from "cognitive offloading." Offloading means "I let the tool handle this task," on the premise that you still judge whether the tool's output is reliable. Surrender means "I let the tool handle the judging." In the former, you are still using your brain; in the latter, you are just a microphone.
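
To make the distinction concrete, here is a toy simulation. It is not the Wharton protocol, and the accuracy numbers (AI_ACC, HUMAN_ACC, CATCH) are invented for illustration: a user who surrenders tracks the AI's accuracy exactly and collapses to zero on the questions the AI gets wrong, while a user who offloads but keeps judging retains a floor set by their own ability.

```python
# Toy model, not the Wharton protocol; all probabilities are hypothetical.
import random

random.seed(0)

AI_ACC = 0.8      # assumed: the AI answers 80% of questions correctly
HUMAN_ACC = 0.6   # assumed: an unaided human gets 60% right
CATCH = 0.7       # assumed: chance a vigilant user spots a wrong AI answer

def answer(mode: str, ai_right: bool) -> bool:
    """Is the user's final answer correct under a given strategy?"""
    if mode == "surrender":              # adopt the AI answer unconditionally
        return ai_right
    if mode == "offload":                # use the AI, but keep judging it
        if ai_right:
            return True
        if random.random() < CATCH:      # error caught: fall back on oneself
            return random.random() < HUMAN_ACC
        return False                     # error missed: adopted as-is
    return random.random() < HUMAN_ACC   # unaided baseline

N = 100_000
for mode in ("alone", "surrender", "offload"):
    overall = wrong_ok = wrong_n = 0
    for _ in range(N):
        ai_right = random.random() < AI_ACC
        ok = answer(mode, ai_right)
        overall += ok
        if not ai_right:
            wrong_ok += ok
            wrong_n += 1
    print(f"{mode:>9}: overall {overall / N:.2f}, "
          f"when the AI is wrong {wrong_ok / max(wrong_n, 1):.2f}")
```

Under these made-up numbers, surrender matches the AI overall (0.80) but scores 0.00 on exactly the subset where the AI is wrong, below even the unaided baseline (0.60), which is the shape of the Wharton result.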

The more AI resembles a smart person, the more easily people forget it is just a probabilistic system.

The biggest danger is not that AI is wrong; it is that AI is wrong with great confidence, fluency, and coherence. A tool that visibly might be wrong keeps people on guard; a tool that appears never to make mistakes leads people to let go completely. Cluely can put its slogans into words because it has understood this clearly: what users want is not a smarter assistant but an object that lets them let go entirely.

However, what truly compels me to write this down is the cost at another level. The cost is not that people using AI make more mistakes in a particular task; it is that they are losing the ability to independently make the right judgment in the future.

This brings us to something few commentators mention: many jobs considered low-value are actually the training grounds for high-value skills.

Junior lawyers researching cases are not doing it to find that one case; they are doing it to develop a sense of "which details can determine winning or losing" while browsing through hundreds of cases.

Young analysts organizing data are not doing it to submit a form; they are doing it to gradually grasp "what the real mechanisms of the industry are" within the data.

Programmers debugging are not doing it to make the program run; they are doing it to cultivate the judgment of "where in the code things are most likely to go wrong" through repeated error localization.

Editors revising drafts are not doing it to make sentences smoother; they are doing it to develop a natural instinct that prevents them from making the same mistakes when they write their own texts.

Students writing essays are not doing it to submit an essay; they are doing it to transform vague feelings into clear language through the processes of getting stuck, overturning, and rewriting.

These tasks share a common feature: they look inefficient, but they provide desirable difficulty. This concept was proposed by cognitive psychologist Robert Bjork at UCLA in the 1990s and has since been validated by hundreds of studies: if learning occurs with no friction at all, the brain will not create deep imprints; only through a certain degree of struggle will long-term memory and transferability develop.

What AI is doing now is swallowing such work in whole segments. It does not start replacing tasks at the top of the work chain; it starts at the bottom, with the segments that seem least important. But that bottom is precisely the path along which judgment forms, and it is being eaten away.

I previously wrote a line in an article titled "AI Folding": what algorithms first take away is not your job, but the most growth-oriented part of your job.

That article was framed in terms of distribution: who sits upstream of the algorithm and who sits downstream. Looking back today, the erosion is even more thorough. The accountants in Finland were not laid off; they just stopped thinking. The volunteers in the Wharton experiment were not unemployed; they simply relinquished the work of judgment. AI is not taking your job; it is taking the part of your job that makes you stronger.

The work remains, the people remain, but the possibility to grow stronger is gone.

2. Algorithmic Monoculture: When Everyone Thinks with the Same External Brain

The British conservative historian Niall Ferguson gave this era the name "pseudo-intelligence." He originally used the term for the performance of intelligence put on by university students who have AI write their papers: the submissions look highly professional, but no real cognitive work stands behind them.

Ferguson's usage is quite narrow. In this article I want to widen the term's scope, because the same phenomenon is spreading from classrooms into the entire organizational world, and at the organizational level the consequences are harder to reverse than at the individual level.

First, let’s look at a set of data.

In 2024, a research team from Meta and two universities published a study that scanned 21 of the most advanced large language models on the market (including GPT-4, Claude, and Gemini) and tested their answers to common questions. The conclusion comes down to one quiet term: algorithmic monoculture.

Faced with the same question, the 21 models produced answers that converged strongly in structure, stance, and wording.

This is no coincidence. The models overlap heavily in training data, use similar alignment processes, and draw on closely related sources of human feedback. At bottom, they are variants of the same model.
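
The study's own metrics are not reproduced here, but the idea of measuring convergence can be sketched minimally: collect each model's answer to the same question and score pairwise overlap. The three one-line answers and the bag-of-words Jaccard score below are illustrative stand-ins, not the paper's data or method.

```python
# Minimal sketch of quantifying answer convergence across models.
# The answers are invented; a real study would use stronger similarity
# measures (e.g., embeddings) over many questions.
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Lexical overlap between two answers: 0.0 (disjoint) to 1.0 (identical)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# Hypothetical answers from three models to the same question.
answers = {
    "model_a": "remote work improves productivity but weakens team cohesion",
    "model_b": "remote work boosts productivity but weakens team cohesion",
    "model_c": "remote work improves productivity while weakening team cohesion",
}

scores = {
    (m1, m2): jaccard(answers[m1], answers[m2])
    for m1, m2 in combinations(answers, 2)
}
for pair, s in scores.items():
    print(pair, f"{s:.2f}")
print("mean pairwise convergence:", f"{sum(scores.values()) / len(scores):.2f}")
```

High mean pairwise similarity across supposedly independent models is the monoculture signature: the diversity of brands masks the sameness of the answers.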

So, the question arises: what happens when hundreds of millions of people around the world use these 21 models to ask the same kind of question, modify the same types of documents, write similar reports, and make the same kinds of decisions?

Dr. Nataliya Kosmyna of the MIT Media Lab provided a specific answer. She led a team that studied students who used ChatGPT to write essays, comparing them with students who used only search engines and students who wrote entirely on their own. The conclusion: the essays from the AI-using group were "very homogeneous, and they were all quite similar." These students kept circling a narrower set of viewpoints, and their creativity dropped markedly.

Kosmyna's team also monitored the students' brain activity with EEG headsets. The ChatGPT users formed shallower neural connectivity while working on their topics, showed weaker critical thinking, and laid down less long-term memory, weaker even than the group using Google Search.

This is pseudo-intelligence at the organizational level: the output looks diverse but is in fact the echo of a single voice. Everyone feels AI has handed them unique insight, when in reality hundreds of millions of people are arriving at nearly identical insights, varying only slightly in phrasing.

Google's AI Mode, launched this year, pushes this process to its extreme. A search used to return ten blue links; you clicked a few and judged which were credible and which deserved a close read. Now the same search returns an AI-generated summary. What you see is whatever the AI decides to show you; the remaining links are the ones you will never click.

I previously wrote an article titled "It's Not AI That Makes People Stupid." The core judgment then was that society is systematically shifting its rewards from "thinking deeply" to "provoking emotion."

With AI, this reward mechanism has acquired its final piece: now even the emotional provocation does not need to be done by you; the model does it for you. You only have to press the generate button.

This points to a judgment Heidegger expressed in his famous 1953 lecture "The Question Concerning Technology": the real danger of technology lies not in technology itself but in how it reveals the whole world as a resource to be scheduled, calculated, and optimized. In his view, technology is not a neutral tool; it is a way of seeing the world. Once humanity sees the world this way, forests cease to be forests and become mere stocks of lumber; rivers cease to be rivers and become mere hydropower potential.

AI is the cleanest sample of this logic today. It transforms language into callable resources, knowledge into callable resources, viewpoints into callable resources, taste, style, and creativity into callable resources, and ultimately, even "thinking like a smart person" itself becomes a callable resource.

When everything becomes callable, one thing will quietly be eradicated: the willingness to stop and genuinely understand something.

Because understanding is slow, uneconomical, and cannot be reused. It has no place in a system that worships efficiency.

Pseudo-intelligence is not imposed on anyone; it is selected for by efficiency itself. When everyone around you uses AI to generate a report in thirty seconds, write an article in ten minutes, and finish a project in an hour, insisting on spending three days to understand something marks you as a waster of time. The system will not reward you; it will eliminate you.

This is the second layer of pseudo-intelligence: it is not just a way of using a tool; it is becoming an environmental pressure. When everyone thinks with the same external brain, the world does not become smarter; it becomes more uniform.

3. Outsourcing Civilization to a System That Will Be Wrong Half the Time

Plato told a myth in the "Phaedrus."

The inventor of writing, Theuth, presented his gift to King Thamus, claiming it would strengthen people's memory and wisdom. Thamus replied: no, this will only make people rely on external symbols and wither true memory. People will seem to know a great deal while merely possessing a great many external records.

In the two and a half thousand years since, Thamus's prophecy has been reactivated every few generations: writing raised worries about memory, printing about focus, broadcasting about seriousness, television about depth, search about understanding. Each time a new tool appears, someone repeats Thamus's warning, and each time the worry is partly disproven and partly confirmed by practice.

AI raises concerns about the last thing: judgment itself.

The peculiar thing this time is that earlier tools replaced particular abilities, while this tool replaces judgment itself. A lost ability can be relearned; once judgment is relinquished, you cannot even judge whether relinquishing it was the right call.

In 2025, the UK Labour government announced an "AI Action Plan." Its core, drafted by technology investor Matthew Clifford, whom the government brought in, is to mainstream AI across the public sector through "comprehensive digitalization," reportedly saving the country £46 billion.

The report contains language like this: AI will shorten waiting times, identify bottlenecks, make public services "feel more personable," and even curb prison riots before they occur.

The British novelist Ewan Morrison, however, argues that the government is introducing a technology with an error rate somewhere between 33% and 90% into healthcare, the military, and education, fields it should not be introduced into at all.

We are not embracing technology. We are letting a system that is wrong half the time make our most important judgments.
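
A back-of-envelope calculation makes the stakes concrete. The numbers below are illustrative, not drawn from the plan: if each automated step in a decision chain is independently correct with probability p, a chain of n such steps is entirely correct with probability p**n, so even a "mostly right" system degrades fast once its outputs feed further automated decisions.

```python
# Illustrative arithmetic: end-to-end reliability of a chain of n
# automated steps, each independently correct with probability p.
for p in (0.90, 0.67, 0.50):   # per-step accuracy (10%, 33%, 50% error rates)
    for n in (1, 3, 5):        # number of chained automated steps
        print(f"per-step accuracy {p:.2f}, {n} steps -> {p ** n:.2f} end-to-end")
```

At a 33% per-step error rate, a chain of just three steps comes out entirely right less than a third of the time.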

What is more absurd is that most of the people making this decision do not understand how the system works. They see the savings figures on the slides, the carefully selected success stories in vendors' presentations, and the urgency manufactured by other countries doing the same thing.

They lack the ability to ask the most basic counter-questions: under what circumstances might this system fail? Who bears the cost when it does? If we discover its flaws only once we can no longer live without it, is there a fallback?

When those being governed understand the technology and those governing do not, a peculiar inversion occurs: technology becomes magic. How does someone who does not understand magic govern magic? He can only trust whoever claims to understand it.

This state is not without counterexamples. Georg Zoeller, a former Facebook engineer, is now in Singapore serving as a technical advisor to its government. In his observation, Singapore is a different kind of sample: technological decisions are made by people who understand technology, and AI is introduced in the posture of engineering, with costs calculated and fallbacks prepared, rather than in the posture of religion, as a bet that cannot be unwound.

The significance of the Singapore sample is that another posture of governance is possible: a country can use AI seriously while refusing to believe in it. That posture has not been established in the UK, the United States, or most of Europe.

I once judged that AI is the last battle between capital and labor. Looking back today, that battle is accelerating, but in a manner darker than I originally wrote. Labor has not yet organized any resistance; it has already relinquished its judgment. Capital does not need to strip workers of their capacity to resist; workers are giving it up voluntarily. Governments do not need to suppress citizens' judgment; citizens are outsourcing it voluntarily.

This reminds me of a short story written by British novelist E.M. Forster in 1909, titled "The Machine Stops." The story is set in a future society that is heavily dependent on a giant machine, where people's living, socializing, education, and entertainment are all achieved through this machine. Over time, people forget how the machine operates and how to fix it; they begin treating it as a god.

The story's climax comes when the machine begins to malfunction. People do not treat the failures as a crisis but as the inscrutable wisdom of a god, having long since turned the machine into something beyond question. By the time they finally grasp that the machine really can stop, no one remembers how to repair it.

This novel was written in 1909. Forster had no computers, no internet, no AI. What he described is not technology but the relationship between humans and technology, and what happens when humanity completely relies on a system they no longer understand.

A government that does not understand technology is outsourcing decisions to a model that does not understand the world. This is not called governance; it is called gambling.

4. Keep the Opportunity to Get Smarter

Some observers believe that "if we are becoming increasingly dumb, we can hardly blame AI. This is all our doing."

Reading that sentence, I cannot quite agree.

Shifting the blame onto oneself sounds very moral, but it is another form of laziness. Becoming dumb is never a purely personal choice; it is the product of a whole set of reward mechanisms, policy directions, business models, and educational systems acting in concert. When everyone around you uses AI to finish in thirty seconds a task that takes you three days, when your boss, holding an AI-generated proposal, asks why you submitted only one idea this week, when your child tells you the whole class uses ChatGPT for homework, then choosing to "think for yourself" becomes a luxury rather than a virtue.

The real issue is not "how do we become less reliant on AI." The real issue is "how do we redesign an environment that rewards thinking."

This matter cannot be resolved by individual will. Can schools forcibly take AI offline for certain courses? Can companies retain certain tasks that must be done manually? Can governments legislate to limit AI intervention in the most sensitive areas of decision-making? These are institutional issues, not moral ones.

But until institutional responses arrive, there is one limited action everyone can take: keep the power of judgment in your own hands. Not refusing to use AI, but auditing for yourself what you use it for. Not resisting efficiency, but understanding that some things are worth doing slowly. Not returning to a world without AI, but maintaining, in a world full of AI, a space that AI is not allowed to enter.

As McLuhan pointed out, every new tool makes certain things obsolete while reviving others. AI has made "knowing the answer" obsolete. What should be revived?

It should be the ability to ask questions. The ability to judge whether an answer is worth trusting. The ability to keep moving forward in places where there are no answers. These are the capabilities that AI can never replace.

But the prerequisite is that you must still retain them.

What is truly scarce in the age of pseudo-intelligence is not answers, but people who are still thinking.
