
In the AI era, how to tell whether a programmer is still living in the old times.

Techub News
4 hours ago

Written by: Silicon Valley Alan Walker

Shared by Silicon Valley Alan Walker during an outdoor walk at Stanford

In the past two years, the most common type of person in Silicon Valley has been the programmer who looks deeply unconvinced.

They would say:

“I have always been programming with AI.”

“I followed Claude, GPT, and various latest models very early.”

“I understand APIs, I have tried agents, and I have personally written Claude Code.”

“You say I can't use AI? I'm not convinced.”

The problem is, being unconvinced is useless.

Because in the end, the things that are produced don't lie.

The products are still not good enough.

Not sharp, not smart, not attractive, not the optimal solution.

The interface is rough, the processes are awkward, the experience is fragmented, the decision-making is inefficient, and in many areas it's obvious that “the person is still living in the old world, just with AI in hand.”

On the surface, they seem to be using AI.

But from the results, they don’t really know how to work with AI.

They treat AI as a cheaper, faster, and more obedient outsourced programmer, rather than a new species that requires reorganizing workflows, re-understanding products, and redefining collaboration relationships.

Alan Walker mentioned during his outdoor walk at Stanford that the real issue has never been "whether you use AI," but rather: is your thinking still stuck in the pre-AI era?

The following six behaviors are the most typical telltale signs of an old-era programmer.

01 Treating AI as a tool, not as a collaborator

Many programmers claim they are using AI, but in reality they just treat AI as “advanced code completion.”

When an error occurs, they ask a question.

When writing a function, they get a little help.

When building a module, they patch things together.

Essentially, it's still humans thinking first, AI executing later; humans defining the framework, AI filling in blanks; humans leading, AI doing menial tasks.

This is the biggest problem.

Because the true power of AI has never been “to save you some physical effort,” but rather its ability to participate in judgment, generate alternatives, discover blind spots, and reconstruct pathways — it can even get closer to better solutions faster than you can for many local problems.

But old-era programmers are unwilling to give up this part of power.

Deep down, they still think:

“The one truly thinking is me, AI is just a tool.”

So the final result is that AI's potential is compressed down to an IDE plugin.

They are not unleashing AI's capabilities; they are suppressing them with the old world's power structure.

To put it bluntly, they do not lack coding skills; they are afraid to rewrite themselves.

02 Using AI only in parts, yet hoping to achieve global optimization

Many people who claim “I have always been programming with AI” are actually only using AI when developing specific modules.

When writing a backend interface, they use it briefly.

When modifying a frontend page, they use it briefly.

When generating a few scripts, they use it briefly.

But from defining requirements, user pathways, interaction rhythms, information structures, copywriting, business goals, and exception handling, to the post-launch feedback loop, almost none of the full process is worked through together with AI.

This is why the products they create often have a very obvious “patchwork feel.”

The code might be fast.

But the product is not smarter.

Because local optimization will not automatically lead to global optimization.

The quality of a system depends on how well the modules can collaborate, rather than how beautifully a single module is written.

The most common mistake made by old-era programmers is to take the improvement of local efficiency as a sign that they have achieved a system-level leap.

This is exactly the manual era's way of working.

Back then it was people hand-stitching modules; today it is people hand-stitching AI output.

It looks newer, but fundamentally nothing has changed.

People who do not collaborate with AI for system design are essentially just speeding up the old workflow a bit, not entering a new paradigm.

03 Unable to communicate with AI, yet think they can write prompts

Many programmers' understanding of AI still stays at the level of “Can I write prompts?”

This is like equating dating to “Can I say an opening line?”

The real issue has never been prompts.

The real issue is:

Do you have the ability to engage in ongoing, high-quality dialogue with AI, progressively narrowing down a vague problem into a high-quality result?

This requires more than a single incantation.

It involves breaking down, probing, correcting, comparing, filtering, reconstructing, and restating.

You need to know when to delegate power and when to reclaim it; when to let AI diverge and when to force convergence; when it is slacking off, and when it has actually seen things you haven't.
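The difference between a single incantation and an ongoing dialogue can be sketched as a loop. This is purely illustrative: `ask_model` below is a hypothetical stand-in for whatever chat-model call you use, and the round count and critique prompts are arbitrary choices, not a recommendation.

```python
# Illustrative sketch: one-shot prompting vs. an iterative dialogue.
# `ask_model` is a hypothetical stand-in for a real chat-model API call.

def ask_model(prompt: str) -> str:
    """Placeholder: substitute any LLM API call here."""
    return f"draft({prompt[:30]})"

def one_shot(task: str) -> str:
    # Old-era style: fire one "incantation" and accept whatever comes back.
    return ask_model(task)

def iterative(task: str, constraints: list[str], rounds: int = 3) -> str:
    # Break down: restate the task with explicit constraints up front.
    draft = ask_model(task + " | constraints: " + "; ".join(constraints))
    for _ in range(rounds):
        # Probe: ask the model to attack its own draft.
        critique = ask_model("list weaknesses of: " + draft)
        # Correct: converge by revising the draft against the critique.
        draft = ask_model("revise " + draft + " to fix " + critique)
    return draft
```

The point of the sketch is only the shape: the second function narrows a vague task through repeated critique and revision instead of accepting the first answer.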

Most programmers do not possess this ability.

Their relationship with AI resembles low-quality meetings:

Problems are not clearly defined, goals are not thoroughly discussed, constraints are not explained, judgment standards are missing, deviations go unquestioned, and then the other side gets blamed for performing poorly.

This is not because AI is not smart.

It’s that you are not good at communication.

From a psychological perspective, these people have a strong illusion of competence.

Because AI responds immediately, they mistakenly believe that they have established effective collaboration.

But “having a response” does not equal “having consensus,”

“able to run through” does not equal “being optimal,”

and “written out” certainly does not equal “the product is completed.”

Those who cannot engage in deep dialogue with AI essentially remain in the command-driven era.

However, AI is not a shell; it is more like an intelligent colleague that needs to be tamed, understood, and guided.

04 Obsessed with models, parameters, APIs, yet failing to understand users

Old-era programmers are particularly prone to being obsessed with a range of things:

Model rankings, context lengths, inference speeds, parameter scales, invocation costs, framework genres, API details.

These are certainly important.

But the greatest absurdity for many is: they are very knowledgeable about models while knowing nothing about the users.

They cannot clearly say where users are stuck.

They do not know why users do not want to click.

They cannot understand why users feel this product is “not smart enough.”

They study all day long about “what models can do,” rather than “what users actually want to accomplish.”

This is typical engineer narcissism: considering what they can see as the most important thing in the world.

But products are not a showcase of technical capability.

Products are a compressor of user intentions.

Those who can more accurately understand human needs, emotions, hesitations, laziness, fears, and expectations are more likely to create truly sharp things.

This is especially true in the AI era.

Because “what can be done” will become increasingly cheap, but “what to do, how to do it, and what kind of feeling it generates” will become increasingly valuable.

So many programmers use the most advanced models, yet the products they deliver are still heavily engineering-oriented.

Functions are present, but the soul is absent.

Paths are there, but the appeal is missing.

The product runs, but no one wants it.

As technical capability proliferates, what is truly scarce is not the ability to write code, but the ability to understand people.

05 Still hand-stitching processes, fundamentally not entering the AI-native workflow

The most hidden, and also the most deadly, foolishness among programmers is embracing AI on the surface while the underlying processes remain those of the manual era.

Requirements are imagined by themselves.

Designs are brainstormed by themselves.

Architectures are predetermined by themselves.

A few steps in between are handed to AI to write.

Finally, everything is stitched together by hand.

When something goes wrong, they revert to manual fixes.

The problem with this set of processes is that it assumes “humans are the only central dispatchers.”

AI is merely a small part inserted into the workflow.

But the native way AI works should not be like this.

A truly efficient way is to include AI in the entire closed loop from the start:

Together define problems,

together break down goals,

together find solutions,

together simulate branches,

together examine user pathways,

together anticipate failure points,

together iterate copy, interactions, structures, technical implementations, and launch strategies.

In other words, it's not about you occasionally calling on AI, but about you and AI together forming a dynamic system.
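The contrast between "AI as an inserted part" and "AI in the whole closed loop" can be sketched as two pipelines. Everything here is illustrative: the stage names and the `ask_model` stand-in are hypothetical, not a prescribed workflow.

```python
# Illustrative sketch: AI called at one stage vs. AI in every stage.
# `ask_model` is a hypothetical stand-in for a chat-model call.

def ask_model(prompt: str) -> str:
    return f"ai({prompt})"

STAGES = ["define problem", "break down goals", "find solutions",
          "simulate branches", "review user paths", "anticipate failures"]

def old_workflow(task: str) -> list[str]:
    # Humans decide every stage; AI is called only for the coding step.
    return [f"human({stage})" for stage in STAGES] + \
           [ask_model("write code for " + task)]

def ai_native_workflow(task: str) -> list[str]:
    # AI participates in the whole closed loop, stage by stage.
    return [ask_model(f"{stage}: {task}") for stage in STAGES]
```

In the first pipeline the human is the only central dispatcher; in the second, every stage is a joint step, which is the "dynamic system" the text describes.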

From a system engineering perspective, many old-era programmers produce poor output not because their capabilities are insufficient at individual points, but because the entire system’s information flow, decision-making flow, and feedback flow are designed too primitively.

They use the most advanced models, yet operate under the most backward collaborative protocols.

This is like putting a rocket engine on a horse-drawn carriage; in the end, it will only become more dangerous, not more advanced.

06 Talking about AI, but internally maintaining the dignity of the old era

This is the most heart-wrenching point and also the most fundamental.

Many programmers cannot get past this, and it is not a technical issue but an identity issue.

On the surface, they are learning about AI,

but in reality, they are defending themselves.

Because once they truly admit that AI can participate in thinking, design, and judgment, and even could be smarter than themselves in many aspects, the most precious self-identity of the old-era programmers will be shattered:

“I am the one who understands systems the best, can solve problems best, and is the most irreplaceable person.”

Thus, many will exhibit a subtle behavior:

They frequently talk about AI but only use it in places that do not hurt their self-esteem.

They can let it complete code,

but not let it propose requirements;

they can let it fix bugs,

but not let it challenge their judgment about products;

they can let it execute,

but not let it participate in decision-making.

This is not about technical conservatism.

This is personality defense.

They are not unaware of AI's strength,

they cannot accept no longer being at the center of the new world by default.

But the problem is, the era will not stop for your dignity.

The most brutal aspect of the AI era lies in:

It will not wait for you to be psychologically ready before rearranging the distribution of value.

In the past, programmers earned status through mastering scarce technologies.

In the future, what will truly be valuable may be who can collaborate with AI to create results.

It’s not about who writes the hardest code,

but who learns first to let go of the old identity and enters new collaborative relationships.

Many old-era programmers do not lose because they cannot use AI, but because they are unwilling to admit:

their past expertise is rapidly depreciating.

07 Conclusion

Alan Walker says the most laughable type of person in the AI era is not the one who completely does not use AI.

It is those who have started using AI but still think of themselves as merely “upgrading a set of toolchains.”

They do not see what has really changed.

The change has never been about coding speed,

but rather about the authority of decision-making, collaboration relationships, the way products are generated, and the new power structure between humans and intelligence.

Therefore, identifying old-era programmers is not difficult.

It does not matter whether they know how to use Claude, whether they can connect to an API, or whether they understand the latest models.

You only need to look at one thing:

Are they truly working with AI to create products, or just using AI to continue repeating their old selves?

The former has just begun.

The latter will soon be left behind by the times.


