In a highly automated future, humanity's core value will return to aesthetics, judgment, and deep understanding: "You can outsource thinking, but you cannot outsource understanding."
Written by: Bao Yilong
Source: Wall Street Journal
Andrej Karpathy, a co-founder of OpenAI, argued in a recent interview that large language models are a "new type of computer" that is thoroughly reshaping the computing architecture.
On April 29, Andrej Karpathy, a leading figure in AI who played a key role in developing Tesla's Autopilot and was a founding member of OpenAI, delivered an in-depth analysis of the current leap in AI agents and its profound impact on the software and hardware ecosystem at an event hosted by AI Sent.

Karpathy said that it was in December of last year that he realized agent-centered workflows had become genuinely usable, a shift that marks the substantive arrival of the Software 3.0 era. In his words:
Many people still have an impression of AI from last year centered on ChatGPT, but you need to rethink this, especially starting from December—things have fundamentally changed.
He also introduced the new term "agentic engineering" to distinguish it from "vibe coding," the phrase he coined last year: the former refers to sustaining and accelerating professional software development without lowering its quality standards.
He frankly stated that, under the new paradigm, a large amount of existing code and applications "should not exist," and that most organizations' recruitment processes, development tools, and infrastructure are still designed for humans rather than agents.
Dawn of Software 3.0: Power Transfer of Underlying Computing Architecture
The tech industry stands at an inflection point where quantitative change is tipping into qualitative change.
December of last year was a key turning point, and Karpathy admitted that the latest AI models left him profoundly shaken:
The code blocks the system generates are increasingly flawless; I can't even remember the last time I modified one. I simply trust this system more and more... (This made me) feel more behind as a programmer than I ever have.
What shocked him is nothing less than a complete subversion of the computing paradigm, and in Karpathy's view, the market is currently underestimating the depth of this change.
He pointed out that we are bidding farewell to "Software 1.0 (writing code)" and "Software 2.0 (organizing datasets to train neural networks)," and officially stepping into the "Software 3.0" era.
In this new epoch, large language models themselves are a "new type of computer." He said:
Programming now becomes writing prompts, and the contents of the context window are the lever you use to steer the large language model, which acts as an interpreter executing computation over the space of digital information.
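Karpathy's framing, the prompt as program and the context window as the lever, can be illustrated with a minimal sketch. Everything here is an illustrative assumption: `build_context` and `call_llm` are hypothetical names, and `call_llm` is only a placeholder for whatever chat-completion API you actually use. The point is that in Software 3.0 the "program" is assembled as text inside the context window.

```python
def build_context(instructions: str, documents: list[str], query: str) -> str:
    """Assemble the 'program' that the LLM-as-interpreter will execute.

    In the Software 3.0 framing, this string *is* the program: the
    instructions are the code, the documents are the working data,
    and the context window is the machine's address space.
    """
    parts = ["# Instructions", instructions, "# Reference material"]
    parts += [f"- {doc}" for doc in documents]
    parts += ["# Task", query]
    return "\n".join(parts)


def call_llm(context: str) -> str:
    """Hypothetical stand-in for a chat-completion API call.

    A real implementation would send `context` to a model endpoint;
    here we only mark where that call sits in the workflow.
    """
    raise NotImplementedError("plug in your model provider here")


prompt = build_context(
    instructions="Answer in one sentence.",
    documents=["The context window is the lever."],
    query="What controls the LLM-as-interpreter?",
)
```

The "computer" never sees your source files directly; it sees only whatever text you manage to fit and arrange inside this window.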
What catches the market's attention even more is his bold prediction regarding the evolution of future underlying hardware architecture. Currently, neural networks still run in a virtualized form on existing computers, but he believes this subject-object relationship will be reversed in the future:
You can imagine that neural networks will become the main process, while CPUs will become some sort of co-processor. Neural networks will take on the bulk of the heavy work.
This means that the strategic core position of "intelligent computing power," which dominates capital expenditure across the market, will be further solidified in the future.
Next-Generation Infrastructure: Reconstructing an "Agent-native" Ecosystem
As execution and coding are taken over by machines, where will human core values and the future form of infrastructure go?
Karpathy stated bluntly:
Everything must be rewritten.
The current documentation for various frameworks and libraries on the internet is still "written for humans," which greatly frustrates him. Karpathy complained:
Why should you still tell me how to do it? I don’t want to do anything. What text should I copy and paste to my AI agent?
The future market opportunities lie in building "agent-first" infrastructure.
In this world, systems are decomposed into "sensors" that perceive the world and "executors" that transform it; data structures must be highly readable to large language models; and machine agents will interact in the cloud on behalf of individuals and institutions.
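The sensor/executor decomposition can be sketched as two narrow interfaces. This is a toy illustration of the idea, not an established API: all class and method names below (`Sensor`, `Executor`, `InboxSensor`, `ReplyExecutor`) are assumptions invented for the example.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Observation:
    """Structured, machine-readable state of some slice of the world."""
    source: str
    payload: dict


class Sensor(Protocol):
    """Perceives the world and returns data an LLM can readily consume."""
    def read(self) -> Observation: ...


class Executor(Protocol):
    """Transforms the world on behalf of a person or institution."""
    def act(self, command: dict) -> bool: ...


class InboxSensor:
    """Toy sensor: exposes an inbox as structured data for an agent."""
    def __init__(self, messages: list[str]) -> None:
        self.messages = messages

    def read(self) -> Observation:
        return Observation(source="inbox", payload={"unread": self.messages})


class ReplyExecutor:
    """Toy executor: records the replies an agent decides to send."""
    def __init__(self) -> None:
        self.sent: list[str] = []

    def act(self, command: dict) -> bool:
        self.sent.append(command["reply"])
        return True


sensor = InboxSensor(["Meeting moved to 3pm"])
executor = ReplyExecutor()
obs = sensor.read()
executor.act({"reply": f"Acknowledged: {obs.payload['unread'][0]}"})
```

The agent sits between the two interfaces: it reads structured observations, reasons over them in its context window, and emits commands, so neither side of the system needs to be designed around a human reader.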
In such a highly automated future, what remains scarce in humans will return to aesthetics, judgment, and the deepest understanding of the business.
Karpathy quoted a phrase he has repeatedly pondered as a summary:
You can outsource your thinking, but you cannot outsource your understanding.
Agentic Engineering: An Explosion of Productivity Beyond "10x Engineers"
On the most closely watched dimension, productivity gains, Karpathy distinguished two core concepts: "vibe coding" and "agentic engineering."
He pointed out that "vibe coding" raises the floor, letting anyone build software, while "agentic engineering" defends the quality ceiling of professional software.
"Agentic engineering" is not just about speeding up; it requires developers to coordinate those "potentially error-prone, random yet extremely powerful" AI agents to move forward at full speed without sacrificing quality.
This will also greatly expand the imaginative space for corporate output. Karpathy pointed out:
People used to talk about '10x engineers'; 10x is no longer enough to describe the speedup you can achieve. In my opinion, those who excel in this field can produce outputs far exceeding 10x.
In the face of this explosion of productivity, corporate organizational structures and talent selection logic must be reconstructed.
He suggested companies abandon traditional algorithmic problem-solving interviews and instead assess candidates on how they can use multiple AI agents to collaboratively build large projects while withstanding attacks from other AI agents.
Key Focus Points for AI Commercialization
For entrepreneurs and investors currently eager to find AI application scenarios, Karpathy provided a highly practical assessment framework: verifiability.
Currently, AI capability exhibits a bizarre "jagged" (sawtooth) profile. He illustrated:
The most advanced models today can simultaneously reconstruct a codebase of 100,000 lines or find zero-day vulnerabilities, yet they still tell me to walk to a car wash 50 meters away; it’s just crazy.
The reason for this disconnect is that cutting-edge labs (like OpenAI) have poured massive reinforcement learning resources into areas with easily verifiable outcomes, such as "math" and "code."
Therefore, in any commercial scenario whose outcomes are verifiable, AI can already exert a tremendous impact.
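The "verifiability" test can be made concrete: a task is RL-friendly when a cheap, deterministic checker can score any candidate output. The sketch below is an illustrative toy of that idea, not a description of how any lab's actual training pipeline works; `verifiable_reward` and `checker` are hypothetical names.

```python
def verifiable_reward(candidate: str, checker) -> float:
    """Score a model output with a deterministic checker.

    Domains like math and code admit such checkers (run the tests,
    compare the numeric answer), which is why frontier labs have
    concentrated reinforcement learning compute there. Tasks with no
    cheap checker (taste, strategy, physical-world errands) are left
    behind, producing the jagged capability profile.
    """
    return 1.0 if checker(candidate) else 0.0


def checker(source: str) -> bool:
    """Toy 'code task' checker: does the candidate square its input?"""
    namespace: dict = {}
    exec(source, namespace)  # toy only: real systems sandbox this
    f = namespace.get("square")
    return callable(f) and all(f(x) == x * x for x in range(5))


good = "def square(x):\n    return x * x"
bad = "def square(x):\n    return x + x"
```

For an entrepreneur, the question to ask of any candidate application is simply: can I write `checker` for this domain cheaply and reliably? If yes, the scenario sits on the steep side of the capability curve.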
Karpathy hinted that there are still many high-value, verifiable reinforcement learning environments in the market that have not yet received attention from top labs, which is a huge blue ocean for startups to fine-tune and commercialize.