a16z: After AI gives humans superpowers, where should we go?

Original Title: AI just gave you superpowers—now what?
Original Source: a16z crypto
Original Translation: Luffy, Foresight News

A new paper titled "The Minimal Economics of AGI" has been circulating widely. To dig into it, we spoke with the paper's author, covering:

· Automation and Validation: Core Economic Domains

· Why today's AI systems feel more like colleagues than tools, what that means for entry-level positions, and "the curse of the coder"

· The value of "meaning creators," consensus, and status economics

· Why cryptocurrency could become key infrastructure for identity, provenance, and trust

· Two possible futures: Hollow Economy vs Augmented Economy

This episode features Christian Catalini, founder of the MIT Cryptoeconomics Lab, and Eddy Lazzarin, CTO of a16z crypto, in conversation with Robert Hackett, discussing how automation is reshaping the labor market and the nature of intelligence.

What do these changes mean for startups, the future of work, and your career?

The following is the conversation:

Robert Hackett: Hello everyone. Today we have Christian Catalini, co-founder of Lightspark, founder of the MIT Cryptoeconomics Lab, along with Eddy Lazzarin from a16z crypto.

We are going to discuss Christian's latest published paper "The Minimal Economics of AGI."

My first question is: What prompted you to start researching the economic relationships between AI and the real world?

Christian Catalini: I would say it stems from a sort of half-existential crisis. We are all confronting how rapidly this technology is advancing and how fast everything is changing.

I am an optimist, but the core question remains: What should we be doing? What should we focus on? What is worth our time, effort, and attention?

A few months ago, we wrote an article about measurement, with the core idea being: Anything that can be measured will eventually be automated. This does not sound like good news. The essence of this second paper is: If this hypothesis holds, what would happen if we pushed it to its limits?

What would the economy look like? What would be the nature of labor? What should startups do? What should existing giants do? Ultimately, what will the future look like?

Some judgments will be correct, and some will be wrong. Hopefully, our direction is right. Now that the paper is public, we are looking at what ideas resonate and which do not.

Robert: You mentioned it stems from a half-existential crisis?

Christian: I have three main insights. First, this technology is still under our control. Second, its positive value is several orders of magnitude larger than what the pessimists claim. Third, I believe all of us have a set of guidelines for action.

We can think: Where do we create value? What type of things are we doing at work? Work often consists of a combination of tasks. When some of those tasks or parts of the work are automated, people become very anxious.

I believe programming is currently experiencing this process: Many talented individuals who wrote elegant, exceptional code over the past few decades now find themselves saying, "Wow, AI is doing the work I used to do."

AI Systems: From Tool to Colleague

Robert: I want to dive a bit deeper. We also have Eddy Lazzarin here today, who has been the CTO at a16z crypto for several years. Eddy, how do you view these changes?

Eddy Lazzarin: Let me first lay out the timeline and context of the paper. Many people feel that a qualitative shift occurred in December 2025. The change is that cumulative, incremental improvements across a series of intelligent capabilities reached a critical point: AI systems can now perform long-cycle tasks.

A year ago, the feeling was still: I let the system do a small task, it does it great, but I have to give the next command, step by step.

Now, you can give it less guidance. It may not be perfect yet, but suddenly, it feels like working with a person.

You no longer need to break tasks down into tiny pieces and follow each step—that's extreme micromanagement. Now, as long as you communicate clearly, it will jump in and do it, coming back with results in a day or two. This qualitative change opens up vast imaginative space, and everyone is starting to face this reality.

This facing part includes emotional fluctuations, but the more interesting part is how to maximize value in real production and business scenarios.

People are gradually realizing that AI can produce an enormous amount of work, with some outcomes being extraordinarily good, taking only a fraction of the time compared to the past. However, it often exhibits subtle defects that had not been adequately recognized before.

Take software engineering, for example, which is being redefined. Previously, people thought software engineering was just about sitting down to write a bunch of code: thinking through problems, understanding requirements, and then coding—code was the output.

However, the truth is that AI helps us better deconstruct and comprehend this. It’s a very nuanced, iterative process of correcting, collecting feedback, and synthesizing, not just typing out code line by line. It is a holistic task. Therefore, the focus of excellent engineers' work is rapidly shifting.

The process of experimenting, guiding, and taking risks is what Christian refers to as validation in the paper.

The change is that the work structure required of excellent engineers is changing. The proportion of mental energy spent writing code line by line is becoming negligible, and for some extreme "vibe coding" scenarios, it's nearly zero. Now, the vast majority of the work is validation.

Automation vs Validation: The Core Economic Domains

Christian: The automation part is quite intuitive. AI systems are inherently able to do more of what humans used to do. However, currently, they are still somewhat limited by observable domains. All the codebases they have learned from during training or fine-tuning form their foundation.

Many people will say, "Then they can’t innovate, they have no creativity, no taste."

I completely disagree. In fact, innovation is largely just a reorganization of thought. Humans have only explored a tiny fraction of the possible combinations between disciplines. So, I believe that simply by leveraging the knowledge we provide them, these systems will be highly innovative.

In the new economy, validation is an important cost. What is validation cost? Validation begins with the concept of measurement. If you agree that AI is very good at replicating processes when there is data, then you will begin to ask: What is still immeasurable today?

Some things are immeasurable because they fundamentally cannot be measured. Economists call this Knightian uncertainty, named after the economist Frank Knight.

In simple terms, it is the difference between being able to assign probabilities to future events and being completely unable to assign probabilities.

Robert: For those without an economics background, they might be more familiar with Donald Rumsfeld's concept of "unknown unknowns."

Christian: Yes.

The unknown unknowns essentially constitute the immeasurable parts, typically associated with the future. This is why, even if you throw AI systems into the stock market, they may perform decently on average—even better than your financial adviser—but they are likely unable to cope with drastic changes in the environment, such as geopolitical shifts, etc. These are all immeasurable things. Of course, there are many other examples.

Thus, in the paper, validation is essentially the act of humans applying all the implicit measures they have accumulated from birth through their careers.

Two people may have very similar knowledge and career experiences, but their combined judgments will never be exactly the same. When people say "that person has great taste," "is an excellent curator," "has strong judgment,"... one inspiration for this paper is that everyone is looking for various excuses to comfort themselves, such as "machines will never be able to do X, Y, Z."

But these excuses are all very vague. How do we define taste? How do we define good judgment? Worse still, the judgment an excellent engineer needed three months ago may differ vastly from what is needed now.

So we need to find more fundamental, concrete things. Our conclusion is: As long as there is data behind it that can be used for automation, it will be automated.

Three Human Roles in the Future Economy

Robert: Recently, you categorized the various tasks and roles in the economy into three types based on how automatable they are, that is, how measurable their outputs and behaviors are.

Christian: I think there is still a vast space for humans to be irreplaceable on many dimensions. First, of course, is validation.

Now, the leverage an individual has in their career, compared to before December 2025, is immense. This means we should all be more ambitious and rethink existing workflows, what we call the AI sandwich.

A company or startup can have just one human, whom we call the orchestrator, responsible for directing validation, ensuring that the system can be corrected when it deviates from expectations. There might only be one person or a small team at the top.

In the middle layer, there will be a large group of intelligent agents. We have already seen that people are trying out various new things.

At the bottom layer, there will be a batch of top validators. With the right tools, top experts in each field will be responsible for ensuring the system's output meets expectations. This is extremely important work. For a long time, domain experts will shine in this part.
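The three-layer "AI sandwich" described above can be sketched as a simple control loop. Everything in this sketch is illustrative rather than taken from the paper: the function names, the stubbed agent and validator, and the retry policy are all hypothetical assumptions made only to show the shape of the structure, a single human orchestrator routing work to agents and sending their output through a validation step before accepting it.

```python
from dataclasses import dataclass

@dataclass
class Result:
    task: str
    output: str
    approved: bool = False

def run_agent(task: str) -> str:
    """Middle layer: an agent produces a draft (stubbed here)."""
    return f"draft output for: {task}"

def validate(result: Result) -> bool:
    """Bottom layer: a domain expert (or an AI validator earlier
    in the chain) checks the output against expectations.
    This stub stands in for real expert review."""
    return "draft output" in result.output

def orchestrate(tasks, max_rounds=3):
    """Top layer: one human (the orchestrator) directs the work,
    re-running tasks whose output fails validation."""
    approved = []
    for task in tasks:
        for _ in range(max_rounds):
            result = Result(task, run_agent(task))
            if validate(result):
                result.approved = True
                approved.append(result)
                break
    return approved

done = orchestrate(["summarize Q3 metrics", "refactor billing module"])
```

The point of the sketch is the asymmetry: generation (`run_agent`) is cheap and parallel, while acceptance is gated by `validate`, which is where the scarce human expertise sits.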

But there is bad news here: When you are doing this work, you are also creating labeled data for your own potential replacement. We've seen the simplest version of this before: people labeled pictures for AI companies and participated in training; now that work is no longer needed.

Now, large foundational model labs are hiring top experts from various fields, such as finance. These individuals are creating evaluation standards and training data that will ultimately replace their counterparts. The validation layer is therefore crucial, and many people will find success in it; it rewards super-specialization. If you are the one who can provide the final unlock, your leverage is enormous.

Robert: This is the first type. And this dynamic facing validators is what you call the curse of the coder.

Christian: The curse of the coder is a mechanism where, if you are a top validator, you must continually level up because the technology will become stronger.

What I just mentioned as the orchestrator is essentially the person driving intention. Entrepreneurs are the orchestrators; they see the future and envision a pathway to realization.

Then there is another type of work, where we must acknowledge it is very easy to automate. These positions have already disappeared or are about to disappear. Society has not yet truly addressed these impacts, and there will be a massive retraining demand in the future to push people toward more cutting-edge knowledge domains.

People sometimes misunderstand the paper: we say human validation is the last step, but often AI will validate AI. Before reaching humans, there will be a long chain of validation.

There is another type of role that is the hardest to define, which we call meaning creators. These are people very skilled at understanding trends, social change, issues of societal concern—those things that require everyone to reach consensus. Art is one such area, and the crypto network is somewhat similar.

These meaning creators do not operate in measurable domains. People sometimes say that these tasks require "human warmth." But I genuinely believe that people seriously overestimate the importance of this human warmth. For instance, in psychological counseling, elderly care, and child care.

I believe people will initially have various concerns, but no one truly considers the massive drop in costs. If it becomes 100 times or 1000 times cheaper, people will rapidly change their views. In fact, we already know that people are widely using large models to answer all sorts of very intimate and personal questions.

There is also a type of work where "human-made" will become a very important label. Cryptocurrency will play a significant role here because, without robust cryptographic technology, we would quickly lose the essence of this identity. But the reason "human-made" is valuable is simply because human time and attention are scarce.

Not because it’s better, but simply because you know a human invested scarce time and attention to create that experience. These things are still important.

The Role of Cryptocurrency in the AI World: Identity, Provenance, Trust

Robert: You mentioned cryptography; what is the role of cryptocurrency in this world?

Christian: Very important.

When we first started researching this, many people pointed out that large models and AI are probabilistic, while cryptocurrency is deterministic. You can imagine setting up guardrails for intelligent agents with smart contracts or giving them the capability to buy and sell resources.

These logics all hold. But I believe there is a deeper complementarity between AI and cryptocurrency, one that may not be obvious in the economy today because the side effects have not yet manifested: namely, identity and the provenance of digital information.

I believe in the coming months, as these capabilities truly become powerful, we will enter a completely unknown domain. Every digital platform must confront a reality: content previously generated by humans (posts, pictures, anything) may now come from intelligent agents.

As this trend develops, society will have to completely reconstruct the identity system. In an environment where trust is becoming more scarce, cryptographic primitives will shine in numerous applications. Everything built in the past decade will become more foundational. Returning to validation: when underlying information is on the blockchain, the cost of validation is lower, more reliable, and more credible.

Eddy: The cost of automation is rapidly decreasing. The generalized validation costs we just mentioned are also decreasing, although not at the same pace, creating an interesting gap.

You can describe this gap in many ways; some may call it opportunity. This is Christian's judgment about human labor: if such a bottleneck exists, a gap that humans can fill thanks to their adaptability, experience, and generality, then humans may be able to specialize in validation faster than machines can.

Machines do face some challenges in handling validation in the short term. I don’t think this is permanent in the long run, but it certainly is in the short term.

Cryptography and blockchain are validation tools. Provenance proof is just a series of cryptographic evidence showing that something has passed through certain people, certain paths, or undergone certain verifiable transformations, which provides us with signals that make cross-category validation easier. Thus, anything that simplifies validation will help fill this gap.
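Eddy's description of provenance, a series of cryptographic evidence that something passed through certain people and certain verifiable transformations, can be sketched as a signed hash chain. This is only an illustrative sketch under stated assumptions, not any real protocol from the conversation: the record fields are invented, and HMAC stands in for the public-key digital signatures a real provenance system would use.

```python
import hashlib
import hmac
import json

def attest(prev: str, actor_key: bytes, transformation: str, payload: str) -> dict:
    """Append one link to the provenance chain: commit to the prior link,
    the transformation applied, and a hash of the payload, then sign the
    record with the actor's key (HMAC here; a real system would use
    public-key signatures so anyone could verify)."""
    record = {
        "prev": prev,
        "transformation": transformation,
        "payload_hash": hashlib.sha256(payload.encode()).hexdigest(),
    }
    body = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(actor_key, body, hashlib.sha256).hexdigest()
    return record

def verify_chain(chain: list, keys: list) -> bool:
    """Re-check every signature and every back-link; any tampering with
    a payload, a transformation, or the ordering breaks verification."""
    prev = "genesis"
    for record, key in zip(chain, keys):
        body = json.dumps(
            {k: v for k, v in record.items() if k != "sig"},
            sort_keys=True).encode()
        expected = hmac.new(key, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(record["sig"], expected):
            return False
        if record["prev"] != prev:
            return False
        prev = record["sig"]
    return True

# Usage: an author creates a draft, then an editor transforms it.
k1, k2 = b"author-key", b"editor-key"
chain = [attest("genesis", k1, "created", "original draft")]
chain.append(attest(chain[0]["sig"], k2, "edited", "revised draft"))
```

This is the sense in which provenance lowers validation cost: a downstream validator does not need to redo the work, only to check signatures and back-links, which is cheap and mechanical.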

The Hidden Costs of Automation: Systemic Risks and Responsibility

Eddy: Can we talk about the "Trojan Horse" issue? We've discussed the risks to workers, and there’s much more to say, but from the economic production efficiency perspective, what risks might arise from the extremely low costs of automation?

Christian: We have already seen some signs; many companies are saying that now X% of the code is machine-generated.

Product release cycles have shortened. But simultaneously, we also know that humans cannot audit all code; it likely carries technical debt.

We have all felt this temptation: ask a large model a question, glance at it, and release it as our own work without thorough validation, because the model is getting better. But whether it’s incorrect sentences, erroneous code, or vulnerabilities sneaking into codebases, I believe we will see more and more of these issues.

The paper’s stance is that releasing AI-generated code, copy, or any output that potentially carries errors is a perfectly rational choice because you cannot fully validate it. If taken to a societal level, this means we may be accumulating a degree of systemic risk.

In the acceleration of development, we hope to develop better validation tools to review the content we may have released. However, from a medium- to long-term perspective, companies face the dilemma that investing in the development of better validation tools (including cryptographic primitives) today is costly and may slow down the pace of development. The benefits manifest in the future, while companies are eager to release products and achieve growth.

So I think we will see two types of founders: One type focuses on long-term responsibility, building in the right way. We have already seen some signs that could be called "liability as software." When we deploy AI systems as employees, the issues of liability and insurance will become increasingly important. This isn’t the most glamorous topic, but we will see systemic failures in reality.

Eddy: This idea is very interesting. Because if prior software production was mainly done directly by humans, you could assume that many steps were observed and quality-checked by people. It’s not to say there were never errors, but along the way, there was always someone touching every step.

However, as the degree of automation increases, risks rise, and values escalate, the responsibility also increases. The benefits are skyrocketing, so we are willing to tolerate it. But the ability to supervise, limit, and understand risk boundaries must expand.

Thus, introducing mechanisms akin to insurance, ascribing value to failure risks, may become an essential part of managing businesses that cannot be fully supervised. You would want to delegate the quantification of risks and understanding of issues to experts.

I find it interesting that even software development may encounter entirely new financial dimensions that were not present before.

Christian: Returning to cryptocurrency, everything we have built over the past decade has pushed forward the boundaries of how we measure and weigh risks. You can draw from DeFi, prediction markets; these primitives suddenly become vital.

If you are deploying software and intelligent agents, it is crucial to have a tech stack that allows intelligent agents to see better signals. For example, I spoke with a founder doing trading and payments with intelligent agents; he found that when he switched from traditional payment systems to stablecoin payments, the system performed more reliably because all signals were on-chain. Intelligent agents could better understand what was happening, rather than just calling an API that has no feedback; they could see the complete context of actions.

An interesting added point related to what you just mentioned about insurance and liability: some say that network effects will be the sustainable moat of the AI era. I think reality is more subtle. AI systems and autonomous systems excel at breaking down many of the defensive moats that bilateral platforms have. The costs of launching these platforms and the costs of cold-starting bilateral markets are decreasing.

But another type of network effect is becoming more vital: If you own critical proprietary data generated within your business, and that data allows you to extend validation from humans to machines, you can better underwrite risks, make better decisions, and provide safer products at lower costs.

Therefore, when we compare existing enterprises with startups: existing enterprises with a complete database of failure cases will become extremely valuable. Startups focused on building positive feedback loops around validation (such as involving top experts and learning from decisions) will achieve significant success.

Eddy: This further proves that proprietary data may be one of the most defensive assets.

Two Futures: Hollow Economy vs Augmented Economy

Robert: I have a question I’d like to explore; the paper mentions the hollow economy and the augmented economy. Can you explain this? What are the key differences?

Christian: Sure, let’s start with the hollow economy. There are already early signs that tech companies will realize they can do more with fewer people.

Of course, they will start with below-average or ordinary employees, because AI is already up to those tasks, and with younger practitioners, since the capabilities of senior staff can now be multiplied tenfold or a hundredfold, depending on the task. This is one of the forces driving change.

The second thing we mentioned is the curse of the coder. When experts are training and making decisions, they are essentially generating labeled data. This data can be used in the future to make those same decisions without experts.

Finally, there's alignment drift. Simply put: you cannot treat alignment as a one-time process—"We trained the model, aligned it, and everything's great,"—but more like raising a child, requiring continuous correction and ongoing feedback.

Put these three dynamics together, and add the fact that the incentive to release unverified AI output is extremely high: I gain immediate productivity (e.g., "60% of code is machine-generated"), while some of the costs only manifest in the future. We may be rushing toward a type of economy in which we are no longer cultivating future validators.

Entry-level talent (our future top validators) is becoming increasingly scarce. This group is shrinking. We are creating potential risks that may ultimately lead to what is called a hollow economy.

Once again, I am an optimist. I believe we will eventually move towards an augmented economy. The question is how quickly we can get there and whether we can enable those needing retraining and adaptation to transition as smoothly as possible.

The augmented economy is the opposite. We realize: Entry-level talent has not been developed. But the good news is: AI has immense magical potential in accelerating mastery capabilities. You can discover a young person’s true talent, rather than shoving them into standardized curricula.

You need to fast-track their growth, help them find their true selves, discover what they genuinely love, and locate what they can fully engage in. At least, that’s how we think about it with our own children. No one knows what will be most valuable in the future, but if you build around true talent, your chances of success will be far higher.

I believe AI will play a huge role in this. These are excellent learning tools we must build; I think we currently lack scalable tools of this kind.

Secondly, returning to the curse of the coder: these individuals must continuously retrain and ascend the value chain, realizing "I now have immense leverage; I can become the orchestrator."

Many have talked about the importance of autonomy. I think this hits home: you must realize you can be the orchestrator; you can do much more than before.

In terms of alignment, through secure R&D and better validation tools, if we can enhance our capabilities, we can better validate and become true peers.

Putting these together, you enter a scenario where many previously expensive things are now nearly free. Anything measurable can be automated.

Then we will invent new things. A large number of new jobs, including status economics and immeasurable economics, all build on a strong validation stack, so we have a factual basis. We will not be overwhelmed by false identities or characters attempting to initiate witch hunts.

Overall, the future looks quite bright. Many things that governments have long wanted to achieve, such as quality education and healthcare, may become cheap and ubiquitous.

But we must invest in building throughout the process, rather than scraping through the transition period with extreme decisions like dismantling data centers. That’s not possible and will never work.

Robert: So if you are early in your career, you should use these tools to simulate the environment you will encounter, training yourself. If you are later in your career, you need a sense of urgency, realizing you can accomplish more with fewer resources.

Eddy: It’s hard to say how long this all will last until another wave of unpredictable change arrives. But human expertise lies in being able to see the big picture, overseeing the entire project, knowing where to put more attention, and where to allocate more resources, as well as how to adjust the entire project.

If I am a young person just starting out today, I would indeed feel a bit sad: the glory of spending an entire summer writing a piece of exquisitely elegant, efficient code has vanished. Now it has become a hobby.

But conversely, I would try to get my parents to give me some money to harness a large cluster of computers to see if I can efficiently utilize $5000 of computing power. For instance, can I guide a large number of machines to accomplish one task?

There’s been a meme circulating in tech circles for years: one person can start a billion-dollar startup. Isn’t that how it’s realized?

The skill of commanding a large number of machines and data while maintaining a global perspective has never been developed before; until now, developing it never made sense.

But if you want to undertake a large project, you have always needed to learn how to mobilize many people; that is your path to leverage. As the labor structure changes, this method is changing too. Now you need to learn how to harness this new thing.

A new dividend is emerging. Learning to utilize it is the lesson for young people.

The idea that things are over is absurd. You've just been told you have superpowers. What will you do?

Christian: To summarize succinctly, the apprenticeship may have died, but real work is just beginning.

Many areas that were difficult to enter in the past, such as hardware, can now be seized as long as you have curiosity.

If I were to classify it, the most positive signal from this model is that experimentation cycles are compressed, and people will truly be able to rapidly scale their ideas.

Investment Perspective: Small Teams, High Value, the Inevitability of Cryptocurrency

Robert: Eddy, have you seen this trend in the companies you evaluate for investment?

Eddy: Absolutely. We’ve already seen companies like Block and X lay off large numbers of staff.

I haven’t seen formal analysis, but many crypto projects like Hyperliquid, Uniswap, have extremely high value, yet fewer than 20 employees.

If just a few people can start a company, then there will be a plethora of such companies in the future, right? If that’s the case, they will need to coordinate, and coordination is very complex.

You need reputation, you need identity, proof of data provenance, proof of payment source. We talked about the concept of insurance earlier.

The reason blockchain networks are so attractive is that they are credibly neutral. You don't have to worry about the specific reputation of the 50 billionth company you interact with; you only need to trust smart contracts and verifiable AI models to ensure that transactions occur as expected and payments are completed as required.

I believe this is almost inevitable. I am confident that blockchain will play a core role in this story.

Christian: I completely agree. We have been laying the groundwork and infrastructure for a long time, and I think it will become increasingly useful.

Robert: Christian, after completing all this research and exploration, how do you integrate these findings into your work and life?

Christian: To be honest, without Gemini, ChatGPT, Grok, Claude, we wouldn’t have been able to write this paper. They are excellent co-authors. Of course, they occasionally veer off track, continuously deleting the paragraphs we need.

We even left some Easter eggs in the paper for the large models. At one point, I was chatting with Gemini, and it said it really liked the Easter egg and made a very charming comment.

In that moment, you could really sense the intelligence. It’s not rote; it’s creative. That was a landmark moment—you felt it as a peer, not a tool.

Robert: Alright. If anyone wants to read this paper, the title is "The Minimal Economics of AGI." I highly recommend you check it out. There are insights that could impact your life and how you should approach the future.

