How Can the AI Economy Surpass the DeFi TVL Myth?


This article will explore new primitives that can form the pillars of an AI-native economy.

Author: LazAI

Introduction

Decentralized Finance (DeFi) ignited a story of exponential growth through a series of simple yet powerful economic primitives, transforming blockchain networks into a global permissionless market and fundamentally disrupting traditional finance. During DeFi's rise, a few key metrics became the universal language of value: Total Value Locked (TVL), Annual Percentage Yield (APY/APR), and liquidity. These concise metrics inspired participation and trust. In 2020, DeFi's TVL (the dollar value of assets locked in protocols) grew fourteenfold, then quadrupled again in 2021, peaking at over $112 billion. High yields (some platforms claimed APYs as high as 3,000% during the liquidity-mining craze) attracted capital, while deep liquidity pools signaled lower slippage and more efficient markets. In short, TVL tells us "how much capital is involved," APR tells us "how much yield can be earned," and liquidity indicates "how easily assets can be traded." Despite their flaws, these metrics built a financial ecosystem worth billions from scratch. By converting user participation into direct financial opportunities, DeFi created a self-reinforcing adoption flywheel that led to rapid proliferation and drove mass participation.

Today, AI stands at a similar crossroads. Unlike DeFi, however, the current AI narrative is dominated by large general-purpose models trained on massive internet datasets. These models often struggle to deliver effective results in niche areas, specialized tasks, or personalized use cases. Their one-size-fits-all approach, while powerful, is fragile: general yet misaligned. This paradigm urgently needs to shift. The next era of AI should not be defined by model scale or generality but by a bottom-up approach: smaller, highly specialized models. Such customized AI requires a whole new kind of data: high-quality, human-aligned, and domain-specific. Acquiring such data is not as simple as web scraping; it requires proactive, conscious contributions from individuals, domain experts, and communities.

To drive this new era of specialized, human-aligned AI, we need to build an incentive flywheel similar to what DeFi designed for finance. This means introducing new AI-native primitives to measure data quality, model performance, agent reliability, and alignment incentives—these metrics should directly reflect the true value of data as an asset (rather than merely as input).

This article will explore these new primitives that can form the pillars of an AI-native economy. We will elaborate on how, if the right economic infrastructure is established (i.e., generating high-quality data, reasonably incentivizing its creation and use, and being individual-centered), AI can thrive. We will also analyze platforms like LazAI as examples of how they are pioneering the construction of these AI-native frameworks, leading to a new paradigm of pricing and rewarding data, fueling the next leap in AI innovation.

The Incentive Flywheel of DeFi: TVL, Yields, and Liquidity—A Quick Review

The rise of DeFi was not accidental; its design made participation both profitable and transparent. Key metrics such as Total Value Locked (TVL), Annual Percentage Yield (APY/APR), and liquidity are not just numbers; they are primitives that align user behavior with network growth. Together, these metrics create a virtuous cycle that attracts users and capital, driving further innovation.

  • Total Value Locked (TVL): TVL measures the total capital deposited in DeFi protocols (such as lending pools and liquidity pools), becoming synonymous with the "market cap" of DeFi projects. The rapid growth of TVL is seen as a sign of user trust and protocol health. For example, during the DeFi boom from 2020 to 2021, TVL surged from under $10 billion to over $100 billion, and by 2023, it exceeded $150 billion, showcasing the scale of value participants are willing to lock into decentralized applications. High TVL creates a gravitational effect: more capital means higher liquidity and stability, attracting more users seeking opportunities. Although critics point out that blindly chasing TVL may lead protocols to offer unsustainable incentives (essentially "buying" TVL), obscuring inefficiencies, without TVL, the early DeFi narrative would lack a concrete way to track adoption.

  • Annual Percentage Yield (APY/APR): Yield promises transform participation into tangible opportunities. DeFi protocols began offering astonishing APRs to liquidity or capital providers. For instance, Compound launched the COMP token in mid-2020, pioneering the liquidity mining model—rewarding liquidity providers with governance tokens. This innovation sparked a frenzy of activity. Using the platform became not just a service but an investment. High APY attracted yield seekers, further boosting TVL. This reward mechanism incentivized early adopters directly with substantial returns, driving network growth.

  • Liquidity: In finance, liquidity refers to the ability to transfer assets without causing significant price fluctuations—this is the cornerstone of a healthy market. In DeFi, liquidity is often initiated through liquidity mining programs (where users earn tokens for providing liquidity). The deep liquidity of decentralized exchanges and lending pools means users can trade or borrow with low friction, enhancing user experience. High liquidity leads to higher trading volumes and utility, attracting more liquidity—a classic positive feedback loop. It also supports composability: developers can build new products (derivatives, aggregators, etc.) on top of liquid markets, driving innovation. Thus, liquidity becomes the lifeblood of the network, propelling adoption and the emergence of new services.

These primitives together form a powerful incentive flywheel. Participants who create value by locking assets or providing liquidity are immediately rewarded (through high yields and token incentives), encouraging more participation. This transforms individual involvement into widespread opportunities—users earn profits and governance influence—and these opportunities, in turn, generate network effects, attracting thousands of users. The results are remarkable: by 2024, the number of DeFi users exceeded 10 million, with its value growing nearly 30 times in just a few years. Clearly, large-scale incentive alignment—turning users into stakeholders—is key to the exponential rise of DeFi.

The Missing Elements of the Current AI Economy

If DeFi demonstrated how bottom-up participation and incentive alignment can ignite a financial revolution, the current AI economy still lacks foundational primitives to support a similar transformation. Today's AI is dominated by large general models trained on massive scraped datasets. These foundational models are impressive in scale but are designed to solve all problems, often failing to serve anyone particularly effectively. Their "one-size-fits-all" architecture struggles to adapt to niche areas, cultural differences, or individual preferences, leading to fragile outputs, blind spots, and an increasing disconnect from real-world needs.

The next generation of AI will no longer be defined solely by scale but also by contextual understanding: the ability of models to understand and serve specific domains, professional communities, and diverse human perspectives. This contextual intelligence requires a different input: high-quality, human-aligned data. And that is precisely what is missing today. There is no widely recognized mechanism to measure, identify, value, or prioritize such data, nor an open process by which individuals, communities, or domain experts can contribute their perspectives and improve the intelligent systems that increasingly shape their lives. As a result, value remains concentrated in the hands of a few infrastructure providers, while most people are cut off from the upside of the AI economy. Only by designing new primitives that discover, validate, and reward high-value contributions (data, feedback, alignment signals) can we unlock the kind of participatory growth cycle on which DeFi's prosperity was built.

In short, we must also ask:

How should we measure the value created? How can we build a self-reinforcing adoption flywheel to drive bottom-up, individual-centered data participation?

To unlock an "AI-native economy" similar to DeFi, we need to define new primitives that transform participation into opportunities for AI, catalyzing network effects that have yet to be seen in the field.

AI-Native Tech Stack: New Primitives for a New Economy

We are no longer just transferring tokens between wallets; we are inputting data into models, transforming model outputs into decisions, and putting AI agents into action. This requires new metrics and primitives to quantify intelligence and alignment, just as DeFi metrics quantify capital. For example, LazAI is building the next generation of blockchain networks by introducing new asset standards for AI data, model behavior, and agent interactions to address the issue of AI data alignment.

The following outlines several key primitives for defining the on-chain AI economy's value:

  • Verifiable Data (the new "liquidity"): Data is to AI what liquidity is to DeFi: the lifeblood of the system. In AI (especially large models), having the right data is crucial. However, raw data may be low-quality or misleading, so the network needs verifiably high-quality data on-chain. A possible primitive here is "Proof of Data (PoD) / Proof of Data Value (PoDV)." Such a primitive would measure the value of data contributions based not only on quantity but also on quality and impact on AI performance. It can be seen as the counterpart to liquidity mining: contributors providing useful data (or labels/feedback) are rewarded according to the value their data brings. Early designs of such systems are already taking shape. For example, one blockchain project's PoD consensus treats data as a primary resource for validation (analogous to energy in proof of work or capital in proof of stake), rewarding nodes based on the quantity, quality, and relevance of the data they contribute.

Expanding this to the general AI economy, we might see "Total Data Value Locked (TDVL)" as a metric: an aggregate measure of all valuable data in the network, weighted by verifiability and usefulness. Verified data pools could even be traded like liquidity pools—for instance, a verified medical imaging pool for on-chain diagnostic AI could have quantifiable value and utility. Data provenance (understanding the source and modification history of data) will be a key part of this metric, ensuring that the data input into AI models is trustworthy and traceable. Essentially, if liquidity pertains to available capital, verifiable data pertains to available knowledge. Metrics like Proof of Data Value (PoDV) can capture the amount of useful knowledge locked in the network, while on-chain data anchoring achieved through LazAI's Data Anchoring Token (DAT) makes data liquidity a measurable and incentivized economic layer.
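To make this concrete, here is a minimal Python sketch of how a PoDV-style score and a TDVL aggregate might be computed. The weights, the logarithmic quantity term, and the names `podv_score` and `total_data_value_locked` are illustrative assumptions, not LazAI's actual formula:

```python
import math
from dataclasses import dataclass

@dataclass
class DataContribution:
    size_mb: float    # raw quantity of contributed data
    quality: float    # audited quality score in [0, 1]
    relevance: float  # domain relevance in [0, 1]

def podv_score(c: DataContribution,
               w_quantity: float = 0.2,
               w_quality: float = 0.5,
               w_relevance: float = 0.3) -> float:
    # Diminishing returns on raw size, so bulk uploads of low-value
    # data cannot outweigh quality and relevance.
    quantity_term = math.log1p(c.size_mb)
    return (w_quantity * quantity_term
            + w_quality * c.quality
            + w_relevance * c.relevance)

def total_data_value_locked(contributions) -> float:
    # "TDVL": the aggregate PoDV score of all verified contributions.
    return sum(podv_score(c) for c in contributions)
```

Under these assumed weights, a small but well-audited, highly relevant dataset can outscore a much larger pile of low-quality data, which is exactly the behavior a quality-first incentive needs.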

  • Model Performance (a New Asset Class): In the AI economy, trained models (or AI services) themselves become assets—potentially viewed as a new asset class alongside tokens and NFTs. Well-trained AI models hold value due to the intelligence encapsulated in their weights. But how do we represent and measure this value on-chain? We may need on-chain performance benchmarks or model certifications. For example, accuracy on standard datasets or win rates in competitive tasks could serve as performance scores recorded on-chain. This could be seen as an on-chain "credit rating" or KPI for AI models. Such scores could be adjusted as models are fine-tuned or data is updated. Projects like Oraichain have explored combining AI model APIs with reliability scores (validating AI outputs against expected results through test cases) on-chain. In AI-native DeFi ("AiFi"), we can envision staking based on model performance—e.g., if developers believe their model performs well, they can stake tokens; if independent on-chain audits confirm their performance, they receive rewards (if the model underperforms, they lose their stake). This would incentivize developers to report honestly and continuously improve their models. Another idea is tokenized model NFTs carrying performance metadata—the "floor price" of a model NFT might reflect its utility. Such practices are already emerging: some AI marketplaces allow the buying and selling of model access tokens, and protocols like LayerAI (formerly CryptoGPT) explicitly view data and AI models as an emerging asset class in the global AI economy. In short, while DeFi asks, "How much capital is locked?", AI-DeFi will ask, "How much intelligence is locked?"—not just in terms of computing power (though equally important), but also in terms of the effectiveness and value of models running in the network. New metrics might include "Model Quality Proof" or a temporal index of on-chain AI performance improvements.
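The staking-on-performance idea above can be sketched as a simple settlement rule. The reward and slash rates and the function name `settle_performance_stake` are hypothetical parameters chosen for illustration, not a protocol specification:

```python
def settle_performance_stake(stake: float,
                             claimed_accuracy: float,
                             audited_accuracy: float,
                             reward_rate: float = 0.10,
                             slash_rate: float = 0.50) -> float:
    """Payout to a model developer after an independent on-chain audit
    of the model's claimed benchmark accuracy."""
    if audited_accuracy >= claimed_accuracy:
        return stake * (1 + reward_rate)   # honest claim: stake plus reward
    return stake * (1 - slash_rate)        # overclaimed: part of stake slashed
```

Because overclaiming costs more than honesty earns, developers are pushed to report performance truthfully and to keep improving their models.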

  • Agent Behavior and Utility (On-Chain AI Agents): One of the most exciting and challenging new elements in AI-native blockchains is the autonomous AI agents running on-chain. They could be trading bots, data curators, customer service AIs, or complex DAO governors—essentially software entities capable of perceiving, making decisions, and acting on behalf of users or even autonomously in the network. The DeFi world has only basic "bots"; in the AI blockchain world, agents could become first-class economic entities. This creates a demand for metrics around agent behavior, trustworthiness, and utility. We might see mechanisms similar to "agent utility scores" or reputation systems. Imagine each AI agent (potentially represented as an NFT or semi-fungible token (SFT)) accumulating reputation based on its actions (completing tasks, collaborating, etc.). Such ratings would be akin to credit scores or user ratings, but for AI. Other contracts could decide whether to trust or use an agent's services based on these ratings. In the iDAO (individual-centered DAO) concept proposed by LazAI, each agent or user entity has its own on-chain domain and AI assets. We can envision these iDAOs or agents establishing measurable track records.

Platforms have already begun tokenizing AI agents and providing on-chain metrics: for example, Rivalz's "Rome protocol" creates NFT-based AI agents (rAgents), with their latest reputation metrics recorded on-chain. Users can stake or lend these agents, with rewards depending on the agents' performance and impact within the collective AI "swarm." This is essentially DeFi for AI agents and demonstrates the importance of agent utility metrics. In the future, we might discuss "active AI agents" as we do "active addresses," or discuss "agent economic impact" as we do trading volume.

  • Attention Trajectories may become another primitive—recording what agents focus on during decision-making (which data, signals). This could make black-box agents more transparent and auditable, attributing the success or failure of agents to specific inputs. In summary, agent behavior metrics will ensure accountability and alignment: to allow autonomous agents to manage large amounts of capital or critical tasks, their reliability must be quantified. High agent utility scores may become a prerequisite for on-chain AI agents managing significant funds (similar to how high credit scores are a threshold for large loans in traditional finance).
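One simple way such an agent utility score could be maintained is as an exponentially weighted moving average over task outcomes, with a reputation floor gating access to funds. Everything here (the update rule, `alpha`, the 0.9 threshold) is an assumption for illustration rather than any protocol's actual scoring scheme:

```python
def update_reputation(current: float, task_success: bool,
                      alpha: float = 0.1) -> float:
    # Exponentially weighted moving average over task outcomes:
    # the score stays in [0, 1] and recent behavior matters most.
    outcome = 1.0 if task_success else 0.0
    return (1 - alpha) * current + alpha * outcome

def can_manage_funds(reputation: float, threshold: float = 0.9) -> bool:
    # Analogous to a credit-score floor for large loans: only agents
    # with a long record of success clear the bar.
    return reputation >= threshold
```

A design choice worth noting: because the average is exponentially weighted, a single failure dents a high score only slightly, but a run of failures decays it quickly, so sustained good behavior is what earns trust.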

  • Usage Incentives and AI Alignment Metrics: Finally, the AI economy needs to consider how to incentivize beneficial use and alignment. DeFi grows through liquidity mining, early user airdrops, or fee rebates; in AI, mere usage growth is insufficient; we need to incentivize the use that improves AI outcomes. At this point, metrics tied to AI alignment become crucial. For example, human feedback loops (such as users rating AI responses or providing corrections through iDAOs, which will be detailed later) could be recorded, and feedback contributors could earn "alignment rewards." Or envision "attention proofs" or "participation proofs," where users who invest time in improving AI (by providing preference data, corrections, or new use cases) receive rewards. Metrics could include attention trajectories, capturing high-quality feedback or human attention invested in optimizing AI.

Just as DeFi needed block explorers and dashboards (like DeFi Pulse and DefiLlama) to track TVL and yields, the AI economy will need new explorers to track these AI-centric metrics: imagine an "AI-llama" dashboard displaying total aligned data, active AI agents, cumulative AI utility rewards, and more. It would rhyme with DeFi, but the content would be entirely new.

Moving Towards a DeFi-Style AI Flywheel

We need to build an incentive flywheel for AI—viewing data as a first-class economic asset, thereby transforming AI development from a closed endeavor into an open, participatory economy, just as DeFi turned finance into a user-driven liquidity open field.

Early explorations in this direction are already emerging. For instance, projects like Vana are beginning to reward users for participating in data sharing. The Vana network allows users to contribute personal or community data to a DataDAO (decentralized data pool) and earn dataset-specific tokens (which can be exchanged for network-native tokens). This is an important step towards monetizing data contributors.

However, merely rewarding contribution is not enough to replicate DeFi's explosive flywheel. In DeFi, liquidity providers are not simply rewarded for depositing assets: the assets they provide have transparent market value, and yields reflect actual usage (trading fees and borrowing interest, plus incentive tokens). Similarly, the AI data economy needs to go beyond generic rewards and price data directly. Without economic pricing based on data quality, scarcity, or the degree of improvement it brings to models, we risk shallow incentives: handing out tokens for mere participation encourages quantity over quality, and stalls when the tokens lack any link to actual AI utility. To truly unleash innovation, contributors need clear, market-driven signals about the value of their data, and returns when that data is actually used in AI systems.

We need an infrastructure that focuses more on directly valuing and rewarding data to create a data-centered incentive loop: the more high-quality data people contribute, the better the models become, attracting more usage and data demand, thereby increasing contributor rewards. This will shift AI from a closed competition for big data to an open market for trustworthy, high-quality data.

How do these ideas manifest in real projects? Take LazAI as an example—this project is building the next generation of blockchain networks and foundational primitives for a decentralized AI economy.

Introduction to LazAI—Aligning AI with Humanity

LazAI is a next-generation blockchain network and protocol designed to address the AI data alignment problem by introducing new asset standards for AI data, model behavior, and agent interactions, constructing the infrastructure for a decentralized AI economy.

LazAI offers one of the most forward-looking approaches by making data verifiable, incentivized, and programmable on-chain to solve the AI alignment issue. The following will illustrate how the LazAI framework puts the aforementioned principles into practice.

Core Issue—Data Misalignment and Lack of Fair Incentives

AI alignment often boils down to the quality of training data, and the future requires new data that is aligned with human perspectives, trustworthy, and well-governed. As the AI industry shifts from centralized general models to contextualized, aligned intelligence, the infrastructure must evolve in tandem. The next era of AI will be defined by alignment, precision, and traceability. LazAI directly addresses the challenges of data alignment and incentives, proposing a fundamental solution: aligning data at the source and directly rewarding the data itself. In other words, ensuring that training data verifiably represents human perspectives and is denoised and bias-corrected, while rewarding contributions based on data quality, scarcity, or the degree to which they improve models. This represents a paradigm shift from patching models to organizing data.

LazAI not only introduces primitives but also proposes a new paradigm for data acquisition, pricing, and governance. Its core concepts include Data Anchoring Tokens (DAT) and individual-centered DAOs (iDAOs), both of which achieve data pricing, traceability, and programmable use.

Verifiable and Programmable Data—Data Anchoring Tokens (DAT)

To achieve this goal, LazAI introduces a new on-chain primitive—Data Anchoring Tokens (DAT), a new token standard designed for the assetization of AI data. Each DAT represents a piece of on-chain anchored data and its lineage information: contributor identity, evolution over time, and use cases. This creates a verifiable historical record for each piece of data—similar to a version control system for datasets (like Git), but secured by blockchain. Since DATs exist on-chain, they are programmable: smart contracts can manage their usage rules. For example, a data contributor can specify that their DATs (such as a set of medical images) are accessible only to specific AI models or used under certain conditions, with privacy or ethical constraints enforced in code. Incentives follow naturally: DATs can be traded or staked, and if the data is valuable to a model, the model (or its owner) pays for access to the DAT. Essentially, LazAI builds a market for data tokenization and traceability. This directly echoes the earlier discussion of the "verifiable data" metric: by examining DATs, one can confirm whether they have been verified, how many models have used them, and what performance improvements they have brought to those models. Such data will receive higher valuations. By anchoring data on-chain and linking economic incentives to quality, LazAI ensures that AI training occurs on trustworthy and measurable data. This is a solution to the alignment problem through incentives—high-quality data is rewarded and stands out.
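As an off-chain illustration of the usage rules a DAT might enforce, consider the following Python sketch. The field names and the `request_access` checks are hypothetical; an actual DAT is an on-chain token whose rules are enforced by smart contracts, not Python:

```python
from dataclasses import dataclass, field

@dataclass
class DataAnchoringToken:
    token_id: int
    contributor: str
    allowed_models: set             # model IDs the contributor permits
    access_fee: float               # payment required per use
    lineage: list = field(default_factory=list)  # append-only usage history

def request_access(dat: DataAnchoringToken, model_id: str,
                   payment: float) -> bool:
    # Enforce the contributor's usage rules before granting access,
    # then record the use in the token's provenance trail.
    if model_id not in dat.allowed_models:
        return False  # model not authorized by the contributor
    if payment < dat.access_fee:
        return False  # insufficient payment to the contributor
    dat.lineage.append((model_id, payment))
    return True
```

The append-only `lineage` list plays the role of the provenance record described above: every granted access leaves a trace of who used the data and what was paid.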

Individual-Centered DAO (iDAO) Framework

The second key component is LazAI's iDAO (individual-centered DAO) concept, which redefines the governance model in the AI economy by placing individuals (rather than organizations) at the core of decision-making and data ownership. Traditional DAOs typically prioritize collective organizational goals, inadvertently weakening individual will. iDAOs disrupt this logic. They are personalized governance units that allow individuals, communities, or domain-specific entities to directly own, control, and verify the data and models they contribute to AI systems. iDAOs support customized, aligned AI: as a governance framework, they ensure that models consistently adhere to the values or intentions of contributors. From an economic perspective, iDAOs also imbue AI behavior with programmability for the community—rules can be set to restrict how models use specific data, who can access the models, and how the outputs of the models are distributed. For example, an iDAO could stipulate that whenever its AI model is called (such as through an API request or task completion), a portion of the revenue will be returned to the DAT holders who contributed relevant data. This establishes a direct feedback loop between agent behavior and contributor rewards—similar to the mechanism in DeFi where liquidity provider rewards are tied to platform usage. Additionally, iDAOs can achieve composable interactions through protocols: one AI agent (iDAO) can call the data or models of another iDAO under negotiated terms.
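The revenue-sharing rule described above (a portion of each model call flowing back to DAT holders) could look like the following pro-rata split. The 30% contributor share and the weighting scheme are illustrative assumptions, not LazAI parameters:

```python
def distribute_revenue(revenue: float,
                       dat_weights: dict,
                       contributor_share: float = 0.3) -> dict:
    # Route a fixed fraction of each model call's revenue back to the
    # DAT holders whose data trained the model, pro rata by weight
    # (e.g., each holder's PoDV-style contribution score).
    pool = revenue * contributor_share
    total = sum(dat_weights.values())
    return {holder: pool * w / total
            for holder, w in dat_weights.items()}
```

This mirrors the DeFi mechanism the text compares it to: just as liquidity-provider rewards scale with platform usage, contributor payouts here scale with how often the model built on their data is actually called.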

By establishing these primitives, LazAI's framework turns the vision of a decentralized AI economy into reality. Data becomes an asset that users can own and profit from, models shift from private silos to collaborative projects, and every participant—from individuals curating unique datasets to developers building specialized models—can become a stakeholder in the AI value chain. This incentive alignment is expected to replicate the explosive growth of DeFi: when people realize that participating in AI (by contributing data or expertise) directly translates into opportunities, they will engage more actively. As the number of participants increases, network effects will kick in—more data leads to better models, attracting more users, which in turn generates more data and demand, creating a positive feedback loop.

Building the Trust Foundation for AI: Verified Computing Framework

In this ecosystem, LazAI's Verified Computing Framework is the core layer for building trust. This framework ensures that every generated DAT, every iDAO (individual-centered DAO) decision, and every incentive distribution has a verifiable traceability chain, making data ownership enforceable, governance processes accountable, and agent behavior auditable. By transforming iDAOs and DATs from theoretical concepts into reliable, verifiable systems, the Verified Computing Framework achieves a paradigm shift in trust: from reliance on assumptions to certainty guaranteed by mathematical verification.


Conclusion: Moving Towards an Open AI Economy

The journey of DeFi shows that the right primitives can unleash unprecedented growth. In the upcoming AI-native economy, we stand at a similar breakthrough threshold. By defining and implementing new primitives that value data and alignment, we can transform AI development from a centralized engineering endeavor into a decentralized, community-driven enterprise. This journey is not without challenges: we must ensure that economic mechanisms prioritize quality over quantity and avoid ethical pitfalls to prevent data incentives from harming privacy or fairness. But the direction is clear. Practices like LazAI's DAT and iDAO are paving the way to translate the abstract concept of "AI aligned with humanity" into concrete mechanisms of ownership and governance.

Just as early DeFi experimented with optimizing TVL, liquidity mining, and governance, the AI economy will iterate on its new primitives. Debates and innovations surrounding data value measurement, fair reward distribution, AI agent alignment, and benefit sharing will undoubtedly emerge. This article only scratches the surface of the incentive models that could drive the democratization of AI, hoping to spark open discussion and deeper research: How can we design more AI-native economic primitives? What unexpected consequences or opportunities might arise? Through broad community participation, we are more likely to build an AI future that is not only technologically advanced but also economically inclusive and aligned with human values.

The exponential growth of DeFi is not magic—it is driven by incentive alignment. Today, we have the opportunity to drive a renaissance in AI through similar practices with data and models. By transforming participation into opportunities and opportunities into network effects, we can kickstart a flywheel that reshapes value creation and distribution in the digital age.

Let us build this future together—starting with a verifiable dataset, an aligned AI agent, and a new primitive.

Disclaimer: This article represents the personal views of the author only and does not reflect the position or views of this platform. It is provided for information sharing only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please send the relevant proof of rights and identity to support@aicoin.com, and platform staff will investigate.
