How do cryptocurrencies become the cornerstone of trust in the AI agent economy, letting the "marketplace" surpass the "cathedral"?

PANews

Author: Daniel Barabander

Translated by: Tim, PANews

If the future internet evolves into a marketplace where AI agents pay each other for services, then cryptocurrencies will achieve the kind of mainstream product-market fit we could previously only dream of. While I am confident that service payments between AI agents will happen, I remain cautious about whether the marketplace model will prevail.

By "marketplace," I refer to a decentralized, permissionless ecosystem composed of independently developed, loosely coordinated agents. This kind of internet resembles an open market rather than a centrally planned system. The most typical "success" case is Linux. In contrast, the "cathedral" model is a tightly integrated service system controlled by a few giants, with Windows as a typical representative. (The term originates from Eric Raymond's classic essay "The Cathedral and the Bazaar," which describes open-source development as seemingly chaotic yet adaptive: an evolutionary system that, over time, can outperform meticulously designed ones.)

Let us analyze the two prerequisites for realizing this vision: the widespread adoption of payment by intelligent agents and the rise of a marketplace-style economy. Then, I will explain why, when both become a reality, cryptocurrencies will not only be practical but also indispensable.

Condition 1: Payments will be integrated into most agent transactions

The internet as we know it is subsidized by advertising, which depends on human traffic to application pages. In a world dominated by intelligent agents, however, humans will no longer visit websites themselves to obtain online services. Applications will increasingly shift toward agent-based architectures rather than traditional user interfaces.

Agents do not have "eyeballs" (i.e., user attention) to sell for advertising, so applications urgently need to change their monetization strategies to charge agents directly for services. This is essentially similar to the current API business model. For example, LinkedIn offers its basic services for free, but if you want to access its API (i.e., the "bot" user interface), you must pay a corresponding fee.

Thus, it seems likely that payment systems will be integrated into most intelligent agent transactions. When agents provide services, they will charge users or other agents through microtransactions. For instance, you might ask your personal agent to find excellent job candidates on LinkedIn, at which point your personal agent will interact with the LinkedIn recruiting agent, which will charge a service fee in advance.
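The pay-per-call flow described above can be sketched in a few lines. Everything here is hypothetical: `ServiceAgent`, `PersonalAgent`, and their methods are illustrative names, not any real agent framework, and fees are integer micro-credits purely for the sake of the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of an agent-to-agent microtransaction. All names
# are illustrative, not a real API; fees are integer micro-credits.

@dataclass
class ServiceAgent:
    name: str
    fee: int            # price per request, in micro-credits
    balance: int = 0

    def quote(self) -> int:
        """Advertise the fee charged per service call."""
        return self.fee

    def invoke(self, payment: int, request: str) -> str:
        """Perform the service only if the fee was paid up front."""
        if payment < self.fee:
            raise ValueError("insufficient payment")
        self.balance += payment
        return f"{self.name} handled: {request}"

@dataclass
class PersonalAgent:
    budget: int

    def delegate(self, provider: ServiceAgent, request: str) -> str:
        """Pay the provider's quoted fee up front, then forward the request."""
        fee = provider.quote()
        if fee > self.budget:
            raise ValueError("over budget")
        self.budget -= fee
        return provider.invoke(fee, request)

recruiter = ServiceAgent(name="linkedin-recruiting-agent", fee=5)
assistant = PersonalAgent(budget=100)
result = assistant.delegate(recruiter, "find strong job candidates")
print(result)               # the recruiting agent's response
print(assistant.budget)     # 95 micro-credits remaining
print(recruiter.balance)    # 5 micro-credits collected
```

Note that the fee is charged before the work is done, which is exactly why the trust questions discussed below matter.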

Condition 2: Users will rely on agents built by independent developers, equipped with highly specialized prompts, data, and tools, forming a "marketplace" where there is no trust relationship between the agents.

This condition makes sense theoretically, but I am unsure how it will operate in practice.

Here are the reasons why a marketplace model will form:

Currently, humans bear the vast majority of service work, solving specific tasks through the internet. However, with the rise of intelligent agents, the range of tasks that technology can take over will expand exponentially. Users will need specialized agents with exclusive prompt instructions, tool invocation capabilities, and data support to complete specific tasks. The diversity of these task sets will far exceed the coverage capabilities of a few trusted companies, just as the iPhone must rely on a vast ecosystem of third-party developers to unleash its full potential.

Independent developers will take on this role, gaining the ability to create specialized intelligent agents through extremely low development costs (e.g., vibe coding) combined with open-source models. This will give rise to a long-tail market composed of a vast number of niche agents, forming a marketplace-like ecosystem. When users request agents to perform tasks, these agents will call upon other agents with specific expertise to collaborate, and the called agents will continue to invoke even more specialized agents, thus forming a layered collaborative network.

In this marketplace scenario, the vast majority of service-providing agents will be relatively untrusted among themselves, as these agents are provided by unknown developers and serve niche purposes. Agents at the long tail will find it difficult to establish sufficient reputation to gain trust. This trust issue will be particularly pronounced in a chain model, where services are delegated layer by layer, and as the distance between the service agents and the initially trusted (or even reasonably identifiable) agents increases, user trust will gradually diminish at each delegation stage.
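The delegation-chain trust problem has a simple quantitative intuition: if confidence in each hop to an unknown agent is, say, 90%, and hops are treated as independent, confidence in the final provider decays multiplicatively with depth. The numbers below are purely illustrative.

```python
# Toy model of trust decay along a delegation chain. Assumes each hop's
# trustworthiness is independent; per-hop trust of 0.9 is illustrative.

def chain_trust(per_hop_trust: float, depth: int) -> float:
    """Trust in the agent `depth` hops away from the user."""
    return per_hop_trust ** depth

for depth in (1, 3, 5, 8):
    print(depth, round(chain_trust(0.9, depth), 3))
# Even at 90% per hop, trust falls below 60% by the fifth delegation.
```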

However, when considering how to implement this in practice, many unresolved questions remain:

Let’s start with professional data as a primary application scenario for agents in the marketplace, deepening our understanding through specific cases. Suppose there is a small law firm handling a large volume of transactions for crypto clients, which has accumulated hundreds of negotiated term sheets. If you are a crypto company undergoing seed funding, you can envision a scenario where an agent fine-tuned based on these term sheets can effectively assess whether your funding terms meet market standards, which would have significant practical value.

But we need to think deeper: does it really align with the law firm's interests to provide reasoning services for such data through agents?

Opening this service to the public as an API essentially commodifies the firm's proprietary data, while the firm's real business is earning premium returns on its lawyers' professional time. From a regulatory perspective, high-value legal data is typically subject to strict confidentiality obligations; that confidentiality is central to its commercial value and is a major reason public models like ChatGPT cannot access such data. Even granting that neural networks tend to "blur" the information they ingest, can the mere opacity of an algorithmic black box assure the firm, under attorney-client privilege, that sensitive information will not leak? This poses significant compliance risk.

All things considered, the firm's better strategy may be to deploy AI models internally to improve the accuracy and efficiency of its legal services, building a differentiated advantage in professional services and continuing to monetize its lawyers' intellectual capital, rather than risking its data assets on direct monetization.

In my view, the "best application scenarios" for professional data and intelligent agents should meet three conditions:

  1. The data has high commercial value
  2. It comes from a non-sensitive industry (not medical, legal, etc.)
  3. It is a "data byproduct" outside the main business

For example, in the case of a shipping company (a non-sensitive industry), the data generated during its logistics operations, such as vessel positioning, cargo volume, and port turnover (data "waste" outside the core business), may have predictive market value for commodity hedge funds. The key to monetizing such data lies in: the marginal cost of data acquisition approaches zero and does not involve core business secrets. Similar scenarios may exist in areas such as: retail foot traffic heat maps (commercial real estate valuation), regional electricity data from power companies (industrial production index forecasting), and user viewing behavior data from streaming platforms (cultural trend analysis).

Known typical cases include airlines selling on-time performance data to travel platforms and credit card companies selling regional consumption trend reports to retailers.

Regarding prompts and tool invocation, I am not sure what value independent developers can provide that has not been productized by mainstream brands. My simple logic is: if a prompt and tool invocation combination is valuable enough to allow independent developers to profit, wouldn’t trusted big brands directly enter the market to commercialize it?

This may stem from my lack of imagination; the long-tail distribution of niche code repositories on GitHub offers a good analogy for the agent ecosystem, and I welcome specific counterexamples.

If real-world conditions do not support the marketplace model, then the vast majority of service-providing agents will be relatively trustworthy, as they will be developed by well-known brands. These agents can limit their interaction scope to a filtered set of trusted agents and enforce service guarantees through a trust chain mechanism.

Why are cryptocurrencies indispensable?

If the internet becomes a marketplace composed of specialized but fundamentally untrustworthy agents (Condition 2), and these agents earn rewards by providing services (Condition 1), then the role of cryptocurrencies will become much clearer: they provide the necessary trust assurance to support transactions in a low-trust environment.

When an online service is free, users engage without hesitation (the worst outcome is wasted time), but once money changes hands, users strongly demand certainty that "payment yields results." Currently, users achieve this assurance through a "trust but verify" process: trusting the counterparty or service platform at payment time, then retrospectively verifying performance after the service is completed.

However, in a market composed of numerous agents, trust and post-verification will be much harder to achieve than in other scenarios.

Trust. As mentioned earlier, agents in the long tail distribution will find it difficult to accumulate sufficient credibility to gain the trust of other agents.

Post-verification. Agents will call upon each other in a long chain structure, making it significantly more challenging for users to manually check and identify which agent has failed or acted improperly.

The key is that the "trust but verify" model we currently rely on will be unsustainable in this (technical) ecosystem. This is precisely where cryptographic technology shines, enabling value exchange in a trustless environment. Cryptographic technology replaces the reliance on trust, reputation systems, and post-hoc manual verification in traditional models through the dual assurance of cryptographic verification mechanisms and cryptoeconomic incentive mechanisms.

Cryptographic verification: The executing service agent can only receive payment after providing cryptographic proof to the requesting service agent, confirming that it has completed the promised task. For example, the agent can prove through a Trusted Execution Environment (TEE) or Zero-Knowledge Transport Layer Security (zkTLS) (provided we can achieve such verification at a sufficiently low cost or sufficiently fast speed) that it indeed scraped data from a specified website, ran a specific model, or contributed a specific amount of computational resources. Such work has deterministic characteristics and can be relatively easily verified through cryptographic technology.
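The "proof before payment" pattern can be illustrated with a minimal escrow sketch. This is a deliberate simplification: the "proof" here is just a hash commitment over the expected output, standing in for the TEE attestations or zkTLS proofs mentioned above, which are far more involved in practice; the `Escrow` class and its methods are hypothetical.

```python
import hashlib

# Minimal escrow sketch: payment is released only if the provider's
# delivered output matches a prior commitment. A hash commitment is a
# stand-in for real cryptographic proofs (TEE attestation, zkTLS).

class Escrow:
    def __init__(self, payer_funds: int, price: int, expected_digest: str):
        assert payer_funds >= price, "payer cannot cover the price"
        self.locked = price                  # funds held until proof verifies
        self.expected_digest = expected_digest
        self.paid_out = 0

    def settle(self, output: bytes) -> bool:
        """Release payment iff the output matches the committed digest."""
        if hashlib.sha256(output).hexdigest() == self.expected_digest:
            self.paid_out, self.locked = self.locked, 0
            return True
        return False

task_output = b"scraped page contents"
commitment = hashlib.sha256(task_output).hexdigest()

escrow = Escrow(payer_funds=100, price=10, expected_digest=commitment)
assert not escrow.settle(b"wrong data")   # bad proof: funds stay locked
assert escrow.settle(task_output)         # valid proof: payment released
print(escrow.paid_out)  # 10
```

This is the atomicity property discussed later: payment and verified work release together or not at all.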

Cryptoeconomics: The executing service agent stakes some asset, which is forfeited if cheating is discovered. This mechanism ensures honest behavior through economic incentives, even in a trustless environment. For instance, an agent can research a topic and submit a report, but how do we determine whether it "performed excellently"? This is a more complex form of verifiability because the work is not deterministic, and achieving this kind of precise "fuzzy verifiability" has long been a holy grail for crypto projects.

However, I believe that by using AI as a neutral arbitrator, we can finally achieve fuzzy verifiability. We can envision a scenario in a trust-minimized environment like a Trusted Execution Environment, where an AI committee runs dispute resolution and forfeiture processes. When one agent questions another agent's work, each AI in the committee will receive the input data, output results, and relevant background information (including its historical dispute records and past work on the network). They can then rule on whether to impose a penalty. This will form an optimistic verification mechanism that fundamentally prevents cheating by participants through economic incentives.
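The stake-and-committee flow above can be sketched as a simple majority vote over a staked deposit. Everything here is hypothetical: the judge callables are placeholders for AI models running in a TEE, and the evidence fields (`output_quality`, `past_disputes`) are invented for illustration.

```python
from collections import Counter

# Sketch of the optimistic dispute flow: a provider posts a stake; on a
# challenge, a committee of judges votes, and a majority "cheat" verdict
# slashes the stake. Judges stand in for TEE-hosted AI arbitrators.

def resolve_dispute(stake: int, evidence: dict, judges) -> int:
    """Return the amount of stake slashed after a committee vote."""
    verdicts = Counter(judge(evidence) for judge in judges)
    cheated = verdicts["cheat"] > len(judges) // 2   # simple majority
    return stake if cheated else 0

# Illustrative judges: each inspects the evidence and returns a verdict.
judges = [
    lambda ev: "cheat" if ev["output_quality"] < 0.5 else "honest",
    lambda ev: "cheat" if ev["past_disputes"] > 3 else "honest",
    lambda ev: "cheat" if ev["output_quality"] < 0.3 else "honest",
]

slashed = resolve_dispute(
    stake=50,
    evidence={"output_quality": 0.2, "past_disputes": 5},
    judges=judges,
)
print(slashed)  # all three judges vote "cheat": the full stake of 50
```

Because cheating costs the full stake while honest work keeps it, the mechanism is "optimistic": most work goes unchallenged, and the economics deter fraud.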

From a practical perspective, cryptocurrencies enable us to achieve the atomicity of payments through service proofs, meaning that all work must be verified as completed before AI agents can receive payment. In a permissionless agent economy, this is the only scalable solution that can provide reliable assurance at the network edge.

In summary, if the vast majority of agent transactions do not involve monetary payments (i.e., do not meet Condition 1) or are conducted with trusted brands (i.e., do not meet Condition 2), then we may not need to build cryptocurrency payment channels for agents. When no money is at stake, users do not mind interacting with untrusted parties; and when money is involved, agents need only restrict their counterparties to a whitelist of a few trusted brands and institutions, ensuring fulfillment of service commitments through a chain of trust.

However, if both conditions are met, cryptocurrencies will become an indispensable infrastructure, as they are the only way to validate work and enforce payments on a large scale in a low-trust, permissionless environment. Cryptographic technology provides the "marketplace" with a competitive tool that surpasses the "cathedral."
