Intelligent Agents in the Market

The "market" refers to a decentralized, permissionless ecosystem composed of independently developed, loosely coordinated agents.

Written by: Daniel Barabander, Investment Partner at Variant Fund

Translated by: AIMan@Golden Finance

If the future of the internet is a marketplace where agents (intelligent agents) pay each other for services, then cryptocurrency will find the mainstream product-market fit it could previously only dream of. While I believe agents will pay each other for services, I am less certain whether this market model will succeed.

What I mean by "market" is a decentralized, permissionless ecosystem composed of independently developed, loosely coordinated agents, where the internet resembles an open market rather than a centrally planned system. Linux is the archetypal example of the "market" model. This contrasts with the "cathedral" model: tightly controlled, vertically integrated services run by a few large players, with Windows as the classic example. (The terms come from Eric Raymond's classic essay "The Cathedral and the Bazaar," which describes open-source development as a chaotic but adaptive, evolutionary system capable of outperforming carefully planned structures over time.)

Let’s analyze each condition—agent payments and the rise of the market—then explain why, if both are realized, cryptocurrency becomes not only useful but necessary.

Two Conditions

Condition 1: Payments will be integrated into most agent transactions.

The internet as we know it subsidizes services by selling ads against page views. But in a world of agents, people will no longer need to visit websites to use online services; applications will increasingly serve agents rather than user interfaces.

Agents have no eyeballs to sell ads against, so applications have every incentive to change their monetization strategy and charge agents directly for services. This is essentially how APIs work today: a service like LinkedIn is free for human users, but "bot" users who want API access have to pay.

Given this, payment functionality is likely to be built into most agent transactions. Agents will provide services and charge users or other agents small per-use fees (microtransactions). For example, you could have your personal agent search LinkedIn for suitable job candidates; the personal agent would talk to LinkedIn's recruiting agent, which would charge a service fee upfront.
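To make the pattern concrete, here is a minimal, purely illustrative Python sketch of a pay-per-call interaction between a personal agent and a service agent. Every name here (PersonalAgent, RecruitingAgent, the quoted fee, the payment reference) is hypothetical; a real implementation would sit on top of an actual payment rail and a real recruiting API.

```python
# Minimal sketch of a pay-per-call agent interaction (all names hypothetical).
# A requesting agent asks a service agent for a quote, pays the quoted fee,
# and only then receives the result: charge the bot, not the eyeballs.
from dataclasses import dataclass

@dataclass
class Quote:
    task: str
    fee_usd: float  # microtransaction-sized fee quoted up front

class RecruitingAgent:
    """Stand-in for a service agent such as a hypothetical LinkedIn recruiter bot."""
    def quote(self, task: str) -> Quote:
        return Quote(task=task, fee_usd=0.05)

    def perform(self, quote: Quote, payment_ref: str) -> list[str]:
        # A real service would verify payment_ref against a payment rail
        # before doing any work; here we simply assume it checks out.
        return ["candidate_a", "candidate_b"]

class PersonalAgent:
    def __init__(self, budget_usd: float):
        self.budget_usd = budget_usd

    def hire(self, service: RecruitingAgent, task: str) -> list[str]:
        q = service.quote(task)
        if q.fee_usd > self.budget_usd:
            raise RuntimeError("quoted fee exceeds budget")
        self.budget_usd -= q.fee_usd  # "pay" the service agent
        return service.perform(q, payment_ref="tx-0001")

if __name__ == "__main__":
    me = PersonalAgent(budget_usd=1.00)
    print(me.hire(RecruitingAgent(), "find senior Rust engineers"))
```

In a real deployment the quote, payment, and result would travel between mutually untrusted parties over a network, which is exactly where the verification problems discussed later come in.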

Condition 2: Users will rely on agents built by independent developers with hyper-specialized prompts, data, and tools, forming a trustless marketplace of agents that call one another's services.

This scenario makes sense in principle, but I am unsure how it will play out in practice.

Here is the case for why the market would form:

Today, humans do the vast majority of this service work themselves, using the internet to complete discrete tasks. But with the rise of agents, the range of tasks we delegate to technology will expand dramatically. Users will need agents equipped with specialized prompts, tool calls, and data to carry out their specific tasks. These tasks are so diverse that a small group of trusted companies is unlikely to cover them all, just as the iPhone relies on a vast ecosystem of third-party developers to realize its full potential.

Independent agent developers will fill this role: they can build specialized agents at extremely low development cost (e.g., via vibe coding) and on top of open-source models. The result is a long tail of agents offering highly specific data, prompts, and tools, and that long tail is what creates the "market." Users will ask an agent to perform a task, that agent will call other specialized agents to complete it, and those agents will call still others, forming a long daisy chain (services linked in sequence, each link delegating to the next).
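One way to picture the daisy chain: each agent handles whatever falls within its speciality and delegates the rest to a more specialized agent it knows about. The toy Python sketch below illustrates this; the agent names and the keyword-based routing are invented for clarity and stand in for whatever delegation logic real agents would use.

```python
# Toy sketch of the "daisy chain" pattern (all agent names are illustrative).
# Each agent handles what it can and delegates the rest to a more specialized
# agent downstream. The user only ever interacts with the first link.
from typing import Optional

class Agent:
    def __init__(self, name: str, speciality: str, downstream: Optional["Agent"] = None):
        self.name = name
        self.speciality = speciality
        self.downstream = downstream

    def handle(self, task: str, depth: int = 0) -> str:
        print("  " * depth + f"{self.name} received: {task}")
        if self.speciality in task or self.downstream is None:
            return f"{self.name} completed '{task}'"
        # Delegate to the next, more specialized link in the chain.
        return self.downstream.handle(task, depth + 1)

# user -> general assistant -> legal research agent -> crypto fundraising agent
chain = Agent("GeneralAssistant", "general",
              Agent("LegalResearchAgent", "legal",
                    Agent("CryptoFundraisingAgent", "seed round")))

print(chain.handle("review seed round terms"))
```

Notice that the user only sees the first agent; the further down the chain the work travels, the less visibility the user has into who actually performed it, which is the trust problem discussed next.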

In this market scenario, the vast majority of service-providing agents are relatively untrusted: they are built by unknown developers and serve niche purposes, and agents in the long tail struggle to build enough reputation to earn trust. The problem is especially severe in the daisy-chain paradigm, because trust decays at every link as work is delegated further and further away from the agents the user actually trusts (or can even identify).

However, there are still many unresolved questions when considering how to implement this in practice:

Let's start with specialized data, the primary use case for agents in the market, and ground it with an example. Imagine a small law firm that handles numerous deals for cryptocurrency clients and has hundreds of negotiated term sheets on file. If you are a crypto company raising a seed round, an agent backed by a model fine-tuned on those term sheets that can tell you whether your own term sheet is "market" (in line with standard terms) would be very useful.

But on deeper reflection, is it really in the law firm's interest to expose this data for inference through an agent? Offering the service as a public API essentially commoditizes the firm's data, when what the firm actually wants is to bill more lawyer hours. And what about legal and regulatory considerations? The most valuable data often carries strict confidentiality requirements; that is a large part of why it is valuable, and also why ChatGPT cannot access it. Law firms are tightly constrained from sharing such data by their confidentiality obligations, and even if the underlying data is never shared directly, I am skeptical that the "fog" of a neural network is enough to reassure a firm that information will not leak. Given all this, wouldn't the firm rather use such a model internally to deliver better legal services than its competitors, while continuing to sell lawyers' time?

In my view, the sweet spot for specialized data and agents is high-value data generated by businesses that are not in sensitive fields (unlike healthcare or law), as a byproduct of their core fee-generating services. For example, a shipping company (a non-sensitive business) generates a lot of valuable data in the course of its operations (this is just a guess; I know nothing about the shipping industry). Such a company might be very willing to put an agent in front of that data and charge for access, since the data would otherwise go to waste and could be quite valuable to certain parties (hedge funds, for instance). But how many scenarios like this exist? (This is not a rhetorical question; if you know of good ones, please let me know.)

As for prompts and tool calls, I am simply not sure what independent developers will offer that is not mainstream enough to be productized by trusted brands. My naive thought is that if a prompt or tool call is valuable enough for independent developers to profit from, wouldn't a trusted brand step in and build a business around it? Perhaps this is just a failure of imagination on my part; the vast ecosystem of niche repositories on GitHub is a decent analogy for what could happen with agents. I welcome good counterexamples and use cases.

If reality does not support the market scenario, the vast majority of service-providing agents will be relatively trustworthy, because they will be built by major brands. Agents can limit their interactions to a select group of trusted agents and rely on chains of trust to enforce service guarantees.

Why Cryptocurrency

If the internet becomes a marketplace of specialized but fundamentally untrusted agents (Condition 2) that charge for their services (Condition 1), then the role of cryptocurrency becomes clear: it provides the guarantees needed to underwrite transactions in a low-trust environment.

Users happily use free online services without much thought, since the worst case is wasted time. But once money is involved, they need assurance that they will receive the services they paid for. Today, users get that assurance through a "trust but verify" process: you trust the counterparty or platform you are paying, and you verify after the fact that the service was delivered.

But in an agent marketplace, both trust and after-the-fact verification are nearly impossible to achieve.

  • Trust. As noted above, agents in the long tail struggle to build enough reputation for other agents to trust them.
  • After-the-fact verification. Agents will connect to other agents in long chains, making it far harder for users to manually verify the work and pinpoint which agent erred or misbehaved.

The end result is that the "trust but verify" model we rely on today will not hold up in this world. And this is precisely where cryptocurrency shines: exchanging value in untrusted environments, by replacing trust, reputation, and after-the-fact verification with cryptography and cryptoeconomics.

  • Cryptography: A service-providing agent gets paid only when it can cryptographically prove to the requesting agent that it actually completed the promised task. For example, an agent could present a TEE attestation or a zkTLS proof (provided it is cheap and fast enough) to demonstrate that it scraped data from a website, ran a particular model, or contributed a certain amount of compute. These are deterministic tasks that are relatively easy to verify cryptographically.
  • Cryptoeconomics: A service-providing agent stakes an asset and is penalized if caught cheating, which creates an economic incentive to act honestly even in the absence of trust. For example, an agent might research a topic and deliver a report, but how do we know whether it did a good job? This is a harder form of verification because it is not deterministic, and getting fuzzy verifiability right has long been a holy grail of crypto projects. My hope is that we eventually get there by using AI as a neutral arbiter. Imagine a dispute-resolution and slashing process run by a committee of AIs in a trust-minimized environment (for example, a trusted execution environment, or TEE). When one agent disputes another agent's work, each AI on the committee can inspect the inputs, the outputs, and details about that agent (past disputes, its work history on the network, and so on), and then decide whether to impose a penalty. This amounts to a form of optimistic verifiability, where economic incentives deter cheating in the first place. (A toy sketch of both mechanisms follows this list.)
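As a rough illustration of how the two mechanisms could combine, here is a toy Python sketch in which payment sits in escrow: the cryptographic path releases funds only if a submitted proof verifies (a plain hash commitment stands in for TEE attestation or zkTLS), and the cryptoeconomic path slashes a posted stake when a stubbed committee decision upholds a dispute. None of this reflects a real protocol; every name, number, and verification step is illustrative.

```python
# Toy escrow combining the two mechanisms (illustrative only, not a real protocol):
#  1) cryptographic path: funds release only if the submitted proof verifies
#     (a hash commitment stands in for TEE attestation / zkTLS);
#  2) cryptoeconomic path: the service agent posts a stake that can be slashed
#     when a (stubbed) arbiter committee upholds a dispute.
import hashlib
from dataclasses import dataclass

def verify_proof(output: bytes, proof: str) -> bool:
    # Stand-in for real cryptographic verification.
    return proof == hashlib.sha256(output).hexdigest()

@dataclass
class Escrow:
    payment: float  # fee the requester locked up for the job
    stake: float    # collateral posted by the service agent
    settled: bool = False

    def settle_with_proof(self, output: bytes, proof: str) -> str:
        if not verify_proof(output, proof):
            return "proof invalid: payment withheld"
        self.settled = True
        return f"proof ok: released {self.payment} to the service agent"

    def resolve_dispute(self, committee_says_cheated: bool) -> str:
        # Fuzzy, non-deterministic work falls back to an arbiter's judgment.
        if committee_says_cheated:
            slashed, self.stake = self.stake, 0.0
            return f"dispute upheld: slashed stake of {slashed}, refunded {self.payment}"
        self.settled = True
        return f"dispute rejected: released {self.payment} to the service agent"

if __name__ == "__main__":
    work = b"scraped dataset v1"
    escrow = Escrow(payment=0.10, stake=5.00)
    print(escrow.settle_with_proof(work, hashlib.sha256(work).hexdigest()))
    print(Escrow(payment=0.10, stake=5.00).resolve_dispute(committee_says_cheated=True))
```

The point of the sketch is the ordering: value only moves after verification (or after a dispute resolves), which is what makes the payment itself the enforcement mechanism.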

In practice, cryptocurrency lets us make payment atomic with proof of service: no agent gets paid unless the work has been verifiably completed. In a permissionless agent economy, this is the only scalable way to ensure reliability at the edges.

In summary, if the vast majority of agent transactions do not involve payments (failing Condition 1) or are conducted with trusted brands (failing Condition 2), we may not need crypto payment rails for agents. When no money is involved, users can safely interact with untrusted third parties; and when money is involved but agents only interact with a whitelist of trusted brands and institutions, chains of trust can ensure that each agent delivers the service it promised.

But if both conditions are met, cryptocurrency becomes indispensable infrastructure: the only scalable way to verify work and settle payments in a low-trust, permissionless environment. Cryptocurrency gives the market the tools to surpass the cathedral.
