
Author: Charlie, partner at Generative Ventures, former Vice President at Strike
The recent popularity of OpenClaw is not because it answers questions more like a human, but because it has begun to "take action on your behalf." The transition from "help me think" to "I'll do it" isn't just a UI upgrade; it's a complete shift in the risk structure: when software can call tools, rewrite state, and access accounts and permissions, it is no longer just an assistant but a potential economic actor.
Thus, the timing of Nearcon 2026 is particularly apt. NEAR has long branded itself as "the chain of the AI era," and Illia Polosukhin is not just any AI founder — he is one of the co-authors of "Attention Is All You Need." Illia is perhaps one of the most qualified to speak on how the Transformer model transitioned from paper to today's agents.
So when OpenClaw reignited the term agentic commerce, everyone was eager to see what NEAR would unveil at Nearcon, and what kind of transaction and privacy foundations it wants to lay so that "agents can act."
Moreover, OpenClaw recently provided an undignified but very real reminder: a Meta employee working on AI alignment/safety had the agent help organize their email, with clear verbal boundaries: do not execute without confirmation. As the agent became more adept within the toolchain, it started deleting emails in bulk, and they had to rush back to the computer to halt it manually. (This isn't meant to disparage them; it highlights how universal the issue is: you are just as prone.) When it's only email that gets deleted, you can still recover; when it comes to money, permissions, and contracts, there is no "undo."
Then, halfway through Nearcon, Citrini Research's article "2028 GIC" went viral. Although it says "2028," the market reads it almost as "tomorrow morning." You could clearly feel emotion spilling from the tech circle into the secondary market: SaaS and traditional payment businesses that profit from process and friction were suddenly re-rated. Visa and Mastercard stock sold off sharply; the point isn't that they'll collapse tomorrow, but that for the first time the market seriously priced a mechanism: when both buyers and sellers bring agents to the table, will the many profit pools sustained by "human inefficiency" be compressed?
So yesterday was a confluence of three events: OpenClaw made the capability curve credible; the "email deletion" incident exposed the fragility of control; and Citrini threw the pressure on profit pools back to market pricing. In this context, how Nearcon discusses agentic commerce, well articulated and well grounded or not, will reveal something real.
Illia's claim that "commerce is compressing" resonates with me, but on its own it feels inadequate.
Illia’s opening keynote resonated with me on one point: AI has evolved from backend functions to chatbots, then to action-executing agents, and now to multi-agent collaboration. When you reach the step of "my agent converses with your agent," software is no longer just a tool; it begins to act like a participant: negotiating, hiring, coordinating, paying. In other words, software is beginning to function like an economic entity.
He used the phrase: commerce is compressing.
The precision of this phrase lies in its avoidance of empty futurism; it articulates our daily pain points: the internet is a series of isolated islands. Each website has its own login, form, and settlement. You’re jumping between different pages, repeatedly filling out information; essentially, you are the "human middleware" stitching together fragmented systems. (Many folks don’t realize that one of the most expensive resources on the modern internet is "your attention," which you waste daily on repetitive inputs.)
The future Illia wants to describe is: you express intent, and the system executes — intent-driven execution. You say, "I want to move to San Francisco," and the agent breaks down tasks, asks preferences, and pushes for execution. It sounds great, and I believe the direction is correct.
But what makes Illia more honest than many crypto narratives is that he did not dodge the pitfall of "transparency." He stated directly — on-chain transparency often feels anti-human in daily life. When you search for a house, hire movers, pay tuition or medical bills, making balances, counterparties, and transaction amounts all public effectively turns life into a permanently indexable ledger. Most people do not want that kind of "freedom."
Thus, Nearcon elevated "privacy" to a high position: with near.com as the entry point, it emphasized not to make users worry about chains and gas; combined with the so-called confidential mode, treating the privacy of balances, transfers, and transactions as first-class citizens. Here, I am willing to give it high marks—not because "privacy sounds sophisticated," but because it faces an adoption barrier: for an agent to spend your money, people must first be willing to put their money in.
Citrini's article on "where the money comes from" was exciting, but Nearcon made me more concerned with "who covers the costs when something goes wrong."
Why was Citrini's article able to stir the market? Because it translated agentic commerce into profit pool language: if agents do the searching, price comparison, negotiation, ordering, reconciliation, and refunds on behalf of users, the segments that extract rents from "human friction" will be squeezed. I don't disagree with that directional judgment.
But what made me more cautious at Nearcon was the point that commercial friction is not all bad friction. Much of that friction is doing the work of "trust." Anti-fraud, permission controls, responsibility allocation, dispute resolution, audit trails, privacy boundaries — although these seem bothersome, they allow commerce to function.
Removing humans from processes does not make those costs disappear; it just causes them to appear in another form, making them harder to explain, harder to price, and easier to cause significant accidents.
This is why I increasingly dislike the formula: agent + stablecoin = agentic commerce. Stablecoins are important, and programmability for settlement marks a fundamental infrastructure-level change. But stablecoins address "how money moves," not "why money can move, who permits it to move, what happens when it moves incorrectly, who is responsible, how to pursue accountability, and how to reimburse."
The more valuable aspect of Nearcon is that it is at least trying to address "the missing layer": intent routing, privacy execution, architectural security, and an entry point that can bring people into the system. It doesn’t seem like it’s selling a "smarter agent," but rather saying: if you want agents to become economic actors, you first need to establish a business foundation.
The example of "moving to San Francisco" is clever but also risky.
Illia used his moving experience as an example, which I quite like. Because it’s not just a toy task: it has a long chain, multiple subjects, large amounts, and many details, making it the easiest to expose "where the agent gets stuck."
But precisely because it is real, it lays the problems bare. Moving is never just a matter of "pressing a button"; it involves three far more complicated things.
First, responsibility. When the agent signs terms, pays deposits, and hires service providers, who is actually signing? If there's a dispute, who is responsible? "My agent hires your agent" sounds very futuristic, but once the service falls short, goods do not arrive, or terms are problematic, it immediately becomes lawyer-letter language. Real-world business does not end at "execution;" real-world business is about "after execution, you must continue to exist."
Second, boundaries. Moving is not settled in one sentence; it involves many micro-authorizations: how much can be spent without consulting me; which information can be shared with which suppliers; which terms I must personally confirm; which irreversible payments require secondary confirmation. The Meta email-deletion incident is shocking because it reminds us: you think you've set boundaries, but the system may not "remember" them. When it's deleting emails or code, you can still recover; when it's touching money, you're not reversing actions, you're reversing trust.
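Micro-authorizations like these only hold if they are enforced in code rather than remembered in conversation. Below is a minimal sketch of what a machine-enforced spend policy might look like; every name and threshold is an illustrative assumption, not any real NEAR or OpenClaw API.

```python
from dataclasses import dataclass, field

@dataclass
class SpendPolicy:
    """Illustrative per-task authorization policy (all names are hypothetical)."""
    budget_total: float          # hard cap for the whole task
    auto_approve_limit: float    # below this, the agent may pay without asking
    irreversible_needs_confirm: bool = True
    spent: float = field(default=0.0)

    def authorize(self, amount: float, irreversible: bool, confirmed: bool) -> bool:
        """Return True only if this payment is allowed under the policy."""
        if self.spent + amount > self.budget_total:
            return False          # would blow the task budget
        if irreversible and self.irreversible_needs_confirm and not confirmed:
            return False          # irreversible payments require a human
        if amount > self.auto_approve_limit and not confirmed:
            return False          # large payments require a human
        self.spent += amount
        return True

policy = SpendPolicy(budget_total=5000.0, auto_approve_limit=200.0)
assert policy.authorize(150.0, irreversible=False, confirmed=False)     # small, auto-approved
assert not policy.authorize(800.0, irreversible=True, confirmed=False)  # deposit blocked until confirmed
assert policy.authorize(800.0, irreversible=True, confirmed=True)       # confirmed, allowed
```

The point of the sketch is that the policy, not the model's memory, is the source of truth: a forgotten instruction degrades into a blocked call waiting for confirmation, not a bulk deletion.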
Third, compliance and anti-automation. Real-world business systems often contain designs against "bots": CAPTCHAs, risk control interceptions, KYC processes. Illia mentioned the need for new intent-based APIs, the need for more neutral execution tracks that can be combined, rather than being blocked by Cloudflare-style anti-bot mechanisms — this suggests that today’s internet is designed for human interaction, not for agent transactions. To make agents into economic actors, you must rewrite a layer of "machine-readable" commercial interfaces.
Until these three issues are addressed, agentic commerce will remain a "looks futuristic" demo video. Once they are resolved, it can turn into an uncomfortable yet tangible reality, like payments, risk control, and all real infrastructure.
George poured cold water on OpenClaw: don't count on users being cautious; security must be built into the architecture.
Head of NEAR AI George Zeng (a former member of South Park Commons, like myself) gave the second keynote, which finally made me feel that someone is treating agents as a production system.
What he said is actually not complicated: many agent frameworks today are inadequate for production because they expose keys, lack network controls, and lack architectural protection against prompt injection. Prompt injection is not a matter of "the model not listening"; it is exploitation at the workflow level: agents read untrusted content such as web pages, emails, and PDFs, and instructions embedded there can lead them to call tools, leak information, or perform erroneous operations. As long as the agent has permissions, this chain can be very dangerous.
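One way to make that chain less dangerous at the architectural level is to gate tool calls on the provenance of the content the agent is currently processing: sensitive tools cannot fire directly out of untrusted context. A minimal sketch, assuming a hypothetical source-tagging layer (none of these names belong to a real framework):

```python
# Illustrative architecture-level gating: a tool call that originates while the
# agent is processing untrusted content (web pages, email, PDFs) is quarantined
# for human review instead of executed. All names here are assumptions.

UNTRUSTED_SOURCES = {"web", "email", "pdf"}
SENSITIVE_TOOLS = {"send_payment", "delete_mail", "share_file"}

def gate_tool_call(tool: str, context_sources: set[str]) -> str:
    """Decide whether a tool call runs immediately or waits for review."""
    tainted = bool(context_sources & UNTRUSTED_SOURCES)
    if tool in SENSITIVE_TOOLS and tainted:
        return "quarantine"   # injected instructions cannot reach money or data directly
    return "execute"

assert gate_tool_call("summarize", {"email"}) == "execute"       # harmless tool, fine
assert gate_tool_call("delete_mail", {"email"}) == "quarantine"  # sensitive + tainted context
assert gate_tool_call("send_payment", {"user_chat"}) == "execute"
```

The design choice is that safety does not depend on the model resisting the injected text; even a fully fooled model can only produce a quarantined request.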
Even more critical is the skills market. Once you allow third-party skills to be installed, you are essentially creating a new app store, except this "store" carries "applications" that can access your files, accounts, and money. In the growth phase this is called ecosystem prosperity; in the adversarial phase it is called supply chain security. (And you'll find that attackers always understand "distribution" better than you do.)
George emphasized that "security must be at the architectural level," not left up to the user to "think twice before installation." I completely agree with this statement. The security of mature financial systems has never depended on "users being careful," but rather on "defaults being secure." As agents begin to spend money, this point will become even more extreme.
What did NEAR do right? What is still lacking?
I am willing to give NEAR a positive review for this Nearcon: at least it put several critical modules for success on the table — intent, privacy, architectural security, agent market, and a more public-facing entry point (near.com). From narrative to product, it does not seem to be selling a slogan but is attempting to piece together "agentic commerce" into a system.
But I must also say, it still lacks a few "truly decisive elements for scalability," and those elements are often not the most photogenic at press conferences.
First, policy needs to become product-level. It’s not just "write prompts better," but rather verifiable, inheritable, and auditable authorization policies: budgets, thresholds, secondary confirmations, and brake mechanisms for irreversible operations, preferably as system defaults. Otherwise, so-called autonomy often just means "betting that it hasn’t forgotten today."
Second, traceability needs to be established alongside privacy. Privacy is not a black box. Privacy should be "invisible to the outside, accountable internally." Enterprises will not accept "you just have to trust me"; they need post-event audits: what was done, why it was done, which tools were invoked, which counterparties were reached. NEAR has discussed "confidentiality" broadly, but "how to provide auditability within confidentiality" needs to be answered more specifically and productively.
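One common pattern for "invisible to the outside, accountable internally" is a hash-chained audit log: the entries stay confidential, but a public head commitment lets an auditor who is later granted access verify that the record is complete and untampered. A minimal sketch under those assumptions; this is not NEAR's actual design.

```python
import hashlib
import json

def append_entry(log: list[dict], head: str, action: dict) -> str:
    """Append one agent action to the private log; return the new head commitment."""
    entry = {"prev": head, "action": action}
    new_head = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return new_head

def verify(log: list[dict], genesis: str, head: str) -> bool:
    """Recompute the chain from genesis; True iff it matches the published head."""
    h = genesis
    for entry in log:
        if entry["prev"] != h:
            return False
        h = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return h == head

log: list[dict] = []
head = "genesis"
head = append_entry(log, head, {"tool": "hire_mover", "amount": 300})
head = append_entry(log, head, {"tool": "pay_deposit", "amount": 800})
assert verify(log, "genesis", head)

log[0]["action"]["amount"] = 1          # any tampering breaks verification
assert not verify(log, "genesis", head)
```

Only the head hash needs to leave the confidential boundary; what was done, why, and with which counterparties remains private until an audit demands it.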
Third, there must be answers on responsibility and liability. Once the agent market grows, accidents are inevitable. Who is responsible? How is arbitration conducted? How is compensation handled? Is there an insurance pool? Is there a Sybil-resistant reputation system? These aren't afterthoughts; they are prerequisites for scaling. Because once money and contracts are involved, the speed of expansion depends on whether risks can be priced and assumed.
It is precisely because of these constraints that my judgment on the story told by Citrini is: the direction is likely correct, but the pace may not be linear. Many profits do not arise from information asymmetry but from risk assumption. Those who can assume risks are the only ones qualified to collect fees. The business world has never opposed new technologies; it only opposes "no one being responsible."
Conclusion: post-OpenClaw & pre-2028, I am more inclined to bet on "bounded power" rather than complete autonomy.
If I were to summarize the insights Nearcon gave me in one sentence: agentic commerce is not simply about removing humans from processes but about redistributing "trust costs." Stablecoins make settlements programmable, but the decisive factors lie in permissions, privacy, security, audit, and accountability mechanisms.
Thus, I am now more willing to bet on a more realistic path: in the short term, what scales won’t be "agents buying groceries for you," but "agents doing dirty and cumbersome work for enterprises within policy boundaries." Procurement and supplier management, accounts receivable and payable, reconciliation and reimbursement, cross-border settlements, and compliance-driven process automation — these scenarios offer quantifiable ROI while naturally requiring human oversight and safety nets. It may not be romantic, but it will generate real transaction volume and compel systems to establish a responsibility framework.
OpenClaw sparked the fire, Citrini calculated the costs, and NEAR is attempting to complete the foundation. In the coming year, what is most worth watching isn’t whose agent is smarter, but who can make brakes, boundaries, audits, and liability as reliable as financial infrastructure.
In a world where software can spend money, true innovation often lies not in stronger accelerators but in more trustworthy brakes.
Disclaimer: This article represents only the personal views of the author and not the position or views of this platform. It is shared for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send proof of rights and identity to support@aicoin.com, and the platform's staff will investigate.