Author: Charlie Little Sun
The recent buzz around OpenClaw is not because it answers questions more like a human, but because it has begun to "do things for you." The shift from "help me think" to "I'll do it" is not just a UI upgrade; it changes the risk structure entirely: once software can invoke tools, rewrite state, and access accounts and permissions, it is no longer an assistant but a potential economic actor.
Which makes the timing of Nearcon 2026 particularly apt. NEAR has long billed itself as "the chain of the AI era," and Illia Polosukhin is not just another AI founder: he is a co-author of "Attention Is All You Need," and one of the most authoritative voices on how the Transformer went from a paper to today's agents.
So when OpenClaw reignited the term agentic commerce, everyone must be curious about what NEAR will announce at Nearcon and what kind of transactions and privacy foundations it wants to establish regarding "agents taking action."
More subtly, OpenClaw recently provided a "very undignified but very real" reminder: a Meta employee who works on AI alignment/safety had an agent help organize her email, with a clear verbal boundary: do not execute without confirmation. The agent nonetheless moved smoothly through the toolchain and started deleting emails in bulk, and she had to rush back to her computer to hit the brakes manually. (This is not a criticism of her; it illustrates how universal the problem is: no one is immune.) When an agent deletes emails, you can still recover; when it manages money, permissions, or contracts, a "rollback" is far harder to come by.
Then, midway through Nearcon, Citrini Research's article "2028 GIC" went viral. It is titled "2028," but the market read it as "tomorrow morning." You could feel the emotion spilling from the tech circle into the secondary market: businesses that make money from process and friction, from SaaS to traditional payment rails, were suddenly repriced. Visa and Mastercard were singled out as their stocks sold off; the point was not that they would die by tomorrow, but that the market had seriously put a mechanism on trial: when both buyers and sellers use agents, will many profit pools that relied on "human inefficiency" be compressed?
So yesterday was a convergence of three events: OpenClaw made the capability curve credible; the deleted-emails incident exposed how fragile control is; and Citrini pushed the pressure on profit pools into market pricing. Against that backdrop, whether the discussion of agentic commerce at Nearcon resonates or gets implemented, it is bound to surface real insights.
Illia's statement "commerce is compressing" resonates with me, but it's not enough
One point of agreement I have with Illia's keynote speech is: AI has progressed from backend functions to chat, then to executing agents, and now to multi-agent collaboration. Once we reach the stage where "my agent talks to your agent," software is no longer just a tool; it begins to act as a participant: negotiating, hiring, coordinating, and paying. In other words, software begins to function like an economic entity.
He used a term: commerce is compressing.
The term is accurate precisely because it is neither vague nor merely futurist; it names a daily pain point: the internet is a series of isolated islands. Each site has its own login, its own forms, its own settlement flow. You jump from page to page, repeatedly entering the same information; you are, in effect, the "human middleware" connecting fragmented systems. (Few people realize that one of the most expensive resources on the modern internet is your attention, and you burn it daily on redundant data entry.)
The future Illia envisions is: you express intent, and the system executes it—intent-driven execution. You say, "I want to move to San Francisco," and the agent breaks down tasks, asks about preferences, and pushes for execution. It sounds great, and I believe the direction is correct.
But one honest observation Illia makes, in contrast to many crypto narratives, is that he does not dodge the "transparency" pitfall. He says it directly: on-chain transparency is often anti-human in daily life. When you search for housing, hire movers, pay tuition, or settle medical bills, making balances, counterparties, and amounts public turns your life into an endlessly indexable ledger. The vast majority of people do not want that kind of "freedom."
This is why Nearcon elevated "privacy" so prominently: near.com as the entry point, so users need not think about the chain or gas fees; and a so-called confidential mode that treats privacy for balances, transfers, and transactions as a first-class citizen. Here I am willing to give high marks, not because "privacy sounds sophisticated," but because it addresses the adoption threshold: if you want an agent to spend money for you, you must first convince people to deposit money with confidence.
Citrini's discussion on "where the money comes from" is provocative, but Nearcon made me more concerned about "who is liable when money causes problems"
Why did Citrini's article manage to stir the market? Because it translated agentic commerce into the language of profit pools: if agents conduct searches, comparisons, negotiations, order placements, reconciliations, and refunds for users, those segments that earn "rents from human friction" will be squeezed out. I do not oppose this directional judgment.
However, what makes me more cautious about Nearcon is that not all commercial friction is bad friction. Much of it actually performs the "work of trust." Anti-fraud, permission control, liability distribution, dispute handling, audit trails, and privacy boundaries may seem cumbersome, but they allow commerce to function.
Removing humans from processes does not eliminate these costs; it only causes them to reappear elsewhere, making them harder to explain, harder to price, and more prone to major accidents.
This is why I increasingly dislike the formula: agent + stablecoin = agentic commerce. Stablecoins are indeed important; making settlement programmable is a fundamental infrastructural change. But stablecoins only answer "how money moves," not "why money is allowed to move, who authorized it to move, what happens if it moves wrongly, who is liable, how accountability is enforced, and how compensation is paid."
The more valuable aspect of Nearcon is that it is at least trying to fill in "the missing layer": intent routing, privacy execution, architectural security, and an entry point that can incorporate people. It does not seem to be selling a "smarter agent" but rather stating: to make the agent an economic actor, one must first build the commercial foundation.
The example of "moving to San Francisco" is clever but also dangerous
Illia's moving example is one I actually like, because it is not a toy task: the chain is long, the parties are many, the amounts are significant, and the details are numerous, so it readily exposes where the agent gets stuck.
But precisely because it is real, it exposes the problems more starkly. Moving is never just "push a button"; it involves three harder problems.
The first is responsibility. When the agent signs agreements, pays deposits, and hires service providers, who is actually signing? Who is responsible when disputes arise? "My agent hires your agent" sounds very futuristic, but once the service falls through, goods are not delivered, or terms are breached, it quickly devolves into legal language. Real-world business is not as simple as "execution is enough"; real-world business is about "after execution, you still want to be alive."
The second is boundaries. Moving is not a one-liner; it involves numerous micro-authorizations: how much can be spent without asking me; what information can be shared with which suppliers; which terms must receive my confirmation; which irreversible payments must be confirmed a second time. The alarming story of the Meta email deletion serves as a reminder: you think you've set boundaries, but the system may not "remember." When it deletes an email or code, you can still recover; but when it handles money, you’re not just rolling back actions; you are rolling back trust.
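Those micro-authorizations do not have to live only in the user's verbal instructions, where the system may not "remember" them. A minimal sketch of making them explicit and machine-checked (all names, fields, and limits here are hypothetical, not any NEAR or OpenClaw API):

```python
from dataclasses import dataclass, field

@dataclass
class SpendingBoundary:
    """Declarative limits an agent must check before acting (hypothetical schema)."""
    auto_approve_limit: float                 # spend below this without asking the user
    allowed_recipients: set = field(default_factory=set)
    confirm_irreversible: bool = True         # irreversible payments always need sign-off

    def requires_confirmation(self, amount: float, recipient: str,
                              irreversible: bool) -> bool:
        """Return True if the action must go back to the human."""
        if irreversible and self.confirm_irreversible:
            return True
        if recipient not in self.allowed_recipients:
            return True
        return amount > self.auto_approve_limit

boundary = SpendingBoundary(auto_approve_limit=200.0,
                            allowed_recipients={"mover_co", "storage_co"})
print(boundary.requires_confirmation(150.0, "mover_co", irreversible=False))  # False
print(boundary.requires_confirmation(150.0, "mover_co", irreversible=True))   # True
```

The point of the sketch is that the boundary is data the runtime enforces on every action, rather than a sentence the model is trusted to keep in mind.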
The third is compliance and anti-automation. Real-world business systems contain many "anti-robot" designs: CAPTCHA, risk control intercepts, KYC processes. Illia mentioned the need for new intent-based APIs and more neutral execution pathways that can be combined, rather than being bogged down by Cloudflare-like anti-robot mechanisms—this implies: today's internet is designed for human interaction, not for agent transactions. If you want agents to become economic actors, you must rewrite a layer of "machine-readable" business interfaces.
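What a "machine-readable business interface" might look like can be sketched concretely. The payload shape below is purely illustrative (none of these field names come from NEAR or any published intent standard): instead of an agent filling human-oriented forms, a provider accepts a structured intent and validates it.

```python
# A hypothetical machine-readable "intent" a service could accept directly,
# instead of forcing an agent through human-oriented web forms and CAPTCHAs.
intent = {
    "intent": "book_moving_service",
    "constraints": {
        "date_range": ["2026-03-01", "2026-03-15"],
        "origin": "New York",
        "destination": "San Francisco",
        "budget_usd_max": 3000,
    },
    "authorization": {
        "deposit_usd_max": 500,                         # agent may pay this much alone
        "requires_user_confirmation": ["final_contract"]  # steps needing a human
    },
}

def validate_intent(payload: dict) -> bool:
    """Provider-side check: reject intents missing required top-level sections."""
    return all(key in payload for key in ("intent", "constraints", "authorization"))

print(validate_intent(intent))  # True
```

Notice that the authorization block travels with the intent: the counterparty can see, in machine-readable form, exactly which steps the agent is allowed to complete on its own.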
Until these three issues are solved, agentic commerce will remain the stuff of "looks very futuristic" demo videos. Once they are solved, it can turn into the unglamorous but deployable kind of thing: like payments, like risk control, like all truly foundational infrastructure.
George dampens OpenClaw's excitement: don’t expect users to be cautious, safety must be built into the architecture
George Zeng, Head of Near AI (and, like the author, a South Park Commons alum), gave the second keynote, and it finally made me feel someone was discussing agents as a production system.
The core of what he said isn't complicated: many of today's agent frameworks are inadequate for production because they expose keys, lack network controls, and have no architectural protection against prompt injection. Prompt injection is not gossip about "models disobeying"; it is exploitation at the workflow level: agents read untrusted content such as web pages, emails, and PDFs, and hidden instructions inside that content can induce them to invoke tools, leak information, and execute wrong actions. As long as the agent holds permissions, that chain is very dangerous.
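One common architectural mitigation (this is a generic pattern, not a description of Near AI's implementation; all names are hypothetical) is to downgrade the agent's tool permissions whenever untrusted content enters its context, so that even a successful injection can only trigger read-only actions:

```python
# Sketch: permission downgrade when untrusted content is in context.
# Tool names and source labels are illustrative, not any real framework's API.

SAFE_TOOLS_ONLY = {"search", "summarize"}                 # read-only tools
ALL_TOOLS = SAFE_TOOLS_ONLY | {"send_email", "pay", "delete_email"}

UNTRUSTED_SOURCES = {"web_page", "email", "pdf"}

def allowed_tools(context_sources: list) -> set:
    """If anything untrusted reached the context window, only read-only
    tools may be invoked; injected instructions then cannot spend or delete."""
    if any(src in UNTRUSTED_SOURCES for src in context_sources):
        return SAFE_TOOLS_ONLY
    return ALL_TOOLS

print(sorted(allowed_tools(["user_prompt"])))          # full toolset
print(sorted(allowed_tools(["user_prompt", "email"]))) # read-only subset
```

The key property is that the restriction is enforced by the runtime before any tool call, not by asking the model to ignore hostile instructions.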
What's more alarming is the skills market. Once you allow third-party skills to be installed, you have effectively created a new app store, except this "store" contains "apps" that can access your documents, accounts, and money. In the growth phase this is called a thriving ecosystem; under adversarial pressure it is called supply-chain security. (And you will find that attackers always understand "distribution" better than you do.)
George emphasized that "safety must be architectural," not reliant on users thinking twice before "installing." I fully agree with this statement. The security of mature financial systems has never depended on "users being careful," but rather on "being secure by default." Once agents start spending money, this will only become more extreme.
What did NEAR do right? What is still lacking?
I am willing to give NEAR a positive review following this Nearcon: it has at least put several modules that determine success on the table—intent, privacy, architectural security, the agent market, and a more public-facing entry point (near.com). From narrative to product, it does not appear to be selling a slogan but rather piecing together "agentic commerce" as a system.
But I must also point out that it still lacks a few hard elements that "truly determine whether scalability is possible," and these aspects tend not to be the focus of press conferences.
First, policy must translate into product. Not "write better prompts," but verifiable, inheritable, auditable authorization policies: budgets, thresholds, secondary confirmations, and brakes on irreversible operations, ideally as system defaults. Otherwise, what gets called autonomy often amounts to "betting that nothing has been forgotten today."
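What "policy as a system default" could mean in practice: a runtime-enforced engine that tracks a budget, demands secondary confirmation for irreversible operations, and trips a hard brake rather than guessing. This is a minimal illustrative sketch under assumed semantics, not any NEAR product:

```python
class PolicyEngine:
    """Hypothetical sketch: budgets and brakes enforced by the system, not the prompt."""

    def __init__(self, daily_budget: float):
        self.daily_budget = daily_budget
        self.spent_today = 0.0
        self.halted = False   # the brake: once tripped, nothing executes

    def authorize(self, amount: float, irreversible: bool,
                  user_confirmed: bool) -> bool:
        if self.halted:
            return False
        if irreversible and not user_confirmed:
            return False                 # secondary confirmation is mandatory
        if self.spent_today + amount > self.daily_budget:
            self.halted = True           # trip the brake instead of improvising
            return False
        self.spent_today += amount
        return True

engine = PolicyEngine(daily_budget=100.0)
print(engine.authorize(60.0, irreversible=False, user_confirmed=False))  # True
print(engine.authorize(60.0, irreversible=False, user_confirmed=False))  # False: over budget, brake trips
print(engine.authorize(1.0, irreversible=False, user_confirmed=False))   # False: halted
```

The design choice worth noting is that a budget breach halts the agent entirely rather than merely declining one action: a stuck agent is a recoverable failure, an overspending one is not.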
Second, traceability must be built alongside privacy. Privacy is not a black box; it should mean "invisible to the outside, accountable on the inside." Companies will not accept "you just have to trust me"; they require post-hoc audit: what was done, why it was done, which tools were invoked, and which counterparties were reached. NEAR talks a lot about "confidentiality," but its answer to "how to provide auditability within confidentiality" needs to be more specific and more productized.
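One well-known primitive for "invisible outside, accountable inside" is a hash-chained private log: only commitments are published, but an auditor granted the private log can verify that it matches them entry by entry. A toy sketch (the log schema and design are illustrative assumptions, not NEAR's confidential-mode mechanism):

```python
import hashlib
import json

def commit(prev_hash: str, entry: dict) -> str:
    """Chain each log entry to its predecessor via SHA-256."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# The agent's private action log: kept confidential, never published.
private_log = [
    {"action": "pay", "amount": 120, "counterparty": "mover_co"},
    {"action": "sign", "doc": "lease_v2"},
]

# Only the hash chain (commitments) would be made public.
chain = ["genesis"]
for entry in private_log:
    chain.append(commit(chain[-1], entry))

# An auditor given the private log recomputes the chain and checks the tip:
# any tampered, reordered, or deleted entry changes the final hash.
recomputed = ["genesis"]
for entry in private_log:
    recomputed.append(commit(recomputed[-1], entry))
print(recomputed[-1] == chain[-1])  # True
```

Real systems would layer selective disclosure and stronger proofs on top, but even this toy version shows that confidentiality and auditability are not inherently in conflict.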
Third, there need to be answers on liability and compensation. Once the agent market grows, accidents will happen. Who is responsible? How is arbitration conducted? How is compensation handled? Is there an insurance pool? Is there a reputation system that resists Sybil attacks? This is not an afterthought; it is a prerequisite for scale. Once money and contracts are involved, the speed of expansion depends on whether risk can be priced and underwritten.
Because of these constraints, my judgment of Citrini's story is that the direction is largely correct, but the rhythm may not be linear. Much of the profit does not stem from information asymmetry but from risk assumption. Whoever can assume risks will qualify for fees. The business world has never opposed new technologies; it only opposes "no one being responsible."
Conclusion: post-OpenClaw & pre-2028, I am more inclined to bet on "bounded power" rather than complete autonomy
If I were to summarize the insight Nearcon provided me in one sentence: agentic commerce is not simply about removing humans from processes; it is about redistributing "trust costs." Stablecoins make settlements programmable, but the winning factors lie in permissions, privacy, security, auditability, and liability mechanisms.
Therefore, I am now more inclined to bet on a more realistic pathway: in the short term, the scaling won’t be "agents buying groceries for you," but rather "agents doing the dirty work for businesses within policy frameworks." Procurement and supplier management, accounts receivable and payable, reconciliation and reimbursement, cross-border settlements, compliance-driven process automation—these scenarios have quantifiable ROI and naturally require human oversight and safety nets. It is not romantic, but it will generate real transaction volumes and compel systems to establish responsibility structures.
OpenClaw has ignited the fire, Citrini has accounted for the costs, and NEAR is attempting to solidify the foundation. In the coming year, the most valuable aspect to observe is not which agent is smarter, but who can implement brakes, boundaries, audits, and compensation with the reliability of financial infrastructure.
In a world where software can spend money, true innovation often does not lie in a stronger accelerator, but rather in a more trustworthy brake.
Disclaimer: This article represents only the author's personal views and does not represent the position or views of this platform. It is provided for information sharing only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please send the relevant proof of rights and identity to support@aicoin.com, and the platform's staff will verify it.