Visa's Head of Crypto: Eight Evolutionary Directions for Crypto and AI in 2026

Original Author: Cuy Sheffield, Vice President and Head of Crypto at Visa

Original Translation: Saoirse, Foresight News

As cryptocurrencies and AI mature, the most significant shift in both fields is no longer whether something is theoretically feasible, but whether it can be implemented reliably in practice. Both technologies have crossed critical thresholds and delivered significant performance gains, yet adoption remains uneven. The defining dynamic of 2026 stems from this gap between performance and adoption.

Below are several core themes I have been closely monitoring, along with my preliminary thoughts on the development directions of these technologies, areas of value accumulation, and why the ultimate winners may be starkly different from industry pioneers.

Theme 1: Cryptocurrencies are transitioning from a speculative asset class to a practical technology

The first decade of cryptocurrency development was defined by speculative advantages: a global, always-on, highly open market whose extreme volatility made crypto trading more vibrant and attractive than traditional financial markets.

At the same time, however, the underlying technology was not ready for mainstream use: early blockchains were slow, costly, and unstable. Outside of speculative scenarios, cryptocurrencies rarely beat existing traditional systems on cost, speed, or convenience.

Today, this imbalance is beginning to reverse. Blockchains have become faster, cheaper, and more reliable, and the most attractive applications for crypto are no longer speculative but infrastructural, especially settlement and payments. As the technology matures, speculation will lose its central role: it will not disappear entirely, but it will no longer be the primary source of value.

Theme 2: Stablecoins are the clearest demonstration of crypto's pure utility

Unlike earlier crypto narratives, the success of stablecoins rests on concrete, objective criteria: in certain scenarios, stablecoins are faster, cheaper, and more widely accessible than traditional payment rails, and they integrate cleanly into modern software systems.

Stablecoins do not require users to believe in cryptocurrency as an ideology; they are typically adopted implicitly, inside existing products and workflows. This is what finally made their value legible to institutions and enterprises that had dismissed the crypto ecosystem as too volatile and too opaque.

It can be said that stablecoins help cryptocurrencies re-anchor on "practicality" rather than "speculation," establishing a clear benchmark for "how cryptocurrencies can successfully land."

Theme 3: When cryptocurrencies become infrastructure, "distribution capability" is more important than "technological novelty"

In the past, when cryptocurrencies primarily played the role of "speculative tools," their "distribution" was intrinsic—new tokens only needed to "exist" to naturally accumulate liquidity and attention.

However, as cryptocurrencies become infrastructure, their application scenarios are shifting from the "market level" to the "product level": they are embedded in payment processes, platforms, and enterprise systems, often without end users being aware of their existence.

This shift is highly beneficial for two types of entities: first, companies with existing distribution channels and reliable customer relationships; second, institutions with regulatory licenses, compliance systems, and risk control infrastructure. Relying solely on "novelty of protocols" is no longer sufficient to drive the large-scale implementation of cryptocurrencies.

Theme 4: AI agents deliver practical value well beyond coding

The practicality of AI agents is becoming increasingly evident, but their role is often misunderstood: the most successful agents are not "autonomous decision-makers," but rather "tools that reduce coordination costs in workflows."

So far, this has been most apparent in software development: agent tools have accelerated coding, debugging, refactoring, and environment setup. In recent years, however, this tool value has spread to many more areas.

Take Claude Code as an example: although positioned as a developer tool, its rapid adoption reflects a deeper trend. Agent systems are becoming interfaces for knowledge work, not just for programming. Users are applying agent-driven workflows to research, analysis, writing, planning, data processing, and operational tasks, which are general professional work rather than traditional programming.

The key is not "vibe coding" itself, but the core patterns behind it:

  • Users delegate "goal intentions," not "specific steps";
  • Agents manage "contextual information" across files, tools, and task management;
  • Work modes shift from "linear progression" to "iterative, dialogic."
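The pattern above can be sketched in a few lines of Python. Everything here is hypothetical and illustrative: `AgentContext`, `run_agent`, and the toy steps are made-up names for the pattern, not any real product's API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    """Shared context the agent carries across steps (the 'contextual
    information' managed across files, tools, and tasks)."""
    goal: str                              # the delegated intention, not steps
    notes: list = field(default_factory=list)

def run_agent(goal, steps):
    """Iterative, dialogic loop: each step reads the accumulated context
    and appends its result, so later steps build on earlier ones."""
    ctx = AgentContext(goal=goal)
    for step in steps:
        ctx.notes.append(step(ctx))        # result feeds the next iteration
    return ctx

# Usage: a toy 'gather, then summarize' knowledge-work flow.
gather = lambda ctx: f"gathered sources for: {ctx.goal}"
summarize = lambda ctx: f"summary based on {len(ctx.notes)} note(s)"
result = run_agent("market sizing", [gather, summarize])
```

The point of the sketch is the control shape: the user supplies only the goal, while the loop owns state and sequencing.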

In various knowledge work scenarios, agents excel at gathering context, executing defined tasks, reducing handoff processes, and accelerating iteration efficiency, but they still have shortcomings in "open-ended judgment," "responsibility attribution," and "error correction."

Therefore, most agents currently used in production scenarios still need to be "limited in scope, supervised, and embedded in systems," rather than operating completely independently. The actual value of agents comes from "restructuring knowledge workflows," rather than "replacing labor" or "achieving complete autonomy."

Theme 5: The bottleneck of AI has shifted from "intelligence level" to "trustworthiness"

The intelligence of AI models has improved rapidly; the limiting factor is no longer fluency or reasoning ability alone, but reliability in real systems.

Production environments have zero tolerance for three kinds of failure: hallucinations (fabricated information), inconsistent outputs, and opaque failure modes. Once AI is involved in customer service, financial transactions, or compliance processes, "roughly correct" results are no longer acceptable.

Establishing "trust" requires four foundations: first, traceability of results, second, memory capability, third, verifiability, and fourth, the ability to proactively expose "uncertainty." Until these capabilities are sufficiently mature, the autonomy of AI must be limited.

Theme 6: Systems engineering determines whether AI can land in production scenarios

Successful AI products view "models" as "components" rather than "finished products"—their reliability stems from "architectural design," not "prompt optimization."

This "architectural design" includes state management, control flow, evaluation and monitoring systems, as well as fault handling and recovery mechanisms. Therefore, the development of AI is increasingly approaching "traditional software engineering," rather than "cutting-edge theoretical research."

Long-term value will tilt towards two types of entities: first, system builders, and second, platform owners who control workflows and distribution channels.

As agent tools expand from coding into research, writing, analysis, and operations, systems engineering will matter even more: knowledge work is complex, stateful, and context-intensive, so agents that can reliably manage memory, tools, and iteration (rather than merely generating outputs) will be more valuable.

Theme 7: The tension between open models and centralized control raises unresolved governance issues

As AI systems grow more capable and more deeply integrated into the economy, the question of who owns and controls the most powerful models is becoming a core point of tension.

On one hand, frontier AI R&D remains capital-intensive and is increasingly shaped by compute access, regulatory policy, and geopolitics, driving concentration; on the other hand, open-source models and tools continue to improve, propelled by broad experimentation and easy deployment.

This pattern of "concentration and openness coexisting" has raised a series of unresolved issues: dependency risks, auditability, transparency, long-term bargaining power, and control over critical infrastructure. The most likely outcome is a "hybrid model"—cutting-edge models drive technological capability breakthroughs, while open or semi-open systems integrate these capabilities into "widely distributed software."

Theme 8: Programmable money gives rise to new types of agent payment flows

As AI systems play roles in workflows, their demand for "economic interactions" is increasing—such as paying for services, calling APIs, compensating other agents, or settling "usage-based interaction fees."

This demand has brought "stablecoins" back into focus: they are seen as "machine-native currencies," possessing programmability and auditability, and can complete transfers without human intervention.

Take the developer-oriented x402 protocol as an example: although still at an early, experimental stage, its direction is clear. Payment flows will operate as APIs rather than traditional checkout pages, enabling continuous, fine-grained transactions between software agents.
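The shape of such a flow can be simulated in a few lines. This is a simplified sketch of the general HTTP 402 "payment required" pattern that x402 builds on, not the actual x402 specification; `server`, `pay`, and all field names here are hypothetical.

```python
PRICE = "0.01"  # illustrative stablecoin amount per call

def server(request):
    """Toy API server: challenges with 402 until payment proof arrives."""
    if request.get("payment_proof"):
        return {"status": 200, "body": "api result"}
    return {"status": 402, "price": PRICE, "pay_to": "addr_demo"}

def pay(address, amount):
    """Stand-in for a programmatic stablecoin transfer; returns a receipt."""
    return f"tx:{address}:{amount}"

def agent_fetch(url):
    """Machine-native flow, no checkout page: the agent reads the 402
    challenge, pays without human intervention, and retries with proof."""
    resp = server({"url": url})
    if resp["status"] == 402:
        proof = pay(resp["pay_to"], resp["price"])
        resp = server({"url": url, "payment_proof": proof})
    return resp
```

The interesting property is that price discovery, payment, and retry all happen inside one request cycle, which is what makes continuous, per-call transactions between agents plausible.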

Currently, this field is still immature: transaction scales are small, user experiences are rough, and security and permission systems are still being improved. However, innovations in infrastructure often begin with such "early explorations."

It is worth noting that the significance is not "autonomy for the sake of autonomy," but rather "when software can complete transactions through programming, new economic behaviors will become possible."

Conclusion

Whether in cryptocurrencies or artificial intelligence, the early development stages favored "eye-catching concepts" and "technological novelty"; in the next phase, "reliability," "governance capability," and "distribution capability" will become more important competitive dimensions.

Today, technology itself is no longer the main limiting factor; "embedding technology into actual systems" is key.

In my view, the hallmark of 2026 will not be "a breakthrough technology," but rather "the steady accumulation of infrastructure"—these facilities, while operating quietly, are also subtly reshaping "the way value flows" and "the way work is conducted."
