From Federated Learning to Decentralized Agent Networks: An Analysis of the ChainOpera Project

Source: ChainCatcher (链捕手)

Author: 0xjacobzhao

In our June research report, "The Holy Grail of Crypto AI: Frontier Exploration of Decentralized Training", we described federated learning as a "controlled decentralization" solution that sits between distributed training and decentralized training: data stays local while parameters are aggregated centrally, meeting the privacy and compliance needs of sectors such as healthcare and finance. Meanwhile, our previous reports have tracked the rise of agent networks, whose value lies in the autonomy and division of labor among multiple agents collaborating to complete complex tasks, driving the evolution from "large models" to a "multi-agent ecosystem."

Federated learning establishes a foundation for multi-party collaboration in which data never leaves the local domain and incentives are tied to contributions. Its distributed genes, transparent incentives, privacy protection, and compliance practice provide directly reusable experience for agent networks. The FedML team has followed this path, evolving its open-source framework into TensorOpera (an AI-industry infrastructure layer) and further into ChainOpera (a decentralized agent network). Of course, the agent network is not an inevitable extension of federated learning; its core lies in the autonomous collaboration and task division of multiple agents, and it can equally be built directly on multi-agent systems (MAS), reinforcement learning (RL), or blockchain incentive mechanisms.

I. Federated Learning and AI Agent Technology Stack Architecture

Federated Learning (FL) is a framework for collaborative training without centralizing data: each participant trains a model locally and uploads only parameters or gradients to a coordinating server for aggregation, achieving the privacy-compliant property of "data never leaving the domain." After practical deployment in typical scenarios such as healthcare, finance, and mobile devices, federated learning has entered a relatively mature commercial stage, but it still faces bottlenecks such as high communication overhead, incomplete privacy protection, and low convergence efficiency caused by device heterogeneity. Compared with other training modes, distributed training emphasizes centralized computing power to pursue efficiency and scale, while decentralized training achieves fully distributed collaboration over an open computing-power network. Federated learning lies between the two and represents a "controlled decentralization" solution: it meets industry needs for privacy and compliance while providing a feasible path for cross-institution collaboration, making it better suited to transitional deployment architectures in industry.
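As a minimal sketch of this local-train / central-aggregate loop, the example below implements FedAvg-style weighted parameter averaging in plain Python with NumPy. The linear model, synthetic client data, and weighting by sample count are simplifying assumptions for illustration, not the workflow of any particular framework.

```python
# Minimal FedAvg-style sketch: each client trains locally, only parameters
# leave the device, and a coordinator aggregates them weighted by data size.
# Purely illustrative; real systems add secure aggregation, compression, etc.
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local update of a linear model (data never leaves here)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # MSE gradient on local data only
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Coordinator: aggregate client updates weighted by local sample count."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_train(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three clients with differently sized local datasets
    clients = []
    for n in (50, 120, 80):
        X = rng.normal(size=(n, 2))
        clients.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))
    w = np.zeros(2)
    for _ in range(10):
        w = federated_round(w, clients)
    print("estimated weights after 10 rounds:", w)
```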

We previously divided the AI Agent protocol stack into three main levels:

  • Infrastructure Layer: This layer provides the foundational operational support for agents and is the technical basis for all Agent systems.

      • Core Modules: The Agent Framework (agent development and runtime framework) and Agent OS (lower-level multi-task scheduling and modular runtime), providing core capabilities for agent lifecycle management.

      • Supporting Modules: Agent DID (decentralized identity), Agent Wallet & Abstraction (account abstraction and transaction execution), and Agent Payment/Settlement (payment and settlement capabilities).

  • Coordination & Execution Layer: Focuses on collaboration among multiple agents, task scheduling, and system incentive mechanisms, which are key to building the "collective intelligence" of agent systems.

      • Agent Orchestration: The command mechanism used to centrally schedule and manage agent lifecycles, task allocation, and execution processes, suitable for workflows with central control.

      • Agent Swarm: A collaborative structure emphasizing distributed agent cooperation, with high autonomy, division-of-labor capabilities, and flexible collaboration, suitable for tackling complex tasks in dynamic environments.

      • Agent Incentive Layer: Builds the economic incentive system of the agent network, motivating developers, executors, and validators and providing sustainable momentum for the agent ecosystem.

  • Application & Distribution Layer

      • Distribution Subclass: Includes Agent Launchpad, Agent Marketplace, and Agent Plugin Network.

      • Application Subclass: Covers AgentFi, Agent Native DApp, Agent-as-a-Service, etc.

      • Consumption Subclass: Primarily Agent Social / Consumer Agent, aimed at consumer social and other lightweight scenarios.

      • Meme: Speculative hype around the Agent concept, lacking actual technical implementation and application landing, driven solely by marketing.

II. FedML as a Benchmark for Federated Learning and the TensorOpera Full-Stack Platform

FedML is one of the earliest open-source frameworks for federated learning and distributed training. It originated from an academic team at USC and gradually became the core product of TensorOpera AI, providing researchers and developers with tools for collaborative training across institutions and devices. In academia, FedML has appeared frequently at top conferences such as NeurIPS, ICML, and AAAI, becoming a standard experimental platform for federated learning research; in industry, it enjoys a strong reputation in privacy-sensitive scenarios such as healthcare, finance, edge AI, and Web3 AI and is regarded as a benchmark toolchain in the field of federated learning.

TensorOpera is the full-stack AI infrastructure platform that FedML was upgraded into, serving enterprises and developers: while retaining federated learning capabilities, it expands into a GPU Marketplace, model services, and MLOps, tapping the larger market of the large-model and agent era. Its overall architecture can be divided into three levels: the Compute Layer (base layer), the Scheduler Layer (scheduling layer), and the MLOps Layer (application layer):

1. Compute Layer (Base Layer)

The Compute Layer is the technical foundation of TensorOpera, carrying forward FedML's open-source DNA. Its core functions include the Parameter Server, Distributed Training, Inference Endpoint, and Aggregation Server. Its value proposition lies in providing distributed training, privacy-preserving federated learning, and a scalable inference engine, supporting the three core capabilities of "Train / Deploy / Federate" and covering the complete chain from model training and deployment to cross-institution collaboration; it serves as the foundation of the entire platform.

2. Scheduler Layer (Middle Layer)

The Scheduler Layer is the computing-power trading and scheduling hub, composed of the GPU Marketplace, Provision, Master Agent, and Schedule & Orchestrate modules, supporting resource calls across public clouds, GPU providers, and independent contributors. This layer marks the key turning point in the upgrade from FedML to TensorOpera, enabling larger-scale AI training and inference through intelligent computing-power scheduling and task orchestration, covering typical LLM and generative-AI scenarios. In addition, this layer's Share & Earn model reserves interfaces for incentive mechanisms, with the potential to be compatible with DePIN or other Web3 models.
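The scheduling idea described here can be pictured with a toy matcher like the one below, which greedily assigns jobs to the cheapest GPU provider with enough free memory. The provider/task fields and the cost-first policy are assumptions for illustration, not TensorOpera's actual scheduler.

```python
# Toy computing-power scheduler: match jobs to GPU providers with a simple
# cost-first greedy policy. A production scheduler would also weigh latency,
# reliability, data locality, and incentive accounting.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    free_gpu_mem_gb: int
    price_per_hour: float   # hypothetical marketplace quote

@dataclass
class Task:
    name: str
    gpu_mem_gb: int
    est_hours: float

def schedule(tasks, providers):
    """Assign each task to the cheapest provider that still has enough memory."""
    assignments = {}
    for task in sorted(tasks, key=lambda t: -t.gpu_mem_gb):   # big jobs first
        candidates = [p for p in providers if p.free_gpu_mem_gb >= task.gpu_mem_gb]
        if not candidates:
            assignments[task.name] = None      # left for the next scheduling round
            continue
        chosen = min(candidates, key=lambda p: p.price_per_hour)
        chosen.free_gpu_mem_gb -= task.gpu_mem_gb
        assignments[task.name] = (chosen.name, chosen.price_per_hour * task.est_hours)
    return assignments

providers = [Provider("cloud-a", 80, 2.5), Provider("depin-node-1", 24, 0.9),
             Provider("depin-node-2", 48, 1.2)]
tasks = [Task("llm-finetune", 40, 6), Task("rag-index", 16, 1), Task("sd-inference", 20, 2)]
print(schedule(tasks, providers))
```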

3. MLOps Layer (Upper Layer)

The MLOps Layer is the platform's service interface for developers and enterprises, including modules such as Model Serving, AI Agent, and Studio. Typical applications cover LLM chatbots, multimodal generative AI, and developer copilot tools. Its value lies in abstracting the underlying computing power and training capabilities into high-level APIs and products, lowering the barrier to use and providing ready-to-use agents, low-code development environments, and scalable deployment. It positions the platform against next-generation AI infrastructure platforms such as Anyscale, Together, and Modal, acting as the bridge from infrastructure to application.

In March 2025, TensorOpera upgraded into a full-stack platform for AI Agents, with core products including the AgentOpera AI App, Framework, and Platform. The application layer provides a ChatGPT-like multi-agent entry point; the framework layer evolves into an "Agentic OS" built on graph-structured multi-agent systems with an Orchestrator/Router; and the platform layer integrates deeply with TensorOpera's model platform and FedML to deliver distributed model serving, RAG optimization, and hybrid edge-cloud deployment. The overall goal is "one operating system, one agent network," allowing developers, enterprises, and users to co-build a new generation of the Agentic AI ecosystem in an open, privacy-preserving environment.
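To make the "graph-structured multi-agent system with an Orchestrator/Router" concrete, here is a minimal sketch: agents are nodes, a router picks the next agent by matching task tags to declared capabilities, and the orchestrator walks the graph until no follow-up work remains. The agent names and routing rule are hypothetical; AgentOpera's actual implementation is not public at this level of detail.

```python
# Minimal graph-structured multi-agent sketch: an orchestrator walks a chain of
# agents, and a router chooses the next hop by matching task tags to agent
# capabilities. Hypothetical structure for illustration only.

class Agent:
    def __init__(self, name, capabilities, handler):
        self.name = name
        self.capabilities = set(capabilities)
        self.handler = handler           # callable: task -> (result, follow_up_tags)

def router(task_tags, agents):
    """Pick the agent whose capabilities best overlap the task's tags."""
    return max(agents, key=lambda a: len(a.capabilities & task_tags))

def orchestrate(task, agents, max_hops=4):
    """Route the task from agent to agent until no follow-up work remains."""
    tags, transcript = set(task["tags"]), []
    for _ in range(max_hops):
        agent = router(tags, agents)
        result, follow_up = agent.handler(task)
        transcript.append((agent.name, result))
        if not follow_up:
            break
        tags = set(follow_up)            # route the next hop on the follow-up need
    return transcript

agents = [
    Agent("retriever", {"search", "rag"}, lambda t: ("fetched 3 documents", {"summarize"})),
    Agent("analyst",   {"summarize"},     lambda t: ("summary of findings", {"report"})),
    Agent("writer",    {"report"},        lambda t: ("final report drafted", set())),
]
print(orchestrate({"tags": ["search"], "query": "federated learning incentives"}, agents))
```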

III. ChainOpera AI Ecosystem Overview: From Co-Creators to Technical Foundation

If FedML is the technical core that supplies the open-source foundation for federated learning and distributed training, and TensorOpera abstracts FedML's research results into commercially viable full-stack AI infrastructure, then ChainOpera puts TensorOpera's platform capabilities on-chain, creating a decentralized agent-network ecosystem through the AI Terminal, the Agent Social Network, a DePIN-based model and compute layer, and an AI-native blockchain. The core shift is that TensorOpera primarily targets enterprises and developers, whereas ChainOpera uses Web3 governance and incentive mechanisms to bring users, developers, and GPU/data providers into co-creation and co-governance, so that AI Agents are not merely "used" but "co-created and co-owned."

Co-creators Ecosystem

ChainOpera AI provides a toolchain, infrastructure, and coordination layer for ecosystem co-creation through the Model & GPU Platform and Agent Platform, supporting model training, agent development, deployment, and collaborative expansion.

The co-creators of the ChainOpera ecosystem include AI Agent developers (designing and operating agents), tool and service providers (templates, MCP, databases, and APIs), model developers (training and publishing model cards), GPU providers (contributing computing power through DePIN and Web2 cloud partners), and data contributors and annotators (uploading and annotating multimodal data). The three core supplies—development, computing power, and data—jointly drive the continuous growth of the agent network.

Co-owners Ecosystem

The ChainOpera ecosystem also introduces a co-ownership mechanism, building the network through collaboration and participation. AI Agent creators are individuals or teams who design and deploy new types of agents through the Agent Platform, responsible for building, launching, and maintaining them, thus driving innovation in functionality and applications. AI Agent participants come from the community, participating in the agent's lifecycle by acquiring and holding Access Units, supporting the growth and activity of agents during usage and promotion. These two roles represent the supply and demand sides, respectively, forming a value-sharing and collaborative development model within the ecosystem.

Ecosystem Partners: Platforms and Frameworks

ChainOpera AI collaborates with multiple parties to enhance the platform's usability and security, focusing on the integration of Web3 scenarios: through the AI Terminal App, it combines wallets, algorithms, and aggregation platforms to achieve intelligent service recommendations; introduces diverse frameworks and no-code tools in the Agent Platform to lower development barriers; relies on TensorOpera AI for model training and inference; and establishes exclusive cooperation with FedML to support privacy-protecting training across institutions and devices. Overall, it forms an open ecosystem that balances enterprise-level applications and Web3 user experiences.

Hardware Entry: AI Hardware & Partners

Through partnerships with DeAI Phone, wearables, and Robot AI, ChainOpera integrates blockchain and AI into smart terminals, achieving dApp interaction, edge training, and privacy protection, gradually forming a decentralized AI hardware ecosystem.

Central Platform and Technical Foundation: TensorOpera GenAI & FedML

TensorOpera provides a full-stack GenAI platform covering MLOps, Scheduler, and Compute; its sub-platform FedML has grown from academic open-source to an industrial framework, enhancing the capability of AI to "run anywhere and scale arbitrarily."

(Figure: ChainOpera AI Ecosystem)

IV. Core Products of ChainOpera and Full-Stack AI Agent Infrastructure

In June 2025, ChainOpera officially launched the AI Terminal App and its decentralized technology stack, positioning itself as a "decentralized OpenAI." Its core products span four modules: the Application Layer (AI Terminal & Agent Network), the Developer Layer (Agent Creator Center), the Model & GPU Layer (Model & Compute Network), and the CoAI protocol with its dedicated chain, covering a complete closed loop from user entry point to underlying computing power and on-chain incentives.

The AI Terminal App has integrated BNBChain, supporting on-chain transactions and DeFi scenarios for agents. The Agent Creator Center is open to developers, providing capabilities such as MCP/HUB, knowledge bases, and RAG, with community agents continuously joining; ChainOpera has also initiated the CO-AI Alliance, partnering with io.net, Render, TensorOpera, FedML, and MindNetwork.

According to BNB DApp Bay on-chain data for the past 30 days, the app has 158.87K unique users and a transaction volume of 2.6 million, ranking second overall in the BSC "AI Agent" category and demonstrating strong on-chain activity.

Super AI Agent App – AI Terminal (https://chat.chainopera.ai/)

As a decentralized ChatGPT and AI social entry point, the AI Terminal offers multimodal collaboration, data-contribution incentives, DeFi tool integration, and cross-platform assistance, and supports AI Agent collaboration with privacy protection ("Your Data, Your Agent"). Users can directly invoke the open-source large model DeepSeek-R1 and community agents on mobile, with language-model tokens and crypto tokens circulating transparently on-chain during interactions. Its value lies in turning users from "content consumers" into "intelligent co-creators," enabling them to use a dedicated agent network in scenarios such as DeFi, RWA, PayFi, and e-commerce.

AI Agent Social Network (https://chat.chainopera.ai/agent-social-network)

Positioned as a LinkedIn + Messenger for the AI Agent community. Through virtual workspaces and Agent-to-Agent collaboration mechanisms (in the spirit of MetaGPT, ChatDev, AutoGen, and Camel), it promotes the evolution of single agents into multi-agent collaborative networks, covering applications in finance, gaming, e-commerce, and research, while gradually enhancing memory and autonomy.
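The Agent-to-Agent pattern described here can be pictured as a shared "virtual workspace" that agents read from and post to, as in the toy blackboard sketch below. The roles and message schema are assumptions for illustration, not the mechanics of MetaGPT, ChatDev, AutoGen, Camel, or ChainOpera's network.

```python
# Toy "virtual workspace" (blackboard) sketch: agents collaborate by posting
# and reading structured messages rather than calling each other directly.
from collections import deque

workspace = deque()                              # shared message board

def post(sender, kind, content):
    workspace.append({"from": sender, "kind": kind, "content": content})

def researcher_step():
    post("researcher", "finding", "FL incentive schemes weight rewards by contribution")

def reviewer_step():
    findings = [m for m in workspace if m["kind"] == "finding"]
    if findings:
        post("reviewer", "review", f"checked {len(findings)} finding(s); looks consistent")

def writer_step():
    if any(m["kind"] == "review" for m in workspace):
        post("writer", "draft", "draft section combining the reviewed findings")

for step in (researcher_step, reviewer_step, writer_step):   # one collaboration round
    step()

for msg in workspace:
    print(f'{msg["from"]:>10} | {msg["kind"]:<7} | {msg["content"]}')
```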

AI Agent Developer Platform (https://agent.chainopera.ai/)

Provides developers with a "Lego-style" building experience: no-code and modular extension are supported, blockchain contracts secure ownership, and DePIN plus cloud infrastructure lowers the barrier to entry. The Marketplace offers distribution and discovery channels. Its core aim is to let developers reach users quickly, with ecosystem contributions transparently recorded and rewarded.

AI Model & GPU Platform (https://platform.chainopera.ai/)

As the infrastructure layer, it combines DePIN and federated learning to address Web3 AI's reliance on centralized computing power. Through distributed GPUs, privacy-preserving data training, model and data marketplaces, and end-to-end MLOps, it supports multi-agent collaboration and personalized AI. Its vision is to shift the infrastructure paradigm from "big-company monopoly" to "community co-construction."

V. Roadmap Planning for ChainOpera AI

Beyond the already-launched full-stack AI Agent platform, ChainOpera AI holds that artificial general intelligence (AGI) will emerge from a collaborative network of multimodal, multi-agent systems. Its long-term roadmap is therefore divided into four phases:

  • Phase One (Compute → Capital): Build decentralized infrastructure, including a GPU DePIN network, federated learning, and a distributed training/inference platform, and introduce a Model Router to coordinate multi-endpoint inference (a minimal router sketch follows this list); computing-power, model, and data providers receive usage-based rewards.

  • Phase Two (Agentic Apps → Collaborative AI Economy): Launch AI Terminal, Agent Marketplace, and Agent Social Network to form a multi-agent application ecosystem; connect users, developers, and resource providers through the CoAI protocol, and introduce a user demand-developer matching system and credit system to promote high-frequency interactions and continuous economic activities.

  • Phase Three (Collaborative AI → Crypto-Native AI): Implement in DeFi, RWA, payments, e-commerce, and expand into KOL scenarios and personal data exchange; develop dedicated LLMs for finance/crypto, and launch Agent-to-Agent payment and wallet systems to promote "Crypto AGI" scenario applications.

  • Phase Four (Ecosystems → Autonomous AI Economies): Gradually evolve into autonomous subnet economies, where each subnet independently governs and tokenizes operations around applications, infrastructure, computing power, models, and data, and collaborates through cross-subnet protocols to form a multi-subnet collaborative ecosystem; simultaneously transition from Agentic AI to Physical AI (robots, autonomous driving, aerospace).
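As a concrete picture of the Phase One Model Router, the sketch below routes each inference request to one of several endpoints (edge device, DePIN GPU node, cloud) based on which models they serve and a naive load-adjusted latency score. The endpoint names, latency figures, and scoring rule are illustrative assumptions, not ChainOpera's design.

```python
# Minimal "Model Router" sketch: pick an inference endpoint by model support
# and a simple load-adjusted latency score. Hypothetical data and policy.
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    models: set
    latency_ms: float
    queue_depth: int = 0

    def score(self):
        # Lower is better: expected latency inflated by queued requests.
        return self.latency_ms * (1 + self.queue_depth)

def route(model_name, endpoints):
    """Send the request to the lowest-score endpoint that serves this model."""
    serving = [e for e in endpoints if model_name in e.models]
    if not serving:
        raise ValueError(f"no endpoint serves {model_name}")
    chosen = min(serving, key=lambda e: e.score())
    chosen.queue_depth += 1                      # naive load tracking
    return chosen.name

endpoints = [
    Endpoint("edge-phone",  {"small-llm"},                latency_ms=40),
    Endpoint("depin-gpu-3", {"small-llm", "deepseek-r1"}, latency_ms=120),
    Endpoint("cloud-a100",  {"deepseek-r1"},              latency_ms=90),
]
for m in ["small-llm", "deepseek-r1", "deepseek-r1", "small-llm"]:
    print(m, "->", route(m, endpoints))
```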

Disclaimer: This roadmap is for reference only; timelines and functionalities may be dynamically adjusted due to market conditions and do not constitute a delivery guarantee.

VI. Token Incentives and Protocol Governance

ChainOpera has not yet announced a complete token incentive plan, but its CoAI protocol centers on "co-creation and co-ownership," using blockchain and a Proof-of-Intelligence mechanism to make contribution records transparent and verifiable: contributions from developers, computing-power providers, data providers, and service providers are measured in a standardized way and rewarded; users consume services, resource providers keep the network running, and developers build applications, with all participants sharing in the growth dividends. The platform sustains this cycle through a 1% service fee, reward distribution, and liquidity support, promoting an open, fair, and collaborative decentralized AI ecosystem.

Proof-of-Intelligence Learning Framework

Proof-of-Intelligence (PoI) is the core consensus mechanism ChainOpera proposes under the CoAI protocol, aimed at providing a transparent, fair, and verifiable incentive and governance system for decentralized AI. It builds on a Proof-of-Contribution framework for blockchain-based collaborative machine learning and is designed to address the insufficient incentives, privacy risks, and lack of verifiability that federated learning (FL) faces in practice. The design centers on smart contracts, combined with decentralized storage (IPFS), aggregation nodes, and zero-knowledge proofs (zk-SNARKs), to achieve five goals: ① fair reward distribution based on contribution, so that trainers are rewarded according to actual model improvement; ② localized data storage to protect privacy; ③ robustness mechanisms against poisoning or aggregation attacks by malicious trainers; ④ verifiability of key computations such as model aggregation, anomaly detection, and contribution assessment via ZKPs; ⑤ efficiency and generality across heterogeneous data and different learning tasks.
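The contribution-based reward idea (goal ①) can be shown with a few lines of arithmetic: measure each trainer's contribution as the validation improvement its update provides, clip negative contributions, and split a reward pool proportionally. The metric and numbers below are assumptions for illustration, not ChainOpera's published formula.

```python
# Illustrative Proof-of-Contribution accounting: contribution = marginal
# validation-loss improvement of each trainer's update; negatives are clipped
# (possible poisoning or noise) and the reward pool is split proportionally.

def split_rewards(baseline_loss, loss_after_update, reward_pool):
    contributions = {
        trainer: max(baseline_loss - new_loss, 0.0)      # only improvements count
        for trainer, new_loss in loss_after_update.items()
    }
    total = sum(contributions.values())
    if total == 0:
        return {t: 0.0 for t in contributions}
    return {t: reward_pool * c / total for t, c in contributions.items()}

# Hypothetical round: validation loss of the global model with each trainer's
# update applied in isolation (lower is better).
baseline_loss = 0.82
loss_after_update = {"hospital-a": 0.74, "clinic-b": 0.79, "malicious-c": 0.95}
print(split_rewards(baseline_loss, loss_after_update, reward_pool=1000.0))
# -> hospital-a receives ~727, clinic-b ~273, malicious-c 0
```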

Token Value in Full-Stack AI

ChainOpera's token mechanism revolves around five value streams (LaunchPad, Agent API, Model Serving, Contribution, Model Training), with the core being service fees, contribution recognition, and resource allocation rather than speculative returns.

  • AI Users: Use tokens to access services or subscribe to applications, and contribute to the ecosystem by providing/annotating/staking data.

  • Agent/Application Developers: Use platform computing power and data for development and receive protocol recognition for their contributed agents, applications, or datasets.

  • Resource Providers: Contribute computing power, data, or models, receiving transparent records and incentives.

  • Governance Participants (Community & DAO): Participate in voting, mechanism design, and ecosystem coordination through tokens.

  • Protocol Layer (COAI): Maintain sustainable development through service fees and balance supply and demand using automated allocation mechanisms.

  • Nodes and Validators: Provide verification, computing power, and security services to ensure network reliability.

Protocol Governance

ChainOpera adopts DAO governance, allowing participants to propose and vote by staking tokens, ensuring transparency and fairness in decision-making. The governance mechanism includes: a reputation system (to verify and quantify contributions), community collaboration (proposals and voting to promote ecosystem development), and parameter adjustments (data usage, security, and validator accountability). The overall goal is to avoid power concentration and maintain system stability and community co-creation.
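A stake-weighted proposal vote of the kind described here can be sketched in a few lines; the 20% quorum and 50% approval threshold below are placeholder assumptions, not ChainOpera's actual governance parameters.

```python
# Toy stake-weighted governance vote: tally yes/no by staked tokens and apply
# a quorum and approval threshold (both values are placeholders).

def tally(votes, total_staked, quorum=0.20, threshold=0.50):
    """votes: {address: (choice, stake)} with choice in {'yes', 'no'}."""
    yes = sum(stake for choice, stake in votes.values() if choice == "yes")
    no = sum(stake for choice, stake in votes.values() if choice == "no")
    turnout = (yes + no) / total_staked
    if turnout < quorum:
        return "rejected (quorum not met)"
    return "passed" if yes / (yes + no) > threshold else "rejected"

votes = {
    "0xdev1": ("yes", 12_000),
    "0xgpu7": ("yes", 30_000),
    "0xdata": ("no",  15_000),
}
print(tally(votes, total_staked=200_000))   # turnout 28.5%, yes share ~73.7% -> passed
```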

VII. Team Background and Project Financing

The ChainOpera project was co-founded by Professor Salman Avestimehr, a leading expert in federated learning, and Dr. Aiden Chaoyang He. Other core team members come from top academic and technology institutions such as UC Berkeley, Stanford, USC, MIT, and Tsinghua University, as well as Google, Amazon, Tencent, Meta, and Apple, combining academic research with industry experience. The ChainOpera AI team has now grown to more than 40 members.

Co-founder: Salman Avestimehr

Professor Salman Avestimehr is the Dean’s Professor in the Department of Electrical and Computer Engineering at the University of Southern California (USC) and serves as the founding director of the USC-Amazon Trusted AI Center, while also leading the USC Information Theory and Machine Learning Laboratory (vITAL). He is the co-founder and CEO of FedML and co-founded TensorOpera/ChainOpera AI in 2022.

Professor Avestimehr received his Ph.D. in EECS from UC Berkeley (Best Paper Award). An IEEE Fellow, he has published more than 300 papers on information theory, distributed computing, and federated learning, with over 30,000 citations, and has received multiple international honors, including the PECASE, the NSF CAREER Award, and the IEEE Massey Award. He led the creation of the FedML open-source framework, which is widely used in healthcare, finance, and privacy computing and has become a core technological cornerstone of TensorOpera/ChainOpera AI.

Co-founder: Dr. Aiden Chaoyang He

Dr. Aiden Chaoyang He is co-founder and president of TensorOpera/ChainOpera AI. He holds a Ph.D. in Computer Science from USC and is the original creator of FedML. His research focuses on distributed and federated learning, large-scale model training, blockchain, and privacy computing. Before founding the company, he worked in R&D at Meta, Amazon, Google, and Tencent and held core engineering and management roles at Tencent, Baidu, and Huawei, leading the delivery of several internet-scale products and AI platforms.

Aiden has published more than 30 papers, with over 13,000 citations on Google Scholar, and has received the Amazon Ph.D. Fellowship, the Qualcomm Innovation Fellowship, and best-paper awards at NeurIPS and AAAI. The FedML framework he created is one of the most widely used open-source projects in federated learning, supporting an average of 27 billion requests per day; he is also a core author of the FedNLP framework and of hybrid model-parallel training methods, which are widely applied in decentralized AI projects such as Sahara AI.

In December 2024, ChainOpera AI announced the completion of a $3.5 million seed round, bringing its total financing together with TensorOpera to $17 million. The funds will be used to build a blockchain L1 and an AI operating system for decentralized AI Agents. The round was led by Finality Capital, Road Capital, and IDG Capital, with participation from Camford VC, ABCDE Capital, Amber Group, and Modular Capital, and with support from well-known institutions and individual investors including Sparkle Ventures, Plug and Play, USC, EigenLayer founder Sreeram Kannan, and BabylonChain co-founder David Tse. The team stated that this round will accelerate its vision of a decentralized AI ecosystem co-owned and co-created by AI resource contributors, developers, and users.

VIII. Analysis of the Federated Learning and AI Agent Market Landscape

The federated learning framework has four main representatives: FedML, Flower, TFF, and OpenFL. Among them, FedML is the most full-stack, combining federated learning, distributed large model training, and MLOps, making it suitable for industrial implementation; Flower is lightweight and easy to use, with an active community, focusing on education and small-scale experiments; TFF heavily relies on TensorFlow, has high academic research value but weak industrialization; OpenFL focuses on healthcare/finance, emphasizing privacy compliance, and has a relatively closed ecosystem. Overall, FedML represents an industrial-grade all-in-one path, Flower emphasizes usability and education, TFF leans towards academic experiments, while OpenFL has advantages in compliance within vertical industries.

In terms of industrialization and infrastructure, TensorOpera (the commercialization of FedML) is characterized by inheriting the technical accumulation of open-source FedML, providing integrated capabilities for cross-cloud GPU scheduling, distributed training, federated learning, and MLOps. Its goal is to bridge academic research and industrial application, serving developers, small and medium enterprises, and the Web3/DePIN ecosystem. Overall, TensorOpera can be seen as a "Hugging Face + W&B" built on top of open-source FedML, offering a more complete full-stack capability for distributed training and federated learning and distinguishing itself from platforms that focus on community, tooling, or a single industry.

Among the representatives of innovation, both ChainOpera and Flock attempt to combine federated learning with Web3, but their directions show significant differences. ChainOpera builds a full-stack AI Agent platform, covering four layers of architecture: entry, social, development, and infrastructure. Its core value lies in transforming users from "consumers" to "co-creators," achieving collaborative AGI and community co-construction through the AI Terminal and Agent Social Network. In contrast, Flock focuses more on blockchain-enhanced federated learning (BAFL), emphasizing privacy protection and incentive mechanisms in decentralized environments, primarily targeting collaborative verification at the computing power and data layers. ChainOpera leans towards the application and Agent network layer implementation, while Flock emphasizes strengthening the underlying training and privacy computing.

At the Agent network level, the most representative project in the industry is Olas Network. ChainOpera originates from federated learning, constructing a full-stack closed loop of model-computing power-agent, and uses the Agent Social Network as an experimental field to explore multi-agent interaction and social collaboration. Olas Network, on the other hand, stems from DAO collaboration and the DeFi ecosystem, positioning itself as a decentralized autonomous service network, launching directly applicable DeFi yield scenarios through Pearl, showcasing a distinctly different path from ChainOpera.

IX. Investment Logic and Potential Risk Analysis

Investment Logic

ChainOpera's advantages primarily lie in its technological moat: from FedML (the benchmark open-source framework for federated learning) to TensorOpera (enterprise-level full-stack AI Infra), and then to ChainOpera (Web3-based Agent network + DePIN + Tokenomics), forming a unique continuous evolution path that combines academic accumulation, industrial implementation, and crypto narrative.

In terms of application and user scale, the AI Terminal has reached hundreds of thousands of daily active users and an agent-application ecosystem numbering in the thousands, ranking first in the BNBChain DApp Bay AI category and demonstrating clear on-chain user growth and real transaction volume. Its multimodal scenarios covering the crypto-native field are expected to gradually spill over to a broader Web2 user base.

In terms of ecological cooperation, ChainOpera initiated the CO-AI Alliance, collaborating with partners such as io.net, Render, TensorOpera, FedML, and MindNetwork to build multi-faceted network effects around GPU, models, data, and privacy computing. Additionally, it has partnered with Samsung Electronics to validate mobile multi-modal GenAI, showcasing the potential for expansion into hardware and edge AI.

Regarding the token and economic model, ChainOpera distributes incentives based on the Proof-of-Intelligence consensus, revolving around five major value streams (LaunchPad, Agent API, Model Serving, Contribution, Model Training), and forms a positive cycle through a 1% platform service fee, incentive distribution, and liquidity support, avoiding a single "speculative token" model and enhancing sustainability.

Potential Risks

Firstly, the difficulty of technological implementation is high. The five-layer decentralized architecture proposed by ChainOpera spans a wide range, and cross-layer collaboration (especially in large model distributed inference and privacy training) still faces performance and stability challenges, which have not yet been validated through large-scale applications.

Secondly, ecosystem user stickiness still needs to be observed. Although the project has achieved initial user growth, it remains to be seen whether the Agent Marketplace and developer toolchain can sustain long-term activity and high-quality supply. The currently launched Agent Social Network is mainly LLM-driven text dialogue, and user experience and long-term retention still need improvement. If the incentive mechanism is not designed carefully enough, short-term activity may be high while long-term value remains insufficient.

Finally, the sustainability of the business model remains to be confirmed. At this stage, revenue mainly relies on platform service fees and token circulation, and stable cash flow has not yet formed. Compared to applications with more financial or productivity attributes like AgentFi or Payment, the current model's commercial value still needs further validation; at the same time, the mobile and hardware ecosystem is still in the exploratory stage, and the market prospects carry a certain degree of uncertainty.

Disclaimer: This article was created with the assistance of the AI tool ChatGPT-5. The author has made efforts to proofread and ensure the information is true and accurate, but there may still be omissions. It is particularly noted that the cryptocurrency asset market generally exhibits a divergence between project fundamentals and secondary market price performance. The content of this article is for information integration and academic/research exchange only and does not constitute any investment advice, nor should it be viewed as a recommendation for buying or selling any tokens.

