How does the Mira protocol achieve greater honesty in AI through a decentralized consensus mechanism?

Mira provides a new direction: instead of relying on a single AI to determine answers, it depends on a group of independent models to "vote for the truth."

Author: Messari

Translation: Elponcho, Chain News

In today's thriving generative AI landscape, we still struggle to address a fundamental issue: AI sometimes produces nonsensical outputs with complete seriousness. This phenomenon is known in the industry as "hallucination." Mira, a decentralized protocol designed for AI output verification, is attempting to enhance the "factual credibility" of AI through a multi-model consensus mechanism and cryptographic auditing. Below, we will explore how Mira operates, why it is more effective than traditional methods, and its current achievements in real-world applications.

This report is based on a research report published by Messari.

Decentralized Fact Verification Protocol: The Basic Operating Principle of Mira

Mira is not an AI model but an embedded verification layer. When an AI model produces a response (such as a chatbot answer, summary, automated report, etc.), Mira breaks down the output into a series of independent factual claims. These claims are sent to its decentralized verification network, where each node (i.e., verifier) runs different architectures of AI models to assess whether these claims are true.

Each node returns a verdict of "correct," "incorrect," or "uncertain" for each claim, and the system reaches an overall decision by majority consensus. If most models judge a claim true, it is approved; otherwise it is flagged, rejected, or accompanied by a warning.
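The voting step described above can be sketched in a few lines of Python. This is an illustrative model only: the verdict labels come from the article, but the two-thirds threshold and the `consensus` function are assumptions for the sketch, not documented Mira parameters.

```python
from collections import Counter

def consensus(verdicts, threshold=2/3):
    """Aggregate per-node verdicts ('correct' / 'incorrect' / 'uncertain')
    for one factual claim into an overall decision.

    A claim is approved only when a supermajority of nodes agree it is
    correct; anything short of that is flagged for review.
    """
    counts = Counter(verdicts)
    top_label, votes = counts.most_common(1)[0]
    if top_label == "correct" and votes / len(verdicts) >= threshold:
        return "approved"
    return "flagged"

print(consensus(["correct", "correct", "correct", "incorrect"]))  # approved
print(consensus(["correct", "incorrect", "uncertain"]))           # flagged
```

The key design point is that approval requires positive agreement: a claim that merely fails to attract "incorrect" votes is still flagged, which biases the system toward caution.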

This process is completely transparent and auditable. Each verification generates a cryptographic certificate recording the models involved, their individual votes, and a timestamp, so any third party can verify the result.
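A minimal version of such an audit record can be built with a content hash, so that anyone holding the record can detect tampering. The field names and SHA-256 scheme below are assumptions for illustration, not Mira's actual certificate format.

```python
import hashlib
import json
import time

def make_certificate(claim, models, verdicts):
    """Build a tamper-evident audit record for one verification round.

    The digest commits to the claim, the participating models, their
    votes, and the timestamp; recomputing it over the same fields
    verifies the record has not been altered.
    """
    record = {
        "claim": claim,
        "models": models,        # which models voted
        "verdicts": verdicts,    # their individual votes
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record

cert = make_certificate(
    "Paris is the capital of France",
    ["model-a", "model-b", "model-c"],
    ["correct", "correct", "correct"],
)
```

In a production system the digest would typically be anchored on-chain, which is what makes the record independently auditable rather than merely self-consistent.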

Why Does AI Need Verification Systems Like Mira?

Generative AI models (such as GPT, Claude) are not deterministic tools; they predict the next token based on probabilities and have no built-in "factual awareness." This design lets them write poetry and tell jokes, but it also means they can produce false information with complete seriousness.

The verification mechanism proposed by Mira aims to address four core issues currently faced by AI:

  1. Widespread Hallucinations: AI has repeatedly been caught fabricating policies, inventing historical events, and misquoting sources.

  2. Black Box Operation: Users do not know where AI's answers come from and cannot trace them back.

  3. Inconsistent Outputs: The same question may yield different answers from AI.

  4. Centralized Control: Most AI models are monopolized by a few companies, preventing users from verifying their logic or seeking a second opinion.

Limitations of Traditional Verification Methods

Current alternatives, such as human review (Human-in-the-loop), rule-based filters, and model self-checking, all have their shortcomings:

  • Human Review is difficult to scale, slow, and costly.

  • Rule-based Filtering is limited to predetermined scenarios and is ineffective against creative errors.

  • Model Self-Review is ineffective, as AI often exhibits overconfidence in incorrect answers.

  • Centralized Ensembles can cross-check outputs but lack model diversity, making them prone to "collective blind spots."

Mira's Innovative Mechanism: Combining Consensus Mechanism with AI Division of Labor

Mira's key innovation is introducing the blockchain consensus concept into AI verification. Each AI output, after passing through Mira, becomes multiple independent factual statements that various AI models "vote" on. Only when a certain proportion of models reach consensus is the content considered credible.

Core Design Advantages of Mira Include:

  • Model Diversity: Models from different architectures and data backgrounds reduce collective bias.

  • Error Tolerance: Even if some nodes make mistakes, it does not affect the overall result.

  • Full Chain Transparency: Verification records are on-chain and available for auditing.

  • Strong Scalability: Capable of verifying over 3 billion tokens daily (equivalent to millions of text segments).

  • No Human Intervention Required: Automated processes without the need for manual verification.
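The error-tolerance claim above has a simple statistical basis: if verifiers err independently (which is exactly what model diversity is meant to ensure), the probability that a majority of them is wrong at the same time falls rapidly as the panel grows. The 0.85 per-model accuracy below is an illustrative assumption, not a figure from the report.

```python
from math import comb

def p_majority_wrong(n, p):
    """Probability that a strict majority of n independent verifiers err,
    when each one is correct with probability p (Condorcet-style tail)."""
    wrong_needed = n // 2 + 1        # wrong votes needed for a wrong majority
    q = 1 - p                        # per-verifier error rate
    return sum(comb(n, k) * q**k * p**(n - k)
               for k in range(wrong_needed, n + 1))

# With 85%-accurate verifiers, the chance of a wrong majority shrinks
# as more independent nodes vote:
for n in (1, 3, 7, 15):
    print(n, round(p_majority_wrong(n, 0.85), 5))
```

This only holds when errors are uncorrelated; models sharing an architecture or training data tend to fail together, which is why the protocol emphasizes diversity rather than just node count.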

Decentralized Infrastructure: Who Provides the Nodes and Computing Resources?

Mira's verification nodes are provided by global decentralized computing contributors. These contributors are known as Node Delegators, who do not directly operate nodes but lease GPU computing resources to certified node operators. This "computing as a service" model significantly expands Mira's processing scale.

Major Partner Node Providers Include:

  • Io.Net: Provides a DePIN architecture GPU computing network.

  • Aethir: Focuses on decentralized cloud GPU for AI and gaming.

  • Hyperbolic, Exabits, Spheron: Several blockchain computing platforms also provide infrastructure for Mira nodes.

Node participants must undergo a KYC video verification process to ensure network uniqueness and security.

Mira Verification Increases AI Accuracy to 96%

According to data from the Mira team in the Messari report, after filtering through its verification layer, the factual accuracy of large language models increased from 70% to 96%. In real-world scenarios such as education, finance, and customer service, the frequency of hallucinated content decreased by 90%. Importantly, these improvements were achieved without retraining the AI models, solely through "filtering."

Mira has now been integrated into multiple application platforms, including:

  • Educational tools

  • Financial analysis products

  • AI chatbots

  • Third-party verified-generation API services

The entire Mira ecosystem encompasses over 4.5 million users, with more than 500,000 daily active users. Although most people have not directly interacted with Mira, their AI responses have quietly undergone the verification mechanism behind it.

Mira Builds a Trustworthy Foundation for AI

As the AI industry increasingly pursues scale and efficiency, Mira offers a new direction: instead of relying on a single AI to determine answers, it depends on a group of independent models to "vote for the truth." This architecture not only makes the output results more credible but also establishes a "verifiable trust mechanism" with high scalability.

As the user base expands and third-party audits become more prevalent, Mira has the potential to become an indispensable infrastructure in the AI ecosystem. For any developers and enterprises hoping their AI can stand firm in real-world applications, the "decentralized verification layer" represented by Mira may be one of the key pieces of the puzzle.
