I came across QBio, a medical AI tool focused on breast density classification and transparent report generation. Upload an X-ray, and within minutes it tells you whether the breast density is category A, B, C, or D, along with a detailed report explaining the decision-making process.
It was developed in collaboration between Fetch and Hybrid, and QBio is just the appetizer; the real star is ASI-1 Mini.
Fetch is a long-established project. During the years when DeFi captured the entire market's attention, Fetch stayed focused on AI + Crypto, steadily researching and applying general-purpose technology for multi-model agents.
What is ASI-1 Mini
In February of this year, Fetch launched the world's first Web3 native large language model (LLM) — ASI-1 Mini. What does Web3 native mean? Simply put, it means it integrates seamlessly with blockchain, allowing you to not only use AI but also invest in, train, and own AI through the $FET token and ASI wallet.
So what exactly is ASI-1 Mini?
It is a large language model designed specifically for agentic AI, capable of coordinating multiple AI agents to handle complex multi-step tasks.
For example, the ASI Train reasoning agent behind QBio is part of ASI-1 Mini. It can not only classify breast density but also explain its decision-making process, addressing the AI "black box problem." Even more impressive, ASI-1 Mini can run on just two GPUs, making it far cheaper than other LLMs (DeepSeek, for example, requires 16 H100 GPUs) and well suited to small and medium-sized institutions.
How ASI-1 Mini Innovates
ASI-1 Mini's performance is comparable to leading LLMs, but its hardware costs are significantly lower. It features dynamic reasoning modes and advanced adaptive capabilities, enabling more efficient and context-aware decision-making.
MoM and MoA
These are acronyms; don't worry, they're simple: Mixture of Models (MoM) and Mixture of Agents (MoA).
Imagine a team of AI experts, each focusing on different tasks, working smoothly together. This not only enhances efficiency but also makes the decision-making process more transparent. For instance, in medical image analysis, MoM might select one model specialized in image recognition and another specialized in text generation, while MoA coordinates the outputs of these two models to ensure the final report is both accurate and easy to read.
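The medical-imaging example above can be sketched in a few lines. Everything here is illustrative: the specialist names, the fixed two-step routing, and the coordinator class are assumptions for the sake of the example, not the actual ASI-1 Mini internals, which are not public at this level of detail.

```python
# Sketch of a Mixture-of-Models (MoM) selection plus a Mixture-of-Agents
# (MoA) coordinator. Model names and the routing rule are invented.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Specialist:
    name: str
    handle: Callable[[str], str]  # task -> partial result


def image_model(task: str) -> str:
    # Stand-in for a model specialized in image recognition.
    return f"[image-model] density features extracted for: {task}"


def text_model(analysis: str) -> str:
    # Stand-in for a model specialized in report generation.
    return f"[text-model] report drafted from: {analysis}"


class AgentCoordinator:
    """MoA layer: picks specialists (the MoM step) and chains their outputs."""

    def __init__(self, specialists: Dict[str, Specialist]):
        self.specialists = specialists

    def run(self, task: str) -> str:
        # A real system would use a learned router; here we simply call
        # the vision specialist, then hand its output to the writer.
        analysis = self.specialists["vision"].handle(task)
        return self.specialists["writer"].handle(analysis)


coordinator = AgentCoordinator({
    "vision": Specialist("vision", image_model),
    "writer": Specialist("writer", text_model),
})
print(coordinator.run("mammogram #42"))
```

The point of the structure is that each specialist stays small and auditable, and the coordinator is the single place where the hand-offs between them are visible.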
Transparency and Scalability
Traditional LLMs are often "black boxes": you ask a question, they give you an answer, but as for why they answered that way, sorry, no comment. ASI-1 Mini is different; through continuous multi-step reasoning, it can tell you, "I chose this answer for these reasons," which is crucial, especially in the medical field.
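The "answer plus reasons" idea can be illustrated with a toy classifier that records a trace alongside its result. The density thresholds and the trace format are hypothetical, chosen only to show the pattern of returning a decision together with the steps that produced it.

```python
# Toy "transparent" classifier: every decision step is logged to a trace
# that can be shown to the user. Thresholds are invented for illustration.
class TracedReasoner:
    def __init__(self):
        self.trace = []

    def step(self, note: str):
        # Record one reasoning step in human-readable form.
        self.trace.append(note)

    def classify_density(self, fibroglandular_ratio: float) -> str:
        self.step(f"measured fibroglandular ratio = {fibroglandular_ratio:.2f}")
        if fibroglandular_ratio < 0.25:
            label = "A"
        elif fibroglandular_ratio < 0.50:
            label = "B"
        elif fibroglandular_ratio < 0.75:
            label = "C"
        else:
            label = "D"
        self.step(f"ratio falls in the band for category {label}")
        return label


reasoner = TracedReasoner()
print(reasoner.classify_density(0.62))  # → C
for line in reasoner.trace:
    print("-", line)
```

A clinician reviewing the output sees not just "C" but the measured ratio and the rule that fired, which is exactly the kind of audit trail the black-box criticism asks for.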
The context window of ASI-1 Mini will expand to 10 million tokens, supporting multimodal capabilities (such as image and video processing). In the future, the Cortex series models will be launched, focusing on cutting-edge fields like robotics and biotechnology.
Hardware Efficiency
Other LLMs require high hardware costs, while ASI-1 Mini can run on just two GPUs. This means that even a small clinic can afford it, without needing a million-dollar data center.
Why is it so efficient? Because ASI-1 Mini's design philosophy is "less is more." It maximizes the use of limited computational resources through optimized algorithms and model structures. In contrast, other LLMs often pursue larger-scale models, resulting in massive resource consumption.
Community-Driven
Unlike other large language models, ASI-1 Mini is trained through decentralized methods and is community-driven. ASI-1 Mini is a tiered freemium product aimed at $FET holders, who can connect their Web3 wallets to unlock all features. The more FET tokens held in the wallet, the more features of the model can be explored.
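Balance-based feature gating of this kind is straightforward to sketch. The tier names, thresholds, and feature sets below are invented for the example; Fetch's actual schedule of tiers is not specified in this article.

```python
# Illustrative tiered-freemium gate keyed on a wallet's FET balance.
# Thresholds, tier names, and features are hypothetical.
TIERS = [
    (0,      "free", {"chat"}),
    (1_000,  "plus", {"chat", "agents"}),
    (10_000, "pro",  {"chat", "agents", "multimodal"}),
]


def features_for(fet_balance: float) -> set:
    """Return the feature set of the highest tier the balance reaches."""
    unlocked = set()
    for threshold, _name, features in TIERS:
        if fet_balance >= threshold:
            unlocked = features
    return unlocked


print(sorted(features_for(2_500)))  # → ['agents', 'chat']
```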
This community-driven model is like crowdfunding, but for training and validating artificial intelligence: high tech is no longer just for the elite, and everyone can participate.
In today's relatively mature LLM landscape, why build a separate ASI-1 Mini? The answer is simple: it fills the gap at the intersection of Web3 and AI.
Current LLMs (like ChatGPT, Grok) mainly serve centralized environments, while ASI-1 Mini is the first LLM designed for decentralized ecosystems. It not only makes AI more transparent and efficient but also allows community members to directly benefit from AI growth.
The emergence of ASI-1 Mini marks a shift of AI from "black box" to "transparent," from "centralized" to "decentralized," and from "tool" to "asset." It can play a role in the medical field (like QBio) and also show potential in finance, law, research, and other areas.
This month, Fetch partnered with Rivalz to integrate ASI-1 Mini into Rivalz's Agentic Data Coordination System (ADCS), enabling on-chain AI reasoning. With this collaboration, decentralized applications can directly access advanced AI reasoning capabilities on the blockchain.
Traditional blockchain environments are resource-constrained: smart contracts can only handle lightweight tasks, typically fetching simple data (like prices) through oracles, and cannot run complex AI models directly. ADCS solves this problem by performing the heavy AI reasoning off-chain and returning the results securely to the blockchain, preserving decentralization and trust.
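The request/fulfill flow described above follows the general oracle pattern and can be sketched schematically. The bridge class, request shape, and attestation hash below are simplified assumptions for illustration, not the real ADCS interface.

```python
# Schematic off-chain compute / on-chain callback pattern: a contract's
# request is served off-chain and the result is posted back with a hash
# the contract could check. All shapes here are illustrative.
import hashlib
import json


def offchain_inference(prompt: str) -> str:
    # Stand-in for the heavy model that runs off-chain.
    return f"answer({prompt})"


class OracleBridge:
    """Relays a request off-chain, then returns the result plus an
    attestation digest binding the result to the request id."""

    def fulfill(self, request: dict) -> dict:
        result = offchain_inference(request["prompt"])
        payload = json.dumps({"id": request["id"], "result": result})
        return {
            "id": request["id"],
            "result": result,
            "attestation": hashlib.sha256(payload.encode()).hexdigest(),
        }


bridge = OracleBridge()
response = bridge.fulfill({"id": 7, "prompt": "classify density"})
print(response["result"])            # → answer(classify density)
print(len(response["attestation"]))  # → 64
```

In a real deployment the attestation would be something the chain can verify (a signature or proof), but the division of labor is the same: compute off-chain, verify and store on-chain.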