
Tether Launches Bitnet AI Framework for Smartphones, Cutting Need for Nvidia GPUs

bitcoin.com
3 hours ago

On Tuesday, Tether unveiled a cross-platform LoRA fine-tuning framework for Microsoft’s Bitnet models, introducing what it described as the first system capable of training and running 1-bit large language models across consumer devices, including smartphones and laptops.

The release is part of Tether’s QVAC Fabric stack and is designed to reduce the heavy compute and memory demands typically associated with artificial intelligence development, which has largely been confined to cloud providers and high-end Nvidia hardware.

By supporting heterogeneous hardware—including chips from Intel, AMD, and Apple, as well as mobile GPUs—the framework allows developers to fine-tune models locally without relying on centralized infrastructure.

In practice, that means AI workloads once reserved for data centers can now run on devices sitting in a backpack or a pocket, a shift that could lower costs and broaden access for developers across the United States and globally.
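The LoRA technique at the core of the release can be illustrated with a minimal sketch: the frozen base weight matrix is augmented with a trainable low-rank update, so fine-tuning touches only a small fraction of the parameters. The dimensions, rank, and scaling below are illustrative assumptions, not details from Tether's framework.

```python
import numpy as np

# Minimal LoRA sketch (illustrative, not Tether's implementation):
# the frozen base weight W gains a low-rank update B @ A, and only
# A and B (rank r << d) are trained during fine-tuning.
d_in, d_out, r = 512, 512, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, rank r
B = np.zeros((d_out, r))                    # trainable, starts at zero

def forward(x, alpha=16.0):
    # Effective weight is W + (alpha / r) * B @ A, applied lazily
    # so the full matrix is never materialized.
    return W @ x + (alpha / r) * (B @ (A @ x))

full = W.size            # parameters a full fine-tune would update
lora = A.size + B.size   # parameters LoRA actually trains
print(f"LoRA trains {lora:,} of {full:,} params ({lora / full:.1%})")
```

At this (toy) size, the adapters amount to about 3% of the base weights, which is the property that makes on-device fine-tuning plausible at all.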

Tether said its engineers successfully demonstrated Bitnet fine-tuning on mobile GPUs, including Adreno, Mali, and Apple Bionic chips, marking a first for the emerging 1-bit model architecture.

Performance benchmarks released by the company show a 125 million-parameter model can be fine-tuned in about 10 minutes on a Samsung S25 device, while a 1 billion-parameter model completes the same task in roughly 1 hour and 18 minutes on the same hardware.

On Apple devices, the company reported similar results, with a 1 billion-parameter model fine-tuned in approximately 1 hour and 45 minutes on an iPhone 16, and experimental runs pushing models up to 13 billion parameters on-device.

The framework also showed measurable gains in inference speed, with mobile GPUs delivering between two and 11 times the performance of CPUs, according to Tether’s internal benchmarks.

Memory efficiency is another key selling point, with Bitnet-1B using up to 77.8% less VRAM than comparable 16-bit models and more than 65% less than other widely used architectures, enabling larger models to run on limited hardware.
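A back-of-envelope calculation shows where savings of this order come from. Assuming weights stored in the BitNet b1.58 ternary format (roughly 1.58 bits per weight, an assumption on my part; the article says only "1-bit"), weight storage alone shrinks by about 90% versus 16-bit; real totals also include activations, KV cache, and runtime overhead, which is consistent with the smaller 77.8% figure reported.

```python
# Back-of-envelope VRAM estimate for a 1B-parameter model.
# Illustrative assumptions only: 1.58 bits/weight for the ternary
# format vs. 16 bits/weight for fp16; weights only, no activations.
params = 1_000_000_000

def weight_bytes(bits_per_param):
    return params * bits_per_param / 8

fp16 = weight_bytes(16)       # 2.0 GB of weights
ternary = weight_bytes(1.58)  # ~0.2 GB ({-1, 0, +1} packing)

print(f"fp16 weights:       {fp16 / 1e9:.2f} GB")
print(f"1.58-bit weights:   {ternary / 1e9:.2f} GB")
print(f"weight-only saving: {1 - ternary / fp16:.1%}")
```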

Tether said the system also enables LoRA fine-tuning on non-Nvidia hardware for the first time in this category, a move that could reduce reliance on specialized chips and cloud services while keeping sensitive data stored locally on user devices.

The company added that the approach could make federated learning more practical by allowing models to be trained across distributed devices without centralizing data, an area of growing interest in privacy-focused AI development.
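The federated setup described above is typically realized with federated averaging (FedAvg): each device trains locally on its own data and only parameter updates are aggregated centrally. A toy sketch, with a stand-in "local update" step rather than real fine-tuning:

```python
import numpy as np

# Sketch of federated averaging (FedAvg): each device updates a
# shared adapter on its private data; only the updates are averaged
# centrally, so raw data never leaves the device. Illustrative only.
rng = np.random.default_rng(1)

global_adapter = np.zeros(4)  # shared LoRA-style adapter vector

def local_update(adapter, device_data):
    # Stand-in for on-device fine-tuning: nudge the adapter toward
    # statistics of this device's (private) data.
    return adapter + 0.1 * (device_data.mean(axis=0) - adapter)

# Three devices, each with its own small private dataset.
device_datasets = [rng.standard_normal((20, 4)) + i for i in range(3)]

for _round in range(5):
    updates = [local_update(global_adapter, d) for d in device_datasets]
    global_adapter = np.mean(updates, axis=0)  # only updates are shared

print(global_adapter.round(2))
```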

“By enabling meaningful large-model training on consumer hardware, including smartphones, Tether’s QVAC is proving that advanced AI can be decentralized, inclusive, and empowering for everyone,” Tether CEO Paolo Ardoino said in a statement, adding that the company plans continued investment in on-device AI infrastructure.

The technical release, including benchmarks and implementation details, has been published through Hugging Face, signaling an effort to reach developers directly rather than gate the technology behind proprietary systems.

  • What is Tether’s new AI framework?
    Tether’s QVAC Fabric introduces a cross-platform system for training and running Bitnet AI models on consumer devices like phones and laptops.
  • Can smartphones really train AI models?
    Yes, Tether’s benchmarks show billion-parameter models can be fine-tuned on devices like the Samsung S25 and iPhone 16 within hours.
  • Why is this important for U.S. developers?
    It reduces reliance on expensive cloud infrastructure and specialized GPUs, lowering costs and increasing access to AI development.
  • What makes Bitnet different from other models?
    Bitnet uses a 1-bit architecture that significantly reduces memory usage and improves efficiency compared to traditional 16-bit models.
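The "1-bit architecture" can be made concrete with a sketch of ternary weight quantization in the style of BitNet b1.58, where weights are scaled by their mean absolute value, then rounded and clipped to {-1, 0, +1}. This follows the published BitNet b1.58 recipe as I understand it, not Tether's or Microsoft's actual code.

```python
import numpy as np

# Absmean-style ternary quantization (BitNet b1.58 sketch):
# scale by mean |W|, round, clip to {-1, 0, +1}. Illustrative only.
def absmean_quantize(W, eps=1e-8):
    scale = np.abs(W).mean() + eps
    Wq = np.clip(np.round(W / scale), -1, 1)
    return Wq, scale  # approximate dequantization: Wq * scale

rng = np.random.default_rng(42)
W = rng.standard_normal((4, 4))
Wq, scale = absmean_quantize(W)
print(Wq)  # every entry is -1, 0, or +1
```

Because each weight needs at most ~1.58 bits instead of 16, matrix multiplies reduce largely to additions and subtractions, which is what makes CPU- and mobile-GPU-class hardware viable.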
