Market snapshot

  • BTC $71,005.31 (-4.01%)
  • ETH $2,190.63 (-5.90%)
  • SOL $90.09 (-4.89%)
  • WLD $0.3666 (-6.60%)
  • USDC $0.9998 (0.00%)
  • HYPE $41.60 (+0.31%)

Paolo Ardoino 🤖 | Mar 17, 2026 13:15
Tether AI breakthrough

The Tether AI team has just released a new version of QVAC Fabric that includes the world's first cross-platform BitNet LoRA framework, enabling billion-parameter AI training and inference on consumer GPUs and smartphones.

Background

Microsoft's BitNet uses a one-bit architecture to dramatically compress models. Traditional LLMs operate on full-precision computation, where weights are stored as high-resolution floating-point numbers. BitNet's innovation is to shrink these weights into a tiny ternary range of only -1, 0, and 1, significantly reducing memory usage and computation. LoRA is a parameter-efficient fine-tuning technique that reduces the number of trainable parameters by up to ninety-nine percent. Together they slash memory and compute requirements. Yet BitNet has mostly been limited to CPU or NVIDIA CUDA backends, and it lacked support for LoRA fine-tuning.

Enter QVAC Fabric: the unlock

With QVAC Fabric LLM, BitNet LoRA fine-tuning and inference work cross-platform for the first time, across GPU vendors and operating systems, using Vulkan and Metal backends. That means support for AMD, Intel, and Apple Metal GPUs, as well as mobile GPUs. And for the first time ever, BitNet inference runs efficiently on smartphones using mobile GPUs. On flagship devices, GPU inference is 2 to 11 times faster than CPU while using up to 90% less memory than full-precision models.

The biggest unlock is QVAC Fabric LLM's support for BitNet LoRA fine-tuning on heterogeneous GPUs. Our team demonstrated this by fine-tuning models of up to 3.8 billion parameters on flagship phones such as the Pixel 9, S25, and iPhone 16, and up to 13 billion parameters on the iPhone 16.

GitHub repositories:

  • https://github.com/tetherto/qvac-fabric-llm.cpp : general QVAC Fabric codebase
  • https://github.com/tetherto/qvac-rnd-fabric-llm-bitnet : QVAC Fabric's BitNet knowledge base, architecture docs, and pre-built binaries

What does it mean?

What used to require dedicated GPUs now runs on consumer hardware. This breakthrough is the first real-world signal of local, private AI that can truly serve the people. And this is just the beginning. In the coming months and years, Tether will relentlessly continue to invest significant resources and capital in researching and developing open-source intelligence that can scale and evolve on local devices, providing maximum utility and privacy to its users. The era of Stable Intelligence has just begun. Free as in freedom. (Paolo Ardoino 🤖)
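The two ideas the post combines can be sketched in a few lines. This is an illustrative NumPy-only sketch, not QVAC Fabric's actual code: the function names and shapes are mine, the rounding follows the published BitNet "absmean" ternary recipe, and the LoRA count is the standard two-factor parameter budget.

```python
import numpy as np

def ternary_quantize(W, eps=1e-8):
    """BitNet-style absmean quantization: scale by the mean absolute
    weight, then round every entry to -1, 0, or +1. Ternary weights
    need ~1.58 bits each vs. 16 for FP16, hence the ~90% memory cut."""
    scale = np.abs(W).mean() + eps
    Wq = np.clip(np.round(W / scale), -1, 1)
    return Wq, scale  # approximate reconstruction: Wq * scale

def lora_trainable_params(d_in, d_out, rank):
    """LoRA freezes the d_in x d_out base matrix and trains only two
    low-rank factors A (d_in x rank) and B (rank x d_out)."""
    return rank * (d_in + d_out)

rng = np.random.default_rng(0)
W = rng.normal(size=(4096, 4096)).astype(np.float32)  # one toy layer
Wq, s = ternary_quantize(W)
assert set(np.unique(Wq)) <= {-1.0, 0.0, 1.0}

full = W.size                                        # 16,777,216 weights
lora = lora_trainable_params(4096, 4096, rank=8)     # 65,536 weights
print(f"LoRA trains {lora / full:.4%} of the layer")  # 0.3906%
```

At rank 8 the trainable fraction is 65,536 / 16,777,216 ≈ 0.39%, consistent with the "up to ninety-nine percent" reduction the post cites; combining that with ternary base weights is what makes on-device fine-tuning plausible.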
Timeline

  • Mar 17, 19:06 ClawTeam - Agent Collective Intelligence Groundbreaking Release
  • Mar 17, 13:05 Tether launches BitNet LoRA framework
  • Mar 17, 05:41 PANews launches official Skills toolkit
  • Mar 17, 04:54 NVIDIA launches the Vera Rubin platform, significantly reducing inference costs
  • Mar 16, 20:40 NVIDIA releases NemoClaw enterprise AI agent framework
  • Mar 16, 18:15 Apple and Microsoft release AI computers
  • Mar 15, 16:53 DeepSeek V4 and the new Hunyuan model are expected to be released next month
  • Mar 14, 17:29 New indicator released: Options Max Pain Point
  • Mar 14, 06:02 DeepSeek V4 and Tencent's new Hunyuan model are expected to be released in April
  • Mar 13, 00:08 Meta delays the release of its new AI model due to performance issues
