© Copyright 2013-2026. All rights reserved.

BTC  $80,276.04  -1.77%
ETH  $2,300.40   -2.66%
SOL  $88.26      -0.29%
TON  $2.43       +7.05%
ZEC  $563.50     -1.93%
USDC $0.9999      0%

Odaily Planet Daily | May 07, 2026 12:04
Tether releases locally runnable medical AI model QVAC MedPsy

Odaily Planet Daily News: Tether's AI research group has released QVAC MedPsy, a new generation of medical AI models that run directly on low-compute hardware such as smartphones and wearable devices, without relying on cloud servers, while surpassing several larger state-of-the-art models on multiple medical benchmarks.

According to official figures, the 1.7-billion-parameter version of QVAC MedPsy averages 62.62 across seven closed healthcare benchmarks, 11.42 points higher than Google's MedGemma-1.5-4B-it despite being less than half its size. On real-world clinical tests such as HealthBench Hard, the model even surpassed MedGemma 27B, whose parameter count is nearly 16 times larger. The 4-billion-parameter version averages 70.54, beating several models nearly 7 times its scale on multiple medical reasoning evaluations.

Tether says the models achieve "small-model, high-performance" results through post-training medical-reasoning optimization, reinforcement learning, and training on high-quality medical data. Compared with traditional cloud-based AI architectures, QVAC MedPsy also cuts inference cost significantly: the 4-billion-parameter version generates an average of about 909 tokens per response, far below the roughly 2,953 tokens of comparable systems, yielding lower latency and lower compute cost. Quantized GGUF versions are also provided for local deployment on mobile and edge devices.

Paolo Ardoino said the core goal of the model is efficiency rather than simply scaling up parameters, so that medical AI can run directly on local hospital systems or end-user devices, keeping sensitive medical data from being uploaded to the cloud.
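The comparison figures quoted above can be sanity-checked with quick arithmetic. Note that the 51.20 baseline average is inferred here from the stated 11.42-point margin, not quoted in the announcement:

```python
# Sanity-check the figures quoted in the news flash.

qvac_1p7b_avg = 62.62   # QVAC MedPsy 1.7B, average over 7 healthcare benchmarks
margin = 11.42          # stated lead over MedGemma-1.5-4B-it

# Implied average for the MedGemma baseline (inferred, not quoted directly)
medgemma_avg = qvac_1p7b_avg - margin
print(f"implied MedGemma average: {medgemma_avg:.2f}")   # 51.20

# Token cost: ~909 tokens per response vs ~2,953 for comparable systems
qvac_tokens, baseline_tokens = 909, 2953
reduction = 1 - qvac_tokens / baseline_tokens
print(f"token reduction: {reduction:.0%}")               # 69%

# "Nearly 16 times larger": MedGemma 27B vs the 1.7B QVAC variant
print(f"parameter ratio: {27 / 1.7:.1f}x")               # 15.9x
```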

Timeline

May 06, 21:31 | Ethoswarm ecosystem releases new skills and applications
May 04, 09:54 | OpenClaw releases version update 2026.5.3
May 01, 19:48 | The industry will face quantum security solutions
May 01, 03:18 | OpenAI releases tutorials for Codex and GPT-5.5
Apr 29, 09:30 | GPU architecture evolution and growing demand for HBM memory
Apr 26, 14:00 | Litecoin Core v0.21.5.4 released, upgrade recommended
Apr 24, 08:20 | DeepSeek releases the open-source preview version of DeepSeek-V4
Apr 24, 00:41 | OpenAI officially releases GPT-5.5, with significant improvements in intelligence and efficiency
Apr 23, 05:13 | OpenAI releases the world's most powerful AI image model
Apr 21, 23:34 | ChatGPT Images 2.0 officially released
