

Tether Launches QVAC‑Based AI Framework That Brings Billion‑Parameter Models to Smartphones

June 2026 – New York – Tether, the company behind USDT, the world’s most widely used stablecoin, announced on Tuesday a new artificial‑intelligence training platform that promises to make large‑scale language models accessible on consumer‑grade hardware, including mobile phones and GPUs not based on Nvidia’s architecture.

The system, integrated into Tether’s QVAC suite, builds on Microsoft’s BitNet 1‑bit model architecture and applies Low‑Rank Adaptation (LoRA) techniques to shrink both memory footprints and compute demands. According to the company’s press release, the framework can fine‑tune models with up to one billion parameters on a standard smartphone in less than two hours, while smaller models can be updated in a matter of minutes. Tether also claims the platform can support inference for models as large as 13 billion parameters on mobile devices, a scale traditionally reserved for data‑center‑class GPUs.
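LoRA makes this feasible by freezing the pretrained weights and training only a small low‑rank update. A minimal numpy sketch (an illustration of the general technique, not Tether’s implementation; the dimensions and rank are made‑up values) shows why the trainable‑parameter count collapses:

```python
import numpy as np

d, k, r = 1024, 1024, 8          # layer dimensions and LoRA rank (illustrative)

W = np.random.randn(d, k)        # frozen pretrained weight matrix
A = np.random.randn(r, k) * 0.01 # trainable low-rank factor
B = np.zeros((d, r))             # trainable low-rank factor, initialized to zero

def lora_forward(x):
    """Forward pass: frozen weights plus the low-rank update B @ A."""
    return x @ W.T + x @ (B @ A).T

full = d * k                     # parameters if W were fine-tuned directly
lora = r * (d + k)               # trainable parameters with LoRA
print(f"full fine-tune: {full:,} params; LoRA rank-{r}: {lora:,} params "
      f"({100 * lora / full:.1f}%)")
# → full fine-tune: 1,048,576 params; LoRA rank-8: 16,384 params (1.6%)
```

For this single layer the adapter trains under 2 % of the original parameters, which is why on‑device fine‑tuning in minutes to hours becomes plausible on phone‑class hardware.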

Cross‑platform support

Unlike many AI toolkits that are tightly coupled to Nvidia CUDA, the new framework is engineered for a heterogeneous set of processors:

  • Desktop and laptop GPUs – AMD and Intel graphics cards are fully supported.
  • Apple silicon – Both Mac M‑series and iPhone/iPad chips can run training and inference workloads.
  • Qualcomm mobile GPUs – The system works on Snapdragon‑based devices, opening the door to on‑device AI for Android users.

By exploiting the 1‑bit representation of BitNet, Tether says the solution can slash video‑RAM usage by roughly 78 % compared with conventional 16‑bit models. The reduced footprint enables LoRA‑based fine‑tuning on hardware that would otherwise be unable to host such large networks.
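A back‑of‑envelope calculation illustrates the scale of the savings on the weights alone (the ~78 % figure is Tether’s claim, and presumably also accounts for activations and other unquantized state; this is not the company’s methodology). Microsoft’s BitNet b1.58 variant uses ternary weights of roughly 1.58 bits each, rounded up to 2 bits here:

```python
# Rough weight-memory estimate for a one-billion-parameter model (illustrative).
params = 1_000_000_000

fp16_gb = params * 16 / 8 / 1e9   # 16-bit weights: 2 bytes per parameter
bitnet_gb = params * 2 / 8 / 1e9  # ternary (~1.58-bit) weights, rounded up to 2 bits

print(f"16-bit weights: {fp16_gb:.2f} GB")   # → 2.00 GB
print(f"~2-bit weights: {bitnet_gb:.2f} GB") # → 0.25 GB
print(f"weight-only reduction: {100 * (1 - bitnet_gb / fp16_gb):.0f}%")
```

The weight‑only reduction (87.5 %) exceeds the quoted overall figure because activations, KV caches, and optimizer state are not shrunk by the same factor.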

Performance and potential use cases

Testing conducted by Tether’s engineers indicates that BitNet models run on mobile GPUs several times faster than comparable CPU‑only deployments. The speed gains, combined with the lower memory requirements, are seen as a catalyst for several emerging applications:

  • On‑device personalization – Users could adapt large language models to their own language style or domain without relying on cloud services.
  • Federated learning – Distributed training across many devices becomes feasible, allowing model updates to be aggregated without transmitting raw user data to a central server.
  • Edge AI for crypto services – Wallets, decentralized applications, and other blockchain tools could embed sophisticated language capabilities directly on the user’s device, enhancing privacy and reducing latency.
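The federated‑learning scenario above typically relies on a server averaging per‑device updates, as in the FedAvg algorithm of McMahan et al. A minimal sketch with hypothetical device updates (LoRA‑sized arrays and made‑up example counts; not Tether’s protocol):

```python
import numpy as np

def fed_avg(updates, weights):
    """Weighted average of per-device parameter updates (FedAvg-style).

    updates: list of parameter arrays, one per device
    weights: number of local training examples on each device
    """
    total = sum(weights)
    return sum(w * u for u, w in zip(updates, weights)) / total

# Three hypothetical devices each send a small LoRA-sized update, never raw data.
rng = np.random.default_rng(0)
updates = [rng.standard_normal((8, 64)) for _ in range(3)]
weights = [120, 80, 200]          # local example counts (made up)

merged = fed_avg(updates, weights)
print(merged.shape)               # → (8, 64); the server sees only aggregates
```

The privacy property follows from the design: only the averaged update leaves the device pool, never the user data that produced it.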

Context: Crypto firms betting on AI infrastructure

Tether’s entry into AI mirrors a broader trend of cryptocurrency‑related companies diversifying into high‑performance compute. Earlier this year, Google secured a multi‑billion‑dollar agreement with Cipher Mining to provide AI‑ready data‑center capacity. Bitcoin miner IREN announced a $3.6 billion capital raise aimed at building AI‑focused facilities, while HIVE Digital reported record revenues driven largely by its AI and HPC services. In parallel, platforms such as Coinbase, Alchemy, and Sentient’s Arena have rolled out tools that let AI agents interact with blockchain ecosystems, often using stablecoins like USDT or USDC for micro‑payments.

The convergence of blockchain and AI is accelerating, and Tether’s new framework could lower the entry barrier for developers seeking to embed large language models into decentralized products without the need for expensive cloud compute.

Key takeaways

  • QVAC + BitNet + LoRA – Enables fine‑tuning of one‑billion‑parameter models on smartphones in under two hours.
  • Hardware‑agnostic – Supports AMD, Intel, Apple silicon, and Qualcomm GPUs, widening the pool of eligible devices.
  • ~78 % VRAM reduction – The 1‑bit architecture dramatically cuts memory needs versus 16‑bit equivalents.
  • Inference speed‑up – Mobile GPUs achieve multiple‑fold faster inference compared with CPUs.
  • Federated‑learning ready – Allows decentralized model updates, enhancing privacy and reducing cloud dependence.
  • Strategic fit for crypto – Provides a path for on‑device AI in wallets, DeFi platforms, and other blockchain services.

Outlook

If the performance claims hold up in real‑world deployments, Tether’s framework could reshape how AI is integrated into consumer applications, especially within the crypto sector where privacy and decentralization are paramount. By democratizing access to large‑scale language models, the company may encourage a wave of on‑device AI services that operate without heavy reliance on centralized cloud infrastructure—a development that aligns with the broader decentralization ethos of the blockchain community.



Source: https://cointelegraph.com/news/tether-launches-ai-training-framework-for-smartphones-and-consumer-gpus
