Is anyone else watching what Qubic is doing with distributed compute and AI training? Seems underreported in AI circles

Reddit r/artificial / 3/28/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Ideas & Deep Analysis

Key Points

  • Qubic is described as using “Useful Proof of Work,” where distributed compute contributes to neural network training for its Aigarth AI project while simultaneously securing the network.
  • The post claims independent verification by CertiK of 15.52 million transactions per second on live mainnet, and attributes the high throughput to a bare-metal architecture without a virtual machine layer.
  • It notes an upcoming Dogecoin (DOGE) mining integration planned for around April 1, where ASIC DOGE Scrypt mining is expected to run in parallel with CPU/GPU workloads for the existing tasks.
  • The author contrasts Qubic with Bittensor, arguing that Bittensor focuses more on AI competition/rewarding subnets rather than using raw distributed compute to train models from scratch.
  • The post questions whether “mining-funded distributed AI training” is receiving serious attention in AI research communities or is viewed as a fundamentally different category from mainstream AI infrastructure.

I follow AI infrastructure pretty closely, and Qubic keeps coming up in my research in a way I find interesting but haven't seen much discussion of in AI-focused communities.

Quick background for people who haven't heard of it: Qubic uses what they call Useful Proof of Work - instead of hardware solving random hash puzzles, the compute runs neural network training tasks for their Aigarth AI project. The same hardware contributes to AI training while securing the network.
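To make the idea concrete, here's a toy sketch of what "useful PoW" means conceptually - each mining attempt performs a real gradient-descent step, and the resulting model weights get folded into the block hash that's checked against difficulty. This is purely illustrative Python I wrote to show the concept; it is not Qubic's actual protocol, and every name and detail here is made up:

```python
import hashlib
import random

def training_step(w, data, lr=0.01):
    # One pass of gradient descent on a toy 1-D linear model: loss = (w*x - y)^2
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def useful_pow(prev_hash, data, difficulty_prefix="00"):
    # Instead of iterating a meaningless nonce, each attempt does a real
    # training step; the updated weights are hashed into the block candidate.
    w = random.uniform(-1.0, 1.0)
    for attempt in range(100_000):
        w = training_step(w, data)
        digest = hashlib.sha256(
            f"{prev_hash}:{w:.12f}:{attempt}".encode()
        ).hexdigest()
        if digest.startswith(difficulty_prefix):
            # The "work" that secured this block also improved the model.
            return w, digest
    raise RuntimeError("no solution found")

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # model should learn w ≈ 2
w, digest = useful_pow("genesis", data)
print(w, digest[:8])
```

The point of the sketch is just the coupling: the hash search and the training loop are the same loop, so compute spent on security isn't thrown away. How Qubic actually validates that the training work is genuine and useful is a much harder problem that this toy obviously doesn't address.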

The network was independently verified at 15.52 million transactions per second by CertiK on live mainnet. For context, that's far beyond Visa's theoretical peak throughput. The architecture runs on bare-metal hardware without a virtual machine layer, which is apparently what enables the throughput.

They're also apparently launching a DOGE mining integration imminently (around April 1) where their infrastructure will run Dogecoin mining simultaneously with everything else - the ASIC hardware for DOGE Scrypt mining runs in parallel with their CPU/GPU hardware for other workloads.

For comparison, people often bring up Bittensor, but from what I can see, Bittensor is more about competing AIs and subnets rewarding each other than about actually using distributed compute to train models from scratch with raw hardware power. Qubic seems different in that the mining itself is the training.

Big companies are pouring billions into building massive data centers and training ever-bigger LLMs, but I don't think true AGI is gonna come just from scaling up these trained models, no matter how much money they throw at it.

My interest is specifically in the distributed AI compute angle. Is the model of mining-funded distributed AI training something that gets serious discussion in AI research circles? Or is this considered a fundamentally different category from serious AI infrastructure?

submitted by /u/srodland01