Community-Trained Intelligence

Inspired by Templar's Covenant-72B on Bittensor SN3, the TensorQ Training Network lets anyone contribute GPU power to train a prediction model on proprietary trading data. Workers earn $TENSORQ rewards automatically from a fee-funded pool. No permission needed — just connect and train.


The TensorQ Training Network

Decentralized model training powered by community GPUs. Inspired by Templar's Covenant-72B — the largest decentralized LLM pre-training run in history, completed on Bittensor SN3.

Proprietary Data

126 subnets scanned every 30 minutes — emissions, stake flows, alpha prices, pool states. Thousands of labeled observations that don't exist anywhere else.

Permissionless Training

Run the worker script, connect your GPU. The coordinator assigns data shards, you train locally, weights are submitted automatically. Join or leave anytime.

Automatic Rewards

Half of $TENSORQ trading fees flow into a reward pool. Workers receive tokens proportional to their GPU contribution — sent automatically after each training round.

How to Participate

Four steps from download to contribution.

1. Install

pip install torch numpy — that's it. Python 3.8+ required.

2. Run Worker

python worker.py --coordinator https://app.tensorq.xyz --wallet 0x...

3. Auto-Train

Worker pulls data shards, trains on your GPU, pushes weights back. Runs continuously — leave it on and earn.

4. Submit

Your trained weights are automatically submitted to the coordinator. The best model each round is promoted and earns bonus points.
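The four steps above reduce to a pull-train-push loop. A minimal sketch of that control flow, where `pull_shard`, `train_on`, and `push_weights` are hypothetical stand-ins for the real worker's coordinator calls (not the actual worker.py API):

```python
import time

def run_worker(coordinator, wallet, pull_shard, train_on, push_weights, rounds=None):
    """Hypothetical worker control loop; the callable names are
    illustrative stand-ins, not the real worker.py interface."""
    done = 0
    while rounds is None or done < rounds:
        shard = pull_shard(coordinator)             # coordinator assigns a data shard
        if shard is None:
            time.sleep(30)                          # no work yet; poll again
            continue
        weights = train_on(shard)                   # local GPU training pass
        push_weights(coordinator, wallet, weights)  # submit weights for aggregation
        done += 1
```

Passing `rounds=None` runs forever, matching the "leave it on and earn" behavior; a finite `rounds` is handy for testing.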

Model Architecture

A lightweight transformer-based time series forecaster. It takes a 24-hour window of subnet metrics (48 scans at 30-minute intervals) and predicts price changes at multiple horizons. Small enough for consumer GPUs, powerful enough to capture the patterns in Bittensor's alpha markets.

Parameters: ~2-5M
Input: 48 timesteps × 13 features
Encoder: 4 layers
Heads: 4 attention
d_model: 64
Output: 3 regression + 3 direction
Architecture flow: input (48 timesteps × 13 features) → feature embedding + positional encoding → transformer encoder (4 layers of self-attention + FFN) → mean pooling → two heads: regression (Δprice at 1h / 6h / 24h) and direction (up/down at 1h / 6h / 24h). Loss = 0.7×MSE + 0.3×BCE.
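The combined objective (0.7×MSE on the three Δprice targets, 0.3×BCE on the three direction targets) can be written out directly. A pure-Python sketch, assuming the direction head emits raw logits and labels are encoded as 1 for up, 0 for down:

```python
import math

def combined_loss(pred_reg, true_reg, pred_logits, true_dir, w_mse=0.7, w_bce=0.3):
    """loss = 0.7 * MSE(regression head) + 0.3 * BCE(direction head).
    Sketch only: the real training code presumably uses torch equivalents."""
    # Mean squared error over the 3 forward price-change targets
    mse = sum((p - t) ** 2 for p, t in zip(pred_reg, true_reg)) / len(true_reg)

    # Binary cross-entropy over the 3 up/down targets (sigmoid on logits)
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))
    eps = 1e-7  # guard against log(0)
    bce = -sum(
        t * math.log(sigmoid(z) + eps) + (1 - t) * math.log(1 - sigmoid(z) + eps)
        for z, t in zip(pred_logits, true_dir)
    ) / len(true_dir)

    return w_mse * mse + w_bce * bce
```

Perfect regression predictions with confidently correct direction logits drive the loss toward zero; uninformative logits (all zeros) contribute a BCE floor of ln 2 per target.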

Training Data

Two datasets exported directly from the live agent's database. Updated every scan cycle.

Dataset A: Price/Signal Time Series

Per-subnet, per-scan observations with 13 features and forward price change labels. This is the primary dataset.

{
  "netuid": 9,
  "timestamp": "2026-03-15T12:00:00",
  "emission_pct": 0.82,
  "alpha_price": 0.00412,
  "total_stake_tao": 1502.3,
  "stake_velocity": 12.5,
  "registration_rate": 3,
  "neurons": 64,
  "pool_tao": 245.1,
  "pool_alpha_in": 59432.7,
  "price_change_1h": 0.023,
  "price_change_6h": -0.015,
  "price_change_24h": 0.041,
  "emission_delta_24h": 0.003,
  "stake_delta_24h": 0.031,
  "fwd_price_change_1h": 0.018,
  "fwd_price_change_6h": -0.032,
  "fwd_price_change_24h": 0.055,
  "fwd_direction_1h": "up",
  "fwd_direction_6h": "down",
  "fwd_direction_24h": "up"
}
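One plausible way to split such a record into model inputs and labels, with the field selection inferred from the sample above (the 13 feature columns exclude `netuid` and `timestamp`; the `fwd_*` fields are the forward-looking labels):

```python
# Inferred from the sample record: 13 input features per timestep
FEATURES = [
    "emission_pct", "alpha_price", "total_stake_tao", "stake_velocity",
    "registration_rate", "neurons", "pool_tao", "pool_alpha_in",
    "price_change_1h", "price_change_6h", "price_change_24h",
    "emission_delta_24h", "stake_delta_24h",
]
REG_LABELS = ["fwd_price_change_1h", "fwd_price_change_6h", "fwd_price_change_24h"]
DIR_LABELS = ["fwd_direction_1h", "fwd_direction_6h", "fwd_direction_24h"]

def to_sample(obs):
    """Split one observation dict into (features, regression labels,
    direction labels with up=1.0, down=0.0)."""
    x = [float(obs[k]) for k in FEATURES]
    y_reg = [float(obs[k]) for k in REG_LABELS]
    y_dir = [1.0 if obs[k] == "up" else 0.0 for k in DIR_LABELS]
    return x, y_reg, y_dir
```

Stacking 48 consecutive observations for one subnet then yields the 48 × 13 input window the model expects.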

Dataset B: Trade Outcomes

Closed trades with 14 signal values at entry linked to actual PnL outcomes.

{
  "position_id": 42,
  "subnet_id": 13,
  "entry_price": 0.00389,
  "exit_price": 0.00412,
  "pnl_pct": 5.91,
  "hold_hours": 5.75,
  "confidence": 0.85,
  "signals_at_entry": {
    "emissionDelta": 0.72,
    "momentum": 0.45,
    "stakeVelocity": 0.58,
    "registrationRate": 0.81,
    "fundamentalScore": 0.65,
    "legMomentum": 0.73,
    ...
  },
  "outcome": "win"
}
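Because each trade links entry signals to a realized outcome, Dataset B supports simple evaluation baselines before any model training. A quick sketch using the field names from the sample record:

```python
def summarize(trades):
    """Win rate and mean realized PnL over a list of closed-trade dicts.
    Field names ("outcome", "pnl_pct") taken from the sample record above."""
    n = len(trades)
    win_rate = sum(1 for t in trades if t["outcome"] == "win") / n
    avg_pnl = sum(t["pnl_pct"] for t in trades) / n
    return win_rate, avg_pnl
```

Slicing the same summary by `confidence` buckets or individual `signals_at_entry` values is a natural next step for checking which signals actually predict wins.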

Reward Pool

Funded automatically from $TENSORQ trading fees. Workers earn real tokens for GPU time.

Flow: DEX trading generates $TENSORQ fees, split 50/50. The ETH portion is converted via Exolix to TAO and funds the agent wallet's trading. The $TENSORQ token portion flows into the reward pool, which pays workers (GPU-1, GPU-2, GPU-3, ...): automated fee split → proportional distribution to GPU contributors.

Automatic Funding

50% of $TENSORQ DEX trading fees (token portion) flow directly into the reward pool. No manual intervention — fees accumulate as the token trades.

Proportional Distribution

After each training round, the pool distributes tokens proportionally to workers based on contribution. More shards trained = bigger share. Sent directly to your Base wallet.
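"More shards trained = bigger share" is straightforward pro-rata math. A sketch, assuming contribution is measured in shards completed per round:

```python
def distribute(pool_tokens, shards_by_worker):
    """Split a round's reward pool pro-rata by shards trained.
    shards_by_worker maps wallet address -> shard count for the round."""
    total = sum(shards_by_worker.values())
    if total == 0:
        return {w: 0.0 for w in shards_by_worker}  # idle round: nothing to pay
    return {w: pool_tokens * n / total for w, n in shards_by_worker.items()}
```

A worker who trained 3 of the round's 4 shards receives 75% of that round's pool; payouts always sum to the pool balance.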

Your Wallet, Your Tokens

Pass your Base wallet address when starting the worker: --wallet 0x.... Tokens are sent directly — no claims, no lockups.