This chapter builds on: Ch 5 Subnets, Ch 6 Staking & Delegation

Weights & Scoring

Bittensor's intelligence comes from a competitive loop: validators evaluate miners, assign scores, and submit those scores to the chain. These weight vectors are the raw input to consensus: they determine who gets rewarded and how much.

How Validators Evaluate

Validators are the quality control layer of each subnet. They generate tasks (queries, prompts, or challenges), send them to miners, collect the responses, and evaluate the results. The specific evaluation logic is defined by the subnet's incentive mechanism, custom code that the subnet owner writes and validators run.

For example, on a text generation subnet, a validator might send a prompt to all miners, collect their completions, and score each response on criteria like coherence, accuracy, and speed. On a data-scraping subnet, the validator might request specific web data and verify that miners return correct, fresh results. The scoring criteria vary by subnet, but the mechanism is always the same: validators query, miners respond, validators score.
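To make this concrete, here is a minimal sketch of what per-response scoring might look like on a text generation subnet. The criteria weights and placeholder heuristics are purely illustrative; a real subnet's incentive mechanism defines its own reward logic, often with model-based or reference-based evaluation.

```python
def score_response(prompt: str, completion: str, elapsed_s: float,
                   timeout_s: float = 12.0) -> float:
    """Combine simple quality signals into one score in [0, 1]."""
    # Placeholder heuristics: a real mechanism would compare the completion
    # against the prompt (or a reference answer) with far stronger checks.
    coherence = min(1.0, len(completion.split()) / 50.0)   # enough substance
    accuracy = 1.0 if completion.strip() else 0.0          # non-empty answer
    speed = max(0.0, 1.0 - elapsed_s / timeout_s)          # faster is better
    return 0.5 * coherence + 0.3 * accuracy + 0.2 * speed
```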

This evaluation happens off-chain. Validators and miners communicate directly via the axon/dendrite protocol: miners expose an axon endpoint, and validators query it through dendrite calls. Only the final scores (weights) are submitted to Subtensor. The chain never sees the actual tasks or responses; it only records who each validator thinks is doing good work.
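As a rough sketch of that query loop using the bittensor Python SDK: the TextPrompt synapse class, the prompt, and netuid 7 below are illustrative, since each subnet defines its own request/response schema.

```python
import bittensor as bt

# Hypothetical synapse type; real subnets define their own fields.
class TextPrompt(bt.Synapse):
    prompt: str
    completion: str = ""

async def query_miners(wallet: bt.wallet, netuid: int = 7):
    subtensor = bt.subtensor(network="finney")
    metagraph = subtensor.metagraph(netuid)      # current miners and their axons
    dendrite = bt.dendrite(wallet=wallet)

    # One dendrite call fans the request out to every miner axon in the subnet.
    responses = await dendrite(
        axons=metagraph.axons,
        synapse=TextPrompt(prompt="Summarize the Bittensor whitepaper."),
        timeout=12,
    )
    return responses  # run with asyncio.run(query_miners(my_wallet))
```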

Setting Weights

After evaluating miners, a validator submits a weight vector to the chain via the set_weights extrinsic. This vector contains one score per miner (identified by UID) in the subnet. The scores are typically normalized to sum to 1, representing the validator's assessment of each miner's relative quality.
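A minimal sketch of that submission, assuming the bittensor Python SDK's subtensor.set_weights helper. The scores dictionary is illustrative; the SDK converts the normalized floats into the integer format stored on-chain.

```python
import bittensor as bt

def submit_weights(wallet: bt.wallet, netuid: int, scores: dict[int, float]) -> None:
    """Normalize per-UID scores and submit them as this validator's weights."""
    subtensor = bt.subtensor(network="finney")

    uids = list(scores.keys())
    raw = [scores[u] for u in uids]
    total = sum(raw) or 1.0
    weights = [w / total for w in raw]   # normalize so the vector sums to 1

    # Signs and submits the set_weights extrinsic from the validator's hotkey.
    subtensor.set_weights(
        wallet=wallet,
        netuid=netuid,
        uids=uids,
        weights=weights,
        wait_for_inclusion=True,
    )
```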

Weights are stored on-chain and used as input to Yuma Consensus at each epoch. Validators are expected to set weights regularly; there's a minimum frequency enforced by the network. If a validator goes too long without setting weights, they may be penalized through reduced emissions or eventual deregistration.
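One way to monitor that cadence, assuming the SDK's metagraph exposes a per-UID last_update field (as recent versions do), is to compare it against the current block:

```python
import bittensor as bt

def blocks_since_last_weights(netuid: int, uid: int) -> int:
    """Blocks elapsed since this validator UID last set weights."""
    subtensor = bt.subtensor(network="finney")
    metagraph = subtensor.metagraph(netuid)
    # last_update holds, per UID, the block at which weights were last set.
    return subtensor.get_current_block() - int(metagraph.last_update[uid])
```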

The weight-setting process includes several safeguards. Weights must pass validation checks: they must reference valid UIDs, fall within acceptable ranges, and meet version requirements. Validators must also be running a sufficiently recent version of the subnet code, verified on-chain by comparing the validator's reported version against minimum thresholds. These checks prevent stale or malformed weights from polluting consensus.
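A hypothetical pre-submission check that mirrors these validations might look like the following; the function and its error messages are illustrative, not part of the SDK or the chain's actual logic.

```python
def check_weights(uids: list[int], weights: list[float],
                  n_uids: int, version_key: int, min_version_key: int) -> None:
    """Raise if a weight vector would fail the basic checks described above."""
    if len(uids) != len(weights):
        raise ValueError("uids and weights must be the same length")
    if any(u < 0 or u >= n_uids for u in uids):
        raise ValueError("weight vector references an unregistered UID")
    if any(w < 0 for w in weights) or sum(weights) == 0:
        raise ValueError("weights must be non-negative and not all zero")
    if version_key < min_version_key:
        raise ValueError("validator code is older than the subnet's minimum version")
```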

Trust, Rank & Incentive

Individual validator weights are just opinions. The power of the system comes from aggregating those opinions into network-wide consensus metrics. Three key values emerge from this aggregation:

Trust

Trust measures how consistently a miner receives non-zero weights from validators. A miner that every validator scores highly has high trust. A miner that only one validator scores (even if that score is high) has low trust. This metric acts as a spam filter: a miner needs broad agreement from multiple validators, not just a high score from one friendly validator.

Rank

Rank is the stake-weighted average of the scores a miner receives. Validators with more stake have more influence on a miner's rank. A high score from a well-staked validator contributes more to rank than the same score from a low-stake validator. Rank determines a miner's share of emissions within the subnet.

Incentive

Incentive is the final emission share after applying trust and rank together. It represents a miner's actual proportion of the subnet's emission pool. High trust and high rank together yield high incentive. Even high rank is dampened if trust is low, preventing single-validator collusion from being profitable.

Scenario: Two Validators Score Three Miners

The setup: unequal stake

Subnet 7 has two validators: V1 with 80,000 TAO staked and V2 with 20,000 TAO staked. There are three miners: Miner A, Miner B, and Miner C. Both validators evaluate all three miners on a text generation task.
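Here is a small worked sketch of how those stakes weight each validator's opinion. Only the stakes come from the setup above; the per-miner weight vectors are assumed purely for illustration.

```python
stakes = {"V1": 80_000, "V2": 20_000}   # TAO staked, from the setup above

# Hypothetical normalized weight vectors from each validator.
weights = {
    "V1": {"A": 0.6, "B": 0.3, "C": 0.1},
    "V2": {"A": 0.2, "B": 0.3, "C": 0.5},
}

total_stake = sum(stakes.values())
for miner in ("A", "B", "C"):
    rank = sum(stakes[v] * weights[v][miner] for v in stakes) / total_stake
    print(miner, round(rank, 2))   # A: 0.52, B: 0.30, C: 0.18
```

With 80% of the subnet's validator stake, V1's scores dominate the stake-weighted average, so Miner A comes out on top even though V2 preferred Miner C.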

Together, these three metrics create a system that rewards consistent quality recognized by many independent validators over attempts to game any single relationship. The mathematical details of how weights combine into trust, rank, and incentive are covered in Chapter 9: Yuma Consensus. You can look up any miner's performance and ranking on bittensor.ai.