Queried by: validators, subnet owners, developers, analytics
The Big Picture
Prometheus endpoints expose operational metrics from neurons - inference latency, request counts, error rates, resource usage, etc. This enables observability across the network. Validators can use this data to make better weight decisions, and operators can debug issues.
Why This Matters
Beyond just connecting to miners, you want to know how they're performing. Prometheus endpoints give you operational metrics - how fast, how reliable, how loaded is this neuron?
Example Scenario
Querying Prometheus(netuid=1, uid=47) returns { ip: '1.2.3.4', port: 9090, ip_type: 4, version: 1, block: 7000000 }. Scraping http://1.2.3.4:9090/metrics then yields the neuron's metrics in the Prometheus text exposition format.
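The returned record can be turned into a scrape URL directly. A minimal sketch, assuming the decoded field names shown above (ip, port, ip_type); the helper name buildScrapeUrl is hypothetical:

```javascript
// Build the /metrics scrape URL from a decoded PrometheusInfo-like record.
// Field names follow the example above; buildScrapeUrl is a hypothetical helper.
function buildScrapeUrl(info) {
  // IPv6 hosts must be bracketed in URLs; IPv4 hosts are used as-is.
  const host = info.ip_type === 6 ? `[${info.ip}]` : info.ip;
  return `http://${host}:${info.port}/metrics`;
}

const info = { ip: "1.2.3.4", port: 9090, ip_type: 4, version: 1, block: 7000000 };
console.log(buildScrapeUrl(info));
// http://1.2.3.4:9090/metrics
```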
Common Questions
- Is Prometheus required?
- No, it's optional. Neurons can participate without exposing metrics. But providing metrics can build trust with validators who value transparency.
- What metrics should neurons expose?
- Common: inference_latency, request_count, error_rate, queue_depth. Subnet-specific metrics depend on the task. Follow Prometheus naming conventions.
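A neuron exposing such metrics serves them in the Prometheus text exposition format (HELP/TYPE metadata lines followed by samples). A minimal sketch of rendering that format by hand; the metric names and values are illustrative, and a real neuron would typically use a Prometheus client library instead:

```javascript
// Render a set of metrics in the Prometheus text exposition format.
// Per naming conventions: counters end in _total, durations use base-unit seconds.
function renderMetrics(metrics) {
  const lines = [];
  for (const [name, { help, type, value }] of Object.entries(metrics)) {
    lines.push(`# HELP ${name} ${help}`); // human-readable description
    lines.push(`# TYPE ${name} ${type}`); // counter, gauge, histogram, ...
    lines.push(`${name} ${value}`);       // the sample itself
  }
  return lines.join("\n") + "\n";
}

const body = renderMetrics({
  inference_latency_seconds: { help: "Time per inference.", type: "gauge", value: 0.042 },
  request_count_total: { help: "Requests served.", type: "counter", value: 1234 },
});
console.log(body);
```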
- Can validators use this for scoring?
- Yes, sophisticated validators can scrape metrics to inform weight decisions - rewarding neurons with better performance characteristics.
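How a validator folds scraped metrics into a weight signal is subnet-specific. A hedged sketch of one possible shape, where lower latency and lower error rate raise the score; the formula and field names are illustrative, not a prescribed scoring rule:

```javascript
// Illustrative scoring function: maps scraped performance metrics to (0, 1].
// Both the formula and the input field names are assumptions for this sketch.
function performanceScore({ latencySeconds, errorRate }) {
  const latencyFactor = 1 / (1 + latencySeconds);      // faster -> closer to 1
  const reliabilityFactor = 1 - Math.min(errorRate, 1); // fewer errors -> closer to 1
  return latencyFactor * reliabilityFactor;
}

// A fast, reliable neuron scores higher than a slow, flaky one.
console.log(performanceScore({ latencySeconds: 0.05, errorRate: 0.01 }));
console.log(performanceScore({ latencySeconds: 0.5, errorRate: 0.1 }));
```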
Use Cases
- Build network-wide monitoring dashboards
- Scrape neuron performance metrics for analysis
- Debug neuron health and performance issues
- Create alerting systems based on neuron metrics
- Research network behavior through metrics aggregation
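For the analysis and monitoring use cases above, the scraped /metrics body must be parsed back into samples. A minimal sketch that handles only simple, unlabeled samples; a production consumer would use a full Prometheus text-format parser:

```javascript
// Extract plain name/value samples from a scraped /metrics body.
// Only handles unlabeled samples; # lines (HELP/TYPE/comments) are skipped.
function parseMetrics(body) {
  const samples = {};
  for (const line of body.split("\n")) {
    if (!line || line.startsWith("#")) continue; // skip blanks and metadata
    const [name, value] = line.trim().split(/\s+/);
    if (name && value !== undefined) samples[name] = Number(value);
  }
  return samples;
}

const samples = parseMetrics(
  "# TYPE request_count_total counter\nrequest_count_total 1234\nqueue_depth 7\n"
);
console.log(samples); // { request_count_total: 1234, queue_depth: 7 }
```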
Purpose & Usage
Purpose
Store metrics endpoint location for monitoring and observability.
Common Query Patterns
- Query by netuid-uid for a specific neuron's metrics endpoint
- Iterate to discover all available metrics endpoints
- Build monitoring systems that scrape neuron metrics
Query Keys
Stored Value
value (PrometheusInfo)
Relationships
Modified By
Related Events
Code Examples
import { ApiPromise, WsProvider } from "@polkadot/api";
import { stringCamelCase } from "@polkadot/util";

const provider = new WsProvider("wss://entrypoint-finney.opentensor.ai:443");
const api = await ApiPromise.create({ provider });

// Query the Prometheus storage map
const key1 = 0; // first key: netuid
const key2 = "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY"; // second key: hotkey account
const result = await api.query[stringCamelCase("SubtensorModule")][
  stringCamelCase("Prometheus")
](key1, key2);
console.log("Prometheus:", result.toHuman());
Runtime Info
- Pallet
- SubtensorModule
- Storage Kind
- Map
- First Version
- v101
- Current Version
- v393