Node Operations
How to run a Subtensor node. This guide covers everything from choosing a node type to installation, configuration, monitoring, and common troubleshooting scenarios.
Overview
Subtensor is a Substrate-based blockchain. Running a node means participating in the network by maintaining a copy of the chain state. Nodes validate transactions, relay blocks, and provide RPC access for querying on-chain data.
- Decentralization. More nodes means a more resilient and censorship-resistant network
- Self-sovereignty. Query chain state directly without relying on third-party infrastructure
- Validation. Participate in block production and finality if running a validator node
- Data access. Full access to current and (on archive nodes) historical chain state via RPC
Node Types
Subtensor supports two primary node types, each suited to different use cases.
Archive Node
Stores complete historical state for every block ever produced. Cannot prune old state. Required storage: 3 TB+ and growing.
Best for:
- Indexers and block explorers
- Historical state queries
- Analytics and data providers
Lite Node
Stores only recent state with old blocks pruned. Lower storage requirement of ~128 GB. This is the default mode.
Best for:
- General purpose operation
- Current state queries
- Transaction submission
| Feature | Archive Node | Lite Node |
|---|---|---|
| Storage | 3 TB+ | ~128 GB |
| Historical queries | Yes | Recent only |
| Use case | Indexers, explorers | General use, queries |
| Pruning | Disabled | Enabled (default) |
Hardware Requirements
Running a Subtensor node requires dedicated hardware. These are the recommended specifications for reliable operation.
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8+ cores |
| RAM (Lite) | 16 GB | 32+ GB |
| RAM (Archive) | 32 GB | 64+ GB |
| Storage (Lite) | 128 GB SSD | 256 GB NVMe |
| Storage (Archive) | 3 TB SSD | 4 TB+ NVMe |
| Network | IPv4 required | Public internet |
Spinning hard drives (HDD) are not fast enough for blockchain node operation – the database requires the random I/O performance that only SSDs or NVMe drives provide.
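Before syncing, it is worth confirming that the data volume actually has the headroom the table above calls for. A minimal sketch (the 128 GB figure is the lite-node requirement from the table; the path `.` is a placeholder for your intended base path):

```shell
# Check free space on the data volume against the lite-node requirement (~128 GB).
# "." is a placeholder; point df at the filesystem that will hold --base-path.
required_gb=128
avail_kb=$(df -Pk . | awk 'NR==2 {print $4}')   # available space in KiB (POSIX -P output)
avail_gb=$(( avail_kb / 1024 / 1024 ))
if [ "$avail_gb" -ge "$required_gb" ]; then
  echo "ok: ${avail_gb} GB free"
else
  echo "insufficient: ${avail_gb} GB free (need ${required_gb}+)"
fi
```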
Prerequisites
Before building Subtensor from source, you need the Rust toolchain and system build dependencies installed.
System Dependencies
- Rust toolchain. The Subtensor node is written in Rust and compiled from source
- Build essentials. C compiler, linker, and make (e.g., build-essential on Debian/Ubuntu)
- Git. To clone the repository
- clang and libclang-dev. Required by some Substrate dependencies
- protobuf-compiler. For networking protocol compilation
Do not install Rust from your system package manager. Use the official installer to ensure you get the correct toolchain version.
```shell
# Install Rust via rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Restart your shell, then verify
rustc --version
cargo --version
# Add WASM build target (required for runtime compilation)
rustup target add wasm32v1-none
# Install system dependencies (Debian/Ubuntu)
sudo apt update
sudo apt install -y build-essential git clang libclang-dev protobuf-compiler
```

Installation
Clone the Subtensor repository and build the node binary from source. The build process compiles the Rust code into a native binary optimized for your system.
```shell
# Clone the repository
git clone https://github.com/opentensor/subtensor.git
cd subtensor
# Build the node (production profile)
cargo build -p node-subtensor --profile=production --features=metadata-hash
# Binary location
./target/production/node-subtensor
```

The --profile=production flag enables link-time optimization (LTO) and a single codegen unit for better performance. Note that the binary lands in target/production/, not target/release/.

The initial build generally takes 15-45 minutes depending on your hardware. Subsequent builds after code changes are much faster due to incremental compilation.
Running a Node
Once built, start the node with the appropriate flags for your use case. Below are commands for the most common configurations.
Lite Node (Mainnet)
```shell
./target/production/node-subtensor \
  --chain ./chainspecs/raw_spec_finney.json \
  --base-path /var/lib/subtensor \
  --sync=warp \
  --port 30333 \
  --max-runtime-instances 32 \
  --database paritydb \
  --db-cache 4096 \
  --trie-cache-size 2048 \
  --rpc-max-response-size 2048 \
  --rpc-cors all \
  --rpc-port 9944 \
  --bootnodes /dns/bootnode.finney.chain.opentensor.ai/tcp/30333/ws/p2p/12D3KooWRwbMb85RWnT8DSXSYMWQtuDwh4LJzndoRrTDotTR5gDC \
  --no-mdns \
  --rpc-external
```

Archive Node (Mainnet)
```shell
./target/production/node-subtensor \
  --chain ./chainspecs/raw_spec_finney.json \
  --base-path /var/lib/subtensor \
  --sync=full \
  --pruning archive \
  --db-cache 16384 \
  --max-runtime-instances 32 \
  --rpc-max-response-size 2048 \
  --rpc-cors all \
  --rpc-port 9944 \
  --rpc-external \
  --port 30333 \
  --bootnodes /dns/bootnode.finney.chain.opentensor.ai/tcp/30333/ws/p2p/12D3KooWRwbMb85RWnT8DSXSYMWQtuDwh4LJzndoRrTDotTR5gDC \
  --no-mdns \
  --no-private-ip \
  --prometheus-external \
  --prometheus-port 9615
```

Key flag differences from a lite node:
| Flag | Archive | Lite |
|---|---|---|
| --pruning | archive | default (256 blocks) |
| --sync | full | warp |
| --database | default (RocksDB) | paritydb |
| --db-cache | 16384 (16 GB) | not set |
- Disk: Expect roughly 3 TB for chain data (all blocks and all historical state since genesis). This grows over time.
- Sync mode: --sync=full is mandatory. Warp sync skips historical state, which defeats the purpose of an archive node.
- Database: Uses RocksDB (the default), not ParityDB. RocksDB is more battle-tested for archive workloads.
- Memory: --db-cache 16384 allocates 16 GB to RocksDB. Tune this to your available RAM. Archive nodes can peak at ~67 GB memory usage on high-RAM servers.
- Sync time: Syncing from scratch typically takes multiple weeks. rsync from an existing archive node (~3 TB transfer) is much faster and the recommended approach.
- Build: Running a native binary compiled from source is recommended over Docker for production archive nodes.
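For unattended operation, most operators run the node under a process supervisor so it restarts after crashes and reboots. A sketch of a systemd unit (the paths, user name, and flag subset here are illustrative; reuse the exact flag set from the commands above):

```ini
# /etc/systemd/system/subtensor.service (illustrative)
[Unit]
Description=Subtensor node
After=network-online.target
Wants=network-online.target

[Service]
User=subtensor
ExecStart=/opt/subtensor/target/production/node-subtensor \
  --chain /opt/subtensor/chainspecs/raw_spec_finney.json \
  --base-path /var/lib/subtensor \
  --sync=warp \
  --port 30333 \
  --rpc-port 9944
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl daemon-reload followed by systemctl enable --now subtensor.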
Testnet (Test Finney)
```shell
./target/production/node-subtensor \
  --chain ./chainspecs/raw_spec_testfinney.json \
  --base-path /var/lib/subtensor \
  --sync=warp \
  --port 30333 \
  --rpc-port 9944 \
  --rpc-cors all \
  --bootnodes /dns/bootnode.test.finney.opentensor.ai/tcp/30333/ws/p2p/12D3KooWPM4mLcKJGtyVtkggqdG84zWrd7Rij6PGQDoijh1X86Vr \
  --no-mdns \
  --rpc-external
```

The testnet uses testTAO tokens and is useful for development and testing without real TAO.
Configuration
All flags below are defined in the Subtensor node source code (Substrate sc-cli + custom Subtensor flags). Run node-subtensor --help for the full list.
Chain & Data
| Flag | Default | Description |
|---|---|---|
| --chain | — | Chain specification. Can be a predefined alias (dev, local, staging) or a path to a JSON chainspec file |
| --base-path | platform default | Custom base path for database, node key, and keystore (e.g. /var/lib/subtensor) |
| --database | rocksdb | Database backend. rocksdb (recommended for archive) or paritydb (lighter weight for lite nodes). Alias: --db |
| --db-cache | — | Limit the memory the database cache can use, in MiB. Archive nodes should set this high (e.g. 16384 for 16 GB). Tune to available RAM |
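The right --db-cache value depends on total RAM. One rough heuristic (an assumption of this guide, not an official recommendation) is to give the database cache about a quarter of system memory. A sketch for Linux, reading /proc/meminfo:

```shell
# Suggest a --db-cache value (in MiB) as ~25% of total RAM.
# Heuristic only; the archive example above uses a fixed 16384.
# Assumes Linux (/proc/meminfo reports MemTotal in KiB).
total_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
db_cache_mib=$(( total_kib / 1024 / 4 ))
echo "suggested: --db-cache ${db_cache_mib}"
```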
Sync & Pruning
| Flag | Default | Description |
|---|---|---|
| --sync | full | Blockchain syncing mode. full downloads and validates every block (required for archive). warp skips to the latest finalized state (fast, for lite nodes) |
| --pruning | 256 | State pruning mode. A number keeps that many recent blocks of state. archive keeps all state forever. Can only be set on first database creation. Alias: --state-pruning |
| --blocks-pruning | archive-canonical | When to prune block bodies and justifications. archive-canonical keeps all finalized blocks. archive keeps everything including forks. A number keeps only that many recent blocks |
| --trie-cache-size | 1073741824 | State trie cache size in bytes (default 1 GiB). Set to 0 to disable |
Networking
| Flag | Default | Description |
|---|---|---|
| --port | 30333 | P2P protocol TCP port. Must be open inbound and outbound in your firewall |
| --bootnodes | from chain spec | Bootstrap nodes in multiaddr format. Can be specified multiple times. Supplements or overrides the bootnodes in the chain spec |
| --no-mdns | off | Disable mDNS local network peer discovery. Recommended for production servers where LAN discovery isn't useful |
| --no-private-ip | off | Forbid connecting to peers on private IPv4/IPv6 addresses (RFC 1918). Automatically enabled for chains marked as “live” in their chain spec. Does not apply to bootnodes or reserved nodes |
| --out-peers | 8 | Number of outgoing peer connections to maintain |
| --in-peers | 32 | Maximum number of inbound full-node peers |
| --network-backend | litep2p | P2P networking backend. litep2p (default, more efficient) or libp2p (legacy) |
RPC
| Flag | Default | Description |
|---|---|---|
| --rpc-port | 9944 | JSON-RPC server TCP port for HTTP and WebSocket |
| --rpc-external | off | Listen on all interfaces instead of localhost only. Not all RPC methods are safe to expose publicly. Use an RPC proxy to filter dangerous methods in production |
| --rpc-cors | localhost | Allowed browser origins for HTTP & WS RPC. Comma-separated list, or all to disable origin validation. Default allows localhost and polkadot.js.org |
| --rpc-max-response-size | 16 | Maximum RPC response payload size in MB. The launch examples use 2048 for large state queries |
| --rpc-max-connections | 10000* | Maximum concurrent RPC connections. Substrate default is 100; Subtensor overrides to 10,000 if not set |
| --rpc-max-subscriptions-per-connection | 10000* | Maximum concurrent subscriptions per connection. Substrate default is 1,024; Subtensor overrides to 10,000 if not set |
| --rpc-rate-limit | disabled* | RPC rate limit in calls per minute per connection. Subtensor disables the rate limiter if not explicitly set |
| --rpc-methods | auto | Which RPC methods to expose. auto exposes safe methods when external, unsafe when local. safe or unsafe to override |
* Subtensor custom defaults, set in node/src/command.rs customise_config(), overriding Substrate defaults.
Metrics
| Flag | Default | Description |
|---|---|---|
| --prometheus-port | 9615 | Prometheus exporter TCP port |
| --prometheus-external | off | Expose the Prometheus exporter on all interfaces instead of localhost only |
| --no-prometheus | off | Disable the Prometheus exporter entirely. It is enabled by default |
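To actually collect these metrics, point a Prometheus server at the exporter port. A minimal scrape-config sketch (the job name and target host are illustrative; 9615 is the default port from the table above):

```yaml
# prometheus.yml fragment (illustrative)
scrape_configs:
  - job_name: subtensor
    static_configs:
      - targets: ["127.0.0.1:9615"]  # matches --prometheus-port default
```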
Runtime
| Flag | Default | Description |
|---|---|---|
| --max-runtime-instances | 8 | Size of the WASM runtime instances cache (max: 32). Higher values allow more parallel execution. Match to your CPU thread count for best throughput |
| --wasm-execution | compiled | Method for executing WASM runtime code. compiled (faster, default) or interpreted-i-know-what-i-do |
| --runtime-cache-size | 2 | Maximum number of different runtimes that can be cached. Only relevant during runtime upgrades |
Subtensor-Specific
| Flag | Default | Description |
|---|---|---|
| --initial-consensus | aura | Initial consensus mechanism: aura or babe. After starting, the node automatically switches to whatever the chain requires. Defined in node/src/cli.rs |
| --sealing | — | Sealing method for dev/test: manual (seal via RPC) or instant (seal on each transaction). Not used in production |
Run ./target/production/node-subtensor --help to see all available flags, including Ethereum/Frontier configuration, logging, validator options, and more.
Monitoring
Once your node is running, you can check its health and sync status through the RPC interface.
Check Sync Status
Query the node's health to see if it's syncing, how many peers it has, and whether it should be producing blocks.
```shell
# Check sync status via RPC
curl -s -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"system_health","params":[],"id":1}' \
  http://127.0.0.1:9944 | jq
```

The response includes isSyncing (true while catching up), peers (connected peer count), and shouldHavePeers.
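If jq isn't installed, the interesting fields can be pulled out with sed. A sketch against a sample system_health payload (the values below are illustrative, not from a live node):

```shell
# Sample system_health response (illustrative values, not from a live node)
resp='{"jsonrpc":"2.0","result":{"isSyncing":true,"peers":12,"shouldHavePeers":true},"id":1}'
# Crude field extraction without jq
syncing=$(printf '%s' "$resp" | sed -n 's/.*"isSyncing":\([a-z]*\),.*/\1/p')
peers=$(printf '%s' "$resp" | sed -n 's/.*"peers":\([0-9]*\),.*/\1/p')
echo "syncing=${syncing} peers=${peers}"
```

In practice you would pipe the curl output into the same sed expressions instead of using a canned string.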
Check Connected Peers
See how many peers your node is connected to. A healthy node should have several peers.
```shell
# Check connected peers
curl -s -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"system_peers","params":[],"id":1}' \
  http://127.0.0.1:9944 | jq '.result | length'
```

Node Logs
The node prints sync progress to stdout. Look for lines showing block import progress and finalization status.
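A Syncing log line carries enough information for a rough ETA: blocks remaining divided by the import rate. A sketch that parses one such line (the sample line and its values are illustrative):

```shell
# Estimate remaining sync time from a sample "Syncing" log line (illustrative values)
line='Syncing 142.7 bps, target=#7487601 (12 peers), best: #3245100, finalized #3244928'
bps=$(printf '%s' "$line" | sed -n 's/^.*Syncing \([0-9.]*\) bps.*/\1/p')        # blocks per second
target=$(printf '%s' "$line" | sed -n 's/.*target=#\([0-9]*\).*/\1/p')          # chain tip
best=$(printf '%s' "$line" | sed -n 's/.*best: #\([0-9]*\).*/\1/p')             # local best block
remaining=$(( target - best ))
eta_hours=$(awk -v r="$remaining" -v b="$bps" 'BEGIN { printf "%.1f", r / b / 3600 }')
echo "${remaining} blocks behind, ~${eta_hours} h at ${bps} bps"
```

The import rate fluctuates during sync, so treat the estimate as an order-of-magnitude figure.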
```
# Example log output during sync
Syncing 142.7 bps, target=#7487601 (12 peers), best: #3245100, finalized #3244928
# Once synced, you'll see:
Idle (25 peers), best: #7487601, finalized #7487598
```

Troubleshooting
Common issues and their solutions when running a Subtensor node.
Node Not Syncing
- Check peers. If 0 peers, your node can't find the network. Ensure port 30333 is open in your firewall
- Firewall rules. Both inbound and outbound traffic on the P2P port (default 30333) must be allowed
- Bootnodes. Try specifying custom bootnodes with --bootnodes if built-in bootnodes are unreachable
- Clock sync. Ensure your system clock is synchronized via NTP. Drift can cause peer rejection
High Disk Usage
- Switch to lite node. If you don't need historical data, set --pruning 256 to enable pruning
- Check pruning setting. Verify your node isn't accidentally running in archive mode
- Database compaction. Restarting the node can trigger RocksDB compaction and reclaim space
RPC Not Accessible
- External access. By default, the RPC server only listens on localhost. Add --rpc-external to listen on all interfaces
- CORS. Browser-based clients need --rpc-cors all or a specific origin to connect
- Port conflicts. Check that no other process is using port 9944 with ss -tlnp | grep 9944
Out of Memory
- Increase RAM. Archive nodes can peak at ~67 GB memory usage and benefit from 64+ GB RAM. Lite nodes need at least 16 GB
- Tune db-cache. Reduce --db-cache if the node is consuming too much memory. The cache value is in MiB
- Reduce pruning depth. A lower pruning number means less state in memory. Consider switching to a lite node if historical queries aren't needed
- Swap space. As a temporary measure, add swap space, though it must live on an SSD for acceptable performance
Next Steps
The subtensor.com API provides access to chain data, reference information, and search without running your own node. Check the Reference Docs for available endpoints.