mirror of
https://github.com/hl-archive-node/nanoreth.git
synced 2025-12-06 10:59:55 +00:00
docs(book): update node size numbers (#7638)

The most important requirement is by far the disk, whereas CPU and RAM requirements are relatively flexible.

|           | Archive Node                          | Full Node                             |
|-----------|---------------------------------------|---------------------------------------|
| Disk      | At least 2.2TB (TLC NVMe recommended) | At least 1.2TB (TLC NVMe recommended) |
| Memory    | 8GB+                                  | 8GB+                                  |
| CPU       | Higher clock speed over core count    | Higher clock speed over core count    |
| Bandwidth | Stable 24Mbps+                        | Stable 24Mbps+                        |

#### QLC and TLC

### Disk

There are multiple types of disks to sync Reth, with varying size requirements, depending on the syncing mode.
As of April 2024 at block number 19.6M:

* Archive Node: At least 2.14TB is required
* Full Node: At least 1.13TB is required
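
Before starting a sync it is worth confirming that the target volume actually has this much free space. A minimal sketch in Python: the mount point `"/"` is a placeholder for wherever your reth datadir lives, and the 2.14TB threshold is simply the archive-node figure from above.

```python
import shutil

# Archive-node minimum as of April 2024 (use ~1.13e12 for a full node).
REQUIRED_BYTES = 2.14e12

# "/" is a placeholder: point this at the volume that will hold the reth datadir.
usage = shutil.disk_usage("/")
free_tb = usage.free / 1e12
print(f"free: {free_tb:.2f}TB, required: {REQUIRED_BYTES / 1e12:.2f}TB")
print("enough space" if usage.free >= REQUIRED_BYTES else "not enough space")
```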

NVMe drives are recommended for the best performance, with SSDs being a cheaper alternative. HDDs are the cheapest option, but they will take the longest to sync, and are not recommended.

As of February 2024, syncing an Ethereum mainnet node to block 19.3M on NVMe drives takes about 50 hours, while on a GCP "Persistent SSD" it takes around 5 days.

> **Note**
>
## Size

All numbers are as of April 2024 at block number 19.6M for mainnet.

### Archive Node

Archive node occupies at least 2.14TB.

You can track the growth of Reth archive node size with our
[public Grafana dashboard](https://reth.paradigm.xyz/d/2k8BXz24x/reth?orgId=1&refresh=30s&viewPanel=52).

### Pruned Node

If pruned fully, this is the total freed space you'll get, per segment:

| Segment            | Size  |
| ------------------ | ----- |
| Sender Recovery    | 85GB  |
| Transaction Lookup | 200GB |
| Receipts           | 250GB |
| Account History    | 235GB |
| Storage History    | 590GB |
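
As a quick cross-check of the table above, summing every segment gives the total space a fully pruned node frees (sizes in GB, taken directly from the table):

```python
# Approximate space freed per fully pruned segment, in GB (from the table above).
freed_gb = {
    "Sender Recovery": 85,
    "Transaction Lookup": 200,
    "Receipts": 250,
    "Account History": 235,
    "Storage History": 590,
}

total_gb = sum(freed_gb.values())
print(f"total freed: {total_gb}GB (~{total_gb / 1000:.2f}TB)")  # total freed: 1360GB (~1.36TB)
```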
### Full Node

Full node occupies at least 1.13TB.

Essentially, the full node is the same as the following configuration for the pruned node:

Meaning, it prunes:

is completed, so the disk space is reclaimed slowly.
- Receipts up to the last 10064 blocks, preserving all receipts with the logs from the Beacon Deposit Contract
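
For reference, the pruning behaviour described above maps onto reth's TOML prune configuration. The sketch below is an assumption-heavy illustration, not a verbatim copy of the book's config: the section layout follows the pruning configuration chapter as far as we can tell, but the exact keys, distances, and the Beacon Deposit Contract address should be checked against `config.md` before use.

```toml
[prune]
# How often pruning runs, in blocks (illustrative value).
block_interval = 5

[prune.segments]
# Prune sender recovery data entirely.
sender_recovery = "full"
# transaction_lookup is left unpruned on a full node.
# Keep only roughly the last 10064 blocks of these segments.
account_history = { distance = 10_064 }
storage_history = { distance = 10_064 }

# Prune receipts, but preserve any receipt containing logs from the
# Beacon Deposit Contract (address and block assumed; verify against config.md).
[prune.segments.receipts_log_filter]
"0x00000000219ab540356cbb839cbe05303d7705fa" = { before = 11052984 }
```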
## RPC support

As mentioned in the [pruning configuration chapter](./config.md#the-prune-section), there are several segments which can be pruned