docs(book): update node size numbers (#7638)

Alexey Shekhirin
2024-04-14 19:19:53 +01:00
committed by GitHub
parent 3e8d5c69cf
commit cfbebc1595
2 changed files with 17 additions and 27 deletions


@@ -14,12 +14,12 @@ The hardware requirements for running Reth depend on the node configuration and
The most important requirement is by far the disk, whereas CPU and RAM requirements are relatively flexible.
-| | Archive Node | Full Node |
-|-----------|---------------------------------------|-------------------------------------|
-| Disk | At least 2.2TB (TLC NVMe recommended) | At least 1TB (TLC NVMe recommended) |
-| Memory | 8GB+ | 8GB+ |
-| CPU | Higher clock speed over core count | Higher clock speeds over core count |
-| Bandwidth | Stable 24Mbps+ | Stable 24Mbps+ |
+| | Archive Node | Full Node |
+|-----------|---------------------------------------|---------------------------------------|
+| Disk | At least 2.2TB (TLC NVMe recommended) | At least 1.2TB (TLC NVMe recommended) |
+| Memory | 8GB+ | 8GB+ |
+| CPU | Higher clock speed over core count | Higher clock speed over core count |
+| Bandwidth | Stable 24Mbps+ | Stable 24Mbps+ |
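Since the disk figure is a hard floor, it can help to verify free space on the target drive before starting a sync. A minimal sketch using GNU coreutils `df` (the `DATADIR` variable and the 2.2TB threshold are assumptions for illustration, not reth options; on non-GNU systems `df --output` is unavailable):

```shell
# Check whether the filesystem holding the datadir has room for an archive node.
# DATADIR is a placeholder; point it at your actual reth data directory.
DATADIR="${DATADIR:-$HOME}"
FREE_GB=$(df --output=avail -BG "$DATADIR" | tail -1 | tr -dc '0-9')
if [ "$FREE_GB" -ge 2200 ]; then
  echo "enough free space for an archive node (${FREE_GB}GB available)"
else
  echo "need at least 2.2TB free, have ${FREE_GB}GB"
fi
```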
#### QLC and TLC
@@ -34,14 +34,14 @@ Prior to purchasing an NVMe drive, it is advisable to research and determine whe
### Disk
There are multiple types of disks to sync Reth, with varying size requirements depending on the syncing mode.
-As of October 2023 at block number 18.3M:
+As of April 2024 at block number 19.6M:
-* Archive Node: At least 2.2TB is required
-* Full Node: At least 1TB is required
+* Archive Node: At least 2.14TB is required
+* Full Node: At least 1.13TB is required
NVMe drives are recommended for the best performance, with SATA SSDs as a cheaper alternative. HDDs are the cheapest option, but they will take the longest to sync and are not recommended.
-As of July 2023, syncing an Ethereum mainnet node to block 17.7M on NVMe drives takes about 50 hours, while on a GCP "Persistent SSD" it takes around 5 days.
+As of February 2024, syncing an Ethereum mainnet node to block 19.3M on NVMe drives takes about 50 hours, while on a GCP "Persistent SSD" it takes around 5 days.
> **Note**
>


@@ -48,14 +48,14 @@ RUST_LOG=info reth node \
## Size
-All numbers are as of October 2023 at block number 18.3M for mainnet.
+All numbers are as of April 2024 at block number 19.6M for mainnet.
### Archive Node
Archive node occupies at least 2.14TB.
You can track the growth of Reth archive node size with our
-[public Grafana dashboard](https://reth.paradigm.xyz/d/2k8BXz24k/reth?orgId=1&refresh=30s&viewPanel=52).
+[public Grafana dashboard](https://reth.paradigm.xyz/d/2k8BXz24x/reth?orgId=1&refresh=30s&viewPanel=52).
### Pruned Node
@@ -64,15 +64,15 @@ If pruned fully, this is the total freed space you'll get, per segment:
| Segment | Size |
| ------------------ | ----- |
-| Sender Recovery | 75GB |
-| Transaction Lookup | 150GB |
+| Sender Recovery | 85GB |
+| Transaction Lookup | 200GB |
| Receipts | 250GB |
-| Account History | 240GB |
-| Storage History | 700GB |
+| Account History | 235GB |
+| Storage History | 590GB |
### Full Node
-Full node occupies at least 950GB.
+Full node occupies at least 1.13TB.
Essentially, the full node is the same as the following configuration for the pruned node:
@@ -100,16 +100,6 @@ Meaning, it prunes:
is completed, so the disk space is reclaimed slowly.
- Receipts up to the last 10064 blocks, preserving all receipts with the logs from Beacon Deposit Contract
-Given the aforementioned segment sizes, we get the following full node size:
-
-```text
-Archive Node - Receipts - AccountsHistory - StoragesHistory = Full Node
-```
-
-```text
-2.14TB - 250GB - 240GB - 700GB = 950GB
-```
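The removed back-of-the-envelope arithmetic can be redone with the updated figures. A quick sketch (the GB values are read off the updated pruning table above and are rounded, so the result approximates rather than exactly reproduces the quoted 1.13TB):

```python
# Estimate full node size: archive size minus the fully pruned segments.
# All figures are rounded values from the updated docs, not exact measurements.
archive_gb = 2140  # ~2.14TB archive node
pruned_segments_gb = {
    "Receipts": 250,
    "Account History": 235,
    "Storage History": 590,
}
full_node_gb = archive_gb - sum(pruned_segments_gb.values())
print(f"Estimated full node size: {full_node_gb}GB")
```

Because each segment size is rounded, the estimate lands near, but not exactly on, the stated minimum.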
## RPC support
As mentioned in the [pruning configuration chapter](./config.md#the-prune-section), there are several segments that can be pruned