
Releases: NethermindEth/nethermind

v1.26.0

25 Apr 12:02

Release notes

Major highlights

State design upgrade: Half-path

This version introduces a new approach to the state database: half-path. It replaces the previous approach, which we will call hash for these notes.

The goal of half-path is to improve the performance of the existing database without significant codebase changes. It is a middleware between the hash and full-path-based designs (which we are currently working on). It is mainly aimed at validators' performance (we observed a significant improvement in effectiveness) and, as a side effect, improving archive nodes' sync.

Major achievements

  • Block processing time is faster by about 30 to 50% (depending on the hardware setup), but we observed that it improves low-end setups more, so the cost of hosting nodes using Nethermind software may decrease.
  • The state database size is ~25% smaller (the previous approach is now close to 200 GB while the half-path is around 150 GB). This is only for the state database. After snap sync, the entire database will decrease by ~50 GB but still hover around 900 GB.
  • State database growth is significantly reduced—the slower the database grows, the less maintenance around the node is needed.
    We are still observing the exact numbers and applying some optimizations to make it as close as possible to "live pruning", but the database should grow about 10 to 20 times slower (to see the full effect, it may need 1 or 2 weeks—growth ratio will be dropping after that time).
  • Faster archive node sync. Although half-path is not meant to optimize archive nodes, we observed that it is quicker than the previous implementation (8 mln blocks processed about 30 to 40% faster, with the gap only increasing; more tests around that are to be conducted).
  • Smaller archive node database. As stated above, the major goal of the half-path wasn't to make the archive node smaller, but what was observed was 2.4 TB vs 1.7 TB at the 8mln blocks checkpoint—and the gap was getting bigger. We are now checking the final numbers.

Making the migration to the half-path hassle-free

  • All newly synced nodes will use half-path as the default state design. If node operators want to use the previous approach, they can specify it with --Init.StateDbKeyScheme Hash.
  • All existing nodes that are synced with the old hash approach will remain on it.

The environment variable equivalent of this flag is NETHERMIND_INITCONFIG_STATEDBKEYSCHEME=HalfPath (or Hash).
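
For example, both forms below pin the key scheme at startup (the network config is illustrative; HalfPath is already the default for fresh syncs):

```shell
# CLI flag form
nethermind -c mainnet --Init.StateDbKeyScheme Hash

# Equivalent environment variable form
NETHERMIND_INITCONFIG_STATEDBKEYSCHEME=Hash nethermind -c mainnet
```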

Migration to half-path

  • Full resync. This is the simplest yet most effective way to migrate. As Nethermind syncs quickly back to the head, it may take just 1-2 hours to get back on track.
  • Full pruning. There is an option to migrate while staying online using the full pruning functionality. The downside of this approach is that pruning will take much longer to finish, and users will need to restart their nodes with --Init.StateDbKeyScheme HalfPath; without it, pruning will behave as before and recreate the state with the previous database approach. To reduce the slowdown, increase the default full pruning memory budget to at least 8 GB: --Pruning.FullPruningMemoryBudgetMb 8000.

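A sketch of the full-pruning migration restart described above (the network config and flag combination shown are illustrative):

```shell
# Restart the node targeting half-path so that full pruning
# recreates the state in the new layout rather than the old one,
# with an increased memory budget to reduce the slowdown.
nethermind -c mainnet \
  --Init.StateDbKeyScheme HalfPath \
  --Pruning.FullPruningMemoryBudgetMb 8000
```
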
Upgrading from previous versions

  • From v1.25.4. The node will remain on the previous hash design. To migrate to half-path please see the migration instructions above. If the node operator does not want to migrate, it is completely fine to stay on the hash design, as it will still be fully supported.
  • From v1.26.0-exp. The node will remain on the half-path design. No resync is needed between v1.26.0 and its pre-release versions—all optimizations and bugfixes are automatically applied.

Learn more

More details about half-path and the other state redesigns we are still exploring can be found here. Please also refer to #6331, which contains all of the changes around half-path and more in-depth details about specific performance metrics.

Snap serving

We're excited to share that Nethermind now supports snap serving, a capability previously unique to Geth. This is a big step for the Ethereum ecosystem, offering redundancy and easing Geth's considerable responsibility in maintaining network health.

This achievement doesn't diminish Geth's critical role but reinforces our shared goal of a resilient Ethereum. We see this as an opportunity to distribute the workload, ensuring the network's strength through diversity.

This feature was previously present but required a change to the database layout for efficient performance. Nodes running on the old hash approach will not enable snap serving by default. Snap serving's performance is too low on this design and may affect the node's overall performance. If a node operator wants to enable it on hash design, there is an option --Sync.SnapServingEnabled true, although we don't recommend it.

Important

During snap serving for other nodes, the overall performance of the node may be slightly worse. Please reach out to us on Discord with all observations so more optimizations can be discovered and applied. At the same time, it is advised to keep it enabled for the sake of network health.

Rootless and chiseled Docker image

This release introduces a new chiseled rootless Docker image for enhanced security.
These image tags have the -chiseled suffix and run the Nethermind process on behalf of the non-root app user with UID/GID of 64198.
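
As a sketch (the tag, mount path, and flags are illustrative), running the chiseled image requires the mounted data directory to be writable by UID/GID 64198, since the rootless container cannot fix permissions at runtime:

```shell
# Pre-create the data directory and hand it to the non-root app user
mkdir -p ./nethermind_data
sudo chown -R 64198:64198 ./nethermind_data

# Run the chiseled rootless image with the data directory mounted
docker run -it --rm \
  -v "$PWD/nethermind_data:/nethermind/data" \
  nethermind/nethermind:1.26.0-chiseled \
  -c mainnet --data-dir /nethermind/data
```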

Snappy dependency removal

Nethermind no longer requires the Snappy library (libsnappy-dev) to be installed on Linux.

Deprecated options

--Sync.FastBlocks has been deprecated. It is advised to remove this option from your custom configs, environment variables, or flags passed to the client. See #6792 for more details.

Deprecated metrics

Multiple database and networking metrics are now deprecated and will be removed in the next release. These metrics follow the format:

  • nethermind_*_db_size
  • nethermind_*_db_reads
  • nethermind_*_db_writes
  • nethermind_*_disconnects
  • nethermind_*_received
  • nethermind_*_sent

They have been consolidated to labeled metrics:

  • nethermind_db_size{db=*}
  • nethermind_db_reads{db=*}
  • nethermind_db_writes{db=*}
  • nethermind_local_disconnects_total{reason=*}
  • nethermind_remote_disconnects_total{reason=*}
  • nethermind_incoming_p2p_messages{message=*}
  • nethermind_outgoing_p2p_messages{message=*}
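
Assuming metrics are enabled and exposed locally (the port below is whatever the metrics endpoint is configured to listen on; 9091 is only an example), the consolidated labeled series can be inspected with a quick scrape:

```shell
# List the new labeled database metrics exposed by the node
curl -s http://127.0.0.1:9091/metrics | grep '^nethermind_db_'
```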

Changelog

New features

Optimism

Performance

Bug fixes and stability

  • Fix/flush on snap fin...

v1.26.0-exp.4

10 Apr 14:23
v1.26.0-exp.4 Pre-release

Release notes

Major highlights

State Design Upgrade - HalfPath

This version introduces a new approach to the state database: HalfPath. It replaces the previous approach, which for the purpose of these notes we will call "Hash".

The goal of HalfPath is to improve the performance of the existing database without major codebase changes. It is a middleware between the Hash design and a full path-based design (on which we are currently working). It is mostly aimed at validator performance (we observed a major improvement in effectiveness) and, as a side effect, improves archive node sync.

Major achievements:

  1. Block processing time is faster by about 30-50% (depending on the hardware setup, but we observed that it gives more improvement on weaker setups, so the cost of hosting a node using Nethermind software can decrease).
  2. State database size is smaller by about 25% (the previous approach is now close to 200 GB, while HalfPath is around 150 GB). This is only for the state DB; the entire database will decrease by this ~50 GB but will still hover around 900 GB after snap sync.
  3. State database growth is significantly reduced: the slower the database grows, the less maintenance around the node is needed.
    We are still observing the exact numbers and applying some optimizations to make it as close as possible to "live pruning", but the database should grow about 10-20 times slower (it may take 1 or 2 weeks to see the full effect; the growth ratio will drop after that time).
  4. Archive node sync is faster. Although HalfPath is not meant as an optimization for archive nodes, we observe that it is faster than the previous implementation (8 mln blocks processed about 30-40% faster, with the gap only increasing; more tests around that are to be conducted).
  5. Archive node database size is smaller. As above, HalfPath's major goal wasn't to make the archive node smaller, but we observed 2.4 TB vs 1.7 TB at the 8 mln blocks checkpoint, with the gap getting bigger; we are now checking the final numbers.

Decisions made by the team to make the transition to HalfPath as smooth as possible:

  1. All newly synced nodes will use HalfPath as the default state design. If for any reason a node operator would like to use the previous approach, there is an option to specify the flag Init.StateDbKeyScheme=Hash.
  2. All existing nodes synced with the old Hash approach will remain on it (so they will not get the advantage of HalfPath).

Migration to HalfPath can be done in two ways:

  1. Full resync. This is the simplest yet most effective way to migrate. As Nethermind syncs back to the head very fast, it may take just 1-2 hours to be back on track.
  2. Full pruning migration. There is an option to migrate while staying online using the full pruning functionality. The downside of this approach is that pruning will take much longer to finish, and to prune into the new layout users will need to restart their nodes with the flag Init.StateDbKeyScheme=HalfPath; without it, pruning will behave as before and recreate the state with the previous DB approach. Users should increase the default full pruning memory budget to at least 8 GB (--Pruning.FullPruningMemoryBudgetMb 8000) to reduce the slowdown.

Upgrading from a previous version to 1.26.0 will result in the following:

  1. Upgrade from 1.25.4 → The node will remain on the previous Hash design. To migrate to HalfPath, please refer to the migration guide above. If a node operator does not want to migrate, it is completely fine to stay on the Hash design, as it will still be fully supported.
  2. Upgrade from 1.26.0-exp.3 → The node will remain on the HalfPath design. No resync is needed between the 1.26.0 experimental versions and the current version; all optimizations and bugfixes should be automatically applied.

Learn more

More details about HalfPath and the other state redesigns we are still exploring can be found in this [Medium blog post](https://medium.com/nethermind-eth/nethermind-client-3-experimental-approaches-to-state-database-change-8498e3d89771).
Please also refer to this [pull request](#6331), which contains all of the changes around HalfPath and more in-depth details about specific performance metrics.

Snap Serving

We're excited to share that Nethermind now supports snap serving, a capability previously unique to Geth. This is a big step for Ethereum's ecosystem, offering redundancy and easing the considerable responsibility Geth has held in maintaining network health.

This achievement doesn't diminish Geth's critical role but reinforces our shared goal of a resilient Ethereum. We see this as an opportunity to distribute the workload, ensuring the network's strength through diversity.

The feature was previously present, but it required a change to the database layout for efficient performance.

Nodes running on the old Hash approach will not have snap serving enabled by default. Snap serving's performance is very poor on this design and may affect the overall performance of the node.

If for any reason a node operator wants to enable it on the Hash DB, there is a flag Sync.SnapServingEnabled=true that will make it possible, although this is not advised.

Important
During snap serving for other nodes in the network, the overall performance of the node may be slightly worse. Please reach out to the team on Discord with any observations so more optimizations can be discovered and applied. At the same time, it is advised to keep it enabled to keep the network healthy.

Changelog

New features

Optimism

Performance

Bug fixes and stability


v1.26.0-exp.3

01 Mar 09:38
v1.26.0-exp.3 Pre-release

Release notes

🚧 THIS IS AN EXPERIMENTAL RELEASE.
We don't recommend running it on your main validator infrastructure. It brings major improvements, but we're still unsure about its downsides and long-term effects. The latest stable version is 1.25.4.

IMPORTANT
To fully benefit from this release, you must resync your nodes from scratch.
Half-path is now the default sync mode for new nodes. If you start this version on a node with an old database schema, it will continue using the old schema without improvements. This is not recommended.

Major highlights

  • Next HalfPath upgrade. This is the next release bringing the experimental state redesign called "HalfPath". More details can be found in 1.26.0-exp.2 and in our blog post.

  • Snap serving. This version also introduces an experimental (still under testing) mechanism for serving snap data! This is a very important milestone that will improve the health of Ethereum (currently only Geth is capable of serving snap data, so Nethermind can now share this high responsibility and serve the data as well).
    Since we are still working on it, users may experience somewhat worse performance while snap data is being served; we are working on optimizing it.

  • Dencun-ready version for experimental users. All users running an experimental version are asked to migrate to this version, since it is the only experimental version with Dencun support. All previous versions will stop working once the hard fork occurs.

Known issues

Full pruning doesn't work properly. You may experience very long pruning times, and in the end pruning may not work as expected, causing big block processing spikes. We are working on a solution, but it is not advised to use full pruning at all at the moment.

v1.25.4

16 Feb 18:35

Release notes

⚠️

This release is a mandatory upgrade for all nodes operating on the Ethereum Mainnet and Gnosis.
Please update your node to this version to ensure correct node functionality.

Major highlights

Mainnet Dencun hard fork

The Mainnet Dencun hard fork is scheduled on Mar 13, 2024 at 13:55:35 UTC (epoch 269568)
⚠️ Execution client and consensus client database sizes can increase by about 150GB over time - please prepare for that!

Gnosis Dencun hard fork

The Gnosis Dencun hard fork is scheduled on Mar 11, 2024 at 18:30:20 UTC (slot 14237696)
⚠️ Execution client and consensus client database sizes can increase by about 150GB over time - please prepare for that!

PPA package

The PPA package has been revised, including the version and installers. Since the version has been fixed to be 1.25.4 instead of 1.2540, installing the latest version requires the manual removal of the previous one as follows:

# ⚠️ If your data directory is in the default location of /usr/share/nethermind, back it up before the package removal.

sudo apt-get purge -y nethermind
sudo apt-get update
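
After the purge, reinstalling presumably proceeds as usual (assuming the Nethermind PPA is still configured on the system):

```shell
# Install the revised package with the corrected version number
sudo apt-get install -y nethermind
```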

Changelog

Cancun

Optimism

  • Add temporary solution for pre-bedrock tx decoding by adding AncientBarriers by @deffrian in #6620

Bugfixes and new features

  • Reduce default peer count from 100 to 50 on Mainnet nodes (not on archive nodes) to reduce load and the number of block processing spikes in #6743
  • Reduce amount of hanging sockets with TIME_WAIT status by @benaadams in #6594
  • Fix ParityTraceActionConverter to output init on create by @LukaszRozmej in #6606
  • Fix TraceStore plugin by @LukaszRozmej in #6609
  • Return BlockForRpc from debug_getBadBlocks by @flcl42 in #6612
  • Revise PPA packaging by @rubo in #6618
  • Improve memory management and reduce traffic while serving data to other peers by @benaadams in #6636
  • Add debug_getBadBlocks support by @kjazgar and @Marchhill in #3838
  • Add Metrics.ExposeHost option so a hostname other than "0.0.0.0" can be exposed by @tgerring in #6528

    ⚠️
    For now, we decided to keep the default behavior, so ExposeHost defaults to "+", which means "0.0.0.0". In the future, we are going to change the default of this flag to "127.0.0.1" to reduce the risk of remote access to metrics.
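
Operators who want the safer behavior now can opt in explicitly; a minimal sketch (the network config and flag combination are illustrative):

```shell
# Bind the metrics endpoint to localhost only instead of all interfaces
nethermind -c mainnet \
  --Metrics.Enabled true \
  --Metrics.ExposeHost 127.0.0.1
```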

Full Changelog: 1.25.3...1.25.4

v1.26.0-exp.2

22 Jan 11:33
v1.26.0-exp.2 Pre-release

Release notes

🚧 THIS IS AN EXPERIMENTAL RELEASE.
We don't recommend running it on your main validator infrastructure. It brings major improvements, but we're still unsure about its downsides and long-term effects. The latest stable version is 1.25.3.

Major highlights

This version brings many performance improvements, the most important being a major state database optimization called Half-path.
This new optimization is not a full path-based storage, but the way it arranges data helps with the performance of block processing and state database size.

Key points

  • Block processing is faster by about 40-50%
  • Block processing during historical data sync is faster by about 80-100%
  • Initial state database size on snap-synced Mainnet nodes reduced by about 25% (archive nodes still to be measured)
  • Initial state database growth in the first two weeks has halved and decreased over time. Preliminary testing shows a reduction in database growth to 3 GB per month following the first two weeks, greatly reducing the need for node maintenance and full pruning.

For more details on Half-path, see our blog post.

Backward compatibility

  • To fully benefit from this release, you must resync your nodes from scratch.
  • Half-path is now the default sync mode for new nodes. If you start this version on a node with an old database schema, it will continue using the old schema without improvements. This is not recommended.

Known issues

  • Full pruning doesn't work properly. We're working on improvements, but be aware that using this version requires a database resync from scratch.
  • Unusual block processing spikes are still happening but very rarely. We noticed some unusually high spikes when in-memory pruning interferes with block processing. A fix is already in the testing phase but not included in this release.

v1.25.3

22 Jan 18:03

Release notes

⚠️

This release is a mandatory upgrade for all nodes operating on the following chains: Sepolia, Holesky, and Chiado.
Please update your node to this version to ensure correct node functionality.

Major highlights

Sepolia Dencun hard fork

The Sepolia Dencun hard fork is scheduled on Jan 30, 2024 at 22:51:12 UTC (epoch 132608)

Chiado Dencun hard fork

The Chiado Dencun hard fork is scheduled on Jan 31, 2024 at 18:15:40 UTC (epoch 516608)

Holesky Dencun hard fork

The Holesky Dencun hard fork is scheduled on Feb 07, 2024 at 11:34:24 UTC (epoch 29696)

Full Changelog: 1.25.2...1.25.3

v1.25.2

21 Jan 22:00

Release notes

This hotfix addresses the consensus issue in Nethermind that was introduced in v1.23.0. This release is mandatory for all Nethermind users.

No resync is required. A consensus client restart is required, without the need to resync it.
Versions v1.22.0 and below weren't affected.

A full postmortem will be published soon.

v1.25.1

16 Jan 18:35

THIS VERSION HAS A CONSENSUS ISSUE. Use v1.25.2 or later instead.


Release notes

Major highlights

Goerli Dencun hard fork

⚠️
This version supports the upcoming Goerli Dencun hard fork that is scheduled on Jan 17, 2024 at 06:32:00 UTC (epoch 231680)
Please update your node to this version to ensure correct node functionality.

Chiado Dencun hard fork

⚠️
This version supports the upcoming Chiado Dencun hard fork that is scheduled on Jan 31, 2024 at 18:15:40 UTC (epoch 516608)
Please update your node to this version to ensure correct node functionality.

Fixed eth_syncing invalid behavior

In v1.25.0, the eth_syncing method misbehaved for those who upgraded from an older version to the latest one (it returned "syncing" even when the node was fully synced). This version addresses the issue, so it properly returns the sync status.

Fixed extra bodies and receipts downloaded on Mainnet after upgrading a long-living node

In v1.25.0, in some cases, old bodies and old receipts started to download even though this should not have been the case on synced nodes. This usually happened on older nodes that were synced a long time ago without any resync in the meantime.

Full Changelog: 1.25.0...1.25.1

v1.25.0

10 Jan 10:59

THIS VERSION HAS A CONSENSUS ISSUE. Use v1.25.2 or later instead.


Release notes

Major highlights

Goerli Dencun hard fork

⚠️
This version supports the upcoming Goerli Dencun hard fork that is scheduled on Jan 17, 2024 at 06:32:00 UTC (epoch 231680)
Please update your node to this version to ensure correct node functionality.

Optimism Canyon hard fork

Initial support for Optimism (OP) in Nethermind was implemented right before the Canyon hard fork happened. Because of that, Nethermind nodes could not follow the chain after Canyon activation. Since this version, Nethermind supports the Canyon hard fork on all OP-related chains.

Improved JSON serialization

We replaced the famous Json.NET library with the System.Text.Json implementation. As a result, we drastically reduced the memory overhead, improved the block processing time, and sped up JSON-RPC handling in general.

JavaScript tracers

The debug_trace* JSON-RPC methods now support custom tracers written in JavaScript. This allows custom tracing logic and is in line with the Geth implementation.
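
For illustration, a minimal custom tracer that collects executed opcodes might be passed like this, following the Geth-style tracer interface (the endpoint and transaction hash are placeholders):

```shell
# Trace a transaction with a custom JS tracer that records each opcode
curl -s -X POST http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  -d '{
    "jsonrpc": "2.0", "id": 1,
    "method": "debug_traceTransaction",
    "params": ["<tx-hash>", {"tracer":
      "{data: [], step: function(log) { this.data.push(log.op.toString()) }, fault: function(log) {}, result: function(ctx) { return this.data }}"
    }]
  }'
```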

Improved concurrency and reduced lock contention

  • Changed to more scalable locks for both the transaction pool and LRU caches, ensuring better scalability on systems with high core counts.
  • Used more scalable priority locks where block production and other tasks cross to give precedence to block production, optimizing performance in critical areas.
  • Signature recovery now accesses transactions from the pool serially to reduce lock contention on the transaction pool, while maintaining parallel processing of ECDSA and Keccak calculations for faster throughput.

New JSON-RPC methods

  • eth_getBlockReceipts, based on the previous Parity implementation, gives an easier way to get all receipts for transactions within a specified block
  • debug_getRawBlock, debug_getRawReceipts, debug_getRawHeader, and debug_getRawTransaction have been added to make it possible to analyze the encoded block/receipt/header/transaction
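
As a quick sketch, eth_getBlockReceipts takes a single block number or tag parameter (the local endpoint shown is an assumption):

```shell
# Fetch all receipts for the latest block in one call
curl -s -X POST http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_getBlockReceipts","params":["latest"]}'
```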

Downloading all historical bodies and receipts by default

On newly synced nodes, instead of using barriers set at the beacon chain deposit contract block (11052984), the node will sync all bodies and receipts down to genesis:

  • Healthier for the entire network
  • Makes the database size bigger for freshly synced nodes (about 200 GB extra space will be needed)
  • The change is made in a way that nodes already synced down to the ancient barrier 11052984 will not sync anything in addition (to sync all bodies and receipts, a resync from scratch is required)
  • This may be revisited as part of EIP-4444

Other performance improvements

With the migrations to .NET 8, we got a performance boost in various areas and reduced memory usage, which is especially beneficial for validators.

Changelog

New features

Cancun

Optimism

Performance

  • Serialize Json direct to Http stream rather than through intermediary buffers; further increasing performance and reducing latency of Json RPC by @benaadams in #6369
  • Fix double write during full pruning by @asdacap in #6415
  • Return error codes in Evm rather than throwing more expensive exceptions by @benaadams in #6406
  • Reduce memory for GetPayloadBodiesByRangeV1 and GetPayloadBodiesByHashV1 by streaming the results immediately rather than building up the total response and then sending it all at once by @benaadams in #6287
  • Don't throw exceptions for missing nodes when searching for them during sync by @benaadams in #6425
  • Remove duplicate calls to FindHeader in eth_getLogs by @benaadams in #6421
  • Remove Duplicate call to TryGetPendingTransaction in RPC by @benaadams in #6420
  • RateLimiter: Remove unneeded async statemachine by @benaadams in #6418
  • Increase regex cache size, which reduces memory usage in logging by @benaadams in #6408
  • Reduce dictionary lookups by @benaadams in #6373
  • Fix excessive timer allocation in rate limiter by @benaadams in #6354
  • Use TryGetValue to halve Dictionary lookups by @benaadams in #6352
  • Use runtime throw helpers to increase hot code (reduce cold code) in processor cache by @benaadams in #6348
  • Broadcast local txs only if MaxFeePerGas is at least 70% of the current base fee by @marcindsobczak in #6350

Metrics

Logging

Bug fixes and stability

Other changes

New Contributors

Full Changelog: 1.24.0...1.25.0

v1.24.0

15 Dec 13:05

THIS VERSION HAS A CONSENSUS ISSUE. Use v1.25.2 or later instead.


Release notes

⚠️ WARNING

This version cannot be downgraded. Once you upgrade to this version or sync from scratch with it, you cannot downgrade to any previous version.

Major highlights

  • Improvements to the headers database

    • Reduce IOPS and bandwidth requirements during sync by up to 200 Mbps, or ~8% of overall sync bandwidth
    • Reduce IOPS and bandwidth requirements when serving headers to peers
    • Require an additional 800MB of metadata database for the Mainnet
  • Supported networks adjustments

    • Added OP Sepolia and Base Sepolia support
    • Added archive configuration support for both Optimism and Base

    For more info, see the -c, --config <value> flag and supported networks.

    For Optimism networks, ensure that networking is disabled to reduce unnecessary hardware usage.

Changelog

New features and important changes

Optimism

Bug fixes and stability

Full Changelog: 1.23.0...1.24.0