
Releases: NethermindEth/nethermind

v1.19.1

07 Jun 08:50

Release notes

Fixed the regression in v1.19.0 where eth_getLogs requests weren't returning correct results for topics with leading zeroes.
The fix is important for the Rocket Pool community as their integration uses topics with leading zeroes.

This fix doesn't require a resync since the database was not corrupted.
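
For illustration, below is a minimal sketch of the kind of query affected by the regression: an eth_getLogs request whose topic has leading zero bytes. It assumes a local JSON-RPC endpoint at http://localhost:8545; the block range and topic value are placeholders.

```python
# Hypothetical eth_getLogs query with a zero-prefixed topic (e.g. an address
# padded to 32 bytes). Endpoint, block range, and topic are placeholders.
import requests

NODE_URL = "http://localhost:8545"        # assumed local JSON-RPC endpoint
topic = "0x" + "00" * 12 + "11" * 20      # 32-byte topic with leading zeroes

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getLogs",
    "params": [{
        "fromBlock": "0x1000000",         # placeholder block range
        "toBlock": "latest",
        "topics": [topic],
    }],
}

response = requests.post(NODE_URL, json=payload, timeout=30)
print(response.json().get("result"))
```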

Changelog

  • Fix Keccak iterator not correctly detecting more items when another item is zero prefixed by @asdacap in #5780

Big thanks to @jclapis from Rocket Pool for reporting.

Full changelog: 1.19.0...1.19.1

v1.19.0

02 Jun 11:57

Release notes

⚠️ This release is superseded by v1.19.1 because of an issue in the eth_getLogs JSON-RPC method.

Major highlights

  • Significant storage reduction

    ⚠️
    Downgrading from this version to an earlier one is not possible because of the new database format.
    A resync is not required to update to this version. However, you will get the full benefit of the disk space optimization only after a fresh sync.

  • Faster sync due to the new auto-pivot approach

Details

  • Storage reduction

    |          | v1.18.0 | v1.18.0 (Ancient barriers) | v1.19.0 | v1.19.0 (Ancient barriers) |
    |----------|---------|----------------------------|---------|----------------------------|
    | State    | 166 GB  | 170 GB                     | 161 GB  | 161 GB                     |
    | Receipts | 477 GB  | 269 GB                     | 152 GB  | 104 GB                     |
    | Blocks   | 334 GB  | 222 GB                     | 334 GB  | 222 GB                     |
    | Other    | ...     | ...                        | ...     | ...                        |
    | Total    | 965 GB  | 678 GB                     | 662 GB  | 504 GB                     |
  • Receipts DB size reduction

    • Significantly reduced database size by using a different encoding.
      You can't downgrade from this version without a full database drop and resync. To get the full benefit of the size reduction, a resync is needed, as the new encoding applies only to new receipts. A node will still work fine on v1.19.0 without resyncing.
      You can also call the debug_migrateReceipts(20000000) JSON-RPC method to rewrite existing receipts, but it reduces the receipt size only partially, and a resync tends to be faster (see the sketch after this list).
    • Significantly reduced the receipt database size by limiting transaction lookup by transaction hash to the past year only (similar to Geth). If you need older lookups, you can keep the full transaction hash index via --Receipts.TxLookupLimit 0.
  • Reduced state DB size by about 5% by not storing commonly occurring patterns.

  • Lowered memory consumption. On Mainnet, memory usage was reduced by about 1 GB after a resync or full pruning.

  • Faster sync due to auto-pivot

    From initialization to snap sync
    v1.17.4 1h 7m 30s
    v1.18.0 28m 22s
    v1.19.0 without auto-pivot 1m 30s
    v1.19.0 with auto-pivot 10s

    State sync will start almost immediately after starting Nethermind. The pivot block is updated to one close to the chain head, based on the message from the consensus layer, so there's no need to download a significant amount of the newest blocks before starting state sync.
    By default, the auto-pivot functionality in this version waits for about 15 minutes for the consensus layer to send a forkchoice update (FCU), based on which the pivot is selected (this window will be extended in the future). If for any reason the FCU doesn't arrive at Nethermind in that time (no checkpoint sync on the CL side, an issue with the CL configuration, etc.), you can increase the wait by adding the flag --Sync.MaxAttemptsToUpdatePivot=1800 (900 is the default value; 1 attempt takes ~1 second, so with 1800 it will try to update the pivot for a total of 30 minutes).
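
As referenced in the receipts section above, here is a minimal sketch of the receipt-migration call. It assumes a local JSON-RPC endpoint with the Debug module enabled; adjust the URL for your setup.

```python
# Hypothetical call to debug_migrateReceipts, as mentioned in the receipts
# section above. Assumes the Debug JSON-RPC module is enabled on the node.
import requests

NODE_URL = "http://localhost:8545"   # assumed local JSON-RPC endpoint

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "debug_migrateReceipts",
    "params": [20000000],            # block number argument from the note above
}

print(requests.post(NODE_URL, json=payload, timeout=30).json())
```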

Changelog

Cancun

  • Add JSON-RPC endpoints for EIP-4844, needed to exchange blobs with the consensus layer by @flcl42 in #5558
  • Refactor transaction broadcasting for the needs of EIP-4844 by @marcindsobczak in #5485 #5619
  • Change blob transaction type value to 3 by @flcl42 in #5597

Metrics

Bug fixes and stability

  • Health Checks now give more useful information about the current syncing status and possible problems during the sync process, and better recognize whether a node is healthy by @deffrian in #5630
  • Allow specifying concurrent new connection count and connect timeout by @asdacap in #5676
  • Fix TransactionHash not being thread-safe by @asdacap in #5634
  • Use Environment.TickCount64 rather than StopWatch.GetTimestamp by @benaadams in #5575
  • Zero total difficulty in FindBlock by @deffrian in #5581
  • Modernise C# by @benaadams in #5607
  • Set pivot to null if snap and fast sync are disabled by @deffrian in #5454
  • Fix negative blockNumber exception by @deffrian in #5609
  • Shouldn't Wait() on timer threads by @benaadams in #5497
  • Allow specifying the local IP, and use any by default instead of loopback by @asdacap in #5635
  • Add config for eth_getLogs max block depth by @asdacap in #5652
  • Fix eth capabilities for archive nodes by @marcindsobczak in #5655
  • Fix misplaced arguments by @deffrian in #5668

Logging

Performance

Other changes

New Contributors

Full Changelog: 1.18.0...1.19.0-rc

v1.18.2

26 May 13:32

Release notes

⚠️
This release revises the timestamp and fork-specific feature activations of the Chiado Shapella hard-fork and is a must-have replacement for any Chiado network operator using the previous v1.18.1.

For the Chiado network operators, this is a high-priority release because of the upcoming Shapella hard-fork. We urge you to update promptly to ensure seamless network functioning post-fork.

For other chains, this is a low-priority update recommended for those syncing from scratch as it significantly accelerates the sync process.

The Chiado Shapella hard-fork is scheduled for 13:17:00 UTC on May 24, 2023.

Full changelog: 1.18.0...1.18.2

v1.18.0

03 May 14:45

Release notes

Major highlights

  • Significant memory usage reduction
    Check out the graph below for v1.18.0 vs previous versions
    [Graph: memory usage of v1.18.0 vs. previous versions]
  • A new experimental full pruning method
    This significantly reduces full pruning time at the expense of RAM. It can be turned on via --Pruning.FullPruningMemoryBudgetMb <memory budget>. Internal testing shows a 2-4x reduction in pruning time with a 4000 MB memory budget and a 4-7x reduction with 8000 MB, with diminishing returns above 16000 MB.
  • Important stability fixes
    We fixed edge-case bugs during full pruning that could result in TrieException errors. We also improved client stability with automatic recovery from the "invalid best state calculation" error. In some setups, Nethermind used to require root permissions because of the previous implementation; this has been addressed. Issues related to missing total difficulty on archive nodes are also fixed. Check the full changelog for details.
  • Improved sync time and block processing
    In our internal tests with a good network and hardware, Nethermind v1.18.0 can start attesting blocks on Ethereum Mainnet in just over an hour. After 4 hours, the full download (including bodies and receipts) is completed, leading to a steady state and improved attestations.
  • Progress toward the next hard-forks
    We merged @gnosischain withdrawals and started merging 4844-related PRs.

New features

  • Disk space savings for non-validator nodes
    You can significantly save disk space if you don't need chain history -- old bodies and receipts. Validators shouldn't enable this setting, because consensus clients require history to sync deposits. With these settings turned on, you will save around 450 GB of disk space for Ethereum Mainnet. To enable this feature, set --Sync.NonValidatorNode true --Sync.DownloadBodiesInFastSync false --Sync.DownloadReceiptsInFastSync false

  • Configurable startup disk space check
    Added a new check for free disk space during node startup (Health Checks plugin). By default, the node will shut down if the available space is less than twice the shutdown threshold or half of the warning threshold (if specified). A new flag, --HealthChecks.LowStorageCheckAwaitOnStartup true, can suspend initialization until enough free disk space is available for safe node operation.

    This change especially addresses cases where a node is hosted in an environment with automatic restarts, e.g. docker-compose with a restart policy.

  • Pass-through RocksDB options
    Allows specifying arbitrary RocksDB settings; the settings must be dynamically configurable. They can be passed as JSON to the --Db.AdditionalRocksDbOptions option of DbConfig.

  • Additional metrics output
    A new option to publish metrics that can be gathered via dotnet-counters. To enable the counters, set --Metrics.CountersEnabled true.
    They can be collected from a running Nethermind process using dotnet-counters collect -n Nethermind.Runner.

Experimental features

  • Flags to tune RocksDB during sync to minimize write amplification at the expense of read amplification (during sync only), saving some SSD write endurance. It can be turned on via --Sync.TuneDbMode <level> where <level> is WriteBias, HeavyWrite, or AggressiveHeavyWrite. Depending on the setup, this may reduce sync time for some users and increase it for others.

Changelist

Cancun

Gnosis

JSON-RPC

  • Added eth_getAccount endpoint by @Demuirgos (see the sketch after this list) [#5352]
  • Improve eth_getFilterChanges error codes by @LukaszRozmej [#5513]
  • Added Geth-compatible boolean flags for transaction inclusion for the newPendingTransactions and newHeads options of the eth_subscribe JSON-RPC method [#5299]
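
A minimal sketch of the new eth_getAccount endpoint, assuming a local JSON-RPC endpoint and the Geth-compatible parameter order (address, block tag); the address is a placeholder.

```python
# Hypothetical eth_getAccount call. The endpoint URL and address are
# placeholders; the parameter order (address, block tag) mirrors Geth.
import requests

NODE_URL = "http://localhost:8545"   # assumed local JSON-RPC endpoint
address = "0x" + "11" * 20           # placeholder account address

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getAccount",
    "params": [address, "latest"],
}

print(requests.post(NODE_URL, json=payload, timeout=30).json())
```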

Metrics

Bug fixes and stability

Plugin development

To make Nethermind plugin development easier, we published Nethermind reference assemblies as the Nethermind.ReferenceAssemblies NuGet package and as a separate archive on GitHub Releases [#5496]

Logging

Performance


v1.17.4

20 Apr 14:58

Release notes

This is a low-priority release. Bigger changes will come with the release of v1.18.0 that we're currently testing.

This release is intended only for nodes that are syncing from scratch. If a node is already synced with v1.17.3, an upgrade is unnecessary.

Changes

  • Updated the sync pivots in the configuration file. Nethermind requires a regular update of sync pivots to get the best sync performance. [#5574]
  • Updated the Sepolia bootnodes to address peering problems [#5567]

v1.17.3

23 Mar 14:53

Release notes

This is the Shapella-ready release for the upcoming hard fork on Mainnet.
This release is a mandatory upgrade for all users running Mainnet nodes -- all previous versions will stop working at the transition.

No configuration changes are required on the Nethermind side to work properly on the Shapella hard fork.

Please ensure your consensus client is also upgraded to a Shapella-ready version for a smooth transition.
You can check each consensus client's readiness for the Mainnet hard fork here.

The Mainnet Shapella hard fork is scheduled for 10:27:35 PM UTC on April 12, 2023.

Shanghai

Updated the chain spec to Shapella-ready for the Mainnet [#5455]

v1.17.2

14 Mar 20:11

Release notes

This version is a hotfix addressing an edge-case issue with occasional bad block production, which was spotted mostly on Ethereum testnets and the Gnosis chain.

Block production

Fixed bad block production. The issue appeared only when an empty block was scheduled to be created. [#5411]

v1.17.1

07 Mar 16:51

Release notes

This release is mandatory for those who have a node running on Goerli -- previous versions will not work properly there.
For others, this release can be treated as optional.

The Goerli Shapella hard fork is scheduled for 10:25:36 PM UTC on March 14, 2023.

Shanghai

  • Updated the chain spec for Goerli (Shapella ready) [#5366]
  • engine_getPayloadBodiesBy* methods marked as optional capabilities. This removes warnings that could be displayed when the consensus client doesn't use those endpoints yet. [#5346]

Proto-Danksharding

  • Enabled eth68: Adds the transaction type and size to tx announcement messages in the wire protocol [#5356]

v1.17.0

17 Feb 16:02

Release notes

In this release, you can find the first version of withdrawals, which is currently used on the Zhejiang network (configs are added in this release; check below for more details).
Also, there are plenty of performance and stability improvements to make our node faster and even more predictable.

Additionally, Sepolia configs, including the timestamp for the Shanghai hard fork, are added. This version is mandatory for those who have a node running on Sepolia -- previous versions will not work properly there.
The Sepolia Shapella hard fork is scheduled for 4:04:48 AM UTC on February 28, 2023.

Shanghai

  • Withdrawals implementation ready and under testing on Zhejiang public testnet [#4731][#5139]
  • engine_getPayloadBodiesByHashV1 and engine_getPayloadBodiesByRangeV1 adjusted to the latest spec [#5210]
  • Configs for the Zhejiang network (Shanghai public testnet) [#5240]
    Here is a tweet from our core dev @MarekM25 about how to use it and start experimenting with the post-Shapella testnet today!
  • Unify Engine API failure responses [#5154]
  • Fixes for EIP-3860 [#5081] [#5147]
  • Introduced a null value for latestValidHash when the latest VALID ancestor is unknown, making the engine_newPayloadV1 response semantically consistent with consensus clients [#5203]
  • Implemented the engine_exchangeCapabilities method as specified in the spec (see the sketch after this list) [#5212]
  • Integrate engine_exchangeCapabilities in health-checks (Issue #5185) [#5244]
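
For illustration, a minimal sketch of calling engine_exchangeCapabilities on the authenticated Engine API port. The port, the JWT secret path, and the capability list are assumptions about a typical setup; it uses the PyJWT library for the bearer token.

```python
# Hypothetical engine_exchangeCapabilities call over the authenticated Engine
# API. Port 8551, the secret path, and the capability list are assumptions.
import time

import jwt        # PyJWT
import requests

ENGINE_URL = "http://localhost:8551"       # assumed Engine API endpoint
JWT_SECRET_PATH = "/path/to/jwtsecret"     # placeholder path to the hex-encoded secret

secret_hex = open(JWT_SECRET_PATH).read().strip().removeprefix("0x")
token = jwt.encode({"iat": int(time.time())}, bytes.fromhex(secret_hex), algorithm="HS256")

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "engine_exchangeCapabilities",
    # The single parameter is the list of Engine API methods the caller supports.
    "params": [["engine_newPayloadV1", "engine_forkchoiceUpdatedV1", "engine_getPayloadV1"]],
}

headers = {"Authorization": f"Bearer {token}"}
print(requests.post(ENGINE_URL, json=payload, headers=headers, timeout=30).json())
```

The node responds with the list of Engine API methods it supports.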

Proto Danksharding

  • Added the possibility to set different chainId and networkId in chain spec [#4850]
  • Limit size of TransactionMessage by data size instead of tx count [#4917]
  • Added specification, configs, new transaction type and header fields [#4867]
  • Added DATAHASH opcode [#4894]
  • Added Point evaluation precompile for EIP-4844 [#4890]

JSON-RPC

  • trace_block: Added zero reward post-merge, similarly to pre-merge (Issue #4616) [#5054]
  • Reduced RAM consumption for trace_block [#5090]
  • Increased JsonRpc.MaxBatchSize to 1024. Together with the Prysm team, we noticed a small issue where Prysm can send slightly more than 1000 requests in a single batch, so we adjusted this behavior for them until prysmaticlabs/prysm#11982 is released. [#5286]

Sync & Networking

  • Fix edge case on BeaconHeaders stage causing sync to hang until a better peer is found [#5102]
  • Improved OldHeaders processing on node restart [#5112][#5119]
  • Fixed node disconnection from peers due to an error in initial sync, specifically when an empty receipt was received [#5120]
  • Added session direction to the sync peer report, which shows whether the peer is an incoming or outgoing connection [#5258]
  • Fix establishing a connection on eth/67 [#5113]
  • Fix edge case when transitioning from fast sync [#5146]

TxPool

  • Fixed an edge case with nonce incrementation when eth_sendTransaction is executed multiple times (Issue #4845). [#4926]

Configs

  • Updated Chiado bootnodes [#5239]
  • Removed Kiln testnet configs [#5181]
  • Removed sokol chain and all related items/tests/configs/chainspecs [#5243]
  • Added Sepolia Shanghai configs. The hard fork on the Sepolia chain will happen at 4:04:48 AM UTC on February 28, 2023. [#5280]

Performance

  • Great performance improvements by @benaadams [#5186][#5196][#5206][#5216][#5251]
  • Force garbage collections when visiting state trie to reduce memory pressure, e.g. in full pruning (Issue #4698) [#4699]
  • Force garbage collections after finishing sync stages to reduce memory pressure [#5089]
  • Reduce memory allocations when serving headers, bodies, and receipts [#5145]
  • Limit memory allocations in networking code on CPUs with high core counts [#5236]
  • Other code optimizations [#5213][#4623]

Gnosis

  • Fixed the issue when Aura validators sometimes fork off the chain on restarts [#4679]

  • Renamed xDai to Gnosis. This includes existing chain specs and configuration files. For new nodes, the recommended option is now -c gnosis. The old -c xdai is still supported. [#5057]

    ⚠️
    If you want to use an existing node with the new name, you need to move the database from the old directory to the one used in
    gnosis.cfg. For example, <datadir>/nethermind_db/xdai should be moved to <datadir>/nethermind_db/gnosis.
    The same procedure applies to archive nodes using the gnosis_archive.cfg config.

v1.16.1

23 Jan 17:36

Release notes

This release brings important stability fixes and performance improvements, including:

  • Background pruning: Reduces node latency spikes when processing blocks, leading to better attestation performance on Validator nodes
  • Health check: Users will now be notified about low disk space
  • Memory and performance: Performance boost and memory consumption reduction due to the upgraded runtime and database engine

JSON-RPC

  • Improved JSON-RPC batch calls. Batch results are now streamed, which reduces memory usage during batched calls (see the batch request sketch after this list). [#5134]
    New flags are introduced to limit batched calls on a node:

    • --JsonRpc.MaxBatchSize - Limits the batch size for batched JSON-RPC calls. The default value is 1000.
    • --JsonRpc.MaxBatchResponseBodySize - Limits the max response body size when using batch requests. Subsequent requests are trimmed
      with an error response. The default value is 30 MB.
      ⚠️ IMPORTANT
      We noticed that this limit is enough for most consensus clients, but there are still some unexpected behaviors with the Prysm client.
      We are working with the Prysm developers to fix this, since the issue needs to be fixed on both sides, but there is a simple workaround.
      If you notice a warning like the following in your Prysm logs:
      Unable to cache headers for execution client votes" error="json: cannot unmarshal object into Go value of type []rpc.jsonrpcMessage
      restart the Nethermind node with --JsonRpc.MaxBatchSize set to 10000.
  • New engine_getPayloadBodiesByRange Engine API endpoint as part of the Shanghai hard fork. Consensus clients can use this endpoint for better performance instead of eth_getBlockByNumber [#4939]

  • Fixed logs ordering of eth_getLogs when blooms are disabled [#5033]

  • Fixed trace_transaction so that it doesn't contain precompile sub-traces [#4410]
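
As referenced above, a minimal sketch of a JSON-RPC batch call, to illustrate what the JsonRpc.MaxBatchSize and JsonRpc.MaxBatchResponseBodySize limits apply to. The endpoint and block numbers are placeholders.

```python
# Hypothetical JSON-RPC batch request: a single HTTP POST whose body is a JSON
# array of call objects. Endpoint and block numbers are placeholders.
import requests

NODE_URL = "http://localhost:8545"   # assumed local JSON-RPC endpoint

batch = [
    {
        "jsonrpc": "2.0",
        "id": i,
        "method": "eth_getBlockByNumber",
        "params": [hex(17_000_000 + i), False],
    }
    for i in range(10)               # keep the batch well under MaxBatchSize
]

responses = requests.post(NODE_URL, json=batch, timeout=30).json()
print(len(responses), "responses received")
```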

Sync & Networking

  • Improved state sync performance with better parallelism; observed state sync runs a few times faster than before. [#4921]
  • Fixed issue causing a snap sync stage on Sepolia to hang on 0% for a long time [#5059]
  • Fixed edge case when sync is stuck and unable to proceed further until the node was restarted [#5055]

Pruning

  • In previous versions, pruning blocked block processing, which could occasionally increase block latency. In this version, pruning is moved to a background thread. This change should reduce block processing latency spikes, leading to better attestation performance on validator nodes. [#4626]

Gnosis

  • Renamed xDai to Gnosis. This includes existing chain specs and configuration files. For new nodes, the recommended option is now -c gnosis. The old -c xdai is still supported. [#5057]

⚠️
If you want to use an existing node with the new name, you need to move the database from the old directory to the one used in gnosis.cfg. For example, <datadir>/nethermind_db/xdai should be moved to <datadir>/nethermind_db/gnosis.
The same procedure applies to archive nodes using the gnosis_archive.cfg config.

Runtime and database

  • Updated to .NET 7 [#4889]
  • Updated to RocksDB v7.7. Solves issues with syncing on specific ARM CPUs. [#5065]

Health check

Added free disk space checks for the drives configured as DB locations to prevent data corruption due to disks being full. Two configurable thresholds were added [#4837]:

  • HealthChecks.LowStorageSpaceWarningThreshold: The percentage of free disk space below which a warning is added to the console as well as to the health checks. The default value is 5 (5% of free space).
  • HealthChecks.LowStorageSpaceShutdownThreshold: The percentage of free disk space below which the client shuts down. The default value is 1 (1% of free space).

Other changes

  • Added logging CPU type at startup [#5016]
  • Disabled color output for CLI [#4785]
  • Added Exosama network support [#5008]
  • Dropped support of obsolete configurations [#5064]
  • Various performance improvements (thank you @benaadams) [#5027] [#5009] [#5030] [#5079]

We'd love to hear from you, so if you encounter an issue or have any feedback, please open an issue or contact us on Discord.