
dev: add LRU cache layer to block fetching client-side #947

Open
EvolveArt opened this issue Jul 31, 2023 · 7 comments · May be fixed by #1083
Labels
backlog (Ready to be picked) · enhancement (New feature or request) · stale

Comments

@EvolveArt
Collaborator

/// Manage LRU caches for block data and their transaction statuses.
/// These are large and take a lot of time to fetch from the database.
/// Storing them in an LRU cache will allow to reduce database accesses
/// when many subsequent requests are related to the same blocks.
pub struct EthBlockDataCacheTask<B: BlockT>(mpsc::Sender<EthBlockDataCacheMessage<B>>);

This code was taken from frontier and is used extensively in their RPC layer. We should investigate adding it on our side and evaluate the performance gain using https://github.com/keep-starknet-strange/gomu-gomu-no-gatling
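To make the pattern concrete, here is a minimal self-contained sketch of the frontier-style design: a background task exclusively owns the LRU maps and serves requests over an mpsc channel, so RPC handlers never need to lock the cache. The `BlockHash`/`BlockData` aliases and all names below are illustrative stand-ins, not madara's or frontier's real types, and std channels/threads are used in place of frontier's tokio task.

```rust
use std::collections::VecDeque;
use std::sync::mpsc;
use std::thread;

// Stand-in types for illustration only.
type BlockHash = u64;
type BlockData = Vec<u8>;

enum CacheMessage {
    RequestBlock { hash: BlockHash, reply: mpsc::Sender<Option<BlockData>> },
    FetchedBlock { hash: BlockHash, data: BlockData },
}

// Minimal LRU: most recently used at the back, evict from the front.
struct Lru {
    capacity: usize,
    entries: VecDeque<(BlockHash, BlockData)>,
}

impl Lru {
    fn new(capacity: usize) -> Self {
        Lru { capacity, entries: VecDeque::new() }
    }

    fn get(&mut self, hash: BlockHash) -> Option<BlockData> {
        let pos = self.entries.iter().position(|(h, _)| *h == hash)?;
        // Refresh recency by moving the hit entry to the back.
        let entry = self.entries.remove(pos)?;
        self.entries.push_back(entry);
        self.entries.back().map(|(_, d)| d.clone())
    }

    fn put(&mut self, hash: BlockHash, data: BlockData) {
        if let Some(pos) = self.entries.iter().position(|(h, _)| *h == hash) {
            self.entries.remove(pos); // replace existing value
        } else if self.entries.len() == self.capacity {
            self.entries.pop_front(); // evict the least recently used entry
        }
        self.entries.push_back((hash, data));
    }
}

// Task handle, analogous to EthBlockDataCacheTask wrapping an mpsc::Sender.
struct BlockDataCacheTask(mpsc::Sender<CacheMessage>);

impl BlockDataCacheTask {
    fn new(capacity: usize) -> Self {
        let (tx, rx) = mpsc::channel();
        // The spawned task is the sole owner of the cache state.
        thread::spawn(move || {
            let mut cache = Lru::new(capacity);
            for msg in rx {
                match msg {
                    CacheMessage::RequestBlock { hash, reply } => {
                        let _ = reply.send(cache.get(hash));
                    }
                    CacheMessage::FetchedBlock { hash, data } => cache.put(hash, data),
                }
            }
        });
        BlockDataCacheTask(tx)
    }

    fn insert(&self, hash: BlockHash, data: BlockData) {
        let _ = self.0.send(CacheMessage::FetchedBlock { hash, data });
    }

    fn get(&self, hash: BlockHash) -> Option<BlockData> {
        let (reply, rx) = mpsc::channel();
        self.0.send(CacheMessage::RequestBlock { hash, reply }).ok()?;
        rx.recv().ok()?
    }
}
```

Because all messages from one handle travel over a single channel to a single task, inserts and lookups are processed in order without any explicit locking, which is the main appeal of this design for a busy RPC layer.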

@EvolveArt EvolveArt added the enhancement New feature or request label Jul 31, 2023
@tdelabro tdelabro added the backlog Ready to be picked label Aug 11, 2023
@tchataigner

Since @tdelabro pointed me to this issue, I can try to work on this for our RPC layer. I'd be glad to try out performance testing as well.

@tdelabro
Collaborator

Yes, sure. Try to draw inspiration from the frontier implementation: https://github.com/paritytech/frontier

@tchataigner

Hey! Sorry for not giving any news before today; I was laid up with a bad cold.

I have been wondering about a detail in the cache implementation which I think is still important to settle. In the frontier implementation, we can see that they have a max_size parameter which limits the cache's total stored size to a given threshold.

However, I think this implementation has one flaw: it only looks at the size of the stored values, not at the allocated size (e.g. for a value "a", the value size is 1 byte but the allocated size is 148 bytes). In the context of P2P networks and precise memory management, I was wondering if I should:

  1. Keep the same memory management as frontier, which is imprecise but allows more values to be stored.
  2. Switch the accounting to allocated memory, allowing for more precise memory predictability.
  3. Have a middle ground where either or both could be selected.

Eager to get your take on that 🙌
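The gap between the two accounting options above can be shown with a tiny example. This is an illustration only: the helper names are made up, and the "148 bytes for value `a`" figure from the comment is allocator-dependent, so the sketch uses `Vec::with_capacity` to simulate over-allocation.

```rust
// Option 1 (frontier-style): count only the bytes actually stored.
fn value_size(v: &Vec<u8>) -> usize {
    v.len()
}

// Option 2: count the memory actually reserved — the heap capacity plus
// the Vec header itself (pointer + length + capacity words).
fn allocated_size(v: &Vec<u8>) -> usize {
    v.capacity() + std::mem::size_of::<Vec<u8>>()
}
```

A one-byte value held in a buffer pre-allocated for 148 bytes charges 1 byte to the budget under option 1 but well over 148 bytes under option 2, which is exactly the imprecision-versus-predictability trade-off the comment describes.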

@tdelabro
Collaborator

tdelabro commented Sep 5, 2023

I think you can go with the easier one at first.
Then, if it's not too difficult, switch to the second one, which is more precise.

tchataigner added a commit to tchataigner/madara that referenced this issue Sep 6, 2023
This commit introduces a base LRU cache that will be used for the RPC layer. It is inspired by frontier's code base:
https://github.com/paritytech/frontier/blob/194f6bb06152402ba44b340c3d401ae6e0670d96/client/rpc/src/eth/cache/mod.rs#L76

Extending it, our approach proposes two types of limiter:
- Values size limiter: A limiter that will base its cache management on the total size of all values stored in the cache
- Allocated size limiter: A limiter that will base its cache management on the allocated memory for the cache

This allows for either approximate or precise memory management, which could prove useful in our P2P context.
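The two limiter types the commit describes can be sketched as implementations of a shared trait, so the cache charges each value against its budget through whichever accounting policy is selected. The trait and type names below are assumptions for illustration, not the actual madara API.

```rust
// Cost charged to the cache's byte budget for one stored value.
trait SizeLimiter {
    fn cost(&self, value: &Vec<u8>) -> usize;
}

// Values size limiter: approximate, counts only stored bytes.
struct ValuesSizeLimiter;
impl SizeLimiter for ValuesSizeLimiter {
    fn cost(&self, value: &Vec<u8>) -> usize {
        value.len()
    }
}

// Allocated size limiter: precise, counts the heap capacity reserved.
struct AllocatedSizeLimiter;
impl SizeLimiter for AllocatedSizeLimiter {
    fn cost(&self, value: &Vec<u8>) -> usize {
        value.capacity().max(value.len())
    }
}

// Would inserting `value` keep the cache within `max_size` bytes?
fn fits(limiter: &dyn SizeLimiter, current: usize, value: &Vec<u8>, max_size: usize) -> bool {
    current + limiter.cost(value) <= max_size
}
```

With a pluggable trait like this, the "middle ground" from the earlier discussion falls out for free: the caller picks a limiter at cache construction time, and the eviction logic stays identical.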
tchataigner added a commit to tchataigner/madara that referenced this issue Sep 6, 2023
tchataigner added a commit to tchataigner/madara that referenced this issue Sep 8, 2023
We added a StorageValue for blocks in the pallet_starknet, allowing us to retrieve block data through storage.
tchataigner added a commit to tchataigner/madara that referenced this issue Sep 8, 2023
tchataigner added a commit to tchataigner/madara that referenced this issue Sep 10, 2023
We chose to keep memory management by allocated memory as this allows for a more precise control.
tchataigner added a commit to tchataigner/madara that referenced this issue Sep 10, 2023
@tchataigner tchataigner linked a pull request Sep 10, 2023 that will close this issue
tchataigner added a commit to tchataigner/madara that referenced this issue Sep 10, 2023
tchataigner added a commit to tchataigner/madara that referenced this issue Sep 11, 2023
@github-actions

github-actions bot commented Oct 6, 2023

There hasn't been any activity on this issue recently, and in order to prioritize active issues, it will be marked as stale.
Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by leaving a 👍
Because this issue is marked as stale, it will be closed and locked in 7 days if no further activity occurs.
Thank you for your contributions!

@tdelabro
Collaborator

tdelabro commented Oct 9, 2023

@tchataigner has been working on L1-L2 messaging recently.
If someone wants to pick up the work on this issue where he left off, that would be great.

@github-actions github-actions bot removed the stale label Oct 10, 2023

github-actions bot commented Nov 9, 2023

There hasn't been any activity on this issue recently, and in order to prioritize active issues, it will be marked as stale.
Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by leaving a 👍
Because this issue is marked as stale, it will be closed and locked in 7 days if no further activity occurs.
Thank you for your contributions!
