RPC cache system #1083
Conversation
This commit introduces a base LRU cache that will be used for the RPC layer. It is inspired by frontier's code base: https://github.com/paritytech/frontier/blob/194f6bb06152402ba44b340c3d401ae6e0670d96/client/rpc/src/eth/cache/mod.rs#L76

Extending it, our approach proposes two types of limiter:
- Values size limiter: bases its cache management on the total size of all values stored in the cache.
- Allocated size limiter: bases its cache management on the memory allocated for the cache.

This allows for either approximate or precise memory management, which could prove useful in our P2P context. A sketch of the idea follows.
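As a rough illustration of the two strategies, here is a minimal, self-contained sketch; the names (`Limiter`, `LruCache`) and the capacity-based estimate for allocated size are illustrative assumptions, not the PR's actual types:

```rust
use std::collections::{HashMap, VecDeque};
use std::hash::Hash;

// Which budget the cache enforces when deciding to evict.
enum Limiter {
    // Approximate: bound the summed size reported for each stored value.
    ValuesSize { max_bytes: usize },
    // Precise: bound the memory allocated for the cache itself.
    AllocatedSize { max_bytes: usize },
}

struct LruCache<K, V> {
    limiter: Limiter,
    map: HashMap<K, V>,
    order: VecDeque<K>, // least-recently-used key sits at the front
    values_bytes: usize,
}

impl<K: Hash + Eq + Clone, V> LruCache<K, V> {
    fn new(limiter: Limiter) -> Self {
        Self { limiter, map: HashMap::new(), order: VecDeque::new(), values_bytes: 0 }
    }

    // `size_of` lets the caller report a value's size, e.g. its encoded length.
    // Re-inserting an existing key is not handled in this sketch.
    fn insert(&mut self, key: K, value: V, size_of: impl Fn(&V) -> usize) {
        self.values_bytes += size_of(&value);
        self.map.insert(key.clone(), value);
        self.order.push_back(key);
        while self.over_limit() {
            let Some(oldest) = self.order.pop_front() else { break };
            if let Some(evicted) = self.map.remove(&oldest) {
                self.values_bytes -= size_of(&evicted);
            }
        }
    }

    // A hit refreshes the key's recency.
    fn get(&mut self, key: &K) -> Option<&V> {
        if let Some(pos) = self.order.iter().position(|k| k == key) {
            let k = self.order.remove(pos).unwrap();
            self.order.push_back(k);
        }
        self.map.get(key)
    }

    fn over_limit(&self) -> bool {
        match self.limiter {
            Limiter::ValuesSize { max_bytes } => self.values_bytes > max_bytes,
            // A real allocated-size limiter would track the allocator's actual
            // bookkeeping; the map's capacity is a rough stand-in here.
            Limiter::AllocatedSize { max_bytes } => {
                self.map.capacity() * std::mem::size_of::<(K, V)>() > max_bytes
            }
        }
    }
}
```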
We added a `StorageValue` for blocks in `pallet_starknet`, allowing us to retrieve block data through storage (sketched below).
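A hedged sketch of such a storage item, using the `CurrentBlock` name the PR description mentions; the value type, attributes, and surrounding pallet module are assumptions and may differ from the actual pallet_starknet code:

```rust
// Hypothetical pallet storage declaration; `Block` stands in for whatever
// block type pallet_starknet actually stores.
#[pallet::storage]
#[pallet::getter(fn current_block)]
pub(super) type CurrentBlock<T: Config> = StorageValue<_, Block, ValueQuery>;
```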
We chose to keep memory management by allocated memory, as it allows for more precise control.
Codecov Report

Patch coverage:

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #1083      +/-   ##
==========================================
- Coverage   40.71%   40.36%   -0.35%
==========================================
  Files          96       98       +2
  Lines       11069    11281     +212
  Branches    11069    11281     +212
==========================================
+ Hits         4507     4554      +47
- Misses       6050     6209     +159
- Partials      512      518       +6

☔ View full report in Codecov by Sentry.
Hey @tdelabro, here is a first draft for the RPC cache. As we discussed under the issue, I've implemented the cache to work with allocated memory size instead of values size to make it more precise. Right now, it seems to fail the integration tests, which is strange as it passes on my computer. I'll look into it. The only problem I have is that I think the benchmark tests to evaluate the cache efficiency are not there. I wanted to add some to try and measure it, but I'm having trouble identifying how to pull that off. Could I ask for your input on that? Ideally, I think we want to spam a request in a loop against our RPC methods that concern block fetching (e.g. `get_block_by_block_hash`). Also, I'd be interested in your opinion of the choice I made to implement a new `StorageValue` in the pallet.
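One possible shape for such a benchmark, as a rough sketch: hammer a block-fetch method over JSON-RPC and report the average latency. The endpoint, port, and method name are assumptions here (Substrate's default HTTP RPC port and a method name from the Starknet JSON-RPC spec), and the `reqwest`/`serde_json` dependencies are not part of this PR:

```rust
use std::time::Instant;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed local node endpoint (Substrate's default HTTP RPC port).
    let url = "http://localhost:9933";
    let client = reqwest::blocking::Client::new();
    // Method name taken from the Starknet JSON-RPC spec; adjust to whichever
    // block-fetch method the cache actually backs.
    let body = serde_json::json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "starknet_getBlockWithTxHashes",
        "params": { "block_id": "latest" },
    });
    let iterations: u32 = 1_000;
    let start = Instant::now();
    for _ in 0..iterations {
        // Requesting the same block repeatedly should be served from the cache
        // after the first hit, so average latency reflects cache efficiency.
        client.post(url).json(&body).send()?.error_for_status()?;
    }
    println!("average latency: {:?}", start.elapsed() / iterations);
    Ok(())
}
```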
Left a few comments.
```rust
}

#[cfg(test)]
mod tests {
```
Is there a way we can add tests for the metrics too?
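For illustration, a minimal test in the spirit of this request, written against the hypothetical `LruCache` sketch above rather than the PR's real types; a metrics test proper would additionally assert on hit/miss counters:

```rust
#[test]
fn values_size_limiter_evicts_oldest_entry() {
    let mut cache = LruCache::new(Limiter::ValuesSize { max_bytes: 32 });
    cache.insert("a", vec![0u8; 24], |v: &Vec<u8>| v.len());
    cache.insert("b", vec![0u8; 24], |v: &Vec<u8>| v.len());
    // 48 bytes exceed the 32-byte budget, so "a" (least recently used) is evicted.
    assert!(cache.get(&"a").is_none());
    assert!(cache.get(&"b").is_some());
}
```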
```rust
/// Represents a pair of a cache and a data waitlist, used to communicate
/// information between our internal methods.
struct CacheWaitlist<B: BlockT, T> {
    cache: LRUCache<<B as BlockT>::Hash, T>,
```
Not sure we need to use this hasher here, though. We should probably optimize for performance here with blake2.
```rust
}

/// Cache for `get_block_by_block_hash`.
pub async fn get_block_by_block_hash(
```
What do you think about adding one for tx status, like it's done in frontier? With 0.12.1 this now makes total sense.
```rust
get_block_by_block_hash(self.client.as_ref(), substrate_block_hash).unwrap_or_default();
let schema = mc_storage::onchain_storage_schema(self.client.as_ref(), substrate_block_hash);

let block = self.data_cache.get_block_by_block_hash(schema, substrate_block_hash).await.unwrap_or_default();
```
Let's refactor this code:

```rust
let schema = mc_storage::onchain_storage_schema(self.client.as_ref(), substrate_block_hash);
let block = self.data_cache.get_block_by_block_hash(schema, substrate_block_hash).await.unwrap_or_default();
```

to something like `let block = self.get_block_from_cache(substrate_block_hash)`.
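A hedged sketch of what that helper could look like, reusing the two lines above; the receiver type, generics, and returned block type (`StarknetBlock`) are placeholders, not the PR's actual signature:

```rust
// Hypothetical helper wrapping the schema lookup and the cache read.
async fn get_block_from_cache(&self, substrate_block_hash: B::Hash) -> StarknetBlock {
    let schema = mc_storage::onchain_storage_schema(self.client.as_ref(), substrate_block_hash);
    self.data_cache
        .get_block_by_block_hash(schema, substrate_block_hash)
        .await
        .unwrap_or_default()
}
```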
Co-authored-by: 0xevolve <Artevolve@yahoo.com>
There hasn't been any activity on this pull request recently, and in order to prioritize active work, it has been marked as stale.
Pull Request type
Please add the labels corresponding to the type of changes your PR introduces:
What is the current behavior?
In the current implementation of the madara client, we fetch block data by going through the generated logs of our pallet. As block data is important and we need fast retrieval, it seems important to:

- add a `StorageValue` in our pallet

Resolves: #947
What is the new behavior?
Right now, the updates to the code base are as follows:

- Added a `CurrentBlock` type in the starknet pallet, representing a `StorageValue` containing our block data. Also added a getter to retrieve it through our runtime API.
- The new flow is: on a block-fetch RPC call, the block is first looked up in the LRU cache; on a miss, it is read from storage and inserted into the cache.
- The cache size is configurable through `--starknet-log-block-cache-size`. For now, the default value is 200 (which will most likely cache little, if anything). See the sketch below.
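For illustration, the flag could be declared with clap's derive API roughly as below; the struct and field names are assumptions, and only the long flag name and the default of 200 come from the PR text:

```rust
// Hypothetical CLI parameter, clap 3 derive style as used by Substrate-based
// nodes of this era; the actual madara CLI code may differ.
#[derive(Debug, clap::Parser)]
pub struct StarknetCacheParams {
    /// Maximum size of the RPC block cache.
    #[clap(long = "starknet-log-block-cache-size", default_value = "200")]
    pub starknet_log_block_cache_size: usize,
}
```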
Does this introduce a breaking change?

No
Other information
Original issue: #947