
RPC performance testing #4224

Open
LesnyRumcajs opened this issue Apr 17, 2024 · 0 comments

Issue summary

Forest claims to be a lightweight Filecoin node implementation. We have numbers proving that Forest is objectively faster and lighter on resources when exporting snapshots.

That said, we have no such numbers for other RPC methods. Can Forest endpoints handle more than 1,000 requests per second? A million? We were asked for these numbers at one point and, sadly, had none to give. If we get better performance out of cheaper hardware, let's have numbers showing it; the benchmark doesn't have to run in CI, but it should run at least quarterly. Anyone trying to convince a NO would have a much easier time with a table of results (and a portable script allowing this kind of testing on their own hardware).

We need a tool to measure the performance of Filecoin nodes in terms of RPC handling. This tool should be node-agnostic and extremely easy to use (Docker everywhere!). This opens up a few usage scenarios (a rough sketch of such a load generator follows the list):

  • Forest vs Lotus benchmarking,
  • Forest vs Forest performance regression tests,
  • Lotus vs Lotus performance regression tests.
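
To make the scenarios above concrete, here is a minimal, hypothetical sketch of the node-agnostic core: a plain JSON-RPC load generator that fires concurrent requests at whatever endpoint it is pointed at and records latencies and failures. It is an illustration only, not a proposed implementation; the crates (`tokio`, `reqwest` with the `json` feature, `serde_json`), the endpoint URL, the hard-coded `Filecoin.ChainHead` method, and the worker counts are all assumptions.

```rust
// Hypothetical load-generator skeleton, not Forest/Lotus code. Assumes the
// tokio (rt-multi-thread + macros), reqwest (json feature), and serde_json
// crates. Endpoint, method, and request counts are placeholders.
use std::time::Instant;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder endpoint: Forest's default RPC port; Lotus would use its own.
    let endpoint = "http://localhost:2345/rpc/v1";
    let concurrency = 50; // parallel workers
    let requests_per_worker = 20; // requests issued by each worker

    let client = reqwest::Client::new();
    let started = Instant::now();

    let mut workers = Vec::new();
    for _ in 0..concurrency {
        let client = client.clone();
        workers.push(tokio::spawn(async move {
            let mut samples = Vec::with_capacity(requests_per_worker);
            for _ in 0..requests_per_worker {
                let body = serde_json::json!({
                    "jsonrpc": "2.0",
                    "id": 1,
                    "method": "Filecoin.ChainHead", // placeholder method
                    "params": []
                });
                let t = Instant::now();
                let resp = client.post(endpoint).json(&body).send().await;
                let ok = matches!(resp, Ok(r) if r.status().is_success());
                samples.push((t.elapsed().as_secs_f64() * 1000.0, ok));
            }
            samples
        }));
    }

    // Collect per-request samples from all workers.
    let mut samples = Vec::new();
    for worker in workers {
        samples.extend(worker.await?);
    }

    let total = samples.len() as f64;
    let failures = samples.iter().filter(|s| !s.1).count();
    let mean_ms = samples.iter().map(|s| s.0).sum::<f64>() / total;
    println!(
        "requests: {total}, failures: {failures}, mean latency: {mean_ms:.2} ms, throughput: {:.1} req/s",
        total / started.elapsed().as_secs_f64()
    );
    Ok(())
}
```

Pointing the same binary (or Docker image) at a Forest endpoint and then at a Lotus endpoint would already cover the first scenario; the two regression-test scenarios only differ in which pair of runs gets compared.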

Potentially, this could be an extension of the conformance tests run via forest-tool api compare (or it could at least reuse a chunk of that code).

This issue concerns designing the tool; the details should be split into smaller issues.

Things to keep in mind:

  • The tool must be performant and allow setting arbitrary benchmark parameters, e.g., available CPUs, number of connections, target QPS. A fantastic source of inspiration for the design would be ghz.
  • The tool must accept a list of methods to benchmark (via a file or command-line arguments).
  • The tool must produce output in a structured format that allows extension and schema changes. What kind of metrics do we want to capture? (A possible schema is sketched after this list.)
  • The output should be persistable and easily comparable between runs.
  • How should the results be validated? Is a valid format plus an HTTP 200 enough to say the response was okay?
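
On the structured-output and validation bullets, a versioned report schema plus a deliberately conservative response check could look like the sketch below. Everything in it is an assumption meant to seed the discussion: the field names, the schema_version convention, the placeholder numbers, and the rule that a response counts as okay only if it returned HTTP 200 and a JSON-RPC envelope containing a result and no error member.

```rust
// Sketch of a versioned result schema and a minimal response check. All field
// names, numbers, and the validity rule are assumptions for discussion, not a
// settled design. Assumes the serde (derive feature) and serde_json crates.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct BenchmarkReport {
    /// Bump this when the schema changes so old reports remain comparable.
    schema_version: u32,
    /// Node under test, e.g. "forest-x.y.z" or "lotus-x.y.z".
    node: String,
    /// Free-form description of the hardware the benchmark ran on.
    hardware: String,
    methods: Vec<MethodResult>,
}

#[derive(Serialize, Deserialize)]
struct MethodResult {
    method: String,
    requests: u64,
    failures: u64,
    qps: f64,
    latency_ms_p50: f64,
    latency_ms_p99: f64,
}

/// Conservative validity check: the caller has already required HTTP 200; here
/// we only confirm the body is a JSON-RPC envelope with a result and no error.
fn response_looks_ok(body: &str) -> bool {
    match serde_json::from_str::<serde_json::Value>(body) {
        Ok(v) => v.get("result").is_some() && v.get("error").is_none(),
        Err(_) => false,
    }
}

fn main() {
    // Placeholder numbers only; nothing here is a measurement.
    let report = BenchmarkReport {
        schema_version: 1,
        node: "forest-dev".to_string(),
        hardware: "8 vCPU / 16 GiB".to_string(),
        methods: vec![MethodResult {
            method: "Filecoin.ChainHead".to_string(),
            requests: 1000,
            failures: 0,
            qps: 250.0,
            latency_ms_p50: 3.1,
            latency_ms_p99: 12.4,
        }],
    };
    // Persist as JSON so runs can be diffed or fed into a comparison script.
    println!("{}", serde_json::to_string_pretty(&report).unwrap());
    assert!(response_looks_ok(r#"{"jsonrpc":"2.0","id":1,"result":[]}"#));
}
```

Persisting one such report per run as JSON keeps the results diffable, which makes the quarterly comparison (and the "easily comparable" requirement above) mostly a matter of loading two files.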

An example result (100% biased) is something along those lines, with the benchmark script here. It's just an idea, though.

Other information and links
