Forest claims to be a lightweight Filecoin node implementation. We have numbers proving that Forest is objectively faster and lighter on resources for snapshot exporting.
That said, we have no such numbers for other RPC methods. Do Forest endpoints support more than 1,000 requests per second? A million? We were asked for these numbers at one point and, sadly, had none. If we perform better on cheaper hardware, let's back that up with numbers - they don't have to come from CI, but the benchmark should run at least quarterly. Anyone trying to convince NO would have a much easier time with a table of results (and a portable script allowing this kind of testing on their hardware).
We need a tool to measure the performance of Filecoin nodes in terms of RPC handling. This tool should be node-agnostic and extremely easy to use (docker everywhere!). This opens a few usage scenarios:
Forest vs Lotus benchmarking,
Forest vs Forest performance regression tests,
Lotus vs Lotus performance regression tests.
Potentially, this could be an extension to conformance tests via forest-tool api compare (or at least it could reuse some chunk of that code).
This issue concerns designing the tool - details should be split into smaller issues.
Things to keep in mind:
tool must be performant and allow setting arbitrary benchmark parameters, e.g., available CPUs, connections, QPS. A fantastic source of design inspiration would be ghz.
tool must accept a list of methods to benchmark (via a file or command-line args),
tool must produce output in a structured format allowing extension and schema changes. What kind of metrics do we want to capture?
the output should be persistable and easily comparable,
How should the results be validated? Is valid format + 200 enough to say the response was okay?
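To make the "arbitrary parameters" point concrete, here is a minimal, hedged sketch of how a config could drive request pacing at a fixed QPS. `BenchConfig` and `schedule` are purely illustrative names, not part of any existing tool; a real harness would feed these offsets into a connection pool.

```python
# Hypothetical sketch: pacing requests at a fixed QPS. All names here are
# illustrative assumptions, not an existing Forest or Lotus API.
import dataclasses


@dataclasses.dataclass
class BenchConfig:
    qps: int          # target queries per second
    duration_s: int   # how long the run lasts
    connections: int  # concurrent connections to spread requests over


def schedule(cfg: BenchConfig) -> list[float]:
    """Return the send-time offset (in seconds) of every request in the run."""
    total = cfg.qps * cfg.duration_s
    return [i / cfg.qps for i in range(total)]


cfg = BenchConfig(qps=100, duration_s=2, connections=8)
offsets = schedule(cfg)
print(len(offsets), offsets[1] - offsets[0])  # 200 requests, 0.01 s apart
```

Exposing the whole config as one value also makes it trivial to persist alongside the results, so two runs can be compared apples-to-apples.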
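On the validation question, one possible answer to "is valid format + 200 enough?" can be sketched as: HTTP 200 plus a well-formed JSON-RPC 2.0 success envelope. The function and the per-method result record below are assumptions for illustration (the metric values are made up to show the shape, not real measurements):

```python
# Hedged sketch: validate a response and shape a structured per-method record.
# `is_ok` and the record layout are illustrative, not an existing schema.
import json


def is_ok(status: int, body: str) -> bool:
    """True iff status is 200 and body is a JSON-RPC 2.0 success envelope."""
    if status != 200:
        return False
    try:
        msg = json.loads(body)
    except json.JSONDecodeError:
        return False
    return (
        isinstance(msg, dict)
        and msg.get("jsonrpc") == "2.0"
        and "id" in msg
        and "result" in msg
        and "error" not in msg
    )


# A structured, extensible record for one benchmarked method (values invented).
record = {
    "method": "Filecoin.ChainHead",
    "requests": 1000,
    "ok": 998,
    "errors": 2,
    "latency_ms": {"p50": 4.2, "p95": 11.0, "p99": 23.5},
}
print(is_ok(200, '{"jsonrpc": "2.0", "id": 1, "result": {}}'))  # True
```

Note this only checks the envelope; whether the `result` payload is semantically correct is exactly what `forest-tool api compare`-style conformance checking could layer on top.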
An example result (100% biased) is something along these lines, with the benchmark script here. It's just an idea, though.
Other information and links