
cmd/evm: Add evm t8n server #29739

Status: Draft (wants to merge 1 commit into master)
Conversation

@marioevz (Member) commented May 8, 2024

Adds a server mode to the evm transition tool, currently as a new subcommand evm t8n-server, but it could easily be added as a flag to the existing evm t8n subcommand.

Two modes are added:

  • HTTP over a TCP port: --port 1234
  • HTTP over a Unix socket: --unix-socket /path/to/file

The TCP port implementation causes some issues with our filler, so the Unix socket flavor is what we are currently using to fill tests.

This speeds up test filling by a significant margin: e.g., filling all Cancun tests takes 70s with the current implementation and 40s using the server.

@winsvega (Contributor) commented May 10, 2024

Could you post what the JSON request would look like?

OK, I made it work; here is the review:

  1. I thought we agreed that all parameters are strings in JSON requests; chainid and reward are expected to be ints.
  2. The txs field could be an RLP string, like what we put in the txs.rlp file for t8n; currently it expects JSON there.
  3. Actually, I can't benefit from this server, since constructing the JSON requests happens inside bash and is not optimised on my side; it is too slow.
  4. Error handling: the server as a tool presumably expects a correct Ethereum JSON structure, but what if a field overflows? It needs to return JSON with an error, e.g.:
     "failed unmarshalling request: json: cannot unmarshal hex number with leading zero digits into Go struct field input.input.txs of type *hexutil.Big"
  5. "reward": -1 is not handled.
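For reference, a request shape consistent with the points above might look like the following. Only input.txs is confirmed by the error message quoted in point 4; the other field names and nesting are illustrative assumptions, not the PR's actual schema. Per point 1, chainid and reward are shown as strings, and per point 2, txs as an RLP hex string:

```json
{
  "input": {
    "alloc": {},
    "env": {},
    "txs": "0xc0"
  },
  "chainid": "1",
  "reward": "0"
}
```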

@holiman (Contributor) commented May 10, 2024

> This speeds up test filling by some significant margin. E.g. filling all cancun tests takes 70s with current implementation and 40s using the server.

I would prefer getting the speeds down without this, if we can. Alternatively, I think we could do it a bit less invasively: for fuzzing evm statetest ..., we already have a "server" mode, where we keep reading filenames from stdin. That way of doing it (a server reading from stdin, sequentially picking up the next task once the current one is done) is very easy to implement outside of the existing logic, as opposed to having an actual HTTP server, URL handlers, and things like that.

@holiman (Contributor) commented May 15, 2024

Btw, here's the speed difference on running a random state-test, first in non-server-mode:

$ time for i in {1..1000}; do ./evm statetest ./test.json   2>/dev/null 1>/dev/null ; done

real    1m9.019s
user    0m53.971s
sys     0m23.604s

Takes roughly 70s. Then using server-mode

$ time yes ./test.json | head -n1000 | ./evm --json --nomemory --noreturndata statetest  2>/dev/null 1>/dev/null

real    0m0.961s
user    0m1.011s
sys     0m0.207s

Roughly 1 second, so a 70x speedup. It seems to me that going from 70s to 40s leaves plenty of room for further speedups.

> filling all cancun tests takes 70s with current implementation and 40s using the server.

Anyway, would it be possible to create a set of inputs to use as a reference, so we can then experiment with different ways of speeding it up? The task would be to "generate the outputs 1000 times, given these inputs" (much like I did above, with the same inputs on every execution).
