Location of participants in Lotus stack, multiple identities in one instance #103

Open · Tracked by #253
Kubuxu opened this issue Feb 29, 2024 · 1 comment · May be fixed by #188

Comments

Kubuxu (Collaborator) commented Feb 29, 2024

For now, I've assumed that the f3 active participant would live in Lotus.
In hindsight, that may not have been such a good assumption.

An active participant is tied to SP code and identity; that flow lives in lotus-miner/lotus-provider.
At the same time, lotus-miner is not connected to the global pubsub (AFAIK).

A single Lotus node can also host multiple providers which, if f3 continues to live there, would necessitate either multiple concurrent f3 instances in one Lotus node or f3 being able to handle multiple identities.

As far as I know, the protocol flow is independent of our own identity, which should make running multiple identities at the same time much easier. We could abstract out signing and VRF generation as part of the broadcast operation. Essentially, the instance itself stops caring about our own identity.
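
For illustration, here is a minimal Go sketch of what that abstraction could look like. Every name in it (UnsignedMessage, Signer, Host) is hypothetical, not actual go-f3 API; the point is just that the protocol instance emits identity-free messages and never touches keys:

```go
// Hypothetical sketch only; not the actual go-f3 types.
package sketch

// PubKey is an opaque public key as resolved from the power table.
type PubKey []byte

// UnsignedMessage is what the protocol instance emits: no sender ID,
// no signature. Identity is stamped on later, per local participant.
type UnsignedMessage struct {
	Instance uint64 // consensus instance number
	Round    uint64 // round within the instance
	Payload  []byte // the identity-free protocol payload
}

// Signer hides key material. It can be implemented in-process or as
// an RPC client talking to lotus-miner, where the keys actually live.
type Signer interface {
	Sign(pub PubKey, payload []byte) ([]byte, error)
}

// Host owns the identities; the protocol instance only ever asks it
// to broadcast, without knowing who "we" are.
type Host interface {
	// RequestBroadcast signs msg as every identity this node runs,
	// publishes the results, and loops them back as incoming messages.
	RequestBroadcast(msg UnsignedMessage) error
}
```

With signing (and VRF generation) behind interfaces like these, the instance itself needs no notion of our own ID at all.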

Kubuxu added this to the F3 Alpha milestone Apr 22, 2024

Kubuxu (Collaborator, Author) commented May 20, 2024

The design I settled on:

  • gpbft.Participant is unaware of the ID it is running as, since the protocol is independent of our own decisions
    • this requires a slight refactor and cleanup in gpbft to remove ParticipantID
  • gpbft requests a given message to be broadcast; that message is universal (not tied to any ID). It is passed to the Host, which knows which IDs it wants to broadcast as and uses the power table from gpbft to resolve those IDs to public keys
  • The Host then builds a payload to be signed with each key. Signing can happen over an RPC boundary (necessary for offloading keys to lotus-miner). This results in serialized payloads that are signed with the given key and returned to the Host for broadcasting
  • When these signed payloads come back for broadcasting, they are also immediately processed as incoming messages (see the sketch after this list)
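
To make that flow concrete, here is a hedged sketch of the host side, continuing the hypothetical sketch package from my earlier comment; none of these names reflect the actual implementation in #188:

```go
// Continuation of the hypothetical sketch package above; illustrative only.
package sketch

// ActorID identifies a participant in the power table.
type ActorID uint64

// SignedMessage is an UnsignedMessage bound to one sender identity.
type SignedMessage struct {
	Sender ActorID
	Msg    UnsignedMessage
	Sig    []byte
}

// host fans one universal message out to every local identity.
type host struct {
	localIDs []ActorID                 // the IDs this node wants to broadcast as
	keys     map[ActorID]PubKey        // resolved via the gpbft power table
	signer   Signer                    // possibly an RPC client to lotus-miner
	publish  func(SignedMessage) error // pubsub broadcast
	deliver  func(SignedMessage)       // feed back into gpbft as an incoming message
}

// marshalForSigning binds the payload to a sender so a signature for one
// identity cannot be replayed as another. A real codec would go here.
func marshalForSigning(id ActorID, msg UnsignedMessage) []byte {
	buf := []byte{byte(id >> 8), byte(id)} // toy encoding for illustration
	return append(buf, msg.Payload...)
}

// RequestBroadcast implements the Host interface from the earlier sketch.
func (h *host) RequestBroadcast(msg UnsignedMessage) error {
	for _, id := range h.localIDs {
		pub, ok := h.keys[id]
		if !ok {
			continue // identity not in the current power table
		}
		payload := marshalForSigning(id, msg)
		sig, err := h.signer.Sign(pub, payload) // may cross an RPC boundary
		if err != nil {
			return err
		}
		sm := SignedMessage{Sender: id, Msg: msg, Sig: sig}
		if err := h.publish(sm); err != nil {
			return err
		}
		h.deliver(sm) // immediately process our own message as incoming
	}
	return nil
}
```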

See #188 for the PR implementing this. The suggestion there was to split the PR into:

  1. async delivery of local messages
  2. removal of ID from gpbft.Participant
  3. addition of the message builder pattern (a rough sketch follows below)
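
For item 3, here is one hedged guess at what a message builder could look like, in the same hypothetical sketch package; the actual builder in #188 may differ substantially:

```go
// Hypothetical builder, continuing the sketch package; not the #188 API.
package sketch

// MessageBuilder carries everything about a message except identity,
// so the host can stamp out one signing payload per local participant.
type MessageBuilder struct {
	msg UnsignedMessage
}

// NewMessageBuilder wraps an identity-free message produced by gpbft.
func NewMessageBuilder(msg UnsignedMessage) *MessageBuilder {
	return &MessageBuilder{msg: msg}
}

// PayloadFor returns the serialized payload to sign for a single ID.
// The host calls it once per identity it controls, signs the result,
// broadcasts it, and delivers it back to gpbft asynchronously (item 1).
func (b *MessageBuilder) PayloadFor(id ActorID) []byte {
	return marshalForSigning(id, b.msg)
}
```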
