
Backwards incompatible change in patch version #520

Open
prestwich opened this issue Sep 11, 2023 · 18 comments

Comments

@prestwich

prestwich commented Sep 11, 2023

Updating from v3.6.4 -> v3.6.8 introduced a backwards-incompatible change, caught by ruint CI here.

The change was in this commit, introduced in #512.

Please consider yanking 3.6.7 & 3.6.8

@ggwpez
Member

ggwpez commented Sep 11, 2023

cc @pgherveou

@bkchr
Member

bkchr commented Sep 11, 2023

Yeah that was my fault 🙈 #512 (comment)

Sorry!

@prestwich
Author

No worries, it's a pretty straightforward fix. It would help to know whether 3.6.8 will also be yanked, or whether we should pin the dependency to <3.6.8 in the next ruint release.

@pgherveou
Contributor

@prestwich this ruint PR should fix it. I'm not super familiar with the project, so reviews are welcome 🙏
recmo/uint#313

@bkchr
Member

bkchr commented Sep 12, 2023

Yeah, maybe we should yank and release 4.0.0. Or does anyone have better ideas?

@pgherveou
Contributor

Yanked it already to be safe. What about just fixing uint and then re-submitting?
Do you expect more projects would start failing?

@prestwich
Author

WRT fixing ruint, we can't change the dep specifier in already-released versions. So if this change gets released outside a major, anyone using current versions of ruint will break. Updating ruint could solve that for most users, but we have at least one user stuck on ruint@1.8.0 (in Solana, which is significantly behind mainline compiler versions and too old for our current MSRV) who would have no upgrade path and would have to pin the parity-scale-codec crate. Not the end of the world.

ruint policy is also to support multiple dep major versions (as dropping support would be a breaking change). So if this becomes parity-scale-codec@4, we would introduce a parity-scale-codec-v4 feature rather than updating the existing support.

So overall, we'd prefer not to update our existing support to the breaking change. If it becomes a major, we're happy to add the new support alongside the existing support
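
For illustration, a hypothetical Cargo.toml sketch of that renamed-dependency pattern (crate names, version bounds, and feature wiring here are assumptions, not ruint's actual manifest):

[dependencies]
# Existing v3 support; hypothetically pinned below the releases reported as breaking.
parity-scale-codec = { version = ">=3.6.4, <3.6.7", optional = true }
# A second major version imported under a renamed dependency, so both can coexist.
parity-scale-codec-v4 = { package = "parity-scale-codec", version = "4", optional = true }

[features]
parity-scale-codec-v4 = ["dep:parity-scale-codec-v4"]

Cargo allows the same package under two names as long as the versions are semver-incompatible, which is what makes supporting both majors side by side possible.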

This is your lib, so let me know what you want to do here. Appreciate the help :)

@pgherveou
Contributor

So, we should release v4.0.0 with the breaking change, and ruint can update later on using the PR I submitted.
Unless we want to roll back and go with the initially suggested commit of #508, which is not as nice but should not break downstream dependencies.

@prestwich
Author

Yep, exactly. Let me know what y'all end up deciding and I'll coordinate any ruint changes.

@bkchr
Member

bkchr commented Sep 13, 2023

@pgherveou how much do you need Compact support for MaxEncodedLen? And could you maybe not use the compact attribute? Maybe that already makes it work right now? Otherwise we can use your "less optimal" solution for now. There exist some issues that we should tackle before we go to 4.0; we could also solve them and then bump.
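
For reference, a minimal sketch of the attribute in question (illustrative type, not from pallet-contracts; assumes the parity-scale-codec derive and max-encoded-len features):

use parity_scale_codec::{Encode, MaxEncodedLen};

// Illustrative only. With `#[codec(compact)]` the field is compact-encoded
// (variable length); per this thread, deriving MaxEncodedLen for such
// compact fields is what #512 added.
#[derive(Encode, MaxEncodedLen)]
struct Deposit {
    #[codec(compact)]
    amount: u128,
}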

@pgherveou
Contributor

pgherveou commented Sep 13, 2023

We have this function here, read_sandbox_memory_as, which reads up to MaxEncodedLen bytes to decode types from the contract memory. This could fail with a max_encoded_len fn that returns too small a value for a struct with compact fields.

It looks like we don't use the Compact type now, and arbitrary contract storage does not rely on this method, so we should be safe for now, but I would like to be able to read existing types that do define Compact fields in the future.

https://github.com/paritytech/polkadot-sdk/blob/f7c95c5fc12d3abde866bc1ee106909f950e5427/substrate/frame/contracts/src/wasm/runtime.rs#L627-L638
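
A minimal sketch of the pattern described above (illustrative names, error type, and nesting limit; not the actual pallet-contracts source):

use parity_scale_codec::{Decode, DecodeLimit, MaxEncodedLen};

// Assumed limit for illustration; pallet-contracts defines its own.
const MAX_DECODE_NESTING: u32 = 256;

// Stand-in for read_sandbox_memory_as: read at most T::max_encoded_len()
// bytes starting at the contract-supplied pointer, erroring if that range
// leaves the sandbox memory, then decode with a depth limit.
fn read_memory_as<T: Decode + MaxEncodedLen>(
    memory: &[u8],
    ptr: usize,
) -> Result<T, &'static str> {
    let end = ptr
        .checked_add(T::max_encoded_len())
        .ok_or("pointer overflow")?;
    let mut bounded = memory.get(ptr..end).ok_or("out of bounds")?;
    T::decode_with_depth_limit(MAX_DECODE_NESTING, &mut bounded)
        .map_err(|_| "decoding failed")
}

Note the bounds check fails whenever ptr + max_encoded_len() exceeds the sandbox memory, which is the OutOfBounds behavior discussed below.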

@bkchr
Member

bkchr commented Sep 13, 2023

Without wanting to look too deep into this: isn't read_sandbox_memory_as trusted code that is being used by pallet-contracts from the runtime side? Why do we need MaxEncodedLen there at all?

@pgherveou
Contributor

It is invoked by the runtime when a contract invokes a host function from pallet-contracts.
The contract passes a pointer into its memory, and we read from that pointer up to max_encoded_len of the type we need to read and decode, using decode_with_depth_limit.

@bkchr
Member

bkchr commented Sep 29, 2023

But why not just directly decode from the memory?

@pgherveou
Contributor

That's what we do: we decode the type from the contract's memory. We just need a bounds check to make sure that we are not reading outside of the sandbox memory.

@bkchr
Member

bkchr commented Oct 1, 2023

But at the point you are reading it, you are in trusted code? And you are declaring the type, not the contract. I really don't get why you need MaxEncodedLen for your use case.

@pgherveou
Contributor

(call $seal_terminate
	(i32.const 65536)  ;; Pointer to "account" address (out of bound).
)

Here, reading AccountId::max_encoded_len() bytes from the specified account address will trigger an OutOfBounds error.

@bkchr
Member

bkchr commented Oct 1, 2023

You can just change this code here to this:

		let mut bound_checked = memory
			.get(ptr..)
			.ok_or_else(|| Error::<E::T>::OutOfBounds)?;
		// Decode directly from the remaining memory; Decode itself detects
		// truncated input, so no MaxEncodedLen bound is needed.
		let decoded = D::decode_with_depth_limit(MAX_DECODE_NESTING, &mut bound_checked)?;

Decode will return an error if there aren't enough bytes in the data to decode the type.
