chain-spec building failure due to WASM allocation size #5419

I'm trying to build a chain-spec with a large number of nominators and validators for the multi-block election tests, but given the number of stakers to add, the chain-spec generation fails with a WASM allocation error.
Is there any way to increase the WASM allocation size at spec-building time?
This issue seems similar to paritytech/substrate#11132, but at chain-spec building time rather than during block production.
Could you try this? It seems that this error is reported when the allocation exceeds the value defined in polkadot-sdk/substrate/primitives/core/src/lib.rs, lines 408 to 411 (at 178e699).
From what I know, it is not possible to change this value during chain-spec building.
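For reference, the lines pointed at above define the hard cap the runtime allocator enforces on a single allocation. A sketch of that constant as it appears in sp-core (the exact value and wording should be checked against the pinned commit):

```rust
// From substrate/primitives/core/src/lib.rs (sketch; verify against 178e699).
// This is the hard cap enforced on any single allocation made by the runtime:

/// The maximum possible allocation size as defined in the allocator.
pub const MAX_POSSIBLE_ALLOCATION: u32 = 33554432; // 2^25 bytes, 32 MiB
```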
@michalkucharczyk thanks for the tip. It also required a few other changes, but in the end I got it working. For the record, the solution has been posted on Stack Exchange: https://substrate.stackexchange.com/questions/11863/allocate-extra-wasm-memory-to-generate-large-chainspecs/11864
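For readers who don't follow the link: the workaround amounts to a local fork-and-rebuild patch along these lines. A minimal sketch, assuming the limit is the `MAX_POSSIBLE_ALLOCATION` constant shown above; the 2 GiB value is illustrative, not prescriptive, and this is not an upstream-supported knob:

```rust
// Illustrative local patch in a checked-out polkadot-sdk, applied before
// building the chain-spec. Raises the per-allocation cap from 32 MiB:
// substrate/primitives/core/src/lib.rs
pub const MAX_POSSIBLE_ALLOCATION: u32 = 2 * 1024 * 1024 * 1024; // 2 GiB (illustrative)
```

Depending on the size of the chain-spec, the executor's available heap may also need to grow so that such an allocation fits into linear memory at all (an assumption; see the linked answer for the exact set of changes).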
Thank you for writing this down.
Should we address this at a more fundamental level? We can't expect everyone to change the code. Ideally, all WASM memory limits would be removed in the chain-spec generation and OCW code paths.
What you want is this: polkadot-fellows/RFCs#4, and then to move the allocator inside the runtime.
@bkchr we've been discussing increasing the max number of validators that can be registered in the system and be part of the snapshot. From a few experiments locally, the offchain miner panics with an allocation failure due to the current limits when calculating the election. I wonder if we could double those limits. I'm now trying to improve the code so it requires less memory allocation; in any case, if we do need to increase the limits, is there anything to consider, or any side effects from increasing them?
We cannot just bump these limits; this changes the behavior of the allocator. If possible, I would not like to touch this at all before we move the allocator into the runtime. I think I proposed this already somewhere, but can we, for now, just not change the way we register them at genesis? Either pass them with two fields into the runtime, or use … @gpestana, do you know where exactly the allocation is failing?
The current issue is actually about the OCW code path, no longer genesis, but @gpestana knows better and can point out exactly where we run OOM. It is kind of silly to have the OCW/genesis code paths, which are by no means "consensus-critical", be subject to memory limits that are, afaik, only arguable in consensus code paths. I hope we can hack around it. In the old Substrate days, I made a PR that never got merged, but it did something like this: on the client side, we attached a …
We have this, but even in the old days you did not modify these constants.
As a way to be more lenient with respect to memory constraints in the offchain execution mode, I'm parameterising the allocation limits. My current direction is to expose those params in the CLI and overwrite the constants if the call context is offchain. wdyt about this approach?
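A rough sketch of what that could look like. `CallContext` refers to sp-core's existing distinction between on-chain and off-chain executions; the `offchain_max_allocation` parameter, the CLI flag feeding it, and this helper are all hypothetical plumbing, not existing polkadot-sdk API:

```rust
use sp_core::traits::CallContext;

// Hypothetical helper: choose the allocation cap based on the call context,
// so only non-consensus (offchain) executions get the more lenient limit.
fn max_allocation_for(context: CallContext, offchain_max_allocation: u32) -> u32 {
    match context {
        // Consensus-critical executions keep the hard-coded default.
        CallContext::Onchain => sp_core::MAX_POSSIBLE_ALLOCATION,
        // Offchain calls (OCW, chain-spec building) may use the CLI override.
        CallContext::Offchain => offchain_max_allocation,
    }
}
```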
PR to address this: #6081
Just because you don't know how the allocator works doesn't mean that the current way is "silly". As I said, you cannot just change these constants without knowing what they are doing or how the allocator works. When you change the allocator as done in #6081, it means that the moment there is a two-gigabyte allocation, this memory is never "freed" again; you will only be able to reuse this space. Maybe you are lucky and what you are doing works, but I clearly don't want to depend on luck for this kind of change. But, for example, for a …
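For context, the allocator in question is a "freeing-bump" design: each allocation is rounded up to a power-of-two size class, the bump pointer only ever moves forward, and freed blocks go onto a per-class free list for reuse. A toy model of that behavior (not the real implementation, just enough to illustrate why a huge allocation permanently claims its space):

```rust
// Toy freeing-bump allocator: once the bumper has advanced past a 2 GiB
// allocation, that space is never returned to the general pool; it can only
// be reused by later allocations of the same size class.
struct FreeingBump {
    bumper: usize,                // only ever grows
    free_lists: [Vec<usize>; 32], // one free list per power-of-two size class
    heap_size: usize,
}

impl FreeingBump {
    fn allocate(&mut self, size: usize) -> Option<usize> {
        let class = size.next_power_of_two().trailing_zeros() as usize;
        // Reuse a previously freed block of the same class if one exists...
        if let Some(ptr) = self.free_lists[class].pop() {
            return Some(ptr);
        }
        // ...otherwise bump. deallocate() never moves the bumper back.
        let ptr = self.bumper;
        let new_bumper = self.bumper.checked_add(1usize << class)?;
        if new_bumper > self.heap_size {
            return None; // out of linear memory
        }
        self.bumper = new_bumper;
        Some(ptr)
    }

    fn deallocate(&mut self, ptr: usize, size: usize) {
        let class = size.next_power_of_two().trailing_zeros() as usize;
        // The block goes on its class's free list; the memory is reusable
        // only for same-class allocations, never released to the pool.
        self.free_lists[class].push(ptr);
    }
}
```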
What I said here is the solution. It would be a better use of time to try to push this forward, instead of trying to come up with hacks that we cannot accept.
If that WASM instance is only used to generate the genesis config, or for a one-off offchain worker, and is then thrown away, is this a real issue? You are right to protest that I don't exactly know how this wasm executor/allocator machinery works under the hood, and perhaps my use of the word "silly" was not good, so let me rephrase: the current situation seems "unnecessarily restrictive". Offchain WASM instances should have higher memory limits, and ideally be kept separate from the ones used for onchain execution. The offchain ones might be prone to the flaw you are describing, but so be it; they are not particularly important anyway 🤷
For sure, that will not lead to any problem with the on-chain execution. However, depending on the access patterns in the offchain worker, the offchain worker could die because it runs out of memory, and it will run out of memory faster with the proposed solution. Could someone explain to me what we mean by "allowing more validators"? That more validators can apply to be nominated? Also, don't we have the solution miner that runs totally independently of the on-chain runtime?
Dumb thought: we could increase the limit only in a particular binary (chain-spec-builder), while keeping the old limit in the node.
Yeah, that would be an acceptable solution for now.
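One way that per-binary split could be wired up; a sketch only, assuming the cap were made feature-gated in sp-core (the `large-allocations` feature name is invented for illustration and no such feature exists today):

```rust
// Hypothetical sketch: make the cap conditional on a cargo feature that only
// the chain-spec-builder binary enables, leaving node builds untouched.

#[cfg(feature = "large-allocations")]
pub const MAX_POSSIBLE_ALLOCATION: u32 = 2 * 1024 * 1024 * 1024; // 2 GiB for chain-spec-builder

#[cfg(not(feature = "large-allocations"))]
pub const MAX_POSSIBLE_ALLOCATION: u32 = 33554432; // 32 MiB default for the node
```

chain-spec-builder's own Cargo.toml would then enable the feature for its build only, so the node binary keeps the current allocator behavior.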