From 423a60cde33a6d277ce5c36430b2050eb9e59c99 Mon Sep 17 00:00:00 2001 From: bkchr Date: Sun, 14 Jan 2024 00:57:24 +0000 Subject: [PATCH] deploy: ade390b5ef06f3d897a56acea653504e706fe30e --- 404.html | 2 +- approved/0001-agile-coretime.html | 2 +- approved/0005-coretime-interface.html | 2 +- approved/0007-system-collator-selection.html | 2 +- approved/0008-parachain-bootnodes-dht.html | 2 +- ...12-process-for-adding-new-collectives.html | 2 +- ...rove-locking-mechanism-for-parachains.html | 2 +- approved/0022-adopt-encointer-runtime.html | 2 +- approved/0032-minimal-relay.html | 2 +- approved/0050-fellowship-salaries.html | 2 +- ...0056-one-transaction-per-notification.html | 2 +- index.html | 2 +- introduction.html | 2 +- print.html | 659 ++++++++---------- proposed/000x-lowering-deposits-assethub.html | 2 +- proposed/0026-sassafras-consensus.html | 2 +- ...-absolute-location-account-derivation.html | 6 +- proposed/0044-rent-based-registration.html | 6 +- .../0046-metadata-for-offline-signers.html | 2 +- ...047-assignment-of-availability-chunks.html | 2 +- .../0059-nodes-capabilities-discovery.html | 2 +- .../0061-allocator-inside-of-runtime.html | 6 +- ...ering-existential-deposit-on-assethub.html | 323 --------- searchindex.js | 2 +- searchindex.json | 2 +- ...04-remove-unnecessary-allocator-usage.html | 6 +- ...namic-pricing-for-bulk-coretime-sales.html | 2 +- ...09-improved-net-light-client-requests.html | 2 +- stale/0010-burn-coretime-revenue.html | 2 +- ...ath-to-account-creation-on-asset-hubs.html | 2 +- ...uilder-and-core-runtime-apis-for-mbms.html | 2 +- stale/0015-market-design-revisit.html | 2 +- ...irmation-period-duration-modification.html | 2 +- ...ction-voting-delegation-modifications.html | 6 +- .../0042-extrinsics-state-version.html | 10 +- .../0043-storage-proof-size-hostfunction.html | 6 +- stale/0048-session-keys-runtime-api.html | 2 +- stale/0054-remove-heap-pages.html | 2 +- 38 files changed, 334 insertions(+), 752 deletions(-) delete mode 
100644 proposed/0062-lowering-existential-deposit-on-assethub.html rename {proposed => stale}/0042-extrinsics-state-version.html (82%) diff --git a/404.html b/404.html index 4e17be0dc..0de390004 100644 --- a/404.html +++ b/404.html @@ -91,7 +91,7 @@ diff --git a/approved/0001-agile-coretime.html b/approved/0001-agile-coretime.html index bf682103e..5b362ee41 100644 --- a/approved/0001-agile-coretime.html +++ b/approved/0001-agile-coretime.html @@ -90,7 +90,7 @@ diff --git a/approved/0005-coretime-interface.html b/approved/0005-coretime-interface.html index 05879f861..d84a80ae0 100644 --- a/approved/0005-coretime-interface.html +++ b/approved/0005-coretime-interface.html @@ -90,7 +90,7 @@ diff --git a/approved/0007-system-collator-selection.html b/approved/0007-system-collator-selection.html index 864079b28..725ca1863 100644 --- a/approved/0007-system-collator-selection.html +++ b/approved/0007-system-collator-selection.html @@ -90,7 +90,7 @@ diff --git a/approved/0008-parachain-bootnodes-dht.html b/approved/0008-parachain-bootnodes-dht.html index f139d9123..32f116047 100644 --- a/approved/0008-parachain-bootnodes-dht.html +++ b/approved/0008-parachain-bootnodes-dht.html @@ -90,7 +90,7 @@ diff --git a/approved/0012-process-for-adding-new-collectives.html b/approved/0012-process-for-adding-new-collectives.html index e850e930f..096ea659f 100644 --- a/approved/0012-process-for-adding-new-collectives.html +++ b/approved/0012-process-for-adding-new-collectives.html @@ -90,7 +90,7 @@ diff --git a/approved/0014-improve-locking-mechanism-for-parachains.html b/approved/0014-improve-locking-mechanism-for-parachains.html index 39f90d782..ab3eaa831 100644 --- a/approved/0014-improve-locking-mechanism-for-parachains.html +++ b/approved/0014-improve-locking-mechanism-for-parachains.html @@ -90,7 +90,7 @@ diff --git a/approved/0022-adopt-encointer-runtime.html b/approved/0022-adopt-encointer-runtime.html index 61ade4191..6e0feb831 100644 --- 
a/approved/0022-adopt-encointer-runtime.html +++ b/approved/0022-adopt-encointer-runtime.html @@ -90,7 +90,7 @@ diff --git a/approved/0032-minimal-relay.html b/approved/0032-minimal-relay.html index 5d070f21b..0c1e8b57c 100644 --- a/approved/0032-minimal-relay.html +++ b/approved/0032-minimal-relay.html @@ -90,7 +90,7 @@ diff --git a/approved/0050-fellowship-salaries.html b/approved/0050-fellowship-salaries.html index d6ea1f197..67ec6965a 100644 --- a/approved/0050-fellowship-salaries.html +++ b/approved/0050-fellowship-salaries.html @@ -90,7 +90,7 @@ diff --git a/approved/0056-one-transaction-per-notification.html b/approved/0056-one-transaction-per-notification.html index e8488578e..0fdb03b2f 100644 --- a/approved/0056-one-transaction-per-notification.html +++ b/approved/0056-one-transaction-per-notification.html @@ -90,7 +90,7 @@ diff --git a/index.html b/index.html index a62133f3c..b61b196bd 100644 --- a/index.html +++ b/index.html @@ -90,7 +90,7 @@ diff --git a/introduction.html b/introduction.html index a62133f3c..b61b196bd 100644 --- a/introduction.html +++ b/introduction.html @@ -90,7 +90,7 @@ diff --git a/print.html b/print.html index f41e8efe6..2b1169d2b 100644 --- a/print.html +++ b/print.html @@ -91,7 +91,7 @@ @@ -3201,106 +3201,6 @@

Unresolved Questions

Implementation details and overall code is still up to discussion.

-

(source)

-

Table of Contents

- -

RFC-0042: Add System version that replaces StateVersion on RuntimeVersion

-
- - - -
Start Date25th October 2023
DescriptionAdd System Version and remove State Version
AuthorsVedhavyas Singareddi
-
-

Summary

-

At the moment, we have a state_version field on RuntimeVersion that determines which state version is used for the Storage. We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Rather than defining yet another field on RuntimeVersion, we would like to propose adding a single system_version field that can be used to derive both the storage and the extrinsic state version.

-

Motivation

-

Since the extrinsic state version is always StateVersion::V0, deriving the extrinsic root requires the full extrinsic data. This becomes problematic when we need to verify the extrinsics root and the extrinsics are large. This problem is further explored in https://github.com/polkadot-fellows/RFCs/issues/19

-

For the Subspace project, we have an enshrined rollup called Domain, with optimistic verification and fraud proofs used to detect malicious behavior. One of the fraud proof variants derives a Domain block's extrinsic root on Subspace's consensus chain. Since StateVersion::V0 requires the full extrinsic data, we are forced to pass all the extrinsics through the fraud proof. One of the main challenges here is that some extrinsics could be big enough that this fraud proof variant may not fit in the consensus block due to the block's weight restriction. If the extrinsic root is derived using StateVersion::V1, we do not need to pass the full extrinsic data but rather, at most, 32 bytes per extrinsic.
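The 32-byte bound follows from the V1 value-hashing rule. A minimal sketch (an 8-byte standard-library hash stands in for the real 32-byte trie hash, and the function names are ours, not Substrate's):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in digest (8 bytes) for the real 32-byte trie hash; illustration only.
fn digest(value: &[u8]) -> Vec<u8> {
    let mut h = DefaultHasher::new();
    value.hash(&mut h);
    h.finish().to_le_bytes().to_vec()
}

// Under StateVersion::V1, trie values longer than 32 bytes are represented by
// their hash, so recomputing the extrinsics root needs at most 32 bytes per
// extrinsic. Under V0, the full value is always inlined.
fn v1_trie_value(extrinsic: &[u8]) -> Vec<u8> {
    if extrinsic.len() <= 32 {
        extrinsic.to_vec()
    } else {
        digest(extrinsic)
    }
}
```

Short extrinsics are passed as-is; anything larger contributes only its digest to the root computation.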

-

Stakeholders

- -

Explanation

-

In order to use a project-specific StateVersion for extrinsic roots, we previously proposed an implementation that introduced a parameter to frame_system::Config, but that unfortunately did not feel correct. So we would like to propose adding this change to the RuntimeVersion object. The system version, if introduced, will be used to derive both the storage and the extrinsic state version. If the system version is 0, both the Storage and Extrinsic State version use V0. If the system version is 1, the Storage State version uses V1 and the Extrinsic State version uses V0. If the system version is 2, both use V1.
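The mapping above can be written out directly (a sketch only; the enum and function names are ours, not the proposed API):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum StateVersion {
    V0,
    V1,
}

/// Maps the proposed `system_version` to `(storage, extrinsic)` state
/// versions, per the rules described above; other values are reserved.
fn state_versions(system_version: u8) -> Option<(StateVersion, StateVersion)> {
    match system_version {
        0 => Some((StateVersion::V0, StateVersion::V0)),
        1 => Some((StateVersion::V1, StateVersion::V0)),
        2 => Some((StateVersion::V1, StateVersion::V1)),
        _ => None,
    }
}
```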

-

If implemented, the new RuntimeVersion definition would look something like:

-
/// Runtime version (Rococo).
#[sp_version::runtime_version]
pub const VERSION: RuntimeVersion = RuntimeVersion {
		spec_name: create_runtime_str!("rococo"),
		impl_name: create_runtime_str!("parity-rococo-v2.0"),
		authoring_version: 0,
		spec_version: 10020,
		impl_version: 0,
		apis: RUNTIME_API_VERSIONS,
		transaction_version: 22,
		system_version: 1,
	};
-

Drawbacks

-

There should be no drawbacks, as this replaces state_version with the same behavior; documentation should be updated so that chains know which system_version to use.

-

Testing, Security, and Privacy

-

As far as I know, this should not have any impact on security or privacy.

-

Performance, Ergonomics, and Compatibility

-

These changes should be compatible with existing chains if they use their state_version value for system_version.

-

Performance

-

I do not believe there is any performance hit with this change.

-

Ergonomics

-

This does not break any exposed APIs.

-

Compatibility

-

This change should not break any compatibility.

-

Prior Art and References

-

We previously proposed a similar change by introducing a parameter to frame_system::Config, but did not feel that was the correct way of introducing this change.

-

Unresolved Questions

-

I do not have any specific questions about this change at the moment.

- -

IMO, this change is pretty self-contained and there won't be any future work necessary.

(source)

Table of Contents

-

Stakeholders

+

Stakeholders

All chain teams are stakeholders, as implementing this feature would require timely effort on their side and would impact compatibility with older tools.

This feature is essential for all offline signer tools; many regular signing tools might make use of it. In general, this RFC greatly improves security of any network implementing it, as many governing keys are used with offline signers.

Implementing this RFC would remove the requirement to maintain metadata portals manually, as the task of metadata verification would effectively be moved to the consensus mechanism of the chain.

-

Explanation

+

Explanation

A detailed description of the metadata shortening and digest process is provided in the metadata-shortener crate (see cargo doc --open and examples). The algorithms of the process are presented below.

Definitions

Metadata structure

@@ -3717,29 +3617,29 @@

Chain v 0x02 - 0xFFreservedreserved for future use -

Drawbacks

+

Drawbacks

Increased transaction size

A 1-byte increase in transaction size due to signed extension value. Digest is not included in transferred transaction, only in signing process.

Transition overhead

Some slightly out-of-spec systems might experience breaking changes as new content is added to the signed extensions. It is important to note that there is no real overhead in processing time nor complexity, as the metadata checking mechanism is voluntary. Drawbacks are expected only for tools that do not implement the MetadataV14 self-describing features.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

The metadata shortening protocol should be extensively tested on all available examples of metadata before releasing changes to either metadata or shortener. Careful code review should be performed on shortener implementation code to ensure security. The main metadata tree would inevitably be constructed on runtime build which would also ensure correctness.

To be able to recall shortener protocol in case of vulnerability issues, a version byte is included.

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

This is negligibly short pessimization during build time on the chain side. Cold wallets performance would improve mostly as metadata validity mechanism that was taking most of effort in cold wallet support would become trivial.

-

Ergonomics

+

Ergonomics

The proposal was optimized for cold storage wallet usage, with minimal impact on all other parts of the ecosystem.

-

Compatibility

+

Compatibility

The proposal in this form is not compatible with older tools that do not implement proper MetadataV14 self-describing features; those would have to be upgraded to include the new signed extensions field.

-

Prior Art and References

+

Prior Art and References

This project was developed upon a Polkadot Treasury grant; relevant development links are located in the metadata-offline-project repository.

-

Unresolved Questions

+

Unresolved Questions

  1. How would polkadot-js handle the transition?
  2. Where would non-rust tools like Ledger apps get shortened metadata content?
- +

Changes to code of all cold signers to implement this mechanism SHOULD be done when this is enabled; non-cold signers may perform extra metadata check for better security. Ultimately, signing anything without decoding it with verifiable metadata should become discouraged in all situations where a decision-making mechanism is involved (that is, outside of fully automated blind signers like trade bots or staking rewards payout tools).

(source)

Table of Contents

@@ -3782,11 +3682,11 @@

Summary

+

Summary

Propose a way of permuting the availability chunk indices assigned to validators, in the context of recovering available data from systematic chunks, with the purpose of fairly distributing network bandwidth usage.

-

Motivation

+

Motivation

Currently, the ValidatorIndex is always identical to the ChunkIndex. Since the validator array is only shuffled once per session, naively using the ValidatorIndex as the ChunkIndex would place unreasonable stress on the first N/3 validators during an entire session when favouring availability recovery from systematic chunks.
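To illustrate why decoupling the two indices helps (this is a hypothetical rotation for illustration, not the RFC's actual permutation), even a simple per-block offset spreads the systematic chunks across the whole validator set instead of pinning them to the first N/3 validators:

```rust
/// Hypothetical per-block rotation: validator `v` holds chunk
/// `(v + block_number) % n`. Unlike the naive identity mapping, the set of
/// validators holding the systematic chunks changes every block.
fn chunk_index(validator_index: u32, block_number: u32, n_validators: u32) -> u32 {
    (validator_index + block_number) % n_validators
}
```

Each block's mapping is still a bijection, so every validator holds exactly one chunk.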

@@ -3794,9 +3694,9 @@

Motivation -

Stakeholders

+

Stakeholders

Relay chain node core developers.

-

Explanation

+

Explanation

Systematic erasure codes

An erasure coding algorithm is considered systematic if it preserves the original unencoded data as part of the resulting code. @@ -3950,7 +3850,7 @@
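A toy systematic code makes the definition concrete (the real scheme is Reed-Solomon; this single-XOR-parity sketch is ours):

```rust
/// Toy systematic encoding over equal-length chunks: the output starts with
/// the `k` original data chunks, followed by one XOR parity chunk. Recovering
/// from the first `k` chunks is then a plain copy with no decoding work,
/// which is the point of favouring systematic recovery.
fn encode_systematic(data: &[Vec<u8>]) -> Vec<Vec<u8>> {
    let len = data.first().map_or(0, |c| c.len());
    let mut parity = vec![0u8; len];
    for chunk in data {
        for (p, b) in parity.iter_mut().zip(chunk) {
            *p ^= b;
        }
    }
    let mut coded = data.to_vec();
    coded.push(parity);
    coded
}
```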

Configuration::set_node_feature extrinsic. Once the feature is enabled and new configuration is live, the validator->chunk mapping ceases to be a 1:1 mapping and systematic recovery may begin.

-

Drawbacks

+

Drawbacks

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

Extensive testing will be conducted - both automated and manual. This proposal doesn't affect security or privacy.

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

This is a necessary data availability optimisation, as reed-solomon erasure coding has proven to be a top consumer of CPU time in polkadot as we scale up the parachain block size and number of availability cores.

With this optimisation, preliminary performance results show that CPU time used for reed-solomon coding/decoding can be halved and total POV recovery time decreases by 80% for large POVs. See more here.

-

Ergonomics

+

Ergonomics

Not applicable.

-

Compatibility

+

Compatibility

This is a breaking change. See upgrade path section above. All validators and collators need to have upgraded their node versions before the feature will be enabled via a governance call.

-

Prior Art and References

+

Prior Art and References

See comments on the tracking issue and the in-progress PR

-

Unresolved Questions

+

Unresolved Questions

Not applicable.

- +

This enables future optimisations for the performance of availability recovery, such as retrieving batched systematic chunks from backers/approval-checkers.

Appendix A

@@ -4063,20 +3963,20 @@

Summary

+

Summary

This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".

Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.

The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.

-

Motivation

+

Motivation

The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some store all the block headers and bodies since the genesis, some store the storage of all blocks since the genesis, and so on.

It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.

If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.

This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.

-

Stakeholders

+

Stakeholders

Low-level client developers. People interested in accessing the archive of the chain.

-

Explanation

+

Explanation

Reading RFC #8 first might help with comprehension, as this RFC is very similar.

Please keep in mind while reading that everything below applies for both relay chains and parachains, except mentioned otherwise.

Capabilities

@@ -4111,30 +4011,30 @@

Drawbacks

+

Drawbacks

None that I can see.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

The content of this section is basically the same as the one in RFC 8.

This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms.

Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities. Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.

For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this is not directly harmful in itself, it could lead to eclipse attacks.

Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
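The "20 closest" rule above is Kademlia's XOR-distance selection. A sketch (pre-hashed values stand in for sha256(peer_id), and the constant 20 is shown as a parameter):

```rust
/// Returns the `k` peers whose (already hashed) identifiers are XOR-closest
/// to `key`, mirroring how Kademlia picks the nodes responsible for storing
/// the provider records of a capability.
fn closest_providers(key: u64, hashed_peer_ids: &[u64], k: usize) -> Vec<u64> {
    let mut peers = hashed_peer_ids.to_vec();
    peers.sort_by_key(|p| p ^ key);
    peers.truncate(k);
    peers
}
```

An attacker who can mint identifiers at will can therefore grind PeerIds until their hashes dominate the front of this sorted list, which is exactly the eclipse concern described above.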

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the nodes with a capability. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

-

Ergonomics

+

Ergonomics

Irrelevant.

-

Compatibility

+

Compatibility

Irrelevant.

-

Prior Art and References

+

Prior Art and References

Unknown.

-

Unresolved Questions

+

Unresolved Questions

While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

- +

This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related request to using the native peer-to-peer protocol rather than JSON-RPC.

If we ever decide to break backwards compatibility, we could divide the "history" and "archive" capabilities in two, between nodes capable of serving older blocks and nodes capable of serving newer blocks. We could even add to the peer-to-peer network nodes that are only capable of serving older blocks (by reading from a database) but do not participate in the head of the chain, and that just exist for historical purposes.

@@ -4174,19 +4074,19 @@

Summary

+

Summary

Currently, the Substrate runtime uses a simple allocator defined on the host side. Every runtime MUST import these allocator functions for normal execution. This situation makes the runtime code less versatile.

So this RFC proposes to define a new specification for the allocator, to make the Substrate runtime more generic.

-

Motivation

+

Motivation

Since this RFC defines a new way of allocating, we now refer to the old one as the legacy allocator. Because the allocator implementation details are currently defined by the Substrate client, a parachain/parathread cannot customize its memory allocation algorithm; the new specification allows the runtime to customize memory allocation and then export the allocator functions, according to the specification, for the client side to use. Another benefit is that new host functions can be designed without allocating memory on the client, which may bring potential performance improvements. It will also help provide a unified and clean specification if the Substrate runtime supports multiple targets (e.g. RISC-V). There is a further potential benefit: many programming languages that compile to wasm may not support an external allocator well, so this change helps other languages enter the Substrate runtime ecosystem. The last and most important benefit is that, for offchain-context execution, the runtime can fully support pure wasm. What this means is that all imported host functions could be stubbed out (as they could not actually be called), so the various verification logic of the runtime can be converted into pure wasm, which makes it possible for the Substrate runtime to run block verification in other environments (such as browsers and other non-Substrate environments).

-

Stakeholders

+

Stakeholders

No attempt was made at convincing stakeholders.

-

Explanation

+

Explanation

Runtime side spec

This section contains a list of functions that should be exported by the Substrate runtime.

We define the spec as version 1, so the following dummy function v1 MUST be exported to hint @@ -4225,129 +4125,34 @@

Client side allocator.

Detail-heavy explanation of the RFC, suitable for explanation to an implementer of the changeset. This should address corner cases in detail and provide justification behind decisions, and provide rationale for how the design meets the solution requirements.
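As a hedged sketch of the direction (the actual exported symbol names and ABI are elided in this diff, so everything named here is an assumption), a runtime-side allocator could be exported roughly like this toy bump allocator:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Toy bump allocator; offsets stand in for wasm linear-memory pointers.
static HEAP_TOP: AtomicUsize = AtomicUsize::new(0);

/// Marker export hinting that the runtime implements spec version 1
/// (symbol name assumed, after the dummy `v1` function mentioned above).
#[no_mangle]
pub extern "C" fn v1() {}

/// Allocates `size` bytes and returns their offset. A bump allocator never
/// frees; a real spec would define alignment and deallocation semantics.
#[no_mangle]
pub extern "C" fn alloc(size: usize) -> usize {
    HEAP_TOP.fetch_add(size, Ordering::Relaxed)
}
```

The client would then resolve these exports instead of injecting its own host-side allocator, which is the inversion this RFC argues for.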

-

Drawbacks

+

Drawbacks

The allocator inside of the runtime will make the code size bigger, though not noticeably so. The allocator inside of the runtime may slow down (or speed up) the runtime; this too is not obvious.

We can ignore these drawbacks since they are not prominent. Execution efficiency is largely determined by the runtime developer; we cannot prevent poor efficiency if a developer chooses to accept it.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

Keep the legacy allocator runtime test cases, add a new feature to compile the test cases for the v1 allocator spec, and then update the test asserts.

Update the template runtime to enable the v1 spec. Once the dev network runs well, the spec can be considered implemented correctly.

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

As noted above, there is no obvious performance impact from this change. polkadot-sdk could offer a best-practice allocator for all chains, and third parties could also customize their own, so performance could improve over time.

-

Ergonomics

+

Ergonomics

Only runtime developers are affected: they just need to import a new crate and enable a new feature. It may also be convenient for other wasm-target languages to implement.

-

Compatibility

+

Compatibility

It's 100% compatible; only some runtime configs and executor configs would need to be deprecated.

To support the new runtime spec, we MUST first upgrade the client binary to support the client part of the new spec.

We SHALL add an optional primitive crate that enables the version 1 spec and disables the legacy allocator behind a cargo feature. For the first year, we SHALL disable v1 by default, then enable it by default starting the following year.

-

Prior Art and References

+

Prior Art and References

-

Unresolved Questions

+

Unresolved Questions

None at this time.

- +

The content discussed in RFC-0004 is basically orthogonal, but the two could still be considered together; it is preferred that this RFC be implemented first.

This feature could make the Substrate runtime easier to support in other languages and to integrate into other ecosystems.

-

(source)

-

Table of Contents

- -

RFC-0062: Lowering Existential Deposit on Asset Hub for Polkadot

-
- - - -
Start Date28 December 2023
DescriptionA proposal to reduce the existential deposit required for Asset Hub for Polkadot, making (a) asset minting to all DOT token holders more affordable for Asset Minters and (b) asset conversion on Asset Hub for Polkadot more accessible for all DOT Token holders.
AuthorsSourabh Niyogi
-
-

Summary

-

This RFC proposes lowering the existential deposit requirement on Asset Hub for Polkadot by a factor of 25, from 0.1 DOT to 0.004 DOT. The objective is to lower the barrier to entry for asset minters who want to mint a new asset to the entire DOT token holder base, and to make Asset Hub on Polkadot a place where everyone can do small asset conversions.

-

Motivation

-

The current existential deposit is 0.1 DOT on Asset Hub for Polkadot. While this does not appear to be a significant financial barrier for most people (only $0.80), this value makes Asset Hub impractical for asset minters, specifically in the case where a minter wishes to mint a new asset for the entire community of DOT holders (e.g. 1.25MM DOT holders would cost 125K DOT @ $8 = $1MM).

-

By lowering the existential deposit requirement from 0.1 DOT to 0.004 DOT, the cost of minting to the entire community of DOT holders goes from an unmanageable number [125K DOT, the value of several houses circa December 2023] down to a manageable one [5K DOT, the value of a car circa December 2023].

-

Stakeholders

- -

Explanation

-

The exact amount of the existential deposit (ED) is proposed to be 0.004 DOT based on

- -

Empirically, asset.transferKeepAlive is the lowest-valued extrinsic at this time, so there is no value in lowering the ED below 0.001 DOT. Lowering it further would unnecessarily invite the account spam attacks common to EVM chains, which have no ED.

-

By RFC #32 Minimal Relay Chain, believed to be implemented within the next couple of years, Asset Hub should be able to support the entire existing DOT token holder base. If there is any doubt that Substrate chains can store 10x-100x as many elements, then this change would test Asset Hub for Polkadot's capabilities.

-

The implementation is believed to be trivial:

-

https://github.com/polkadot-fellows/runtimes/blob/30e0dbfdcb78722ed61325c0ebf1efdcdb6033ba/system-parachains/asset-hubs/asset-hub-polkadot/src/constants.rs#L21

-

from

-
pub const EXISTENTIAL_DEPOSIT: Balance = constants::currency::EXISTENTIAL_DEPOSIT / 10;
-
-

to

-
pub const EXISTENTIAL_DEPOSIT: Balance = constants::currency::EXISTENTIAL_DEPOSIT / 250;
-
-

Given this change, once Asset Hub Minter 1 spends approximately 5K DOT to cover the ED for the entire DOT token holder base, Asset Hub Minter 2, who subsequently wishes to mint to the same DOT token holders, will not pay anything (assuming no new DOT holders); however, both the first and second minter will need to spend 2,485 DOT to conduct their asset.mint operations (0.001988 DOT per asset.mint) across the entire 1.25MM DOT token holders. If Minter 3 does the same thing when there are 1.26MM DOT token holders (10K new DOT holders), then Minter 3 will bear an ED cost of 40 DOT. This is summarized here:

-
- - - -
Minter   | Cost to fund ED for 1.25MM users | Cost to call asset.mint for 1.25MM users
Minter 1 | 5K DOT (instead of 125K DOT)     | 2,485 DOT
Minter 2 | 0 DOT                            | 2,485 DOT
Minter 3 | 40 DOT                           | 2,485 DOT
-
-

As new DOT token holders continually enter the system, this lower ED will reduce costs for all new minters, not just Minter 1. Given this reduced cost for minters (Minter 2, 3, ...), a greater number of DOT token holders will be able to use the assetconversion pallet for newly minted assets.
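The figures above follow from simple per-account arithmetic (plancks at 10^10 per DOT; the helper names are ours):

```rust
const PLANCKS_PER_DOT: u128 = 10_000_000_000;
const ED: u128 = PLANCKS_PER_DOT / 250; // proposed 0.004 DOT
const MINT_FEE: u128 = 1_988 * PLANCKS_PER_DOT / 1_000_000; // 0.001988 DOT

/// Whole-DOT cost of funding the ED for `holders` not-yet-existing accounts.
fn fund_ed_dot(holders: u128) -> u128 {
    holders * ED / PLANCKS_PER_DOT
}

/// Whole-DOT cost of calling `asset.mint` for `holders` accounts.
fn mint_all_dot(holders: u128) -> u128 {
    holders * MINT_FEE / PLANCKS_PER_DOT
}
```

Funding the ED for 1.25MM holders costs 5,000 DOT, minting to all of them costs 2,485 DOT, and covering the ED for 10K new holders costs 40 DOT, matching the table.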

-

It is believed that having a greater number of assetconversion end-users will be massively beneficial for DOT ecosystem growth, especially for key asset pools of DOT/USDC and DOT/USDT, which can be reliably predicted to be the most widely used pools on the Asset Hub for Polkadot.

-

It is assumed that the estimated cost to store a single account is less than 0.004 DOT. If this assumption is challenged by Polkadot Fellows, we request that the Fellows provide an empirical determination of the actual cost of storing a single account at present-day numbers of DOT token holders (approximately 1-2MM), and then supporting a factor of 10-1000x growth over the next 5 years. This assumption has been discussed on the forum: Polkadot AssetHub - high NFT collection deposit

-

First, the cost has to be mapped from DOT into the real-world USD storage costs of running an Asset Hub on Polkadot node, and the DOT/USD ratio itself has varied widely in the past and will continue to do so in the future. Second, according to this analysis, the pragmatic storage cost is at present approximated by what it costs to store accounts for at most 1 or 2 years. Underestimates of this cost are believed to act as an economic subsidy, while overestimates are believed to act as an economic depressant on activity.

-

Given the relatively underused Asset Hub for Polkadot, we believe the correct thing to do is to subsidize Asset Hub activity with a lower ED.

-

Drawbacks

-

The primary drawback of subsidizing Asset Hub activity with a 25x lower ED is borne by Asset Hub users in the distant future, who will pay for the activity subsidized by the lower ED.

-

Testing, Security, and Privacy

-

Lowering the ED from 0.004 DOT to 0 DOT would clearly and unnecessarily invite the account spam attacks common to EVM chains, which have no ED.

-

Lowering the ED from 0.004 DOT to 0.002 DOT or 0.001 DOT would threaten the user experience, wherein just 1 or 2 asset pallet operations would reap the account.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

This change is not expected to have a significant impact on the overall performance of the Asset Hub for Polkadot.

-

Ergonomics

-

The proposed change aims to enhance the user experience for:

- -

Compatibility

-

It is believed that Asset Hub for Kusama can undergo the same logic change without issue.

-

For Asset Hub for Polkadot, it is extremely desirable that this change be approved in early 2024 with some urgency.

-

Unresolved Questions

-

It is desirable to know the cost to store an account on Asset Hub for Polkadot when the number of accounts is 10MM, 100MM, or 1B, to better estimate the cost of the subsidy. We do not believe a precise answer to this merits delaying the subsidy at present. However, if approved, we believe this ED should be reevaluated once the number of accounts reaches 10MM-25MM or exponential growth is observed.

- -

If accepted, this RFC could pave the way for other accessibility improvements:

-

(source)

Table of Contents

-

Drawbacks

+

Drawbacks

This RFC might be difficult to implement in Substrate due to the internal code design. It is not clear to the author of this RFC how difficult it would be.

Prior Art

The API of these new functions was heavily inspired by the API used by the C programming language.

-

Unresolved Questions

+

Unresolved Questions

The changes in this RFC would need to be benchmarked. This involves implementing the RFC and measuring the speed difference.

It is expected that most host functions are faster or equal speed to their deprecated counterparts, with the following exceptions: