From 2915d798df06324e31cf9b48cbc863e11c2fd369 Mon Sep 17 00:00:00 2001 From: Sergey Shulepov Date: Fri, 6 Nov 2020 19:13:58 +0100 Subject: [PATCH 01/16] Guide: Split router module in guide. Now we have: DMP, UMP and Router module. --- roadmap/implementers-guide/src/SUMMARY.md | 4 +- roadmap/implementers-guide/src/runtime/dmp.md | 57 ++++++++++ .../src/runtime/{router.md => hrmp.md} | 103 +----------------- .../src/runtime/inclusion.md | 16 +-- .../src/runtime/inclusioninherent.md | 2 +- .../src/runtime/initializer.md | 6 +- roadmap/implementers-guide/src/runtime/ump.md | 98 +++++++++++++++++ 7 files changed, 172 insertions(+), 114 deletions(-) create mode 100644 roadmap/implementers-guide/src/runtime/dmp.md rename roadmap/implementers-guide/src/runtime/{router.md => hrmp.md} (73%) create mode 100644 roadmap/implementers-guide/src/runtime/ump.md diff --git a/roadmap/implementers-guide/src/SUMMARY.md b/roadmap/implementers-guide/src/SUMMARY.md index f37fc08f964a..f90f149f2556 100644 --- a/roadmap/implementers-guide/src/SUMMARY.md +++ b/roadmap/implementers-guide/src/SUMMARY.md @@ -16,7 +16,9 @@ - [Scheduler Module](runtime/scheduler.md) - [Inclusion Module](runtime/inclusion.md) - [InclusionInherent Module](runtime/inclusioninherent.md) - - [Router Module](runtime/router.md) + - [DMP Module](runtime/dmp.md) + - [UMP Module](runtime/ump.md) + - [HRMP Module](runtime/hrmp.md) - [Session Info Module](runtime/session_info.md) - [Runtime APIs](runtime-api/README.md) - [Validators](runtime-api/validators.md) diff --git a/roadmap/implementers-guide/src/runtime/dmp.md b/roadmap/implementers-guide/src/runtime/dmp.md new file mode 100644 index 000000000000..74b2cb03e2ed --- /dev/null +++ b/roadmap/implementers-guide/src/runtime/dmp.md @@ -0,0 +1,57 @@ +# DMP Module + +## Storage + +General storage entries + +```rust +/// Paras that are to be cleaned up at the end of the session. +/// The entries are sorted ascending by the para id. 
+OutgoingParas: Vec<ParaId>;
+```
+
+Storage layout required for implementation of DMP.
+
+```rust
+/// The downward messages addressed for a certain para.
+DownwardMessageQueues: map ParaId => Vec<InboundDownwardMessage>;
+/// A mapping that stores the downward message queue MQC head for each para.
+///
+/// Each link in this chain has a form:
+/// `(prev_head, B, H(M))`, where
+/// - `prev_head`: is the previous head hash or zero if none.
+/// - `B`: is the relay-chain block number in which a message was appended.
+/// - `H(M)`: is the hash of the message being appended.
+DownwardMessageQueueHeads: map ParaId => Hash;
+```
+
+## Initialization
+
+No initialization routine runs for this module.
+
+## Routines
+
+Candidate Acceptance Function:
+
+* `check_processed_downward_messages(P: ParaId, processed_downward_messages)`:
+  1. Checks that `DownwardMessageQueues` for `P` is at least `processed_downward_messages` long.
+  1. Checks that `processed_downward_messages` is at least 1 if `DownwardMessageQueues` for `P` is not empty.
+
+Candidate Enactment:
+
+* `prune_dmq(P: ParaId, processed_downward_messages)`:
+  1. Remove the first `processed_downward_messages` from the `DownwardMessageQueues` of `P`.
+
+Utility routines:
+
+`queue_downward_message(P: ParaId, M: DownwardMessage)`:
+  1. Check whether the size of `M` exceeds `config.max_downward_message_size`. If so, return an error.
+  1. Wrap `M` into `InboundDownwardMessage` using the current block number for `sent_at`.
+  1. Obtain a new MQC link for the resulting `InboundDownwardMessage` and replace `DownwardMessageQueueHeads` for `P` with the resulting hash.
+  1. Add the resulting `InboundDownwardMessage` into `DownwardMessageQueues` for `P`.
+
+## Session Change
+
+1. Drain `OutgoingParas`. For each `P` in the list:
+  1. Remove all `DownwardMessageQueues` of `P`.
+  1. Remove `DownwardMessageQueueHeads` for `P`.
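As a standalone illustration of the MQC head computation specified above, where each link has the form `(prev_head, B, H(M))`, here is a minimal sketch. The 64-bit `DefaultHasher` is a stand-in assumption for the runtime's `BlakeTwo256` over SCALE encodings; only the chaining structure is the point.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in 64-bit hash; the real runtime hashes with BlakeTwo256 over SCALE encodings.
fn h<T: Hash + ?Sized>(value: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    value.hash(&mut hasher);
    hasher.finish()
}

/// One MQC link: `head' = H(prev_head, B, H(M))`.
fn mqc_link(prev_head: u64, sent_at: u32, msg: &[u8]) -> u64 {
    h(&(prev_head, sent_at, h(msg)))
}

fn main() {
    // The relay chain folds each queued downward message into the head...
    let msgs: [(u32, &[u8]); 3] = [(5, b"a"), (5, b"b"), (7, b"c")];
    let mut head = 0u64; // zero head: no messages yet
    for (block, msg) in msgs {
        head = mqc_link(head, block, msg);
    }
    // ...and a para receiving the same messages recomputes the same head,
    // authenticating its queue against `DownwardMessageQueueHeads`.
    let mut check = 0u64;
    for (block, msg) in msgs {
        check = mqc_link(check, block, msg);
    }
    assert_eq!(head, check);
}
```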
diff --git a/roadmap/implementers-guide/src/runtime/router.md b/roadmap/implementers-guide/src/runtime/hrmp.md similarity index 73% rename from roadmap/implementers-guide/src/runtime/router.md rename to roadmap/implementers-guide/src/runtime/hrmp.md index ef7ce8ceb7bd..2200956f055d 100644 --- a/roadmap/implementers-guide/src/runtime/router.md +++ b/roadmap/implementers-guide/src/runtime/hrmp.md @@ -1,6 +1,4 @@ -# Router Module - -The Router module is responsible for all messaging mechanisms supported between paras and the relay chain, specifically: UMP, DMP, HRMP and later XCMP. +# HRMP Module ## Storage @@ -12,61 +10,6 @@ General storage entries OutgoingParas: Vec; ``` -### Upward Message Passing (UMP) - -```rust -/// The messages waiting to be handled by the relay-chain originating from a certain parachain. -/// -/// Note that some upward messages might have been already processed by the inclusion logic. E.g. -/// channel management messages. -/// -/// The messages are processed in FIFO order. -RelayDispatchQueues: map ParaId => Vec; -/// Size of the dispatch queues. Caches sizes of the queues in `RelayDispatchQueue`. -/// -/// First item in the tuple is the count of messages and second -/// is the total length (in bytes) of the message payloads. -/// -/// Note that this is an auxilary mapping: it's possible to tell the byte size and the number of -/// messages only looking at `RelayDispatchQueues`. This mapping is separate to avoid the cost of -/// loading the whole message queue if only the total size and count are required. -/// -/// Invariant: -/// - The set of keys should exactly match the set of keys of `RelayDispatchQueues`. -RelayDispatchQueueSize: map ParaId => (u32, u32); // (num_messages, total_bytes) -/// The ordered list of `ParaId`s that have a `RelayDispatchQueue` entry. -/// -/// Invariant: -/// - The set of items from this vector should be exactly the set of the keys in -/// `RelayDispatchQueues` and `RelayDispatchQueueSize`. 
-NeedsDispatch: Vec; -/// This is the para that gets dispatched first during the next upward dispatchable queue -/// execution round. -/// -/// Invariant: -/// - If `Some(para)`, then `para` must be present in `NeedsDispatch`. -NextDispatchRoundStartWith: Option; -``` - -### Downward Message Passing (DMP) - -Storage layout required for implementation of DMP. - -```rust -/// The downward messages addressed for a certain para. -DownwardMessageQueues: map ParaId => Vec; -/// A mapping that stores the downward message queue MQC head for each para. -/// -/// Each link in this chain has a form: -/// `(prev_head, B, H(M))`, where -/// - `prev_head`: is the previous head hash or zero if none. -/// - `B`: is the relay-chain block number in which a message was appended. -/// - `H(M)`: is the hash of the message being appended. -DownwardMessageQueueHeads: map ParaId => Hash; -``` - -### HRMP - HRMP related structs: ```rust @@ -189,13 +132,6 @@ No initialization routine runs for this module. Candidate Acceptance Function: -* `check_upward_messages(P: ParaId, Vec`): - 1. Checks that there are at most `config.max_upward_message_num_per_candidate` messages. - 1. Checks that no message exceeds `config.max_upward_message_size`. - 1. Verify that `RelayDispatchQueueSize` for `P` has enough capacity for the messages -* `check_processed_downward_messages(P: ParaId, processed_downward_messages)`: - 1. Checks that `DownwardMessageQueues` for `P` is at least `processed_downward_messages` long. - 1. Checks that `processed_downward_messages` is at least 1 if `DownwardMessageQueues` for `P` is not empty. * `check_hrmp_watermark(P: ParaId, new_hrmp_watermark)`: 1. `new_hrmp_watermark` should be strictly greater than the value of `HrmpWatermarks` for `P` (if any). 1. `new_hrmp_watermark` must not be greater than the context's block number. @@ -232,42 +168,12 @@ Candidate Enactment: > parametrization this shouldn't be a big of a deal. 
> If that becomes a problem consider introducing an extra dictionary which says at what block the given sender > sent a message to the recipient. -* `prune_dmq(P: ParaId, processed_downward_messages)`: - 1. Remove the first `processed_downward_messages` from the `DownwardMessageQueues` of `P`. -* `enact_upward_messages(P: ParaId, Vec)`: - 1. Process each upward message `M` in order: - 1. Append the message to `RelayDispatchQueues` for `P` - 1. Increment the size and the count in `RelayDispatchQueueSize` for `P`. - 1. Ensure that `P` is present in `NeedsDispatch`. The following routine is intended to be called in the same time when `Paras::schedule_para_cleanup` is called. `schedule_para_cleanup(ParaId)`: 1. Add the para into the `OutgoingParas` vector maintaining the sorted order. -The following routine is meant to execute pending entries in upward message queues. This function doesn't fail, even if -dispatcing any of individual upward messages returns an error. - -`process_pending_upward_messages()`: - 1. Initialize a cumulative weight counter `T` to 0 - 1. Iterate over items in `NeedsDispatch` cyclically, starting with `NextDispatchRoundStartWith`. If the item specified is `None` start from the beginning. For each `P` encountered: - 1. Dequeue the first upward message `D` from `RelayDispatchQueues` for `P` - 1. Decrement the size of the message from `RelayDispatchQueueSize` for `P` - 1. Delegate processing of the message to the runtime. The weight consumed is added to `T`. - 1. If `T >= config.preferred_dispatchable_upward_messages_step_weight`, set `NextDispatchRoundStartWith` to `P` and finish processing. - 1. If `RelayDispatchQueues` for `P` became empty, remove `P` from `NeedsDispatch`. - 1. If `NeedsDispatch` became empty then finish processing and set `NextDispatchRoundStartWith` to `None`. - > NOTE that in practice we would need to approach the weight calculation more thoroughly, i.e. 
incorporate all operations - > that could take place on the course of handling these upward messages. - -Utility routines. - -`queue_downward_message(P: ParaId, M: DownwardMessage)`: - 1. Check if the size of `M` exceeds the `config.max_downward_message_size`. If so, return an error. - 1. Wrap `M` into `InboundDownwardMessage` using the current block number for `sent_at`. - 1. Obtain a new MQC link for the resulting `InboundDownwardMessage` and replace `DownwardMessageQueueHeads` for `P` with the resulting hash. - 1. Add the resulting `InboundDownwardMessage` into `DownwardMessageQueues` for `P`. - ## Entry-points The following entry-points are meant to be used for HRMP channel management. @@ -336,15 +242,8 @@ the parachain executed the message. 1. Drain `OutgoingParas`. For each `P` happened to be in the list: 1. Remove all inbound channels of `P`, i.e. `(_, P)`, 1. Remove all outbound channels of `P`, i.e. `(P, _)`, - 1. Remove all `DownwardMessageQueues` of `P`. - 1. Remove `DownwardMessageQueueHeads` for `P`. - 1. Remove `RelayDispatchQueueSize` of `P`. - 1. Remove `RelayDispatchQueues` of `P`. 1. Remove `HrmpOpenChannelRequestCount` for `P` 1. Remove `HrmpAcceptedChannelRequestCount` for `P`. - 1. Remove `P` if it exists in `NeedsDispatch`. - 1. If `P` is in `NextDispatchRoundStartWith`, then reset it to `None` - - Note that if we don't remove the open/close requests since they are going to die out naturally at the end of the session. 1. For each channel designator `D` in `HrmpOpenChannelRequestsList` we query the request `R` from `HrmpOpenChannelRequests`: 1. if `R.confirmed = false`: 1. increment `R.age` by 1. 
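The weight-bounded round-robin dispatch round removed from this module (and re-specified under the UMP module) can be sketched in plain Rust. `STEP_WEIGHT` and the per-message unit weight are hypothetical stand-ins for `config.preferred_dispatchable_upward_messages_step_weight` and the real measured dispatch weight, and the storage maps are modeled as ordinary collections.

```rust
use std::collections::{HashMap, VecDeque};

type ParaId = u32;
type Weight = u64;

// Hypothetical stand-in for `config.preferred_dispatchable_upward_messages_step_weight`.
const STEP_WEIGHT: Weight = 3;

/// Weight-bounded round-robin over `needs_dispatch`, resuming at `next_start`.
/// Returns the messages "dispatched" this round, in order.
fn process_pending_upward_messages(
    queues: &mut HashMap<ParaId, VecDeque<Vec<u8>>>,
    needs_dispatch: &mut Vec<ParaId>,
    next_start: &mut Option<ParaId>,
) -> Vec<(ParaId, Vec<u8>)> {
    let mut dispatched = Vec::new();
    let mut t: Weight = 0;
    // Start from the para recorded last round, or the beginning.
    let mut idx = next_start
        .and_then(|p| needs_dispatch.iter().position(|&q| q == p))
        .unwrap_or(0);
    loop {
        if needs_dispatch.is_empty() {
            *next_start = None;
            return dispatched;
        }
        idx %= needs_dispatch.len();
        let para = needs_dispatch[idx];
        // Dequeue the first upward message of `para`; each message costs a
        // hypothetical unit weight here.
        if let Some(msg) = queues.get_mut(&para).and_then(|q| q.pop_front()) {
            t += 1;
            dispatched.push((para, msg));
        }
        // Budget reached: remember where to resume next round and stop.
        if t >= STEP_WEIGHT {
            *next_start = Some(para);
            return dispatched;
        }
        // Drained queues drop out of the dispatch rotation.
        if queues.get(&para).map_or(true, |q| q.is_empty()) {
            needs_dispatch.remove(idx);
        } else {
            idx += 1;
        }
    }
}

fn main() {
    let mut queues = HashMap::new();
    queues.insert(1, VecDeque::from(vec![b"m1".to_vec(), b"m2".to_vec()]));
    queues.insert(2, VecDeque::from(vec![b"m3".to_vec()]));
    let mut needs = vec![1, 2];
    let mut start = None;
    let round = process_pending_upward_messages(&mut queues, &mut needs, &mut start);
    // One message per para per pass: para 1, para 2, then para 1 again.
    assert_eq!(
        round,
        vec![(1, b"m1".to_vec()), (2, b"m3".to_vec()), (1, b"m2".to_vec())]
    );
}
```

Note how the cursor stored in `next_start` gives each para a fair share across rounds even when one para's queue is much longer than the others.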
diff --git a/roadmap/implementers-guide/src/runtime/inclusion.md b/roadmap/implementers-guide/src/runtime/inclusion.md
index 46f3e5211674..f2d9f214225a 100644
--- a/roadmap/implementers-guide/src/runtime/inclusion.md
+++ b/roadmap/implementers-guide/src/runtime/inclusion.md
@@ -67,20 +67,20 @@ All failed checks should lead to an unrecoverable error making the block invalid
  1. Ensure that any code upgrade scheduled by the candidate does not happen within `config.validation_upgrade_frequency` of `Paras::last_code_upgrade(para_id, true)`, if any, comparing against the value of `Paras::FutureCodeUpgrades` for the given para ID.
  1. Check the collator's signature on the candidate data.
  1. check the backing of the candidate using the signatures and the bitfields, comparing against the validators assigned to the groups, fetched with the `group_validators` lookup.
-  1. call `Router::check_upward_messages(para, commitments.upward_messages)` to check that the upward messages are valid.
-  1. call `Router::check_processed_downward_messages(para, commitments.processed_downward_messages)` to check that the DMQ is properly drained.
-  1. call `Router::check_hrmp_watermark(para, commitments.hrmp_watermark)` for each candidate to check rules of processing the HRMP watermark.
-  1. using `Router::check_outbound_hrmp(sender, commitments.horizontal_messages)` ensure that the each candidate sent a valid set of horizontal messages
+  1. call `Ump::check_upward_messages(para, commitments.upward_messages)` to check that the upward messages are valid.
+  1. call `Dmp::check_processed_downward_messages(para, commitments.processed_downward_messages)` to check that the DMQ is properly drained.
+  1. call `Hrmp::check_hrmp_watermark(para, commitments.hrmp_watermark)` for each candidate to check the rules of processing the HRMP watermark.
+  1. using `Hrmp::check_outbound_hrmp(sender, commitments.horizontal_messages)` ensure that each candidate sent a valid set of horizontal messages
  1. create an entry in the `PendingAvailability` map for each backed candidate with a blank `availability_votes` bitfield.
  1. create a corresponding entry in the `PendingAvailabilityCommitments` with the commitments.
  1. Return a `Vec<CoreIndex>` of all scheduled cores of the list of passed assignments that a candidate was successfully backed for, sorted ascending by CoreIndex.
* `enact_candidate(relay_parent_number: BlockNumber, CommittedCandidateReceipt)`:
  1. If the receipt contains a code upgrade, Call `Paras::schedule_code_upgrade(para_id, code, relay_parent_number + config.validation_upgrade_delay)`.
  > TODO: Note that this is safe as long as we never enact candidates where the relay parent is across a session boundary. In that case, which we should be careful to avoid with contextual execution, the configuration might have changed and the para may de-sync from the host's understanding of it.
-  1. call `Router::enact_upward_messages` for each backed candidate, using the [`UpwardMessage`s](../types/messages.md#upward-message) from the [`CandidateCommitments`](../types/candidate.md#candidate-commitments).
-  1. call `Router::prune_dmq` with the para id of the candidate and the candidate's `processed_downward_messages`.
-  1. call `Router::prune_hrmp` with the para id of the candiate and the candidate's `hrmp_watermark`.
-  1. call `Router::queue_outbound_hrmp` with the para id of the candidate and the list of horizontal messages taken from the commitment,
+  1. call `Ump::enact_upward_messages` for each backed candidate, using the [`UpwardMessage`s](../types/messages.md#upward-message) from the [`CandidateCommitments`](../types/candidate.md#candidate-commitments).
+  1. call `Dmp::prune_dmq` with the para id of the candidate and the candidate's `processed_downward_messages`.
+  1. call `Hrmp::prune_hrmp` with the para id of the candidate and the candidate's `hrmp_watermark`.
+  1.
call `Hrmp::queue_outbound_hrmp` with the para id of the candidate and the list of horizontal messages taken from the commitment, 1. Call `Paras::note_new_head` using the `HeadData` from the receipt and `relay_parent_number`. * `collect_pending`: diff --git a/roadmap/implementers-guide/src/runtime/inclusioninherent.md b/roadmap/implementers-guide/src/runtime/inclusioninherent.md index 9290025e2d05..54ebf3af7b52 100644 --- a/roadmap/implementers-guide/src/runtime/inclusioninherent.md +++ b/roadmap/implementers-guide/src/runtime/inclusioninherent.md @@ -22,5 +22,5 @@ Included: Option<()>, 1. Invoke `Scheduler::schedule(freed)` 1. Invoke the `Inclusion::process_candidates` routine with the parameters `(backed_candidates, Scheduler::scheduled(), Scheduler::group_validators)`. 1. Call `Scheduler::occupied` using the return value of the `Inclusion::process_candidates` call above, first sorting the list of assigned core indices. - 1. Call the `Router::process_pending_upward_messages` routine to execute all messages in upward dispatch queues. + 1. Call the `Ump::process_pending_upward_messages` routine to execute all messages in upward dispatch queues. 1. If all of the above succeeds, set `Included` to `Some(())`. diff --git a/roadmap/implementers-guide/src/runtime/initializer.md b/roadmap/implementers-guide/src/runtime/initializer.md index 5fd2bc3bd60f..fd7324b2198d 100644 --- a/roadmap/implementers-guide/src/runtime/initializer.md +++ b/roadmap/implementers-guide/src/runtime/initializer.md @@ -23,8 +23,10 @@ The other parachains modules are initialized in this order: 1. Paras 1. Scheduler 1. Inclusion -1. Validity. -1. Router. +1. Validity +1. DMP +1. UMP +1. HRMP The [Configuration Module](configuration.md) is first, since all other modules need to operate under the same configuration as each other. It would lead to inconsistency if, for example, the scheduler ran first and then the configuration was updated before the Inclusion module. 
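The DMQ acceptance rule that `Dmp::check_processed_downward_messages` enforces during the candidate checks above can be sketched independently of runtime storage. The error variants mirror the guide's `ProcessedDownwardMessagesAcceptanceErr`, but the free function and its parameters here are simplified stand-ins.

```rust
// Error cases mirroring ProcessedDownwardMessagesAcceptanceErr in the guide.
#[derive(Debug, PartialEq)]
enum AcceptanceErr {
    AdvancementRule,
    Underflow { processed: u32, dmq_length: u32 },
}

/// The two DMQ acceptance rules: a candidate must process at least one
/// message when the queue is non-empty, and may not claim to have processed
/// more messages than are actually queued.
fn check_processed_downward_messages(dmq_length: u32, processed: u32) -> Result<(), AcceptanceErr> {
    if dmq_length > 0 && processed == 0 {
        return Err(AcceptanceErr::AdvancementRule);
    }
    if dmq_length < processed {
        return Err(AcceptanceErr::Underflow { processed, dmq_length });
    }
    Ok(())
}

fn main() {
    // Empty queue: processing nothing is fine.
    assert_eq!(check_processed_downward_messages(0, 0), Ok(()));
    // Non-empty queue: the advancement rule demands progress.
    assert_eq!(check_processed_downward_messages(3, 0), Err(AcceptanceErr::AdvancementRule));
    // Claiming more than is queued underflows.
    assert_eq!(
        check_processed_downward_messages(3, 4),
        Err(AcceptanceErr::Underflow { processed: 4, dmq_length: 3 })
    );
    // Anything in between passes.
    assert!(check_processed_downward_messages(3, 2).is_ok());
}
```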
diff --git a/roadmap/implementers-guide/src/runtime/ump.md b/roadmap/implementers-guide/src/runtime/ump.md
new file mode 100644
index 000000000000..c6017fb7853b
--- /dev/null
+++ b/roadmap/implementers-guide/src/runtime/ump.md
@@ -0,0 +1,98 @@
+# UMP Module
+
+## Storage
+
+General storage entries
+
+```rust
+/// Paras that are to be cleaned up at the end of the session.
+/// The entries are sorted ascending by the para id.
+OutgoingParas: Vec<ParaId>;
+```
+
+Storage related to UMP
+
+```rust
+/// The messages waiting to be handled by the relay-chain originating from a certain parachain.
+///
+/// Note that some upward messages might have been already processed by the inclusion logic. E.g.
+/// channel management messages.
+///
+/// The messages are processed in FIFO order.
+RelayDispatchQueues: map ParaId => Vec<UpwardMessage>;
+/// Size of the dispatch queues. Caches sizes of the queues in `RelayDispatchQueues`.
+///
+/// First item in the tuple is the count of messages and the second
+/// is the total length (in bytes) of the message payloads.
+///
+/// Note that this is an auxiliary mapping: it's possible to tell the byte size and the number of
+/// messages only by looking at `RelayDispatchQueues`. This mapping is separate to avoid the cost of
+/// loading the whole message queue if only the total size and count are required.
+///
+/// Invariant:
+/// - The set of keys should exactly match the set of keys of `RelayDispatchQueues`.
+RelayDispatchQueueSize: map ParaId => (u32, u32); // (num_messages, total_bytes)
+/// The ordered list of `ParaId`s that have a `RelayDispatchQueue` entry.
+///
+/// Invariant:
+/// - The set of items from this vector should be exactly the set of the keys in
+///   `RelayDispatchQueues` and `RelayDispatchQueueSize`.
+NeedsDispatch: Vec<ParaId>;
+/// This is the para that gets dispatched first during the next upward dispatchable queue
+/// execution round.
+///
+/// Invariant:
+/// - If `Some(para)`, then `para` must be present in `NeedsDispatch`.
+NextDispatchRoundStartWith: Option<ParaId>;
+```
+
+## Initialization
+
+No initialization routine runs for this module.
+
+## Routines
+
+Candidate Acceptance Function:
+
+* `check_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
+  1. Checks that there are at most `config.max_upward_message_num_per_candidate` messages.
+  1. Checks that no message exceeds `config.max_upward_message_size`.
+  1. Verify that `RelayDispatchQueueSize` for `P` has enough capacity for the messages.
+
+Candidate Enactment:
+
+* `enact_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
+  1. Process each upward message `M` in order:
+    1. Append the message to `RelayDispatchQueues` for `P`.
+    1. Increment the size and the count in `RelayDispatchQueueSize` for `P`.
+    1. Ensure that `P` is present in `NeedsDispatch`.
+
+The following routine is intended to be called at the same time as `Paras::schedule_para_cleanup` is called.
+
+`schedule_para_cleanup(ParaId)`:
+  1. Add the para into the `OutgoingParas` vector, maintaining the sorted order.
+
+The following routine is meant to execute pending entries in upward message queues. This function doesn't fail, even if
+dispatching any of the individual upward messages returns an error.
+
+`process_pending_upward_messages()`:
+  1. Initialize a cumulative weight counter `T` to 0.
+  1. Iterate over items in `NeedsDispatch` cyclically, starting with `NextDispatchRoundStartWith`. If the item specified is `None`, start from the beginning. For each `P` encountered:
+    1. Dequeue the first upward message `D` from `RelayDispatchQueues` for `P`.
+    1. Decrement the size of the message from `RelayDispatchQueueSize` for `P`.
+    1. Delegate processing of the message to the runtime. The weight consumed is added to `T`.
+    1. If `T >= config.preferred_dispatchable_upward_messages_step_weight`, set `NextDispatchRoundStartWith` to `P` and finish processing.
+    1. If `RelayDispatchQueues` for `P` became empty, remove `P` from `NeedsDispatch`.
+    1. If `NeedsDispatch` became empty, then finish processing and set `NextDispatchRoundStartWith` to `None`.
+    > NOTE that in practice we would need to approach the weight calculation more thoroughly, i.e. incorporate all operations
+    > that could take place in the course of handling these upward messages.
+
+## Session Change
+
+1. Drain `OutgoingParas`. For each `P` in the list:
+  1. Remove `RelayDispatchQueueSize` of `P`.
+  1. Remove `RelayDispatchQueues` of `P`.
+  1. Remove `P` if it exists in `NeedsDispatch`.
+  1. If `P` is in `NextDispatchRoundStartWith`, then reset it to `None`.
+  - Note that we don't remove the open/close requests since they are going to die out naturally at the end of the session.

From 5195118618827c5e3e379fccad8934c35f164c39 Mon Sep 17 00:00:00 2001
From: Sergey Shulepov
Date: Fri, 6 Nov 2020 19:14:28 +0100
Subject: [PATCH 02/16] Add a glossary entry for what used to be called Router

---
 roadmap/implementers-guide/src/glossary.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/roadmap/implementers-guide/src/glossary.md b/roadmap/implementers-guide/src/glossary.md
index 63294d1d77fd..706ba7c62f2e 100644
--- a/roadmap/implementers-guide/src/glossary.md
+++ b/roadmap/implementers-guide/src/glossary.md
@@ -24,6 +24,7 @@ Here you can find definitions of a bunch of jargon, usually specific to the Polkadot project.
- Parathread: A parachain which is scheduled on a pay-as-you-go basis.
- Proof-of-Validity (PoV): A stateless-client proof that a parachain candidate is valid, with respect to some validation function.
- Relay Parent: A block in the relay chain, referred to in a context where work is being done in the context of the state at this block.
+- Router: A runtime module that was formerly responsible for routing messages between paras and the relay chain. It was split into separate runtime modules: Dmp, Ump and Hrmp, each responsible for its part of message routing.
- Runtime: The relay-chain state machine. - Runtime Module: See Module. - Runtime API: A means for the node-side behavior to access structured information based on the state of a fork of the blockchain. From 08fc26130c486b1d60bf20ea4cfea6d126e58e48 Mon Sep 17 00:00:00 2001 From: Sergey Shulepov Date: Fri, 6 Nov 2020 19:17:52 +0100 Subject: [PATCH 03/16] Extract DMP --- runtime/parachains/src/dmp.rs | 390 +++++++++++++++++++++++++++++++++ runtime/parachains/src/lib.rs | 1 + runtime/parachains/src/mock.rs | 5 + 3 files changed, 396 insertions(+) create mode 100644 runtime/parachains/src/dmp.rs diff --git a/runtime/parachains/src/dmp.rs b/runtime/parachains/src/dmp.rs new file mode 100644 index 000000000000..49f34aaa49dc --- /dev/null +++ b/runtime/parachains/src/dmp.rs @@ -0,0 +1,390 @@ +// Copyright 2020 Parity Technologies (UK) Ltd. +// This file is part of Polkadot. + +// Polkadot is free software: you can redistribute it and/or modify +// it under the terms of the GNU General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. + +// Polkadot is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU General Public License for more details. + +// You should have received a copy of the GNU General Public License +// along with Polkadot. If not, see . + +use crate::{ + configuration::{self, HostConfiguration}, + initializer, +}; +use frame_support::{decl_module, decl_storage, StorageMap, weights::Weight, traits::Get}; +use sp_std::prelude::*; +use sp_std::fmt; +use sp_runtime::traits::{BlakeTwo256, Hash as HashT, SaturatedConversion}; +use primitives::v1::{Id as ParaId, DownwardMessage, InboundDownwardMessage, Hash}; + +/// An error sending a downward message. 
+#[cfg_attr(test, derive(Debug))]
+pub enum QueueDownwardMessageError {
+	/// The message being sent exceeds the configured max message size.
+	ExceedsMaxMessageSize,
+}
+
+/// An error returned by `check_processed_downward_messages` that indicates an acceptance check
+/// didn't pass.
+pub enum ProcessedDownwardMessagesAcceptanceErr {
+	/// If there are pending messages then `processed_downward_messages` should be at least 1.
+	AdvancementRule,
+	/// `processed_downward_messages` should not be greater than the number of pending messages.
+	Underflow {
+		processed_downward_messages: u32,
+		dmq_length: u32,
+	},
+}
+
+impl fmt::Debug for ProcessedDownwardMessagesAcceptanceErr {
+	fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
+		use ProcessedDownwardMessagesAcceptanceErr::*;
+		match *self {
+			AdvancementRule => write!(
+				fmt,
+				"DMQ is not empty, but processed_downward_messages is 0",
+			),
+			Underflow {
+				processed_downward_messages,
+				dmq_length,
+			} => write!(
+				fmt,
+				"processed_downward_messages = {}, but dmq_length is only {}",
+				processed_downward_messages, dmq_length,
+			),
+		}
+	}
+}
+
+pub trait Trait: frame_system::Trait + configuration::Trait {}
+
+decl_storage! {
+	trait Store for Module<T: Trait> as Dmp {
+		/// Paras that are to be cleaned up at the end of the session.
+		/// The entries are sorted ascending by the para id.
+		OutgoingParas: Vec<ParaId>;
+
+		/// The downward messages addressed for a certain para.
+		DownwardMessageQueues: map hasher(twox_64_concat) ParaId => Vec<InboundDownwardMessage<T::BlockNumber>>;
+		/// A mapping that stores the downward message queue MQC head for each para.
+		///
+		/// Each link in this chain has a form:
+		/// `(prev_head, B, H(M))`, where
+		/// - `prev_head`: is the previous head hash or zero if none.
+		/// - `B`: is the relay-chain block number in which a message was appended.
+		/// - `H(M)`: is the hash of the message being appended.
+		DownwardMessageQueueHeads: map hasher(twox_64_concat) ParaId => Hash;
+	}
+}
+
+decl_module! {
+	/// The DMP module.
+	pub struct Module<T: Trait> for enum Call where origin: <T as frame_system::Trait>::Origin { }
+}
+
+/// Routines and getters related to downward message passing.
+impl<T: Trait> Module<T> {
+	/// Block initialization logic, called by initializer.
+	pub(crate) fn initializer_initialize(_now: T::BlockNumber) -> Weight {
+		0
+	}
+
+	/// Block finalization logic, called by initializer.
+	pub(crate) fn initializer_finalize() {}
+
+	/// Called by the initializer to note that a new session has started.
+	pub(crate) fn initializer_on_new_session(
+		_notification: &initializer::SessionChangeNotification<T::BlockNumber>,
+	) {
+		Self::perform_outgoing_para_cleanup();
+	}
+
+	/// Iterate over all paras that were registered for offboarding and remove all the data
+	/// associated with them.
+	fn perform_outgoing_para_cleanup() {
+		let outgoing = OutgoingParas::take();
+		for outgoing_para in outgoing {
+			Self::clean_dmp_after_outgoing(outgoing_para);
+		}
+	}
+
+	fn clean_dmp_after_outgoing(outgoing_para: ParaId) {
+		<Self as Store>::DownwardMessageQueues::remove(&outgoing_para);
+		<Self as Store>::DownwardMessageQueueHeads::remove(&outgoing_para);
+	}
+
+	/// Schedule a para to be cleaned up at the start of the next session.
+	pub(crate) fn schedule_para_cleanup(id: ParaId) {
+		OutgoingParas::mutate(|v| {
+			if let Err(i) = v.binary_search(&id) {
+				v.insert(i, id);
+			}
+		});
+	}
+
+	/// Enqueue a downward message to a specific recipient para.
+	///
+	/// When encoded, the message should not exceed the `config.max_downward_message_size`.
+	/// Otherwise, the message won't be sent and `Err` will be returned.
+	///
+	/// It is possible to send a downward message to a non-existent para. That, however, would lead
+	/// to dangling storage. If the caller cannot statically prove that the recipient exists
+	/// then the caller should perform a runtime check.
+	pub fn queue_downward_message(
+		config: &HostConfiguration<T::BlockNumber>,
+		para: ParaId,
+		msg: DownwardMessage,
+	) -> Result<(), QueueDownwardMessageError> {
+		let serialized_len = msg.len() as u32;
+		if serialized_len > config.max_downward_message_size {
+			return Err(QueueDownwardMessageError::ExceedsMaxMessageSize);
+		}
+
+		let inbound = InboundDownwardMessage {
+			msg,
+			sent_at: <frame_system::Module<T>>::block_number(),
+		};
+
+		// obtain the new link in the MQC and update the head.
+		<Self as Store>::DownwardMessageQueueHeads::mutate(para, |head| {
+			let new_head =
+				BlakeTwo256::hash_of(&(*head, inbound.sent_at, T::Hashing::hash_of(&inbound.msg)));
+			*head = new_head;
+		});
+
+		<Self as Store>::DownwardMessageQueues::mutate(para, |v| {
+			v.push(inbound);
+		});
+
+		Ok(())
+	}
+
+	/// Checks if the number of processed downward messages is valid.
+	pub(crate) fn check_processed_downward_messages(
+		para: ParaId,
+		processed_downward_messages: u32,
+	) -> Result<(), ProcessedDownwardMessagesAcceptanceErr> {
+		let dmq_length = Self::dmq_length(para);
+
+		if dmq_length > 0 && processed_downward_messages == 0 {
+			return Err(ProcessedDownwardMessagesAcceptanceErr::AdvancementRule);
+		}
+		if dmq_length < processed_downward_messages {
+			return Err(ProcessedDownwardMessagesAcceptanceErr::Underflow {
+				processed_downward_messages,
+				dmq_length,
+			});
+		}
+
+		Ok(())
+	}
+
+	/// Prunes the specified number of messages from the downward message queue of the given para.
+	pub(crate) fn prune_dmq(para: ParaId, processed_downward_messages: u32) -> Weight {
+		<Self as Store>::DownwardMessageQueues::mutate(para, |q| {
+			let processed_downward_messages = processed_downward_messages as usize;
+			if processed_downward_messages > q.len() {
+				// reaching this branch is unexpected due to the constraint established by
+				// `check_processed_downward_messages`. But better be safe than sorry.
+				q.clear();
+			} else {
+				*q = q.split_off(processed_downward_messages);
+			}
+		});
+		T::DbWeight::get().reads_writes(1, 1)
+	}
+
+	/// Returns the Head of Message Queue Chain for the given para or `None` if there is none
+	/// associated with it.
+	pub(crate) fn dmq_mqc_head(para: ParaId) -> Hash {
+		<Self as Store>::DownwardMessageQueueHeads::get(&para)
+	}
+
+	/// Returns the number of pending downward messages addressed to the given para.
+	///
+	/// Returns 0 if the para doesn't have an associated downward message queue.
+	pub(crate) fn dmq_length(para: ParaId) -> u32 {
+		<Self as Store>::DownwardMessageQueues::decode_len(&para)
+			.unwrap_or(0)
+			.saturated_into::<u32>()
+	}
+
+	/// Returns the downward message queue contents for the given para.
+	///
+	/// The most recent messages are the latest in the vector.
+	pub(crate) fn dmq_contents(recipient: ParaId) -> Vec<InboundDownwardMessage<T::BlockNumber>> {
+		<Self as Store>::DownwardMessageQueues::get(&recipient)
+	}
+}
+
+#[cfg(test)]
+mod tests {
+	use super::*;
+	use primitives::v1::BlockNumber;
+	use frame_support::StorageValue;
+	use frame_support::traits::{OnFinalize, OnInitialize};
+	use codec::Encode;
+	use crate::mock::{Configuration, new_test_ext, System, Dmp, GenesisConfig as MockGenesisConfig};
+
+	pub(crate) fn run_to_block(to: BlockNumber, new_session: Option<Vec<BlockNumber>>) {
+		while System::block_number() < to {
+			let b = System::block_number();
+			Dmp::initializer_finalize();
+			System::on_finalize(b);
+
+			System::on_initialize(b + 1);
+			System::set_block_number(b + 1);
+
+			if new_session.as_ref().map_or(false, |v| v.contains(&(b + 1))) {
+				Dmp::initializer_on_new_session(&Default::default());
+			}
+			Dmp::initializer_initialize(b + 1);
+		}
+	}
+
+	fn default_genesis_config() -> MockGenesisConfig {
+		MockGenesisConfig {
+			configuration: crate::configuration::GenesisConfig {
+				config: crate::configuration::HostConfiguration {
+					max_downward_message_size: 1024,
+					..Default::default()
+				},
+			},
+			..Default::default()
+		}
+	}
+
+	fn queue_downward_message(
+		para_id: ParaId,
+		msg: DownwardMessage,
+	) -> Result<(), QueueDownwardMessageError> {
+		Dmp::queue_downward_message(&Configuration::config(), para_id, msg)
+	}
+
+	#[test]
+	fn scheduled_cleanup_performed() {
+		let a = ParaId::from(1312);
+		let b = ParaId::from(228);
+		let c = ParaId::from(123);
+
+		new_test_ext(default_genesis_config()).execute_with(|| {
+			run_to_block(1, None);
+
+			// enqueue downward messages to A, B and C.
+			queue_downward_message(a, vec![1, 2, 3]).unwrap();
+			queue_downward_message(b, vec![4, 5, 6]).unwrap();
+			queue_downward_message(c, vec![7, 8, 9]).unwrap();
+
+			Dmp::schedule_para_cleanup(a);
+
+			// run to block without session change.
+			run_to_block(2, None);
+
+			assert!(!<Dmp as Store>::DownwardMessageQueues::get(&a).is_empty());
+			assert!(!<Dmp as Store>::DownwardMessageQueues::get(&b).is_empty());
+			assert!(!<Dmp as Store>::DownwardMessageQueues::get(&c).is_empty());
+
+			Dmp::schedule_para_cleanup(b);
+
+			// run to block changing the session.
+			run_to_block(3, Some(vec![3]));
+
+			assert!(<Dmp as Store>::DownwardMessageQueues::get(&a).is_empty());
+			assert!(<Dmp as Store>::DownwardMessageQueues::get(&b).is_empty());
+			assert!(!<Dmp as Store>::DownwardMessageQueues::get(&c).is_empty());
+
+			// verify that the outgoing paras are emptied.
+			assert!(OutgoingParas::get().is_empty())
+		});
+	}
+
+	#[test]
+	fn dmq_length_and_head_updated_properly() {
+		let a = ParaId::from(1312);
+		let b = ParaId::from(228);
+
+		new_test_ext(default_genesis_config()).execute_with(|| {
+			assert_eq!(Dmp::dmq_length(a), 0);
+			assert_eq!(Dmp::dmq_length(b), 0);
+
+			queue_downward_message(a, vec![1, 2, 3]).unwrap();
+
+			assert_eq!(Dmp::dmq_length(a), 1);
+			assert_eq!(Dmp::dmq_length(b), 0);
+			assert!(!Dmp::dmq_mqc_head(a).is_zero());
+			assert!(Dmp::dmq_mqc_head(b).is_zero());
+		});
+	}
+
+	#[test]
+	fn check_processed_downward_messages() {
+		let a = ParaId::from(1312);
+
+		new_test_ext(default_genesis_config()).execute_with(|| {
+			// processed_downward_messages=0 is allowed when the DMQ is empty.
+			assert!(Dmp::check_processed_downward_messages(a, 0).is_ok());
+
+			queue_downward_message(a, vec![1, 2, 3]).unwrap();
+			queue_downward_message(a, vec![4, 5, 6]).unwrap();
+			queue_downward_message(a, vec![7, 8, 9]).unwrap();
+
+			// 0 doesn't pass if the DMQ has messages.
+			assert!(!Dmp::check_processed_downward_messages(a, 0).is_ok());
+			// a candidate can consume up to 3 messages.
+			assert!(Dmp::check_processed_downward_messages(a, 1).is_ok());
+			assert!(Dmp::check_processed_downward_messages(a, 2).is_ok());
+			assert!(Dmp::check_processed_downward_messages(a, 3).is_ok());
+			// there's no 4th message in the queue.
+			assert!(!Dmp::check_processed_downward_messages(a, 4).is_ok());
+		});
+	}
+
+	#[test]
+	fn dmq_pruning() {
+		let a = ParaId::from(1312);
+
+		new_test_ext(default_genesis_config()).execute_with(|| {
+			assert_eq!(Dmp::dmq_length(a), 0);
+
+			queue_downward_message(a, vec![1, 2, 3]).unwrap();
+			queue_downward_message(a, vec![4, 5, 6]).unwrap();
+			queue_downward_message(a, vec![7, 8, 9]).unwrap();
+			assert_eq!(Dmp::dmq_length(a), 3);
+
+			// pruning 0 elements shouldn't change anything.
+ Dmp::prune_dmq(a, 0); + assert_eq!(Dmp::dmq_length(a), 3); + + Dmp::prune_dmq(a, 2); + assert_eq!(Dmp::dmq_length(a), 1); + }); + } + + #[test] + fn queue_downward_message_critical() { + let a = ParaId::from(1312); + + let mut genesis = default_genesis_config(); + genesis.configuration.config.max_downward_message_size = 7; + + new_test_ext(genesis).execute_with(|| { + let smol = [0; 3].to_vec(); + let big = [0; 8].to_vec(); + + // still within limits + assert_eq!(smol.encode().len(), 4); + assert!(queue_downward_message(a, smol).is_ok()); + + // that's too big + assert_eq!(big.encode().len(), 9); + assert!(queue_downward_message(a, big).is_err()); + }); + } +} diff --git a/runtime/parachains/src/lib.rs b/runtime/parachains/src/lib.rs index 833ff6ae4793..bc1fb44187aa 100644 --- a/runtime/parachains/src/lib.rs +++ b/runtime/parachains/src/lib.rs @@ -31,6 +31,7 @@ pub mod router; pub mod scheduler; pub mod validity; pub mod origin; +pub mod dmp; pub mod runtime_api_impl; diff --git a/runtime/parachains/src/mock.rs b/runtime/parachains/src/mock.rs index 3da3a6448128..38a27bd1b0f4 100644 --- a/runtime/parachains/src/mock.rs +++ b/runtime/parachains/src/mock.rs @@ -113,6 +113,8 @@ impl crate::router::Trait for Test { type UmpSink = crate::router::MockUmpSink; } +impl crate::dmp::Trait for Test { } + impl crate::scheduler::Trait for Test { } impl crate::inclusion::Trait for Test { @@ -133,6 +135,9 @@ pub type Paras = crate::paras::Module; /// Mocked router. pub type Router = crate::router::Module; +/// Mocked DMP +pub type Dmp = crate::dmp::Module; + /// Mocked scheduler. 
pub type Scheduler = crate::scheduler::Module; From 6bdf5415c1e16c296625f0984b900ea8572d5c8e Mon Sep 17 00:00:00 2001 From: Sergey Shulepov Date: Fri, 6 Nov 2020 19:20:29 +0100 Subject: [PATCH 04/16] Extract UMP --- runtime/parachains/src/lib.rs | 1 + runtime/parachains/src/mock.rs | 7 + runtime/parachains/src/ump.rs | 874 +++++++++++++++++++++++++++++++++ 3 files changed, 882 insertions(+) create mode 100644 runtime/parachains/src/ump.rs diff --git a/runtime/parachains/src/lib.rs b/runtime/parachains/src/lib.rs index bc1fb44187aa..2705998cef5d 100644 --- a/runtime/parachains/src/lib.rs +++ b/runtime/parachains/src/lib.rs @@ -32,6 +32,7 @@ pub mod scheduler; pub mod validity; pub mod origin; pub mod dmp; +pub mod ump; pub mod runtime_api_impl; diff --git a/runtime/parachains/src/mock.rs b/runtime/parachains/src/mock.rs index 38a27bd1b0f4..60249520ad76 100644 --- a/runtime/parachains/src/mock.rs +++ b/runtime/parachains/src/mock.rs @@ -115,6 +115,10 @@ impl crate::router::Trait for Test { impl crate::dmp::Trait for Test { } +impl crate::ump::Trait for Test { + type UmpSink = crate::ump::mock_sink::MockUmpSink; +} + impl crate::scheduler::Trait for Test { } impl crate::inclusion::Trait for Test { @@ -138,6 +142,9 @@ pub type Router = crate::router::Module; /// Mocked DMP pub type Dmp = crate::dmp::Module; +/// Mocked UMP +pub type Ump = crate::ump::Module; + /// Mocked scheduler. pub type Scheduler = crate::scheduler::Module; diff --git a/runtime/parachains/src/ump.rs b/runtime/parachains/src/ump.rs new file mode 100644 index 000000000000..258cf6513a51 --- /dev/null +++ b/runtime/parachains/src/ump.rs @@ -0,0 +1,874 @@ +// Copyright 2020 Parity Technologies (UK) Ltd. +// This file is part of Polkadot. + +// Polkadot is free software: you can redistribute it and/or modify +// it under the terms of the GNU General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. 
+ +// Polkadot is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU General Public License for more details. + +// You should have received a copy of the GNU General Public License +// along with Polkadot. If not, see . + +use crate::{configuration::{self, HostConfiguration}, initializer}; +use sp_std::prelude::*; +use sp_std::fmt; +use sp_std::collections::{btree_map::BTreeMap, vec_deque::VecDeque}; +use frame_support::{decl_module, decl_storage, StorageMap, StorageValue, weights::Weight, traits::Get}; +use primitives::v1::{Id as ParaId, UpwardMessage}; + +/// All upward messages coming from parachains will be funneled into an implementation of this trait. +/// +/// The message is opaque from the perspective of UMP. The message size can range from 0 to +/// `config.max_upward_message_size`. +/// +/// It's up to the implementation of this trait to decide what to do with a message as long as it +/// returns the amount of weight consumed in the process of handling. Ignoring a message is a valid +/// strategy. +/// +/// There are no guarantees on how much time it takes for the message sent by a candidate to end up +/// in the sink after the candidate was enacted. That typically depends on the UMP traffic, the sizes +/// of upward messages and the configuration of UMP. +/// +/// It is possible that by the time the message is sank the origin parachain was offboarded. It is +/// up to the implementer to check that if it cares. +pub trait UmpSink { + /// Process an incoming upward message and return the amount of weight it consumed. + /// + /// See the trait docs for more details. + fn process_upward_message(origin: ParaId, msg: Vec) -> Weight; +} + +/// An implementation of a sink that just swallows the message without consuming any weight. 
+impl UmpSink for () { + fn process_upward_message(_: ParaId, _: Vec) -> Weight { + 0 + } +} + +/// An error returned by `check_upward_messages` that indicates a violation of one of acceptance +/// criteria rules. +pub enum AcceptanceCheckErr { + MoreMessagesThanPermitted { + sent: u32, + permitted: u32, + }, + MessageSize { + idx: u32, + msg_size: u32, + max_size: u32, + }, + CapacityExceeded { + count: u32, + limit: u32, + }, + TotalSizeExceeded { + total_size: u32, + limit: u32, + }, +} + +impl fmt::Debug for AcceptanceCheckErr { + fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { + match *self { + AcceptanceCheckErr::MoreMessagesThanPermitted { sent, permitted } => write!( + fmt, + "more upward messages than permitted by config ({} > {})", + sent, permitted, + ), + AcceptanceCheckErr::MessageSize { + idx, + msg_size, + max_size, + } => write!( + fmt, + "upward message idx {} larger than permitted by config ({} > {})", + idx, msg_size, max_size, + ), + AcceptanceCheckErr::CapacityExceeded { count, limit } => write!( + fmt, + "the ump queue would have more items than permitted by config ({} > {})", + count, limit, + ), + AcceptanceCheckErr::TotalSizeExceeded { total_size, limit } => write!( + fmt, + "the ump queue would have grown past the max size permitted by config ({} > {})", + total_size, limit, + ), + } + } +} + +pub trait Trait: frame_system::Trait + configuration::Trait { + /// A place where all received upward messages are funneled. + type UmpSink: UmpSink; +} + +decl_storage! { + trait Store for Module as Ump { + /// Paras that are to be cleaned up at the end of the session. + /// The entries are sorted ascending by the para id. + OutgoingParas: Vec; + + /// The messages waiting to be handled by the relay-chain originating from a certain parachain. + /// + /// Note that some upward messages might have been already processed by the inclusion logic. E.g. + /// channel management messages. + /// + /// The messages are processed in FIFO order. 
+		RelayDispatchQueues: map hasher(twox_64_concat) ParaId => VecDeque<UpwardMessage>;
+		/// Size of the dispatch queues. Caches sizes of the queues in `RelayDispatchQueues`.
+		///
+		/// The first item in the tuple is the count of messages and the second
+		/// is the total length (in bytes) of the message payloads.
+		///
+		/// Note that this is an auxiliary mapping: it's possible to tell the byte size and the number of
+		/// messages only by looking at `RelayDispatchQueues`. This mapping is separate to avoid the cost of
+		/// loading the whole message queue if only the total size and count are required.
+		///
+		/// Invariant:
+		/// - The set of keys should exactly match the set of keys of `RelayDispatchQueues`.
+		RelayDispatchQueueSize: map hasher(twox_64_concat) ParaId => (u32, u32);
+		/// The ordered list of `ParaId`s that have a `RelayDispatchQueues` entry.
+		///
+		/// Invariant:
+		/// - The set of items from this vector should be exactly the set of the keys in
+		///   `RelayDispatchQueues` and `RelayDispatchQueueSize`.
+		NeedsDispatch: Vec<ParaId>;
+		/// This is the para that will get dispatched first during the next upward dispatchable queue
+		/// execution round.
+		///
+		/// Invariant:
+		/// - If `Some(para)`, then `para` must be present in `NeedsDispatch`.
+		NextDispatchRoundStartWith: Option<ParaId>;
+	}
+}
+
+decl_module! {
+	/// The UMP module.
+	pub struct Module<T: Trait> for enum Call where origin: <T as frame_system::Trait>::Origin {
+	}
+}
+
+/// Routines related to the upward message passing.
+impl<T: Trait> Module<T> {
+	/// Block initialization logic, called by initializer.
+	pub(crate) fn initializer_initialize(_now: T::BlockNumber) -> Weight {
+		0
+	}
+
+	/// Block finalization logic, called by initializer.
+	pub(crate) fn initializer_finalize() {}
+
+	/// Called by the initializer to note that a new session has started.
+	pub(crate) fn initializer_on_new_session(
+		_notification: &initializer::SessionChangeNotification<T::BlockNumber>,
+	) {
+		Self::perform_outgoing_para_cleanup();
+	}
+
+	/// Iterate over all paras that were registered for offboarding and remove all the data
+	/// associated with them.
+	fn perform_outgoing_para_cleanup() {
+		let outgoing = OutgoingParas::take();
+		for outgoing_para in outgoing {
+			Self::clean_ump_after_outgoing(outgoing_para);
+		}
+	}
+
+	/// Schedule a para to be cleaned up at the start of the next session.
+	pub(crate) fn schedule_para_cleanup(id: ParaId) {
+		OutgoingParas::mutate(|v| {
+			if let Err(i) = v.binary_search(&id) {
+				v.insert(i, id);
+			}
+		});
+	}
+
+	fn clean_ump_after_outgoing(outgoing_para: ParaId) {
+		<Self as Store>::RelayDispatchQueueSize::remove(&outgoing_para);
+		<Self as Store>::RelayDispatchQueues::remove(&outgoing_para);
+
+		// Remove the outgoing para from the `NeedsDispatch` list and from
+		// `NextDispatchRoundStartWith`.
+		//
+		// That's needed for maintaining the invariant that `NextDispatchRoundStartWith` points to an
+		// existing item in `NeedsDispatch`.
+		<Self as Store>::NeedsDispatch::mutate(|v| {
+			if let Ok(i) = v.binary_search(&outgoing_para) {
+				v.remove(i);
+			}
+		});
+		<Self as Store>::NextDispatchRoundStartWith::mutate(|v| {
+			*v = v.filter(|p| *p != outgoing_para)
+		});
+	}
+
+	/// Check that all the upward messages sent by a candidate pass the acceptance criteria.
+	/// Returns an error if any of the messages doesn't pass.
+	pub(crate) fn check_upward_messages(
+		config: &HostConfiguration<T::BlockNumber>,
+		para: ParaId,
+		upward_messages: &[UpwardMessage],
+	) -> Result<(), AcceptanceCheckErr> {
+		if upward_messages.len() as u32 > config.max_upward_message_num_per_candidate {
+			return Err(AcceptanceCheckErr::MoreMessagesThanPermitted {
+				sent: upward_messages.len() as u32,
+				permitted: config.max_upward_message_num_per_candidate,
+			});
+		}
+
+		let (mut para_queue_count, mut para_queue_size) =
+			<Self as Store>::RelayDispatchQueueSize::get(&para);
+
+		for (idx, msg) in upward_messages.into_iter().enumerate() {
+			let msg_size = msg.len() as u32;
+			if msg_size > config.max_upward_message_size {
+				return Err(AcceptanceCheckErr::MessageSize {
+					idx: idx as u32,
+					msg_size,
+					max_size: config.max_upward_message_size,
+				});
+			}
+			para_queue_count += 1;
+			para_queue_size += msg_size;
+		}
+
+		// make sure that the queue is not overfilled.
+		// we do it here only once since returning an error invalidates the whole relay-chain block.
+		if para_queue_count > config.max_upward_queue_count {
+			return Err(AcceptanceCheckErr::CapacityExceeded {
+				count: para_queue_count,
+				limit: config.max_upward_queue_count,
+			});
+		}
+		if para_queue_size > config.max_upward_queue_size {
+			return Err(AcceptanceCheckErr::TotalSizeExceeded {
+				total_size: para_queue_size,
+				limit: config.max_upward_queue_size,
+			});
+		}
+
+		Ok(())
+	}
+
+	/// Enacts all the upward messages sent by a candidate.
+	pub(crate) fn enact_upward_messages(
+		para: ParaId,
+		upward_messages: Vec<UpwardMessage>,
+	) -> Weight {
+		let mut weight = 0;
+
+		if !upward_messages.is_empty() {
+			let (extra_cnt, extra_size) = upward_messages
+				.iter()
+				.fold((0, 0), |(cnt, size), d| (cnt + 1, size + d.len() as u32));
+
+			<Self as Store>::RelayDispatchQueues::mutate(&para, |v| {
+				v.extend(upward_messages.into_iter())
+			});
+
+			<Self as Store>::RelayDispatchQueueSize::mutate(
+				&para,
+				|(ref mut cnt, ref mut size)| {
+					*cnt += extra_cnt;
+					*size += extra_size;
+				},
+			);
+
+			<Self as Store>::NeedsDispatch::mutate(|v| {
+				if let Err(i) = v.binary_search(&para) {
+					v.insert(i, para);
+				}
+			});
+
+			weight += T::DbWeight::get().reads_writes(3, 3);
+		}
+
+		weight
+	}
+
+	/// Devote some time into dispatching pending upward messages.
+	pub(crate) fn process_pending_upward_messages() {
+		let mut used_weight_so_far = 0;
+
+		let config = <configuration::Module<T>>::config();
+		let mut cursor = NeedsDispatchCursor::new::<T>();
+		let mut queue_cache = QueueCache::new();
+
+		while let Some(dispatchee) = cursor.peek() {
+			if used_weight_so_far >= config.preferred_dispatchable_upward_messages_step_weight {
+				// Check whether we've reached or overshot the preferred weight for the
+				// dispatching stage.
+				//
+				// If so - bail.
+				break;
+			}
+
+			// dequeue the next message from the queue of the dispatchee
+			let (upward_message, became_empty) = queue_cache.dequeue::<T>(dispatchee);
+			if let Some(upward_message) = upward_message {
+				used_weight_so_far +=
+					T::UmpSink::process_upward_message(dispatchee, upward_message);
+			}
+
+			if became_empty {
+				// the queue is empty now - this para doesn't need attention anymore.
+				cursor.remove();
+			} else {
+				cursor.advance();
+			}
+		}
+
+		cursor.flush::<T>();
+		queue_cache.flush::<T>();
+	}
+}
+
+/// To avoid constant fetching, deserialization and serialization, the queues are cached.
+///
+/// After an item is dequeued from a queue for the first time, the queue is stored in this struct
+/// rather than being serialized and persisted.
+///
+/// This implementation works best when:
+///
+/// 1. the queues are shallow
+/// 2. the dispatcher makes more than one cycle
+///
+/// If the queues are deep and there are many of them, we would load and keep the queues for a
+/// long time, thus increasing the peak memory consumption of the wasm runtime. Under such
+/// conditions persisting queues might play better since it's unlikely that they are going to be
+/// requested once more.
+///
+/// On the other hand, the situation when deep queues exist and it takes more than one dispatcher
+/// cycle to traverse the queues is already sub-optimal and better be avoided.
+///
+/// This struct is not supposed to be dropped but rather to be consumed by [`flush`].
+struct QueueCache(BTreeMap<ParaId, QueueCacheEntry>);
+
+struct QueueCacheEntry {
+	queue: VecDeque<UpwardMessage>,
+	count: u32,
+	total_size: u32,
+}
+
+impl QueueCache {
+	fn new() -> Self {
+		Self(BTreeMap::new())
+	}
+
+	/// Dequeues one item from the upward message queue of the given para.
+	///
+	/// Returns `(upward_message, became_empty)`, where
+	///
+	/// - `upward_message` is a dequeued message or `None` if the queue _was_ empty.
+	/// - `became_empty` is true if the queue _became_ empty.
+	fn dequeue<T: Trait>(&mut self, para: ParaId) -> (Option<UpwardMessage>, bool) {
+		let cache_entry = self.0.entry(para).or_insert_with(|| {
+			let queue = <Module<T> as Store>::RelayDispatchQueues::get(&para);
+			let (count, total_size) = <Module<T> as Store>::RelayDispatchQueueSize::get(&para);
+			QueueCacheEntry {
+				queue,
+				count,
+				total_size,
+			}
+		});
+		let upward_message = cache_entry.queue.pop_front();
+		if let Some(ref msg) = upward_message {
+			cache_entry.count -= 1;
+			cache_entry.total_size -= msg.len() as u32;
+		}
+
+		let became_empty = cache_entry.queue.is_empty();
+		(upward_message, became_empty)
+	}
+
+	/// Flushes the updated queues into the storage.
+	fn flush<T: Trait>(self) {
+		// NOTE we use an explicit method here instead of Drop impl because it has unwanted semantics
+		// within runtime.
It is dangerous to use because of double-panics and flushing on a panic + // is not necessary as well. + for ( + para, + QueueCacheEntry { + queue, + count, + total_size, + }, + ) in self.0 + { + if queue.is_empty() { + // remove the entries altogether. + as Store>::RelayDispatchQueues::remove(¶); + as Store>::RelayDispatchQueueSize::remove(¶); + } else { + as Store>::RelayDispatchQueues::insert(¶, queue); + as Store>::RelayDispatchQueueSize::insert(¶, (count, total_size)); + } + } + } +} + +/// A cursor that iterates over all entries in `NeedsDispatch`. +/// +/// This cursor will start with the para indicated by `NextDispatchRoundStartWith` storage entry. +/// This cursor is cyclic meaning that after reaching the end it will jump to the beginning. Unlike +/// an iterator, this cursor allows removing items during the iteration. +/// +/// Each iteration cycle *must be* concluded with a call to either `advance` or `remove`. +/// +/// This struct is not supposed to be dropped but rather to be consumed by [`flush`]. +#[derive(Debug)] +struct NeedsDispatchCursor { + needs_dispatch: Vec, + cur_idx: usize, +} + +impl NeedsDispatchCursor { + fn new() -> Self { + let needs_dispatch: Vec = as Store>::NeedsDispatch::get(); + let start_with = as Store>::NextDispatchRoundStartWith::get(); + + let start_with_idx = match start_with { + Some(para) => match needs_dispatch.binary_search(¶) { + Ok(found_idx) => found_idx, + Err(_supposed_idx) => { + // well that's weird because we maintain an invariant that + // `NextDispatchRoundStartWith` must point into one of the items in + // `NeedsDispatch`. + // + // let's select 0 as the starting index as a safe bet. + debug_assert!(false); + 0 + } + }, + None => 0, + }; + + Self { + needs_dispatch, + cur_idx: start_with_idx, + } + } + + /// Returns the item the cursor points to. + fn peek(&self) -> Option { + self.needs_dispatch.get(self.cur_idx).cloned() + } + + /// Moves the cursor to the next item. 
+ fn advance(&mut self) { + if self.needs_dispatch.is_empty() { + return; + } + self.cur_idx = (self.cur_idx + 1) % self.needs_dispatch.len(); + } + + /// Removes the item under the cursor. + fn remove(&mut self) { + if self.needs_dispatch.is_empty() { + return; + } + let _ = self.needs_dispatch.remove(self.cur_idx); + + // we might've removed the last element and that doesn't necessarily mean that `needs_dispatch` + // became empty. Reposition the cursor in this case to the beginning. + if self.needs_dispatch.get(self.cur_idx).is_none() { + self.cur_idx = 0; + } + } + + /// Flushes the dispatcher state into the persistent storage. + fn flush(self) { + let next_one = self.peek(); + as Store>::NextDispatchRoundStartWith::set(next_one); + as Store>::NeedsDispatch::put(self.needs_dispatch); + } +} + +#[cfg(test)] +pub(crate) mod mock_sink { + //! An implementation of a mock UMP sink that allows attaching a probe for mocking the weights + //! and checking the sent messages. + //! + //! A default behavior of the UMP sink is to ignore an incoming message and return 0 weight. + //! + //! A probe can be attached to the mock UMP sink. When attached, the mock sink would consult the + //! probe to check whether the received message was expected and what weight it should return. + //! + //! There are two rules on how to use a probe: + //! + //! 1. There can be only one active probe at a time. Creation of another probe while there is + //! already an active one leads to a panic. The probe is scoped to a thread where it was created. + //! + //! 2. All messages expected by the probe must be received by the time of dropping it. Unreceived + //! messages will lead to a panic while dropping a probe. 
+
+	use super::{UmpSink, UpwardMessage, ParaId};
+	use std::cell::RefCell;
+	use std::collections::vec_deque::VecDeque;
+	use frame_support::weights::Weight;
+
+	#[derive(Debug)]
+	struct UmpExpectation {
+		expected_origin: ParaId,
+		expected_msg: UpwardMessage,
+		mock_weight: Weight,
+	}
+
+	std::thread_local! {
+		// `Some` here indicates that there is an active probe.
+		static HOOK: RefCell<Option<VecDeque<UmpExpectation>>> = RefCell::new(None);
+	}
+
+	pub struct MockUmpSink;
+	impl UmpSink for MockUmpSink {
+		fn process_upward_message(actual_origin: ParaId, actual_msg: Vec<u8>) -> Weight {
+			HOOK.with(|opt_hook| match &mut *opt_hook.borrow_mut() {
+				Some(hook) => {
+					let UmpExpectation {
+						expected_origin,
+						expected_msg,
+						mock_weight,
+					} = match hook.pop_front() {
+						Some(expectation) => expectation,
+						None => {
+							panic!(
+								"The probe is active but didn't expect the message:\n\n\t{:?}.",
+								actual_msg,
+							);
+						}
+					};
+					assert_eq!(expected_origin, actual_origin);
+					assert_eq!(expected_msg, actual_msg);
+					mock_weight
+				}
+				None => 0,
+			})
+		}
+	}
+
+	pub struct Probe {
+		_private: (),
+	}
+
+	impl Probe {
+		pub fn new() -> Self {
+			HOOK.with(|opt_hook| {
+				let prev = opt_hook.borrow_mut().replace(VecDeque::default());
+
+				// This can trigger if two probes were created during one session, which may be
+				// a bit strict, but may save time figuring out what's wrong.
+				// If you land here and you do need two probes in one session, consider
+				// dropping the existing probe explicitly.
+				assert!(prev.is_none());
+			});
+			Self { _private: () }
+		}
+
+		/// Add an expected message.
+		///
+		/// The enqueued messages are processed in FIFO order.
+		pub fn assert_msg(
+			&mut self,
+			expected_origin: ParaId,
+			expected_msg: UpwardMessage,
+			mock_weight: Weight,
+		) {
+			HOOK.with(|opt_hook| {
+				opt_hook
+					.borrow_mut()
+					.as_mut()
+					.unwrap()
+					.push_back(UmpExpectation {
+						expected_origin,
+						expected_msg,
+						mock_weight,
+					})
+			});
+		}
+	}
+
+	impl Drop for Probe {
+		fn drop(&mut self) {
+			let _ = HOOK.try_with(|opt_hook| {
+				let prev = opt_hook.borrow_mut().take().expect(
+					"this probe was created and hasn't been yet destroyed;
+					the probe cannot be replaced;
+					there is only one probe at a time allowed;
+					thus it cannot be `None`;
+					qed",
+				);
+
+				if !prev.is_empty() {
+					// some messages are left unchecked. We should notify the developer about this.
+					// However, we do so only if the thread isn't already panicking. Otherwise, the
+					// developer would get a SIGILL or SIGABRT without a meaningful error message.
+					if !std::thread::panicking() {
+						panic!(
+							"the probe is dropped and not all expected messages arrived: {:?}",
+							prev
+						);
+					}
+				}
+			});
+			// an `Err` here signals that the thread local was already destroyed.
+ } + } +} + +#[cfg(test)] +mod tests { + use super::*; + use super::mock_sink::Probe; + use crate::mock::{Configuration, Ump, new_test_ext, GenesisConfig as MockGenesisConfig}; + use frame_support::IterableStorageMap; + use std::collections::HashSet; + + struct GenesisConfigBuilder { + max_upward_message_size: u32, + max_upward_message_num_per_candidate: u32, + max_upward_queue_count: u32, + max_upward_queue_size: u32, + preferred_dispatchable_upward_messages_step_weight: Weight, + } + + impl Default for GenesisConfigBuilder { + fn default() -> Self { + Self { + max_upward_message_size: 16, + max_upward_message_num_per_candidate: 2, + max_upward_queue_count: 4, + max_upward_queue_size: 64, + preferred_dispatchable_upward_messages_step_weight: 1000, + } + } + } + + impl GenesisConfigBuilder { + fn build(self) -> crate::mock::GenesisConfig { + let mut genesis = default_genesis_config(); + let config = &mut genesis.configuration.config; + + config.max_upward_message_size = self.max_upward_message_size; + config.max_upward_message_num_per_candidate = self.max_upward_message_num_per_candidate; + config.max_upward_queue_count = self.max_upward_queue_count; + config.max_upward_queue_size = self.max_upward_queue_size; + config.preferred_dispatchable_upward_messages_step_weight = + self.preferred_dispatchable_upward_messages_step_weight; + genesis + } + } + + fn default_genesis_config() -> MockGenesisConfig { + MockGenesisConfig { + configuration: crate::configuration::GenesisConfig { + config: crate::configuration::HostConfiguration { + max_downward_message_size: 1024, + ..Default::default() + }, + }, + ..Default::default() + } + } + + fn queue_upward_msg(para: ParaId, msg: UpwardMessage) { + let msgs = vec![msg]; + assert!(Ump::check_upward_messages(&Configuration::config(), para, &msgs).is_ok()); + let _ = Ump::enact_upward_messages(para, msgs); + } + + fn assert_storage_consistency_exhaustive() { + // check that empty queues don't clutter the storage. 
+		for (_para, queue) in <Ump as Store>::RelayDispatchQueues::iter() {
+			assert!(!queue.is_empty());
+		}
+
+		// actually count the counts and sizes in the queues and compare them to the bookkept version.
+		for (para, queue) in <Ump as Store>::RelayDispatchQueues::iter() {
+			let (expected_count, expected_size) =
+				<Ump as Store>::RelayDispatchQueueSize::get(para);
+			let (actual_count, actual_size) =
+				queue.into_iter().fold((0, 0), |(acc_count, acc_size), x| {
+					(acc_count + 1, acc_size + x.len() as u32)
+				});
+
+			assert_eq!(expected_count, actual_count);
+			assert_eq!(expected_size, actual_size);
+		}
+
+		// since we wipe the empty queues, the sets of paras in the queue contents, the queue sizes
+		// and the needs-dispatch list should all be equal.
+		let queue_contents_set = <Ump as Store>::RelayDispatchQueues::iter()
+			.map(|(k, _)| k)
+			.collect::<HashSet<_>>();
+		let queue_sizes_set = <Ump as Store>::RelayDispatchQueueSize::iter()
+			.map(|(k, _)| k)
+			.collect::<HashSet<_>>();
+		let needs_dispatch_set = <Ump as Store>::NeedsDispatch::get()
+			.into_iter()
+			.collect::<HashSet<_>>();
+		assert_eq!(queue_contents_set, queue_sizes_set);
+		assert_eq!(queue_contents_set, needs_dispatch_set);
+
+		// `NextDispatchRoundStartWith` should point into a para that is tracked.
+		if let Some(para) = <Ump as Store>::NextDispatchRoundStartWith::get() {
+			assert!(queue_contents_set.contains(&para));
+		}
+
+		// `NeedsDispatch` is always sorted.
+ assert!(::NeedsDispatch::get() + .windows(2) + .all(|xs| xs[0] <= xs[1])); + } + + #[test] + fn dispatch_empty() { + new_test_ext(default_genesis_config()).execute_with(|| { + assert_storage_consistency_exhaustive(); + + // make sure that the case with empty queues is handled properly + Ump::process_pending_upward_messages(); + + assert_storage_consistency_exhaustive(); + }); + } + + #[test] + fn dispatch_single_message() { + let a = ParaId::from(228); + let msg = vec![1, 2, 3]; + + new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| { + let mut probe = Probe::new(); + + probe.assert_msg(a, msg.clone(), 0); + queue_upward_msg(a, msg); + + Ump::process_pending_upward_messages(); + + assert_storage_consistency_exhaustive(); + }); + } + + #[test] + fn dispatch_resume_after_exceeding_dispatch_stage_weight() { + let a = ParaId::from(128); + let c = ParaId::from(228); + let q = ParaId::from(911); + + let a_msg_1 = vec![1, 2, 3]; + let a_msg_2 = vec![3, 2, 1]; + let c_msg_1 = vec![4, 5, 6]; + let c_msg_2 = vec![9, 8, 7]; + let q_msg = b"we are Q".to_vec(); + + new_test_ext( + GenesisConfigBuilder { + preferred_dispatchable_upward_messages_step_weight: 500, + ..Default::default() + } + .build(), + ) + .execute_with(|| { + queue_upward_msg(q, q_msg.clone()); + queue_upward_msg(c, c_msg_1.clone()); + queue_upward_msg(a, a_msg_1.clone()); + queue_upward_msg(a, a_msg_2.clone()); + + assert_storage_consistency_exhaustive(); + + // we expect only two first messages to fit in the first iteration. + { + let mut probe = Probe::new(); + + probe.assert_msg(a, a_msg_1.clone(), 300); + probe.assert_msg(c, c_msg_1.clone(), 300); + Ump::process_pending_upward_messages(); + assert_storage_consistency_exhaustive(); + + drop(probe); + } + + queue_upward_msg(c, c_msg_2.clone()); + assert_storage_consistency_exhaustive(); + + // second iteration should process the second message. 
+ { + let mut probe = Probe::new(); + + probe.assert_msg(q, q_msg.clone(), 500); + Ump::process_pending_upward_messages(); + assert_storage_consistency_exhaustive(); + + drop(probe); + } + + // 3rd iteration. + { + let mut probe = Probe::new(); + + probe.assert_msg(a, a_msg_2.clone(), 100); + probe.assert_msg(c, c_msg_2.clone(), 100); + Ump::process_pending_upward_messages(); + assert_storage_consistency_exhaustive(); + + drop(probe); + } + + // finally, make sure that the queue is empty. + { + let probe = Probe::new(); + + Ump::process_pending_upward_messages(); + assert_storage_consistency_exhaustive(); + + drop(probe); + } + }); + } + + #[test] + fn dispatch_correctly_handle_remove_of_latest() { + let a = ParaId::from(1991); + let b = ParaId::from(1999); + + let a_msg_1 = vec![1, 2, 3]; + let a_msg_2 = vec![3, 2, 1]; + let b_msg_1 = vec![4, 5, 6]; + + new_test_ext( + GenesisConfigBuilder { + preferred_dispatchable_upward_messages_step_weight: 900, + ..Default::default() + } + .build(), + ) + .execute_with(|| { + // We want to test here an edge case, where we remove the queue with the highest + // para id (i.e. last in the needs_dispatch order). + // + // If the last entry was removed we should proceed execution, assuming we still have + // weight available. 
+ + queue_upward_msg(a, a_msg_1.clone()); + queue_upward_msg(a, a_msg_2.clone()); + queue_upward_msg(b, b_msg_1.clone()); + + { + let mut probe = Probe::new(); + + probe.assert_msg(a, a_msg_1.clone(), 300); + probe.assert_msg(b, b_msg_1.clone(), 300); + probe.assert_msg(a, a_msg_2.clone(), 300); + + Ump::process_pending_upward_messages(); + + drop(probe); + } + }); + } +} From a97fc34d2c97224ab015ce7e19fdec9d3af2bcda Mon Sep 17 00:00:00 2001 From: Sergey Shulepov Date: Fri, 6 Nov 2020 19:23:12 +0100 Subject: [PATCH 05/16] Extract HRMP --- runtime/parachains/src/hrmp.rs | 1529 ++++++++++++++++++++++++++++++++ runtime/parachains/src/lib.rs | 1 + runtime/parachains/src/mock.rs | 7 + 3 files changed, 1537 insertions(+) create mode 100644 runtime/parachains/src/hrmp.rs diff --git a/runtime/parachains/src/hrmp.rs b/runtime/parachains/src/hrmp.rs new file mode 100644 index 000000000000..7a9b5c9bfda8 --- /dev/null +++ b/runtime/parachains/src/hrmp.rs @@ -0,0 +1,1529 @@ +// Copyright 2020 Parity Technologies (UK) Ltd. +// This file is part of Polkadot. + +// Polkadot is free software: you can redistribute it and/or modify +// it under the terms of the GNU General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. + +// Polkadot is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU General Public License for more details. + +// You should have received a copy of the GNU General Public License +// along with Polkadot. If not, see . 
+
+use crate::{
+ ensure_parachain,
+ configuration::{self, HostConfiguration},
+ initializer, paras, dmp,
+};
+use codec::{Decode, Encode};
+use frame_support::{
+ decl_storage, decl_module, decl_error, ensure, traits::Get, weights::Weight, StorageMap,
+ StorageValue, dispatch::DispatchResult,
+};
+use primitives::v1::{
+ Balance, Hash, HrmpChannelId, Id as ParaId, InboundHrmpMessage, OutboundHrmpMessage,
+ SessionIndex,
+};
+use sp_runtime::traits::{BlakeTwo256, Hash as HashT};
+use sp_std::collections::{btree_map::BTreeMap, btree_set::BTreeSet};
+use sp_std::{mem, fmt};
+use sp_std::prelude::*;
+
+/// A description of a request to open an HRMP channel.
+#[derive(Encode, Decode)]
+pub struct HrmpOpenChannelRequest {
+ /// Indicates if this request was confirmed by the recipient.
+ pub confirmed: bool,
+ /// How many session boundaries ago this request was seen.
+ pub age: SessionIndex,
+ /// The amount that the sender supplied at the time of creation of this request.
+ pub sender_deposit: Balance,
+ /// The maximum message size that could be put into the channel.
+ pub max_message_size: u32,
+ /// The maximum number of messages that can be pending in the channel at once.
+ pub max_capacity: u32,
+ /// The maximum total size of the messages that can be pending in the channel at once.
+ pub max_total_size: u32,
+}
+
+/// Metadata of an HRMP channel.
+#[derive(Encode, Decode)]
+#[cfg_attr(test, derive(Debug))]
+pub struct HrmpChannel {
+ /// The amount that the sender supplied as a deposit when opening this channel.
+ pub sender_deposit: Balance,
+ /// The amount that the recipient supplied as a deposit when accepting the channel open request.
+ pub recipient_deposit: Balance,
+ /// The maximum number of messages that can be pending in the channel at once.
+ pub max_capacity: u32,
+ /// The maximum total size of the messages that can be pending in the channel at once.
+ pub max_total_size: u32,
+ /// The maximum message size that could be put into the channel.
+ pub max_message_size: u32,
+ /// The current number of messages pending in the channel.
+ /// Invariant: should be less than or equal to `max_capacity`.
+ pub msg_count: u32,
+ /// The total size in bytes of all message payloads in the channel.
+ /// Invariant: should be less than or equal to `max_total_size`.
+ pub total_size: u32,
+ /// A head of the Message Queue Chain for this channel. Each link in this chain has a form:
+ /// `(prev_head, B, H(M))`, where
+ /// - `prev_head`: is the previous value of `mqc_head` or zero if none.
+ /// - `B`: is the [relay-chain] block number in which a message was appended.
+ /// - `H(M)`: is the hash of the message being appended.
+ /// This value is initialized to a special value that consists of all zeroes which indicates
+ /// that no messages were previously added.
+ pub mqc_head: Option,
+}
+
+/// An error returned by `check_hrmp_watermark` that indicates an acceptance criteria check
+/// didn't pass.
+pub enum HrmpWatermarkAcceptanceErr {
+ AdvancementRule {
+ new_watermark: BlockNumber,
+ last_watermark: BlockNumber,
+ },
+ AheadRelayParent {
+ new_watermark: BlockNumber,
+ relay_chain_parent_number: BlockNumber,
+ },
+ LandsOnBlockWithNoMessages {
+ new_watermark: BlockNumber,
+ },
+}
+
+/// An error returned by `check_outbound_hrmp` that indicates an acceptance criteria check
+/// didn't pass.
+pub enum OutboundHrmpAcceptanceErr {
+ MoreMessagesThanPermitted {
+ sent: u32,
+ permitted: u32,
+ },
+ NotSorted {
+ idx: u32,
+ },
+ NoSuchChannel {
+ idx: u32,
+ channel_id: HrmpChannelId,
+ },
+ MaxMessageSizeExceeded {
+ idx: u32,
+ msg_size: u32,
+ max_size: u32,
+ },
+ TotalSizeExceeded {
+ idx: u32,
+ total_size: u32,
+ limit: u32,
+ },
+ CapacityExceeded {
+ idx: u32,
+ count: u32,
+ limit: u32,
+ },
+}
+
+impl fmt::Debug for HrmpWatermarkAcceptanceErr
+where
+ BlockNumber: fmt::Debug,
+{
+ fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
+ use HrmpWatermarkAcceptanceErr::*;
+ match self {
+ AdvancementRule {
+ new_watermark,
+ last_watermark,
+ } => write!(
+ fmt,
+ "the HRMP watermark is not advanced relative to the last watermark ({:?} > {:?})",
+ new_watermark, last_watermark,
+ ),
+ AheadRelayParent {
+ new_watermark,
+ relay_chain_parent_number,
+ } => write!(
+ fmt,
+ "the HRMP watermark is ahead of the relay-parent ({:?} > {:?})",
+ new_watermark, relay_chain_parent_number
+ ),
+ LandsOnBlockWithNoMessages { new_watermark } => write!(
+ fmt,
+ "the HRMP watermark ({:?}) doesn't land on a block with messages received",
+ new_watermark
+ ),
+ }
+ }
+}
+
+impl fmt::Debug for OutboundHrmpAcceptanceErr {
+ fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
+ use OutboundHrmpAcceptanceErr::*;
+ match self {
+ MoreMessagesThanPermitted { sent, permitted } => write!(
+ fmt,
+ "more HRMP messages than permitted by config ({} > {})",
+ sent, permitted,
+ ),
+ NotSorted { idx } => write!(
+ fmt,
+ "the HRMP messages are not sorted (first unsorted is at index {})",
+ idx,
+ ),
+ NoSuchChannel { idx, channel_id } => write!(
+ fmt,
+ "the HRMP message at index {} is sent to a non-existent channel {:?}->{:?}",
+ idx, channel_id.sender, channel_id.recipient,
+ ),
+ MaxMessageSizeExceeded {
+ idx,
+ msg_size,
+ max_size,
+ } => write!(
+ fmt,
+ "the HRMP message at index {} exceeds the negotiated channel maximum message size ({} > {})",
+ idx,
msg_size, max_size,
+ ),
+ TotalSizeExceeded {
+ idx,
+ total_size,
+ limit,
+ } => write!(
+ fmt,
+ "sending the HRMP message at index {} would exceed the negotiated channel total size ({} > {})",
+ idx, total_size, limit,
+ ),
+ CapacityExceeded { idx, count, limit } => write!(
+ fmt,
+ "sending the HRMP message at index {} would exceed the negotiated channel capacity ({} > {})",
+ idx, count, limit,
+ ),
+ }
+ }
+}
+
+pub trait Trait: frame_system::Trait + configuration::Trait + paras::Trait + dmp::Trait {
+ type Origin: From
+ + From<::Origin>
+ + Into::Origin>>;
+}
+
+decl_storage! {
+ trait Store for Module as Hrmp {
+ /// Paras that are to be cleaned up at the end of the session.
+ /// The entries are sorted ascending by the para id.
+ OutgoingParas: Vec;
+
+
+ /// The set of pending HRMP open channel requests.
+ ///
+ /// The set is accompanied by a list for iteration.
+ ///
+ /// Invariant:
+ /// - There are no channels that exist in the list but not in the set and vice versa.
+ HrmpOpenChannelRequests: map hasher(twox_64_concat) HrmpChannelId => Option;
+ HrmpOpenChannelRequestsList: Vec;
+
+ /// This mapping tracks how many open channel requests are initiated by a given sender para.
+ /// Invariant: `HrmpOpenChannelRequests` should contain the same number of items with `(X, _)`
+ /// as the value of `HrmpOpenChannelRequestCount` for `X`.
+ HrmpOpenChannelRequestCount: map hasher(twox_64_concat) ParaId => u32;
+ /// This mapping tracks how many open channel requests were accepted by a given recipient para.
+ /// Invariant: `HrmpOpenChannelRequests` should contain the same number of items `(_, X)` with
+ /// `confirmed` set to true, as the value of `HrmpAcceptedChannelRequestCount` for `X`.
+ HrmpAcceptedChannelRequestCount: map hasher(twox_64_concat) ParaId => u32;
+
+ /// A set of pending HRMP close channel requests that are going to be closed during the session change.
+ /// Used for checking if a given channel is registered for closure.
+ ///
+ /// The set is accompanied by a list for iteration.
+ ///
+ /// Invariant:
+ /// - There are no channels that exist in the list but not in the set and vice versa.
+ HrmpCloseChannelRequests: map hasher(twox_64_concat) HrmpChannelId => Option<()>;
+ HrmpCloseChannelRequestsList: Vec;
+
+ /// The HRMP watermark associated with each para.
+ /// Invariant:
+ /// - each para `P` used here as a key should satisfy `Paras::is_valid_para(P)` within a session.
+ HrmpWatermarks: map hasher(twox_64_concat) ParaId => Option;
+ /// HRMP channel data associated with each para.
+ /// Invariant:
+ /// - each participant in the channel should satisfy `Paras::is_valid_para(P)` within a session.
+ HrmpChannels: map hasher(twox_64_concat) HrmpChannelId => Option;
+ /// Ingress/egress indexes allow finding all the senders and receivers given the opposite
+ /// side. I.e.
+ ///
+ /// (a) ingress index allows finding all the senders for a given recipient.
+ /// (b) egress index allows finding all the recipients for a given sender.
+ ///
+ /// Invariants:
+ /// - for each ingress index entry for `P` each item `I` in the index should be present in `HrmpChannels`
+ /// as `(I, P)`.
+ /// - for each egress index entry for `P` each item `E` in the index should be present in `HrmpChannels`
+ /// as `(P, E)`.
+ /// - there should be no other dangling channels in `HrmpChannels`.
+ /// - the vectors are sorted.
+ HrmpIngressChannelsIndex: map hasher(twox_64_concat) ParaId => Vec;
+ HrmpEgressChannelsIndex: map hasher(twox_64_concat) ParaId => Vec;
+ /// Storage for the messages for each channel.
+ /// Invariant: cannot be non-empty if the corresponding channel in `HrmpChannels` is `None`.
+ HrmpChannelContents: map hasher(twox_64_concat) HrmpChannelId => Vec>;
+ /// Maintains a mapping that can be used to answer the question:
+ /// What paras sent a message at the given block number for a given receiver.
+ /// Invariants:
+ /// - The inner `Vec` is never empty.
+ /// - The inner `Vec` cannot contain two identical `ParaId`s.
+ /// - The outer vector is sorted ascending by block number and cannot store two items with the same
+ /// block number.
+ HrmpChannelDigests: map hasher(twox_64_concat) ParaId => Vec<(T::BlockNumber, Vec)>;
+ }
+}
+
+decl_error! {
+ pub enum Error for Module {
+ /// The sender tried to open a channel to themselves.
+ OpenHrmpChannelToSelf,
+ /// The recipient is not a valid para.
+ OpenHrmpChannelInvalidRecipient,
+ /// The requested capacity is zero.
+ OpenHrmpChannelZeroCapacity,
+ /// The requested capacity exceeds the global limit.
+ OpenHrmpChannelCapacityExceedsLimit,
+ /// The requested maximum message size is 0.
+ OpenHrmpChannelZeroMessageSize,
+ /// The open request requested a message size that exceeds the global limit.
+ OpenHrmpChannelMessageSizeExceedsLimit,
+ /// The channel already exists.
+ OpenHrmpChannelAlreadyExists,
+ /// There is already a request to open the same channel.
+ OpenHrmpChannelAlreadyRequested,
+ /// The sender already has the maximum number of allowed outbound channels.
+ OpenHrmpChannelLimitExceeded,
+ /// The channel from the sender to the origin doesn't exist.
+ AcceptHrmpChannelDoesntExist,
+ /// The channel is already confirmed.
+ AcceptHrmpChannelAlreadyConfirmed,
+ /// The recipient already has the maximum number of allowed inbound channels.
+ AcceptHrmpChannelLimitExceeded,
+ /// The origin tries to close a channel where it is neither the sender nor the recipient.
+ CloseHrmpChannelUnauthorized,
+ /// The channel to be closed doesn't exist.
+ CloseHrmpChannelDoesntExist,
+ /// Closing the channel was already requested.
+ CloseHrmpChannelAlreadyUnderway,
+ }
+}
+
+decl_module! {
+ /// The HRMP module.
+ pub struct Module for enum Call where origin: ::Origin { + type Error = Error; + + #[weight = 0] + fn hrmp_init_open_channel( + origin, + recipient: ParaId, + proposed_max_capacity: u32, + proposed_max_message_size: u32, + ) -> DispatchResult { + let origin = ensure_parachain(::Origin::from(origin))?; + Self::init_open_channel( + origin, + recipient, + proposed_max_capacity, + proposed_max_message_size + )?; + Ok(()) + } + + #[weight = 0] + fn hrmp_accept_open_channel(origin, sender: ParaId) -> DispatchResult { + let origin = ensure_parachain(::Origin::from(origin))?; + Self::accept_open_channel(origin, sender)?; + Ok(()) + } + + #[weight = 0] + fn hrmp_close_channel(origin, channel_id: HrmpChannelId) -> DispatchResult { + let origin = ensure_parachain(::Origin::from(origin))?; + Self::close_channel(origin, channel_id)?; + Ok(()) + } + } +} + +/// Routines and getters related to HRMP. +impl Module { + /// Block initialization logic, called by initializer. + pub(crate) fn initializer_initialize(_now: T::BlockNumber) -> Weight { + 0 + } + + /// Block finalization logic, called by initializer. + pub(crate) fn initializer_finalize() {} + + /// Called by the initializer to note that a new session has started. + pub(crate) fn initializer_on_new_session( + notification: &initializer::SessionChangeNotification, + ) { + Self::perform_outgoing_para_cleanup(); + Self::process_hrmp_open_channel_requests(¬ification.prev_config); + Self::process_hrmp_close_channel_requests(); + } + + /// Iterate over all paras that were registered for offboarding and remove all the data + /// associated with them. + fn perform_outgoing_para_cleanup() { + let outgoing = OutgoingParas::take(); + for outgoing_para in outgoing { + Self::clean_hrmp_after_outgoing(outgoing_para); + } + } + + /// Schedule a para to be cleaned up at the start of the next session. 
+ pub(crate) fn schedule_para_cleanup(id: ParaId) { + OutgoingParas::mutate(|v| { + if let Err(i) = v.binary_search(&id) { + v.insert(i, id); + } + }); + } + + /// Remove all storage entries associated with the given para. + pub(super) fn clean_hrmp_after_outgoing(outgoing_para: ParaId) { + ::HrmpOpenChannelRequestCount::remove(&outgoing_para); + ::HrmpAcceptedChannelRequestCount::remove(&outgoing_para); + + // close all channels where the outgoing para acts as the recipient. + for sender in ::HrmpIngressChannelsIndex::take(&outgoing_para) { + Self::close_hrmp_channel(&HrmpChannelId { + sender, + recipient: outgoing_para.clone(), + }); + } + // close all channels where the outgoing para acts as the sender. + for recipient in ::HrmpEgressChannelsIndex::take(&outgoing_para) { + Self::close_hrmp_channel(&HrmpChannelId { + sender: outgoing_para.clone(), + recipient, + }); + } + } + + /// Iterate over all open channel requests and: + /// + /// - prune the stale requests + /// - enact the confirmed requests + pub(super) fn process_hrmp_open_channel_requests(config: &HostConfiguration) { + let mut open_req_channels = ::HrmpOpenChannelRequestsList::get(); + if open_req_channels.is_empty() { + return; + } + + // iterate the vector starting from the end making our way to the beginning. This way we + // can leverage `swap_remove` to efficiently remove an item during iteration. + let mut idx = open_req_channels.len(); + loop { + // bail if we've iterated over all items. 
+ if idx == 0 { + break; + } + + idx -= 1; + let channel_id = open_req_channels[idx].clone(); + let mut request = ::HrmpOpenChannelRequests::get(&channel_id).expect( + "can't be `None` due to the invariant that the list contains the same items as the set; qed", + ); + + if request.confirmed { + if >::is_valid_para(channel_id.sender) + && >::is_valid_para(channel_id.recipient) + { + ::HrmpChannels::insert( + &channel_id, + HrmpChannel { + sender_deposit: request.sender_deposit, + recipient_deposit: config.hrmp_recipient_deposit, + max_capacity: request.max_capacity, + max_total_size: request.max_total_size, + max_message_size: request.max_message_size, + msg_count: 0, + total_size: 0, + mqc_head: None, + }, + ); + + ::HrmpIngressChannelsIndex::mutate(&channel_id.recipient, |v| { + if let Err(i) = v.binary_search(&channel_id.sender) { + v.insert(i, channel_id.sender); + } + }); + ::HrmpEgressChannelsIndex::mutate(&channel_id.sender, |v| { + if let Err(i) = v.binary_search(&channel_id.recipient) { + v.insert(i, channel_id.recipient); + } + }); + } + + let new_open_channel_req_cnt = + ::HrmpOpenChannelRequestCount::get(&channel_id.sender) + .saturating_sub(1); + if new_open_channel_req_cnt != 0 { + ::HrmpOpenChannelRequestCount::insert( + &channel_id.sender, + new_open_channel_req_cnt, + ); + } else { + ::HrmpOpenChannelRequestCount::remove(&channel_id.sender); + } + + let new_accepted_channel_req_cnt = + ::HrmpAcceptedChannelRequestCount::get(&channel_id.recipient) + .saturating_sub(1); + if new_accepted_channel_req_cnt != 0 { + ::HrmpAcceptedChannelRequestCount::insert( + &channel_id.recipient, + new_accepted_channel_req_cnt, + ); + } else { + ::HrmpAcceptedChannelRequestCount::remove(&channel_id.recipient); + } + + let _ = open_req_channels.swap_remove(idx); + ::HrmpOpenChannelRequests::remove(&channel_id); + } else { + request.age += 1; + if request.age == config.hrmp_open_request_ttl { + // got stale + + ::HrmpOpenChannelRequestCount::mutate(&channel_id.sender, 
|v| {
+ *v -= 1;
+ });
+
+ // TODO: return deposit https://github.com/paritytech/polkadot/issues/1907
+
+ let _ = open_req_channels.swap_remove(idx);
+ ::HrmpOpenChannelRequests::remove(&channel_id);
+ }
+ }
+ }
+
+ ::HrmpOpenChannelRequestsList::put(open_req_channels);
+ }
+
+ /// Iterate over all close channel requests unconditionally closing the channels.
+ pub(super) fn process_hrmp_close_channel_requests() {
+ let close_reqs = ::HrmpCloseChannelRequestsList::take();
+ for condemned_ch_id in close_reqs {
+ ::HrmpCloseChannelRequests::remove(&condemned_ch_id);
+ Self::close_hrmp_channel(&condemned_ch_id);
+
+ // clean up the indexes.
+ ::HrmpEgressChannelsIndex::mutate(&condemned_ch_id.sender, |v| {
+ if let Ok(i) = v.binary_search(&condemned_ch_id.recipient) {
+ v.remove(i);
+ }
+ });
+ ::HrmpIngressChannelsIndex::mutate(&condemned_ch_id.recipient, |v| {
+ if let Ok(i) = v.binary_search(&condemned_ch_id.sender) {
+ v.remove(i);
+ }
+ });
+ }
+ }
+
+ /// Close and remove the designated HRMP channel.
+ ///
+ /// This includes returning the deposits. However, it doesn't include updating the ingress/egress
+ /// indices.
+ pub(super) fn close_hrmp_channel(channel_id: &HrmpChannelId) {
+ // TODO: return deposit https://github.com/paritytech/polkadot/issues/1907
+
+ ::HrmpChannels::remove(channel_id);
+ ::HrmpChannelContents::remove(channel_id);
+ }
+
+ /// Check that the candidate of the given recipient controls the HRMP watermark properly.
+ pub(crate) fn check_hrmp_watermark(
+ recipient: ParaId,
+ relay_chain_parent_number: T::BlockNumber,
+ new_hrmp_watermark: T::BlockNumber,
+ ) -> Result<(), HrmpWatermarkAcceptanceErr> {
+ // First, check where the watermark CANNOT legally land.
+ //
+ // (a) For ensuring that messages are eventually delivered, a rule requires that each
+ // parablock's new watermark should be greater than the last one.
+ // + // (b) However, a parachain cannot read into "the future", therefore the watermark should + // not be greater than the relay-chain context block which the parablock refers to. + if let Some(last_watermark) = ::HrmpWatermarks::get(&recipient) { + if new_hrmp_watermark <= last_watermark { + return Err(HrmpWatermarkAcceptanceErr::AdvancementRule { + new_watermark: new_hrmp_watermark, + last_watermark, + }); + } + } + if new_hrmp_watermark > relay_chain_parent_number { + return Err(HrmpWatermarkAcceptanceErr::AheadRelayParent { + new_watermark: new_hrmp_watermark, + relay_chain_parent_number, + }); + } + + // Second, check where the watermark CAN land. It's one of the following: + // + // (a) The relay parent block number. + // (b) A relay-chain block in which this para received at least one message. + if new_hrmp_watermark == relay_chain_parent_number { + Ok(()) + } else { + let digest = ::HrmpChannelDigests::get(&recipient); + if !digest + .binary_search_by_key(&new_hrmp_watermark, |(block_no, _)| *block_no) + .is_ok() + { + return Err(HrmpWatermarkAcceptanceErr::LandsOnBlockWithNoMessages { + new_watermark: new_hrmp_watermark, + }); + } + Ok(()) + } + } + + pub(crate) fn check_outbound_hrmp( + config: &HostConfiguration, + sender: ParaId, + out_hrmp_msgs: &[OutboundHrmpMessage], + ) -> Result<(), OutboundHrmpAcceptanceErr> { + if out_hrmp_msgs.len() as u32 > config.hrmp_max_message_num_per_candidate { + return Err(OutboundHrmpAcceptanceErr::MoreMessagesThanPermitted { + sent: out_hrmp_msgs.len() as u32, + permitted: config.hrmp_max_message_num_per_candidate, + }); + } + + let mut last_recipient = None::; + + for (idx, out_msg) in out_hrmp_msgs + .iter() + .enumerate() + .map(|(idx, out_msg)| (idx as u32, out_msg)) + { + match last_recipient { + // the messages must be sorted in ascending order and there must be no two messages sent + // to the same recipient. Thus we can check that every recipient is strictly greater than + // the previous one. 
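The watermark rules spelled out in the comments of `check_hrmp_watermark` above can be sketched in isolation. This is a simplified model of the acceptance logic, not the pallet code: it assumes plain `u32` block numbers and a sorted slice of message-bearing block numbers standing in for `HrmpChannelDigests`.

```rust
// Simplified model of the HRMP watermark acceptance check:
// (a) the watermark must strictly advance relative to the last one,
// (b) it cannot point past the relay-parent the parablock was built on,
// and it must land either on the relay-parent or on a block in which the
// para received at least one message.
fn check_watermark(
    new_wm: u32,
    last_wm: Option<u32>,
    relay_parent: u32,
    digest_blocks: &[u32], // sorted block numbers that carry messages
) -> Result<(), &'static str> {
    if let Some(last) = last_wm {
        if new_wm <= last {
            return Err("advancement rule violated");
        }
    }
    if new_wm > relay_parent {
        return Err("ahead of relay parent");
    }
    if new_wm == relay_parent || digest_blocks.binary_search(&new_wm).is_ok() {
        Ok(())
    } else {
        Err("lands on block with no messages")
    }
}
```

Note that, as in the pallet, the "lands on a message-bearing block" rule only applies when the watermark does not land exactly on the relay-parent.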
+ Some(last_recipient) if out_msg.recipient <= last_recipient => { + return Err(OutboundHrmpAcceptanceErr::NotSorted { idx }); + } + _ => last_recipient = Some(out_msg.recipient), + } + + let channel_id = HrmpChannelId { + sender, + recipient: out_msg.recipient, + }; + + let channel = match ::HrmpChannels::get(&channel_id) { + Some(channel) => channel, + None => { + return Err(OutboundHrmpAcceptanceErr::NoSuchChannel { channel_id, idx }); + } + }; + + let msg_size = out_msg.data.len() as u32; + if msg_size > channel.max_message_size { + return Err(OutboundHrmpAcceptanceErr::MaxMessageSizeExceeded { + idx, + msg_size, + max_size: channel.max_message_size, + }); + } + + let new_total_size = channel.total_size + out_msg.data.len() as u32; + if new_total_size > channel.max_total_size { + return Err(OutboundHrmpAcceptanceErr::TotalSizeExceeded { + idx, + total_size: new_total_size, + limit: channel.max_total_size, + }); + } + + let new_msg_count = channel.msg_count + 1; + if new_msg_count > channel.max_capacity { + return Err(OutboundHrmpAcceptanceErr::CapacityExceeded { + idx, + count: new_msg_count, + limit: channel.max_capacity, + }); + } + } + + Ok(()) + } + + pub(crate) fn prune_hrmp(recipient: ParaId, new_hrmp_watermark: T::BlockNumber) -> Weight { + let mut weight = 0; + + // sift through the incoming messages digest to collect the paras that sent at least one + // message to this parachain between the old and new watermarks. + let senders = ::HrmpChannelDigests::mutate(&recipient, |digest| { + let mut senders = BTreeSet::new(); + let mut leftover = Vec::with_capacity(digest.len()); + for (block_no, paras_sent_msg) in mem::replace(digest, Vec::new()) { + if block_no <= new_hrmp_watermark { + senders.extend(paras_sent_msg); + } else { + leftover.push((block_no, paras_sent_msg)); + } + } + *digest = leftover; + senders + }); + weight += T::DbWeight::get().reads_writes(1, 1); + + // having all senders we can trivially find out the channels which we need to prune. 
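The per-channel limits that `check_outbound_hrmp` enforces for each outbound message can likewise be modelled standalone. This is an illustrative sketch under simplified types, not the pallet code: a message must fit the negotiated maximum message size, and accepting it must not push the channel past its total-size or capacity limits.

```rust
// Negotiated per-channel limits, mirroring the relevant `HrmpChannel` fields.
struct ChannelLimits {
    max_message_size: u32,
    max_total_size: u32,
    max_capacity: u32,
}

// Check a single outbound message against a channel's current fill level.
fn check_msg(
    ch: &ChannelLimits,
    msg_count: u32,   // messages currently pending in the channel
    total_size: u32,  // bytes currently pending in the channel
    msg_size: u32,    // size of the message being sent
) -> Result<(), &'static str> {
    if msg_size > ch.max_message_size {
        return Err("max message size exceeded");
    }
    if total_size + msg_size > ch.max_total_size {
        return Err("total size exceeded");
    }
    if msg_count + 1 > ch.max_capacity {
        return Err("capacity exceeded");
    }
    Ok(())
}
```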
+ let channels_to_prune = senders
+ .into_iter()
+ .map(|sender| HrmpChannelId { sender, recipient });
+ for channel_id in channels_to_prune {
+ // prune each channel up to the new watermark, keeping track of how many messages
+ // we removed and their total byte size.
+ let (mut pruned_cnt, mut pruned_size) = (0, 0);
+
+ let contents = ::HrmpChannelContents::get(&channel_id);
+ let mut leftover = Vec::with_capacity(contents.len());
+ for msg in contents {
+ if msg.sent_at <= new_hrmp_watermark {
+ pruned_cnt += 1;
+ pruned_size += msg.data.len();
+ } else {
+ leftover.push(msg);
+ }
+ }
+ if !leftover.is_empty() {
+ ::HrmpChannelContents::insert(&channel_id, leftover);
+ } else {
+ ::HrmpChannelContents::remove(&channel_id);
+ }
+
+ // update the channel metadata.
+ ::HrmpChannels::mutate(&channel_id, |channel| {
+ if let Some(ref mut channel) = channel {
+ channel.msg_count -= pruned_cnt as u32;
+ channel.total_size -= pruned_size as u32;
+ }
+ });
+
+ weight += T::DbWeight::get().reads_writes(2, 2);
+ }
+
+ ::HrmpWatermarks::insert(&recipient, new_hrmp_watermark);
+ weight += T::DbWeight::get().reads_writes(0, 1);
+
+ weight
+ }
+
+ /// Process the outbound HRMP messages by putting them into the appropriate recipient queues.
+ ///
+ /// Returns the amount of weight consumed.
+ pub(crate) fn queue_outbound_hrmp(
+ sender: ParaId,
+ out_hrmp_msgs: Vec>,
+ ) -> Weight {
+ let mut weight = 0;
+ let now = >::block_number();
+
+ for out_msg in out_hrmp_msgs {
+ let channel_id = HrmpChannelId {
+ sender,
+ recipient: out_msg.recipient,
+ };
+
+ let mut channel = match ::HrmpChannels::get(&channel_id) {
+ Some(channel) => channel,
+ None => {
+ // apparently, since the acceptance of this candidate the recipient was
+ // offboarded and the channel no longer exists.
+ continue;
+ }
+ };
+
+ let inbound = InboundHrmpMessage {
+ sent_at: now,
+ data: out_msg.data,
+ };
+
+ // book keeping
+ channel.msg_count += 1;
+ channel.total_size += inbound.data.len() as u32;
+
+ // compute the new MQC head of the channel
+ let prev_head = channel.mqc_head.clone().unwrap_or(Default::default());
+ let new_head = BlakeTwo256::hash_of(&(
+ prev_head,
+ inbound.sent_at,
+ T::Hashing::hash_of(&inbound.data),
+ ));
+ channel.mqc_head = Some(new_head);
+
+ ::HrmpChannels::insert(&channel_id, channel);
+ ::HrmpChannelContents::append(&channel_id, inbound);
+
+ // The digests are sorted ascending by block number. Assuming absence of
+ // contextual execution, there are only two possible scenarios here:
+
+ //
+ // (a) It's the first time anybody sends a message to this recipient within this block.
+ // In this case, the digest vector would be empty or the block number of the latest
+ // entry is smaller than the current.
+ //
+ // (b) Somebody has already sent a message within the current block. That means that
+ // the block number of the latest entry is equal to the current.
+ //
+ // Note that having the latest entry greater than the current block number is a logical
+ // error.
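The Message Queue Chain update performed above, `new_head = H(prev_head, sent_at, H(data))`, can be demonstrated in a self-contained sketch. This is not the pallet code: a 64-bit `DefaultHasher` stands in for `BlakeTwo256` purely for illustration, and a zero head encodes "no messages yet", matching the storage invariant.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash any hashable value to a u64 (stand-in for a real 256-bit hash).
fn hash64<T: Hash>(t: &T) -> u64 {
    let mut h = DefaultHasher::new();
    t.hash(&mut h);
    h.finish()
}

// Extend the MQC: each appended message links in the previous head, the block
// number it was sent at, and the hash of its payload.
fn mqc_append(prev_head: Option<u64>, sent_at: u32, data: &[u8]) -> u64 {
    let prev = prev_head.unwrap_or(0); // zero head means "no messages yet"
    hash64(&(prev, sent_at, hash64(&data)))
}
```

Because every link commits to its predecessor, a recipient holding only the latest head can verify an entire claimed message history by replaying the appends.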
+ let mut recipient_digest = + ::HrmpChannelDigests::get(&channel_id.recipient); + if let Some(cur_block_digest) = recipient_digest + .last_mut() + .filter(|(block_no, _)| *block_no == now) + .map(|(_, ref mut d)| d) + { + cur_block_digest.push(sender); + } else { + recipient_digest.push((now, vec![sender])); + } + ::HrmpChannelDigests::insert(&channel_id.recipient, recipient_digest); + + weight += T::DbWeight::get().reads_writes(2, 2); + } + + weight + } + + pub(super) fn init_open_channel( + origin: ParaId, + recipient: ParaId, + proposed_max_capacity: u32, + proposed_max_message_size: u32, + ) -> Result<(), Error> { + ensure!(origin != recipient, Error::::OpenHrmpChannelToSelf); + ensure!( + >::is_valid_para(recipient), + Error::::OpenHrmpChannelInvalidRecipient, + ); + + let config = >::config(); + ensure!( + proposed_max_capacity > 0, + Error::::OpenHrmpChannelZeroCapacity, + ); + ensure!( + proposed_max_capacity <= config.hrmp_channel_max_capacity, + Error::::OpenHrmpChannelCapacityExceedsLimit, + ); + ensure!( + proposed_max_message_size > 0, + Error::::OpenHrmpChannelZeroMessageSize, + ); + ensure!( + proposed_max_message_size <= config.hrmp_channel_max_message_size, + Error::::OpenHrmpChannelMessageSizeExceedsLimit, + ); + + let channel_id = HrmpChannelId { + sender: origin, + recipient, + }; + ensure!( + ::HrmpOpenChannelRequests::get(&channel_id).is_none(), + Error::::OpenHrmpChannelAlreadyExists, + ); + ensure!( + ::HrmpChannels::get(&channel_id).is_none(), + Error::::OpenHrmpChannelAlreadyRequested, + ); + + let egress_cnt = + ::HrmpEgressChannelsIndex::decode_len(&origin).unwrap_or(0) as u32; + let open_req_cnt = ::HrmpOpenChannelRequestCount::get(&origin); + let channel_num_limit = if >::is_parathread(origin) { + config.hrmp_max_parathread_outbound_channels + } else { + config.hrmp_max_parachain_outbound_channels + }; + ensure!( + egress_cnt + open_req_cnt < channel_num_limit, + Error::::OpenHrmpChannelLimitExceeded, + ); + + // TODO: Deposit 
https://github.com/paritytech/polkadot/issues/1907
+
+ ::HrmpOpenChannelRequestCount::insert(&origin, open_req_cnt + 1);
+ ::HrmpOpenChannelRequests::insert(
+ &channel_id,
+ HrmpOpenChannelRequest {
+ confirmed: false,
+ age: 0,
+ sender_deposit: config.hrmp_sender_deposit,
+ max_capacity: proposed_max_capacity,
+ max_message_size: proposed_max_message_size,
+ max_total_size: config.hrmp_channel_max_total_size,
+ },
+ );
+ ::HrmpOpenChannelRequestsList::append(channel_id);
+
+ let notification_bytes = {
+ use xcm::v0::Xcm;
+ use codec::Encode as _;
+
+ Xcm::HrmpNewChannelOpenRequest {
+ sender: u32::from(origin),
+ max_capacity: proposed_max_capacity,
+ max_message_size: proposed_max_message_size,
+ }
+ .encode()
+ };
+ if let Err(dmp::QueueDownwardMessageError::ExceedsMaxMessageSize) =
+ >::queue_downward_message(&config, recipient, notification_bytes)
+ {
+ // this should never happen unless the max downward message size is configured to a
+ // jokingly small number.
+ debug_assert!(false);
+ }
+
+ Ok(())
+ }
+
+ pub(super) fn accept_open_channel(origin: ParaId, sender: ParaId) -> Result<(), Error> {
+ let channel_id = HrmpChannelId {
+ sender,
+ recipient: origin,
+ };
+ let mut channel_req = ::HrmpOpenChannelRequests::get(&channel_id)
+ .ok_or(Error::::AcceptHrmpChannelDoesntExist)?;
+ ensure!(
+ !channel_req.confirmed,
+ Error::::AcceptHrmpChannelAlreadyConfirmed,
+ );
+
+ // check if by accepting this open channel request, this parachain would exceed the
+ // number of inbound channels.
+ let config = >::config();
+ let channel_num_limit = if >::is_parathread(origin) {
+ config.hrmp_max_parathread_inbound_channels
+ } else {
+ config.hrmp_max_parachain_inbound_channels
+ };
+ let ingress_cnt =
+ ::HrmpIngressChannelsIndex::decode_len(&origin).unwrap_or(0) as u32;
+ let accepted_cnt = ::HrmpAcceptedChannelRequestCount::get(&origin);
+ ensure!(
+ ingress_cnt + accepted_cnt < channel_num_limit,
+ Error::::AcceptHrmpChannelLimitExceeded,
+ );
+
+ // TODO: Deposit https://github.com/paritytech/polkadot/issues/1907
+
+ // persist the updated open channel request and then increment the number of accepted
+ // channels.
+ channel_req.confirmed = true;
+ ::HrmpOpenChannelRequests::insert(&channel_id, channel_req);
+ ::HrmpAcceptedChannelRequestCount::insert(&origin, accepted_cnt + 1);
+
+ let notification_bytes = {
+ use codec::Encode as _;
+ use xcm::v0::Xcm;
+
+ Xcm::HrmpChannelAccepted {
+ recipient: u32::from(origin),
+ }
+ .encode()
+ };
+ if let Err(dmp::QueueDownwardMessageError::ExceedsMaxMessageSize) =
+ >::queue_downward_message(&config, sender, notification_bytes)
+ {
+ // this should never happen unless the max downward message size is configured to a
+ // jokingly small number.
+ debug_assert!(false);
+ }
+
+ Ok(())
+ }
+
+ pub(super) fn close_channel(origin: ParaId, channel_id: HrmpChannelId) -> Result<(), Error> {
+ // check if the origin is allowed to close the channel.
+ ensure!(
+ origin == channel_id.sender || origin == channel_id.recipient,
+ Error::::CloseHrmpChannelUnauthorized,
+ );
+
+ // check that the channel requested to be closed actually exists.
+ ensure!(
+ ::HrmpChannels::get(&channel_id).is_some(),
+ Error::::CloseHrmpChannelDoesntExist,
+ );
+
+ // check that there is no outstanding close request for this channel
+ ensure!(
+ ::HrmpCloseChannelRequests::get(&channel_id).is_none(),
+ Error::::CloseHrmpChannelAlreadyUnderway,
+ );
+
+ ::HrmpCloseChannelRequests::insert(&channel_id, ());
+ ::HrmpCloseChannelRequestsList::append(channel_id.clone());
+
+ let config = >::config();
+ let notification_bytes = {
+ use codec::Encode as _;
+ use xcm::v0::Xcm;
+
+ Xcm::HrmpChannelClosing {
+ initiator: u32::from(origin),
+ sender: u32::from(channel_id.sender),
+ recipient: u32::from(channel_id.recipient),
+ }
+ .encode()
+ };
+ let opposite_party = if origin == channel_id.sender {
+ channel_id.recipient
+ } else {
+ channel_id.sender
+ };
+ if let Err(dmp::QueueDownwardMessageError::ExceedsMaxMessageSize) =
+ >::queue_downward_message(&config, opposite_party, notification_bytes)
+ {
+ // this should never happen unless the max downward message size is configured to a
+ // jokingly small number.
+ debug_assert!(false);
+ }
+
+ Ok(())
+ }
+
+ /// Returns the list of MQC heads for the inbound channels of the given recipient para paired
+ /// with the sender para ids. This vector is sorted ascending by the para id and doesn't contain
+ /// multiple entries with the same sender.
+ pub(crate) fn hrmp_mqc_heads(recipient: ParaId) -> Vec<(ParaId, Hash)> {
+ let sender_set = ::HrmpIngressChannelsIndex::get(&recipient);
+
+ // The ingress channels vector is sorted, thus `mqc_heads` is sorted as well.
+ let mut mqc_heads = Vec::with_capacity(sender_set.len());
+ for sender in sender_set {
+ let channel_metadata =
+ ::HrmpChannels::get(&HrmpChannelId { sender, recipient });
+ let mqc_head = channel_metadata
+ .and_then(|metadata| metadata.mqc_head)
+ .unwrap_or(Hash::default());
+ mqc_heads.push((sender, mqc_head));
+ }
+
+ mqc_heads
+ }
+
+ /// Returns contents of all channels addressed to the given recipient.
Channels that have no + /// messages in them are also included. + pub(crate) fn inbound_hrmp_channels_contents( + recipient: ParaId, + ) -> BTreeMap>> { + let sender_set = ::HrmpIngressChannelsIndex::get(&recipient); + + let mut inbound_hrmp_channels_contents = BTreeMap::new(); + for sender in sender_set { + let channel_contents = + ::HrmpChannelContents::get(&HrmpChannelId { sender, recipient }); + inbound_hrmp_channels_contents.insert(sender, channel_contents); + } + + inbound_hrmp_channels_contents + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::mock::{ + new_test_ext, Configuration, Paras, Hrmp, System, GenesisConfig as MockGenesisConfig, + }; + use primitives::v1::BlockNumber; + use std::collections::{BTreeMap, HashSet}; + + fn run_to_block(to: BlockNumber, new_session: Option>) { + use frame_support::traits::{OnFinalize as _, OnInitialize as _}; + + while System::block_number() < to { + let b = System::block_number(); + + // NOTE: this is in reverse initialization order. + Hrmp::initializer_finalize(); + Paras::initializer_finalize(); + + System::on_finalize(b); + + System::on_initialize(b + 1); + System::set_block_number(b + 1); + + if new_session.as_ref().map_or(false, |v| v.contains(&(b + 1))) { + // NOTE: this is in initialization order. + Paras::initializer_on_new_session(&Default::default()); + Hrmp::initializer_on_new_session(&Default::default()); + } + + // NOTE: this is in initialization order. 
+ Paras::initializer_initialize(b + 1); + Hrmp::initializer_initialize(b + 1); + } + } + + struct GenesisConfigBuilder { + hrmp_channel_max_capacity: u32, + hrmp_channel_max_message_size: u32, + hrmp_max_parathread_outbound_channels: u32, + hrmp_max_parachain_outbound_channels: u32, + hrmp_max_parathread_inbound_channels: u32, + hrmp_max_parachain_inbound_channels: u32, + hrmp_max_message_num_per_candidate: u32, + hrmp_channel_max_total_size: u32, + } + + impl Default for GenesisConfigBuilder { + fn default() -> Self { + Self { + hrmp_channel_max_capacity: 2, + hrmp_channel_max_message_size: 8, + hrmp_max_parathread_outbound_channels: 1, + hrmp_max_parachain_outbound_channels: 2, + hrmp_max_parathread_inbound_channels: 1, + hrmp_max_parachain_inbound_channels: 2, + hrmp_max_message_num_per_candidate: 2, + hrmp_channel_max_total_size: 16, + } + } + } + + impl GenesisConfigBuilder { + fn build(self) -> crate::mock::GenesisConfig { + let mut genesis = default_genesis_config(); + let config = &mut genesis.configuration.config; + config.hrmp_channel_max_capacity = self.hrmp_channel_max_capacity; + config.hrmp_channel_max_message_size = self.hrmp_channel_max_message_size; + config.hrmp_max_parathread_outbound_channels = + self.hrmp_max_parathread_outbound_channels; + config.hrmp_max_parachain_outbound_channels = self.hrmp_max_parachain_outbound_channels; + config.hrmp_max_parathread_inbound_channels = self.hrmp_max_parathread_inbound_channels; + config.hrmp_max_parachain_inbound_channels = self.hrmp_max_parachain_inbound_channels; + config.hrmp_max_message_num_per_candidate = self.hrmp_max_message_num_per_candidate; + config.hrmp_channel_max_total_size = self.hrmp_channel_max_total_size; + genesis + } + } + + fn default_genesis_config() -> MockGenesisConfig { + MockGenesisConfig { + configuration: crate::configuration::GenesisConfig { + config: crate::configuration::HostConfiguration { + max_downward_message_size: 1024, + ..Default::default() + }, + }, + 
..Default::default() + } + } + + fn register_parachain(id: ParaId) { + Paras::schedule_para_initialize( + id, + crate::paras::ParaGenesisArgs { + parachain: true, + genesis_head: vec![1].into(), + validation_code: vec![1].into(), + }, + ); + } + + fn deregister_parachain(id: ParaId) { + Paras::schedule_para_cleanup(id); + } + + fn channel_exists(sender: ParaId, recipient: ParaId) -> bool { + ::HrmpChannels::get(&HrmpChannelId { sender, recipient }).is_some() + } + + fn assert_storage_consistency_exhaustive() { + use frame_support::IterableStorageMap; + + assert_eq!( + ::HrmpOpenChannelRequests::iter() + .map(|(k, _)| k) + .collect::>(), + ::HrmpOpenChannelRequestsList::get() + .into_iter() + .collect::>(), + ); + + // verify that the set of keys in `HrmpOpenChannelRequestCount` corresponds to the set + // of _senders_ in `HrmpOpenChannelRequests`. + // + // having ensured that, we can go ahead and go over all counts and verify that they match. + assert_eq!( + ::HrmpOpenChannelRequestCount::iter() + .map(|(k, _)| k) + .collect::>(), + ::HrmpOpenChannelRequests::iter() + .map(|(k, _)| k.sender) + .collect::>(), + ); + for (open_channel_initiator, expected_num) in + ::HrmpOpenChannelRequestCount::iter() + { + let actual_num = ::HrmpOpenChannelRequests::iter() + .filter(|(ch, _)| ch.sender == open_channel_initiator) + .count() as u32; + assert_eq!(expected_num, actual_num); + } + + // The same as above, but for accepted channel request count. Note that we are interested + // only in confirmed open requests. 
+        assert_eq!(
+            <Hrmp as Store>::HrmpAcceptedChannelRequestCount::iter()
+                .map(|(k, _)| k)
+                .collect::<HashSet<_>>(),
+            <Hrmp as Store>::HrmpOpenChannelRequests::iter()
+                .filter(|(_, v)| v.confirmed)
+                .map(|(k, _)| k.recipient)
+                .collect::<HashSet<_>>(),
+        );
+        for (channel_recipient, expected_num) in
+            <Hrmp as Store>::HrmpAcceptedChannelRequestCount::iter()
+        {
+            let actual_num = <Hrmp as Store>::HrmpOpenChannelRequests::iter()
+                .filter(|(ch, v)| ch.recipient == channel_recipient && v.confirmed)
+                .count() as u32;
+            assert_eq!(expected_num, actual_num);
+        }
+
+        assert_eq!(
+            <Hrmp as Store>::HrmpCloseChannelRequests::iter()
+                .map(|(k, _)| k)
+                .collect::<HashSet<_>>(),
+            <Hrmp as Store>::HrmpCloseChannelRequestsList::get()
+                .into_iter()
+                .collect::<HashSet<_>>(),
+        );
+
+        // An HRMP watermark can be None for an onboarded parachain. However, an offboarded
+        // parachain cannot have an HRMP watermark: it should've been cleaned up.
+        assert_contains_only_onboarded(
+            <Hrmp as Store>::HrmpWatermarks::iter().map(|(k, _)| k),
+            "HRMP watermarks should contain only onboarded paras",
+        );
+
+        // An entry in `HrmpChannels` indicates that the channel is open. Only open channels can
+        // have contents.
+        for (non_empty_channel, contents) in <Hrmp as Store>::HrmpChannelContents::iter() {
+            assert!(<Hrmp as Store>::HrmpChannels::contains_key(
+                &non_empty_channel
+            ));
+
+            // pedantic check: there should be no empty vectors in storage; those should be modeled
+            // by a removed kv pair.
+            assert!(!contents.is_empty());
+        }
+
+        // Senders and recipients must be onboarded. Otherwise, all channels associated with them
+        // are removed.
+        assert_contains_only_onboarded(
+            <Hrmp as Store>::HrmpChannels::iter().flat_map(|(k, _)| vec![k.sender, k.recipient]),
+            "senders and recipients in all channels should be onboarded",
+        );
+
+        // Check the docs for `HrmpIngressChannelsIndex` and `HrmpEgressChannelsIndex` in decl
+        // storage to get an idea of what the channel mapping indexes look like.
+        //
+        // Here, from the indexes
+        //
+        //      ingress         egress
+        //
+        //      a -> [x, y]     x -> [a, b]
+        //      b -> [x, z]     y -> [a]
+        //                      z -> [b]
+        //
+        // we derive a list of channels they represent.
+        //
+        //      (a, x)          (a, x)
+        //      (a, y)          (a, y)
+        //      (b, x)          (b, x)
+        //      (b, z)          (b, z)
+        //
+        // and then we compare that to the channel list in `HrmpChannels`.
+        let channel_set_derived_from_ingress = <Hrmp as Store>::HrmpIngressChannelsIndex::iter()
+            .flat_map(|(p, v)| v.into_iter().map(|i| (i, p)).collect::<Vec<_>>())
+            .collect::<HashSet<_>>();
+        let channel_set_derived_from_egress = <Hrmp as Store>::HrmpEgressChannelsIndex::iter()
+            .flat_map(|(p, v)| v.into_iter().map(|e| (p, e)).collect::<Vec<_>>())
+            .collect::<HashSet<_>>();
+        let channel_set_ground_truth = <Hrmp as Store>::HrmpChannels::iter()
+            .map(|(k, _)| (k.sender, k.recipient))
+            .collect::<HashSet<_>>();
+        assert_eq!(
+            channel_set_derived_from_ingress,
+            channel_set_derived_from_egress
+        );
+        assert_eq!(channel_set_derived_from_egress, channel_set_ground_truth);
+
+        <Hrmp as Store>::HrmpIngressChannelsIndex::iter()
+            .map(|(_, v)| v)
+            .for_each(|v| assert_is_sorted(&v, "HrmpIngressChannelsIndex"));
+        <Hrmp as Store>::HrmpEgressChannelsIndex::iter()
+            .map(|(_, v)| v)
+            .for_each(|v| assert_is_sorted(&v, "HrmpEgressChannelsIndex"));
+
+        assert_contains_only_onboarded(
+            <Hrmp as Store>::HrmpChannelDigests::iter().map(|(k, _)| k),
+            "HRMP channel digests should contain only onboarded paras",
+        );
+        for (_digest_for_para, digest) in <Hrmp as Store>::HrmpChannelDigests::iter() {
+            // Assert that items are in **strictly** ascending order. The strictness also implies
+            // there are no duplicates.
+            assert!(digest.windows(2).all(|xs| xs[0].0 < xs[1].0));
+
+            for (_, mut senders) in digest {
+                assert!(!senders.is_empty());
+
+                // check for duplicates. For that we sort the vector, then perform deduplication.
+                // if the vector stayed the same, there are no duplicates.
+ senders.sort(); + let orig_senders = senders.clone(); + senders.dedup(); + assert_eq!( + orig_senders, senders, + "duplicates removed implies existence of duplicates" + ); + } + } + + fn assert_contains_only_onboarded(iter: impl Iterator, cause: &str) { + for para in iter { + assert!( + Paras::is_valid_para(para), + "{}: {} para is offboarded", + cause, + para + ); + } + } + } + + fn assert_is_sorted(slice: &[T], id: &str) { + assert!( + slice.windows(2).all(|xs| xs[0] <= xs[1]), + "{} supposed to be sorted", + id + ); + } + + #[test] + fn empty_state_consistent_state() { + new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| { + assert_storage_consistency_exhaustive(); + }); + } + + #[test] + fn open_channel_works() { + let para_a = 1.into(); + let para_b = 3.into(); + + new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| { + // We need both A & B to be registered and alive parachains. + register_parachain(para_a); + register_parachain(para_b); + + run_to_block(5, Some(vec![5])); + Hrmp::init_open_channel(para_a, para_b, 2, 8).unwrap(); + assert_storage_consistency_exhaustive(); + + Hrmp::accept_open_channel(para_b, para_a).unwrap(); + assert_storage_consistency_exhaustive(); + + // Advance to a block 6, but without session change. That means that the channel has + // not been created yet. + run_to_block(6, None); + assert!(!channel_exists(para_a, para_b)); + assert_storage_consistency_exhaustive(); + + // Now let the session change happen and thus open the channel. 
+ run_to_block(8, Some(vec![8])); + assert!(channel_exists(para_a, para_b)); + }); + } + + #[test] + fn close_channel_works() { + let para_a = 5.into(); + let para_b = 2.into(); + + new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| { + register_parachain(para_a); + register_parachain(para_b); + + run_to_block(5, Some(vec![5])); + Hrmp::init_open_channel(para_a, para_b, 2, 8).unwrap(); + Hrmp::accept_open_channel(para_b, para_a).unwrap(); + + run_to_block(6, Some(vec![6])); + assert!(channel_exists(para_a, para_b)); + + // Close the channel. The effect is not immediate, but rather deferred to the next + // session change. + Hrmp::close_channel( + para_b, + HrmpChannelId { + sender: para_a, + recipient: para_b, + }, + ) + .unwrap(); + assert!(channel_exists(para_a, para_b)); + assert_storage_consistency_exhaustive(); + + // After the session change the channel should be closed. + run_to_block(8, Some(vec![8])); + assert!(!channel_exists(para_a, para_b)); + assert_storage_consistency_exhaustive(); + }); + } + + #[test] + fn send_recv_messages() { + let para_a = 32.into(); + let para_b = 64.into(); + + let mut genesis = GenesisConfigBuilder::default(); + genesis.hrmp_channel_max_message_size = 20; + genesis.hrmp_channel_max_total_size = 20; + new_test_ext(genesis.build()).execute_with(|| { + register_parachain(para_a); + register_parachain(para_b); + + run_to_block(5, Some(vec![5])); + Hrmp::init_open_channel(para_a, para_b, 2, 20).unwrap(); + Hrmp::accept_open_channel(para_b, para_a).unwrap(); + + // On Block 6: + // A sends a message to B + run_to_block(6, Some(vec![6])); + assert!(channel_exists(para_a, para_b)); + let msgs = vec![OutboundHrmpMessage { + recipient: para_b, + data: b"this is an emergency".to_vec(), + }]; + let config = Configuration::config(); + assert!(Hrmp::check_outbound_hrmp(&config, para_a, &msgs).is_ok()); + let _ = Hrmp::queue_outbound_hrmp(para_a, msgs); + assert_storage_consistency_exhaustive(); + + // On Block 7: + // B 
receives the message sent by A. B sets the watermark to 6. + run_to_block(7, None); + assert!(Hrmp::check_hrmp_watermark(para_b, 7, 6).is_ok()); + let _ = Hrmp::prune_hrmp(para_b, 6); + assert_storage_consistency_exhaustive(); + }); + } + + #[test] + fn accept_incoming_request_and_offboard() { + let para_a = 32.into(); + let para_b = 64.into(); + + new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| { + register_parachain(para_a); + register_parachain(para_b); + + run_to_block(5, Some(vec![5])); + Hrmp::init_open_channel(para_a, para_b, 2, 8).unwrap(); + Hrmp::accept_open_channel(para_b, para_a).unwrap(); + deregister_parachain(para_a); + + // On Block 6: session change. The channel should not be created. + run_to_block(6, Some(vec![6])); + assert!(!Paras::is_valid_para(para_a)); + assert!(!channel_exists(para_a, para_b)); + assert_storage_consistency_exhaustive(); + }); + } + + #[test] + fn check_sent_messages() { + let para_a = 32.into(); + let para_b = 64.into(); + let para_c = 97.into(); + + new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| { + register_parachain(para_a); + register_parachain(para_b); + register_parachain(para_c); + + run_to_block(5, Some(vec![5])); + + // Open two channels to the same receiver, b: + // a -> b, c -> b + Hrmp::init_open_channel(para_a, para_b, 2, 8).unwrap(); + Hrmp::accept_open_channel(para_b, para_a).unwrap(); + Hrmp::init_open_channel(para_c, para_b, 2, 8).unwrap(); + Hrmp::accept_open_channel(para_b, para_c).unwrap(); + + // On Block 6: session change. + run_to_block(6, Some(vec![6])); + assert!(Paras::is_valid_para(para_a)); + + let msgs = vec![OutboundHrmpMessage { + recipient: para_b, + data: b"knock".to_vec(), + }]; + let config = Configuration::config(); + assert!(Hrmp::check_outbound_hrmp(&config, para_a, &msgs).is_ok()); + let _ = Hrmp::queue_outbound_hrmp(para_a, msgs.clone()); + + // Verify that the sent messages are there and that also the empty channels are present. 
+ let mqc_heads = Hrmp::hrmp_mqc_heads(para_b); + let contents = Hrmp::inbound_hrmp_channels_contents(para_b); + assert_eq!( + contents, + vec![ + ( + para_a, + vec![InboundHrmpMessage { + sent_at: 6, + data: b"knock".to_vec(), + }] + ), + (para_c, vec![]) + ] + .into_iter() + .collect::>(), + ); + assert_eq!( + mqc_heads, + vec![ + ( + para_a, + hex_literal::hex!( + "3bba6404e59c91f51deb2ae78f1273ebe75896850713e13f8c0eba4b0996c483" + ) + .into() + ), + (para_c, Default::default()) + ], + ); + + assert_storage_consistency_exhaustive(); + }); + } +} diff --git a/runtime/parachains/src/lib.rs b/runtime/parachains/src/lib.rs index 2705998cef5d..10f7ed106ef7 100644 --- a/runtime/parachains/src/lib.rs +++ b/runtime/parachains/src/lib.rs @@ -33,6 +33,7 @@ pub mod validity; pub mod origin; pub mod dmp; pub mod ump; +pub mod hrmp; pub mod runtime_api_impl; diff --git a/runtime/parachains/src/mock.rs b/runtime/parachains/src/mock.rs index 60249520ad76..403d5f8068f8 100644 --- a/runtime/parachains/src/mock.rs +++ b/runtime/parachains/src/mock.rs @@ -119,6 +119,10 @@ impl crate::ump::Trait for Test { type UmpSink = crate::ump::mock_sink::MockUmpSink; } +impl crate::hrmp::Trait for Test { + type Origin = Origin; +} + impl crate::scheduler::Trait for Test { } impl crate::inclusion::Trait for Test { @@ -145,6 +149,9 @@ pub type Dmp = crate::dmp::Module; /// Mocked UMP pub type Ump = crate::ump::Module; +/// Mocked HRMP +pub type Hrmp = crate::hrmp::Module; + /// Mocked scheduler. 
pub type Scheduler = crate::scheduler::Module; From c88bdc940d62761929f42a5a4472dbf53ec5c8cd Mon Sep 17 00:00:00 2001 From: Sergey Shulepov Date: Fri, 6 Nov 2020 19:26:30 +0100 Subject: [PATCH 06/16] Switch over to new modules --- runtime/common/src/paras_registrar.rs | 23 +++++++------ runtime/common/src/paras_sudo_wrapper.rs | 10 +++--- runtime/parachains/src/inclusion.rs | 33 +++++++++++-------- runtime/parachains/src/inclusion_inherent.rs | 4 +-- runtime/parachains/src/initializer.rs | 22 +++++++++---- runtime/parachains/src/lib.rs | 22 +++++++++++++ runtime/parachains/src/paras.rs | 4 +-- runtime/parachains/src/runtime_api_impl/v1.rs | 10 +++--- runtime/parachains/src/util.rs | 12 +++---- 9 files changed, 89 insertions(+), 51 deletions(-) diff --git a/runtime/common/src/paras_registrar.rs b/runtime/common/src/paras_registrar.rs index dab0bb02e250..6ecd99aee9fe 100644 --- a/runtime/common/src/paras_registrar.rs +++ b/runtime/common/src/paras_registrar.rs @@ -33,7 +33,7 @@ use runtime_parachains::{ self, ParaGenesisArgs, }, - router, + dmp, ump, hrmp, ensure_parachain, Origin, }; @@ -41,7 +41,7 @@ use runtime_parachains::{ type BalanceOf = <::Currency as Currency<::AccountId>>::Balance; -pub trait Trait: paras::Trait + router::Trait { +pub trait Trait: paras::Trait + dmp::Trait + ump::Trait + hrmp::Trait { /// The aggregated origin type must support the `parachains` origin. We require that we can /// infallibly convert between this origin and the system origin, but in reality, they're the /// same type, we just can't express that to the Rust type system without writing a `where` @@ -125,7 +125,7 @@ decl_module! { parachain: false, }; - >::schedule_para_initialize(id, genesis); + runtime_parachains::schedule_para_initialize::(id, genesis); Ok(()) } @@ -150,8 +150,7 @@ decl_module! 
{ let debtor = >::take(id); let _ = ::Currency::unreserve(&debtor, T::ParathreadDeposit::get()); - >::schedule_para_cleanup(id); - >::schedule_para_cleanup(id); + runtime_parachains::schedule_para_cleanup::(id); Ok(()) } @@ -231,7 +230,7 @@ impl Module { parachain: true, }; - >::schedule_para_initialize(id, genesis); + runtime_parachains::schedule_para_initialize::(id, genesis); Ok(()) } @@ -242,8 +241,7 @@ impl Module { ensure!(is_parachain, Error::::InvalidChainId); - >::schedule_para_cleanup(id); - >::schedule_para_cleanup(id); + runtime_parachains::schedule_para_cleanup::(id); Ok(()) } @@ -267,7 +265,7 @@ mod tests { impl_outer_origin, impl_outer_dispatch, assert_ok, parameter_types, }; use keyring::Sr25519Keyring; - use runtime_parachains::{initializer, configuration, inclusion, router, scheduler}; + use runtime_parachains::{initializer, configuration, inclusion, scheduler, dmp, ump, hrmp}; use pallet_session::OneSessionHandler; impl_outer_origin! { @@ -425,8 +423,13 @@ mod tests { type WeightInfo = (); } - impl router::Trait for Test { + impl dmp::Trait for Test {} + + impl ump::Trait for Test { type UmpSink = (); + } + + impl hrmp::Trait for Test { type Origin = Origin; } diff --git a/runtime/common/src/paras_sudo_wrapper.rs b/runtime/common/src/paras_sudo_wrapper.rs index 80f64bf1718b..19245ac873d1 100644 --- a/runtime/common/src/paras_sudo_wrapper.rs +++ b/runtime/common/src/paras_sudo_wrapper.rs @@ -23,13 +23,12 @@ use frame_support::{ }; use frame_system::ensure_root; use runtime_parachains::{ - router, - paras::{self, ParaGenesisArgs}, + dmp, ump, hrmp, paras::{self, ParaGenesisArgs}, }; use primitives::v1::Id as ParaId; /// The module's configuration trait. -pub trait Trait: paras::Trait + router::Trait { } +pub trait Trait: paras::Trait + dmp::Trait + ump::Trait + hrmp::Trait { } decl_error! { pub enum Error for Module { } @@ -48,7 +47,7 @@ decl_module! 
{ genesis: ParaGenesisArgs, ) -> DispatchResult { ensure_root(origin)?; - paras::Module::::schedule_para_initialize(id, genesis); + runtime_parachains::schedule_para_initialize::(id, genesis); Ok(()) } @@ -56,8 +55,7 @@ decl_module! { #[weight = (1_000, DispatchClass::Operational)] pub fn sudo_schedule_para_cleanup(origin, id: ParaId) -> DispatchResult { ensure_root(origin)?; - paras::Module::::schedule_para_cleanup(id); - router::Module::::schedule_para_cleanup(id); + runtime_parachains::schedule_para_cleanup::(id); Ok(()) } } diff --git a/runtime/parachains/src/inclusion.rs b/runtime/parachains/src/inclusion.rs index 572a426e3a8c..1509b884080d 100644 --- a/runtime/parachains/src/inclusion.rs +++ b/runtime/parachains/src/inclusion.rs @@ -36,7 +36,7 @@ use bitvec::{order::Lsb0 as BitOrderLsb0, vec::BitVec}; use sp_staking::SessionIndex; use sp_runtime::{DispatchError, traits::{One, Saturating}}; -use crate::{configuration, paras, router, scheduler::CoreAssignment}; +use crate::{configuration, paras, dmp, ump, hrmp, scheduler::CoreAssignment}; /// A bitfield signed by a validator indicating that it is keeping its piece of the erasure-coding /// for any backed candidates referred to by a `1` bit available. @@ -86,7 +86,12 @@ impl CandidatePendingAvailability { } pub trait Trait: - frame_system::Trait + paras::Trait + router::Trait + configuration::Trait + frame_system::Trait + + paras::Trait + + dmp::Trait + + ump::Trait + + hrmp::Trait + + configuration::Trait { type Event: From> + Into<::Event>; } @@ -600,19 +605,19 @@ impl Module { } // enact the messaging facet of the candidate. 
- weight += >::prune_dmq( + weight += >::prune_dmq( receipt.descriptor.para_id, commitments.processed_downward_messages, ); - weight += >::enact_upward_messages( + weight += >::enact_upward_messages( receipt.descriptor.para_id, commitments.upward_messages, ); - weight += >::prune_hrmp( + weight += >::prune_hrmp( receipt.descriptor.para_id, T::BlockNumber::from(commitments.hrmp_watermark), ); - weight += >::queue_outbound_hrmp( + weight += >::queue_outbound_hrmp( receipt.descriptor.para_id, commitments.horizontal_messages, ); @@ -719,10 +724,10 @@ enum AcceptanceCheckErr { HeadDataTooLarge, PrematureCodeUpgrade, NewCodeTooLarge, - ProcessedDownwardMessages(router::ProcessedDownwardMessagesAcceptanceErr), - UpwardMessages(router::UpwardMessagesAcceptanceCheckErr), - HrmpWatermark(router::HrmpWatermarkAcceptanceErr), - OutboundHrmp(router::OutboundHrmpAcceptanceErr), + ProcessedDownwardMessages(dmp::ProcessedDownwardMessagesAcceptanceErr), + UpwardMessages(ump::AcceptanceCheckErr), + HrmpWatermark(hrmp::HrmpWatermarkAcceptanceErr), + OutboundHrmp(hrmp::OutboundHrmpAcceptanceErr), } impl AcceptanceCheckErr { @@ -795,17 +800,17 @@ impl CandidateCheckContext { } // check if the candidate passes the messaging acceptance criteria - >::check_processed_downward_messages( + >::check_processed_downward_messages( para_id, processed_downward_messages, )?; - >::check_upward_messages(&self.config, para_id, upward_messages)?; - >::check_hrmp_watermark( + >::check_upward_messages(&self.config, para_id, upward_messages)?; + >::check_hrmp_watermark( para_id, self.relay_parent_number, hrmp_watermark, )?; - >::check_outbound_hrmp(&self.config, para_id, horizontal_messages)?; + >::check_outbound_hrmp(&self.config, para_id, horizontal_messages)?; Ok(()) } diff --git a/runtime/parachains/src/inclusion_inherent.rs b/runtime/parachains/src/inclusion_inherent.rs index 14f63c9dbbaf..b6cbf94133d9 100644 --- a/runtime/parachains/src/inclusion_inherent.rs +++ 
b/runtime/parachains/src/inclusion_inherent.rs @@ -35,7 +35,7 @@ use frame_system::ensure_none; use crate::{ inclusion, scheduler::{self, FreedReason}, - router, + ump, }; use inherents::{InherentIdentifier, InherentData, MakeFatalError, ProvideInherent}; @@ -117,7 +117,7 @@ decl_module! { >::occupied(&occupied); // Give some time slice to dispatch pending upward messages. - >::process_pending_upward_messages(); + >::process_pending_upward_messages(); // And track that we've finished processing the inherent for this block. Included::set(Some(())); diff --git a/runtime/parachains/src/initializer.rs b/runtime/parachains/src/initializer.rs index 8e2e88ff59eb..d32b8dd0eb8c 100644 --- a/runtime/parachains/src/initializer.rs +++ b/runtime/parachains/src/initializer.rs @@ -29,7 +29,7 @@ use sp_runtime::traits::One; use codec::{Encode, Decode}; use crate::{ configuration::{self, HostConfiguration}, - paras, router, scheduler, inclusion, + paras, scheduler, inclusion, dmp, ump, hrmp, }; /// Information about a session change that has just occurred. @@ -63,7 +63,9 @@ pub trait Trait: + paras::Trait + scheduler::Trait + inclusion::Trait - + router::Trait + + dmp::Trait + + ump::Trait + + hrmp::Trait { /// A randomness beacon. type Randomness: Randomness; @@ -122,12 +124,16 @@ decl_module! { // - Scheduler // - Inclusion // - Validity - // - Router + // - DMP + // - UMP + // - HRMP let total_weight = configuration::Module::::initializer_initialize(now) + paras::Module::::initializer_initialize(now) + scheduler::Module::::initializer_initialize(now) + inclusion::Module::::initializer_initialize(now) + - router::Module::::initializer_initialize(now); + dmp::Module::::initializer_initialize(now) + + ump::Module::::initializer_initialize(now) + + hrmp::Module::::initializer_initialize(now); HasInitialized::set(Some(())); @@ -137,7 +143,9 @@ decl_module! { fn on_finalize() { // reverse initialization order. 
- router::Module::::initializer_finalize(); + hrmp::Module::::initializer_finalize(); + ump::Module::::initializer_finalize(); + dmp::Module::::initializer_finalize(); inclusion::Module::::initializer_finalize(); scheduler::Module::::initializer_finalize(); paras::Module::::initializer_finalize(); @@ -181,7 +189,9 @@ impl Module { paras::Module::::initializer_on_new_session(¬ification); scheduler::Module::::initializer_on_new_session(¬ification); inclusion::Module::::initializer_on_new_session(¬ification); - router::Module::::initializer_on_new_session(¬ification); + dmp::Module::::initializer_on_new_session(¬ification); + ump::Module::::initializer_on_new_session(¬ification); + hrmp::Module::::initializer_on_new_session(¬ification); } /// Should be called when a new session occurs. Buffers the session notification to be applied diff --git a/runtime/parachains/src/lib.rs b/runtime/parachains/src/lib.rs index 10f7ed106ef7..c8c71a065606 100644 --- a/runtime/parachains/src/lib.rs +++ b/runtime/parachains/src/lib.rs @@ -43,3 +43,25 @@ mod util; mod mock; pub use origin::{Origin, ensure_parachain}; + +/// Schedule a para to be initialized at the start of the next session with the given genesis data. +pub fn schedule_para_initialize( + id: primitives::v1::Id, + genesis: paras::ParaGenesisArgs, +) { + >::schedule_para_initialize(id, genesis); +} + +/// Schedule a para to be cleaned up at the start of the next session. +pub fn schedule_para_cleanup(id: primitives::v1::Id) +where + T: paras::Trait + + dmp::Trait + + ump::Trait + + hrmp::Trait, +{ + >::schedule_para_cleanup(id); + >::schedule_para_cleanup(id); + >::schedule_para_cleanup(id); + >::schedule_para_cleanup(id); +} diff --git a/runtime/parachains/src/paras.rs b/runtime/parachains/src/paras.rs index 84bdf6cf73ad..ab811f0f7d48 100644 --- a/runtime/parachains/src/paras.rs +++ b/runtime/parachains/src/paras.rs @@ -396,7 +396,7 @@ impl Module { } /// Schedule a para to be initialized at the start of the next session. 
- pub fn schedule_para_initialize(id: ParaId, genesis: ParaGenesisArgs) -> Weight { + pub(crate) fn schedule_para_initialize(id: ParaId, genesis: ParaGenesisArgs) -> Weight { let dup = UpcomingParas::mutate(|v| { match v.binary_search(&id) { Ok(_) => true, @@ -418,7 +418,7 @@ impl Module { } /// Schedule a para to be cleaned up at the start of the next session. - pub fn schedule_para_cleanup(id: ParaId) -> Weight { + pub(crate) fn schedule_para_cleanup(id: ParaId) -> Weight { let upcoming_weight = UpcomingParas::mutate(|v| { match v.binary_search(&id) { Ok(i) => { diff --git a/runtime/parachains/src/runtime_api_impl/v1.rs b/runtime/parachains/src/runtime_api_impl/v1.rs index 48e21bf2bfa0..2f49f4af8c7e 100644 --- a/runtime/parachains/src/runtime_api_impl/v1.rs +++ b/runtime/parachains/src/runtime_api_impl/v1.rs @@ -28,7 +28,7 @@ use primitives::v1::{ }; use sp_runtime::traits::Zero; use frame_support::debug; -use crate::{initializer, inclusion, scheduler, configuration, paras, router}; +use crate::{initializer, inclusion, scheduler, configuration, paras, dmp, hrmp}; /// Implementation for the `validators` function of the runtime API. pub fn validators() -> Vec { @@ -310,15 +310,15 @@ where } /// Implementation for the `dmq_contents` function of the runtime API. -pub fn dmq_contents( +pub fn dmq_contents( recipient: ParaId, ) -> Vec> { - >::dmq_contents(recipient) + >::dmq_contents(recipient) } /// Implementation for the `inbound_hrmp_channels_contents` function of the runtime API. 
-pub fn inbound_hrmp_channels_contents( +pub fn inbound_hrmp_channels_contents( recipient: ParaId, ) -> BTreeMap>> { - >::inbound_hrmp_channels_contents(recipient) + >::inbound_hrmp_channels_contents(recipient) } diff --git a/runtime/parachains/src/util.rs b/runtime/parachains/src/util.rs index 34946de3e3d2..c827a86d6580 100644 --- a/runtime/parachains/src/util.rs +++ b/runtime/parachains/src/util.rs @@ -20,12 +20,12 @@ use sp_runtime::traits::{One, Saturating}; use primitives::v1::{Id as ParaId, PersistedValidationData, TransientValidationData}; -use crate::{configuration, paras, router}; +use crate::{configuration, paras, dmp, hrmp}; /// Make the persisted validation data for a particular parachain. /// /// This ties together the storage of several modules. -pub fn make_persisted_validation_data( +pub fn make_persisted_validation_data( para_id: ParaId, ) -> Option> { let relay_parent_number = >::block_number() - One::one(); @@ -33,15 +33,15 @@ pub fn make_persisted_validation_data( Some(PersistedValidationData { parent_head: >::para_head(¶_id)?, block_number: relay_parent_number, - hrmp_mqc_heads: >::hrmp_mqc_heads(para_id), - dmq_mqc_head: >::dmq_mqc_head(para_id), + hrmp_mqc_heads: >::hrmp_mqc_heads(para_id), + dmq_mqc_head: >::dmq_mqc_head(para_id), }) } /// Make the transient validation data for a particular parachain. /// /// This ties together the storage of several modules. 
-pub fn make_transient_validation_data( +pub fn make_transient_validation_data( para_id: ParaId, ) -> Option> { let config = >::config(); @@ -67,6 +67,6 @@ pub fn make_transient_validation_data( max_head_data_size: config.max_head_data_size, balance: 0, code_upgrade_allowed, - dmq_length: >::dmq_length(para_id), + dmq_length: >::dmq_length(para_id), }) } From a870f718c367f7dbe041376b420e61f24216081c Mon Sep 17 00:00:00 2001 From: Sergey Shulepov Date: Fri, 6 Nov 2020 19:27:03 +0100 Subject: [PATCH 07/16] Router: goodbye sweet prince --- runtime/parachains/src/lib.rs | 1 - runtime/parachains/src/mock.rs | 8 - runtime/parachains/src/router.rs | 331 ------------------------------- 3 files changed, 340 deletions(-) delete mode 100644 runtime/parachains/src/router.rs diff --git a/runtime/parachains/src/lib.rs b/runtime/parachains/src/lib.rs index c8c71a065606..3691b41c365c 100644 --- a/runtime/parachains/src/lib.rs +++ b/runtime/parachains/src/lib.rs @@ -27,7 +27,6 @@ pub mod inclusion; pub mod inclusion_inherent; pub mod initializer; pub mod paras; -pub mod router; pub mod scheduler; pub mod validity; pub mod origin; diff --git a/runtime/parachains/src/mock.rs b/runtime/parachains/src/mock.rs index 403d5f8068f8..edb84e2a1245 100644 --- a/runtime/parachains/src/mock.rs +++ b/runtime/parachains/src/mock.rs @@ -108,11 +108,6 @@ impl crate::paras::Trait for Test { type Origin = Origin; } -impl crate::router::Trait for Test { - type Origin = Origin; - type UmpSink = crate::router::MockUmpSink; -} - impl crate::dmp::Trait for Test { } impl crate::ump::Trait for Test { @@ -140,9 +135,6 @@ pub type Configuration = crate::configuration::Module; /// Mocked paras. pub type Paras = crate::paras::Module; -/// Mocked router. 
-pub type Router = crate::router::Module; - /// Mocked DMP pub type Dmp = crate::dmp::Module; diff --git a/runtime/parachains/src/router.rs b/runtime/parachains/src/router.rs deleted file mode 100644 index eefc6900b83f..000000000000 --- a/runtime/parachains/src/router.rs +++ /dev/null @@ -1,331 +0,0 @@ -// Copyright 2020 Parity Technologies (UK) Ltd. -// This file is part of Polkadot. - -// Polkadot is free software: you can redistribute it and/or modify -// it under the terms of the GNU General Public License as published by -// the Free Software Foundation, either version 3 of the License, or -// (at your option) any later version. - -// Polkadot is distributed in the hope that it will be useful, -// but WITHOUT ANY WARRANTY; without even the implied warranty of -// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -// GNU General Public License for more details. - -// You should have received a copy of the GNU General Public License -// along with Polkadot. If not, see . - -//! The router module is responsible for handling messaging. -//! -//! The core of the messaging is checking and processing messages sent out by the candidates, -//! routing the messages at their destinations and informing the parachains about the incoming -//! messages. 
- -use crate::{configuration, paras, initializer, ensure_parachain}; -use sp_std::prelude::*; -use frame_support::{decl_error, decl_module, decl_storage, dispatch::DispatchResult, weights::Weight}; -use sp_std::collections::vec_deque::VecDeque; -use primitives::v1::{ - Id as ParaId, InboundDownwardMessage, Hash, UpwardMessage, HrmpChannelId, InboundHrmpMessage, -}; - -mod dmp; -mod hrmp; -mod ump; - -use hrmp::{HrmpOpenChannelRequest, HrmpChannel}; -pub use dmp::{QueueDownwardMessageError, ProcessedDownwardMessagesAcceptanceErr}; -pub use ump::{UmpSink, AcceptanceCheckErr as UpwardMessagesAcceptanceCheckErr}; -pub use hrmp::{HrmpWatermarkAcceptanceErr, OutboundHrmpAcceptanceErr}; - -#[cfg(test)] -pub use ump::mock_sink::MockUmpSink; - -pub trait Trait: frame_system::Trait + configuration::Trait + paras::Trait { - type Origin: From - + From<::Origin> - + Into::Origin>>; - - /// A place where all received upward messages are funneled. - type UmpSink: UmpSink; -} - -decl_storage! { - trait Store for Module as Router { - /// Paras that are to be cleaned up at the end of the session. - /// The entries are sorted ascending by the para id. - OutgoingParas: Vec; - - /* - * Downward Message Passing (DMP) - * - * Storage layout required for implementation of DMP. - */ - - /// The downward messages addressed for a certain para. - DownwardMessageQueues: map hasher(twox_64_concat) ParaId => Vec>; - /// A mapping that stores the downward message queue MQC head for each para. - /// - /// Each link in this chain has a form: - /// `(prev_head, B, H(M))`, where - /// - `prev_head`: is the previous head hash or zero if none. - /// - `B`: is the relay-chain block number in which a message was appended. - /// - `H(M)`: is the hash of the message being appended. - DownwardMessageQueueHeads: map hasher(twox_64_concat) ParaId => Hash; - - /* - * Upward Message Passing (UMP) - * - * Storage layout required for UMP, specifically dispatchable upward messages. 
- */ - - /// The messages waiting to be handled by the relay-chain originating from a certain parachain. - /// - /// Note that some upward messages might have been already processed by the inclusion logic. E.g. - /// channel management messages. - /// - /// The messages are processed in FIFO order. - RelayDispatchQueues: map hasher(twox_64_concat) ParaId => VecDeque; - /// Size of the dispatch queues. Caches sizes of the queues in `RelayDispatchQueue`. - /// - /// First item in the tuple is the count of messages and second - /// is the total length (in bytes) of the message payloads. - /// - /// Note that this is an auxilary mapping: it's possible to tell the byte size and the number of - /// messages only looking at `RelayDispatchQueues`. This mapping is separate to avoid the cost of - /// loading the whole message queue if only the total size and count are required. - /// - /// Invariant: - /// - The set of keys should exactly match the set of keys of `RelayDispatchQueues`. - RelayDispatchQueueSize: map hasher(twox_64_concat) ParaId => (u32, u32); - /// The ordered list of `ParaId`s that have a `RelayDispatchQueue` entry. - /// - /// Invariant: - /// - The set of items from this vector should be exactly the set of the keys in - /// `RelayDispatchQueues` and `RelayDispatchQueueSize`. - NeedsDispatch: Vec; - /// This is the para that gets will get dispatched first during the next upward dispatchable queue - /// execution round. - /// - /// Invariant: - /// - If `Some(para)`, then `para` must be present in `NeedsDispatch`. - NextDispatchRoundStartWith: Option; - - /* - * Horizontally Relay-routed Message Passing (HRMP) - * - * HRMP related storage layout - */ - - /// The set of pending HRMP open channel requests. - /// - /// The set is accompanied by a list for iteration. - /// - /// Invariant: - /// - There are no channels that exists in list but not in the set and vice versa. 
- HrmpOpenChannelRequests: map hasher(twox_64_concat) HrmpChannelId => Option; - HrmpOpenChannelRequestsList: Vec; - - /// This mapping tracks how many open channel requests are inititated by a given sender para. - /// Invariant: `HrmpOpenChannelRequests` should contain the same number of items that has `(X, _)` - /// as the number of `HrmpOpenChannelRequestCount` for `X`. - HrmpOpenChannelRequestCount: map hasher(twox_64_concat) ParaId => u32; - /// This mapping tracks how many open channel requests were accepted by a given recipient para. - /// Invariant: `HrmpOpenChannelRequests` should contain the same number of items `(_, X)` with - /// `confirmed` set to true, as the number of `HrmpAcceptedChannelRequestCount` for `X`. - HrmpAcceptedChannelRequestCount: map hasher(twox_64_concat) ParaId => u32; - - /// A set of pending HRMP close channel requests that are going to be closed during the session change. - /// Used for checking if a given channel is registered for closure. - /// - /// The set is accompanied by a list for iteration. - /// - /// Invariant: - /// - There are no channels that exists in list but not in the set and vice versa. - HrmpCloseChannelRequests: map hasher(twox_64_concat) HrmpChannelId => Option<()>; - HrmpCloseChannelRequestsList: Vec; - - /// The HRMP watermark associated with each para. - /// Invariant: - /// - each para `P` used here as a key should satisfy `Paras::is_valid_para(P)` within a session. - HrmpWatermarks: map hasher(twox_64_concat) ParaId => Option; - /// HRMP channel data associated with each para. - /// Invariant: - /// - each participant in the channel should satisfy `Paras::is_valid_para(P)` within a session. - HrmpChannels: map hasher(twox_64_concat) HrmpChannelId => Option; - /// Ingress/egress indexes allow to find all the senders and receivers given the opposite - /// side. I.e. - /// - /// (a) ingress index allows to find all the senders for a given recipient. 
- /// (b) egress index allows to find all the recipients for a given sender. - /// - /// Invariants: - /// - for each ingress index entry for `P` each item `I` in the index should present in `HrmpChannels` - /// as `(I, P)`. - /// - for each egress index entry for `P` each item `E` in the index should present in `HrmpChannels` - /// as `(P, E)`. - /// - there should be no other dangling channels in `HrmpChannels`. - /// - the vectors are sorted. - HrmpIngressChannelsIndex: map hasher(twox_64_concat) ParaId => Vec; - HrmpEgressChannelsIndex: map hasher(twox_64_concat) ParaId => Vec; - /// Storage for the messages for each channel. - /// Invariant: cannot be non-empty if the corresponding channel in `HrmpChannels` is `None`. - HrmpChannelContents: map hasher(twox_64_concat) HrmpChannelId => Vec>; - /// Maintains a mapping that can be used to answer the question: - /// What paras sent a message at the given block number for a given reciever. - /// Invariants: - /// - The inner `Vec` is never empty. - /// - The inner `Vec` cannot store two same `ParaId`. - /// - The outer vector is sorted ascending by block number and cannot store two items with the same - /// block number. - HrmpChannelDigests: map hasher(twox_64_concat) ParaId => Vec<(T::BlockNumber, Vec)>; - } -} - -decl_error! { - pub enum Error for Module { - /// The sender tried to open a channel to themselves. - OpenHrmpChannelToSelf, - /// The recipient is not a valid para. - OpenHrmpChannelInvalidRecipient, - /// The requested capacity is zero. - OpenHrmpChannelZeroCapacity, - /// The requested capacity exceeds the global limit. - OpenHrmpChannelCapacityExceedsLimit, - /// The requested maximum message size is 0. - OpenHrmpChannelZeroMessageSize, - /// The open request requested the message size that exceeds the global limit. - OpenHrmpChannelMessageSizeExceedsLimit, - /// The channel already exists - OpenHrmpChannelAlreadyExists, - /// There is already a request to open the same channel. 
- OpenHrmpChannelAlreadyRequested, - /// The sender already has the maximum number of allowed outbound channels. - OpenHrmpChannelLimitExceeded, - /// The channel from the sender to the origin doesn't exist. - AcceptHrmpChannelDoesntExist, - /// The channel is already confirmed. - AcceptHrmpChannelAlreadyConfirmed, - /// The recipient already has the maximum number of allowed inbound channels. - AcceptHrmpChannelLimitExceeded, - /// The origin tries to close a channel where it is neither the sender nor the recipient. - CloseHrmpChannelUnauthorized, - /// The channel to be closed doesn't exist. - CloseHrmpChannelDoesntExist, - /// The channel close request is already requested. - CloseHrmpChannelAlreadyUnderway, - } -} - -decl_module! { - /// The router module. - pub struct Module for enum Call where origin: ::Origin { - type Error = Error; - - #[weight = 0] - fn hrmp_init_open_channel( - origin, - recipient: ParaId, - proposed_max_capacity: u32, - proposed_max_message_size: u32, - ) -> DispatchResult { - let origin = ensure_parachain(::Origin::from(origin))?; - Self::init_open_channel( - origin, - recipient, - proposed_max_capacity, - proposed_max_message_size - )?; - Ok(()) - } - - #[weight = 0] - fn hrmp_accept_open_channel(origin, sender: ParaId) -> DispatchResult { - let origin = ensure_parachain(::Origin::from(origin))?; - Self::accept_open_channel(origin, sender)?; - Ok(()) - } - - #[weight = 0] - fn hrmp_close_channel(origin, channel_id: HrmpChannelId) -> DispatchResult { - let origin = ensure_parachain(::Origin::from(origin))?; - Self::close_channel(origin, channel_id)?; - Ok(()) - } - } -} - -impl Module { - /// Block initialization logic, called by initializer. - pub(crate) fn initializer_initialize(_now: T::BlockNumber) -> Weight { - 0 - } - - /// Block finalization logic, called by initializer. - pub(crate) fn initializer_finalize() {} - - /// Called by the initializer to note that a new session has started. 
- pub(crate) fn initializer_on_new_session( - notification: &initializer::SessionChangeNotification, - ) { - Self::perform_outgoing_para_cleanup(); - Self::process_hrmp_open_channel_requests(¬ification.prev_config); - Self::process_hrmp_close_channel_requests(); - } - - /// Iterate over all paras that were registered for offboarding and remove all the data - /// associated with them. - fn perform_outgoing_para_cleanup() { - let outgoing = OutgoingParas::take(); - for outgoing_para in outgoing { - Self::clean_dmp_after_outgoing(outgoing_para); - Self::clean_ump_after_outgoing(outgoing_para); - Self::clean_hrmp_after_outgoing(outgoing_para); - } - } - - /// Schedule a para to be cleaned up at the start of the next session. - pub fn schedule_para_cleanup(id: ParaId) { - OutgoingParas::mutate(|v| { - if let Err(i) = v.binary_search(&id) { - v.insert(i, id); - } - }); - } -} - -#[cfg(test)] -mod tests { - use super::*; - use primitives::v1::BlockNumber; - use frame_support::traits::{OnFinalize, OnInitialize}; - - use crate::mock::{System, Router, GenesisConfig as MockGenesisConfig}; - - pub(crate) fn run_to_block(to: BlockNumber, new_session: Option>) { - while System::block_number() < to { - let b = System::block_number(); - Router::initializer_finalize(); - System::on_finalize(b); - - System::on_initialize(b + 1); - System::set_block_number(b + 1); - - if new_session.as_ref().map_or(false, |v| v.contains(&(b + 1))) { - Router::initializer_on_new_session(&Default::default()); - } - Router::initializer_initialize(b + 1); - } - } - - pub(crate) fn default_genesis_config() -> MockGenesisConfig { - MockGenesisConfig { - configuration: crate::configuration::GenesisConfig { - config: crate::configuration::HostConfiguration { - max_downward_message_size: 1024, - ..Default::default() - }, - }, - ..Default::default() - } - } -} From 8e37d000a4956630ef1c8ac7d28b6c35a6e0ce78 Mon Sep 17 00:00:00 2001 From: Sergey Shulepov Date: Tue, 10 Nov 2020 13:21:22 +0100 Subject: [PATCH 
08/16] Link to messaging overview for details. --- roadmap/implementers-guide/src/runtime/dmp.md | 2 ++ roadmap/implementers-guide/src/runtime/hrmp.md | 2 ++ roadmap/implementers-guide/src/runtime/ump.md | 2 ++ 3 files changed, 6 insertions(+) diff --git a/roadmap/implementers-guide/src/runtime/dmp.md b/roadmap/implementers-guide/src/runtime/dmp.md index 74b2cb03e2ed..071485a0aabd 100644 --- a/roadmap/implementers-guide/src/runtime/dmp.md +++ b/roadmap/implementers-guide/src/runtime/dmp.md @@ -1,5 +1,7 @@ # DMP Module +A module responsible for DMP. See [Messaging Overview](../messaging.md) for more details. + ## Storage General storage entries diff --git a/roadmap/implementers-guide/src/runtime/hrmp.md b/roadmap/implementers-guide/src/runtime/hrmp.md index 2200956f055d..80b87c920282 100644 --- a/roadmap/implementers-guide/src/runtime/hrmp.md +++ b/roadmap/implementers-guide/src/runtime/hrmp.md @@ -1,5 +1,7 @@ # HRMP Module +A module responsible for HRMP. See [Messaging Overview](../messaging.md) for more details. + ## Storage General storage entries diff --git a/roadmap/implementers-guide/src/runtime/ump.md b/roadmap/implementers-guide/src/runtime/ump.md index c6017fb7853b..1e5d742657b4 100644 --- a/roadmap/implementers-guide/src/runtime/ump.md +++ b/roadmap/implementers-guide/src/runtime/ump.md @@ -1,5 +1,7 @@ # UMP Module +A module responsible for UMP. See [Messaging Overview](../messaging.md) for more details. + ## Storage General storage entries From 536a914877b629228933e34c52190ebcfdaae4ea Mon Sep 17 00:00:00 2001 From: Sergey Shulepov Date: Tue, 10 Nov 2020 14:27:14 +0100 Subject: [PATCH 09/16] Update missed rococo and test runtimes. 
--- runtime/rococo/src/lib.rs | 17 +++++++++++++---- runtime/test-runtime/src/lib.rs | 13 ++++++++++--- 2 files changed, 23 insertions(+), 7 deletions(-) diff --git a/runtime/rococo/src/lib.rs b/runtime/rococo/src/lib.rs index 7b5fcfabff5f..aa6fb87e98b4 100644 --- a/runtime/rococo/src/lib.rs +++ b/runtime/rococo/src/lib.rs @@ -73,7 +73,9 @@ use runtime_parachains::inclusion as parachains_inclusion; use runtime_parachains::inclusion_inherent as parachains_inclusion_inherent; use runtime_parachains::initializer as parachains_initializer; use runtime_parachains::paras as parachains_paras; -use runtime_parachains::router as parachains_router; +use runtime_parachains::dmp as parachains_dmp; +use runtime_parachains::ump as parachains_ump; +use runtime_parachains::hrmp as parachains_hrmp; use runtime_parachains::scheduler as parachains_scheduler; pub use pallet_balances::Call as BalancesCall; @@ -184,7 +186,9 @@ construct_runtime! { Scheduler: parachains_scheduler::{Module, Call, Storage}, Paras: parachains_paras::{Module, Call, Storage}, Initializer: parachains_initializer::{Module, Call, Storage}, - Router: parachains_router::{Module, Call, Storage}, + Dmp: parachains_dmp::{Module, Call, Storage}, + Ump: parachains_ump::{Module, Call, Storage}, + Hrmp: parachains_hrmp::{Module, Call, Storage}, Registrar: paras_registrar::{Module, Call, Storage}, ParasSudoWrapper: paras_sudo_wrapper::{Module, Call}, @@ -532,11 +536,16 @@ impl parachains_paras::Trait for Runtime { type Origin = Origin; } -impl parachains_router::Trait for Runtime { - type Origin = Origin; +impl parachains_ump::Trait for Runtime { type UmpSink = (); // TODO: #1873 To be handled by the XCM receiver. 
} +impl parachains_dmp::Trait for Runtime {} + +impl parachains_hrmp::Trait for Runtime { + type Origin = Origin; +} + impl parachains_inclusion_inherent::Trait for Runtime {} impl parachains_scheduler::Trait for Runtime {} diff --git a/runtime/test-runtime/src/lib.rs b/runtime/test-runtime/src/lib.rs index e54f4118fa29..43a5d186b352 100644 --- a/runtime/test-runtime/src/lib.rs +++ b/runtime/test-runtime/src/lib.rs @@ -30,7 +30,9 @@ use polkadot_runtime_parachains::inclusion as parachains_inclusion; use polkadot_runtime_parachains::inclusion_inherent as parachains_inclusion_inherent; use polkadot_runtime_parachains::initializer as parachains_initializer; use polkadot_runtime_parachains::paras as parachains_paras; -use polkadot_runtime_parachains::router as parachains_router; +use polkadot_runtime_parachains::dmp as parachains_dmp; +use polkadot_runtime_parachains::ump as parachains_ump; +use polkadot_runtime_parachains::hrmp as parachains_hrmp; use polkadot_runtime_parachains::scheduler as parachains_scheduler; use polkadot_runtime_parachains::runtime_api_impl::v1 as runtime_impl; @@ -459,11 +461,16 @@ impl parachains_paras::Trait for Runtime { type Origin = Origin; } -impl parachains_router::Trait for Runtime { - type Origin = Origin; +impl parachains_dmp::Trait for Runtime {} + +impl parachains_ump::Trait for Runtime { type UmpSink = (); } +impl parachains_hrmp::Trait for Runtime { + type Origin = Origin; +} + impl parachains_scheduler::Trait for Runtime {} impl paras_sudo_wrapper::Trait for Runtime {} From eb43e4c0448e87e3813bc4ab3502fb250d3f2feb Mon Sep 17 00:00:00 2001 From: Sergey Shulepov Date: Wed, 11 Nov 2020 13:20:06 +0100 Subject: [PATCH 10/16] Commit destroyed by rebase changes --- runtime/parachains/src/dmp.rs | 5 ++--- runtime/parachains/src/hrmp.rs | 12 +++++++----- runtime/parachains/src/ump.rs | 32 ++++++++++++++++---------------- 3 files changed, 25 insertions(+), 24 deletions(-) diff --git a/runtime/parachains/src/dmp.rs 
b/runtime/parachains/src/dmp.rs index 49f34aaa49dc..5b49479c4bbf 100644 --- a/runtime/parachains/src/dmp.rs +++ b/runtime/parachains/src/dmp.rs @@ -19,8 +19,7 @@ use crate::{ initializer, }; use frame_support::{decl_module, decl_storage, StorageMap, weights::Weight, traits::Get}; -use sp_std::prelude::*; -use sp_std::fmt; +use sp_std::{fmt, prelude::*}; use sp_runtime::traits::{BlakeTwo256, Hash as HashT, SaturatedConversion}; use primitives::v1::{Id as ParaId, DownwardMessage, InboundDownwardMessage, Hash}; @@ -31,7 +30,7 @@ pub enum QueueDownwardMessageError { ExceedsMaxMessageSize, } -/// An error returned by `check_processed_downward_messages` that indicates an acceptance check +/// An error returned by [`check_processed_downward_messages`] that indicates an acceptance check /// didn't pass. pub enum ProcessedDownwardMessagesAcceptanceErr { /// If there are pending messages then `processed_downward_messages` should be at least 1, diff --git a/runtime/parachains/src/hrmp.rs b/runtime/parachains/src/hrmp.rs index 7a9b5c9bfda8..af8ae8eb1363 100644 --- a/runtime/parachains/src/hrmp.rs +++ b/runtime/parachains/src/hrmp.rs @@ -29,9 +29,11 @@ use primitives::v1::{ SessionIndex, }; use sp_runtime::traits::{BlakeTwo256, Hash as HashT}; -use sp_std::collections::{btree_map::BTreeMap, btree_set::BTreeSet}; -use sp_std::{mem, fmt}; -use sp_std::prelude::*; +use sp_std::{ + mem, fmt, + collections::{btree_map::BTreeMap, btree_set::BTreeSet}, + prelude::*, +}; /// A description of a request to open an HRMP channel. #[derive(Encode, Decode)] @@ -80,7 +82,7 @@ pub struct HrmpChannel { pub mqc_head: Option, } -/// An error returned by `check_hrmp_watermark` that indicates an acceptance criteria check +/// An error returned by [`check_hrmp_watermark`] that indicates an acceptance criteria check /// didn't pass. 
pub enum HrmpWatermarkAcceptanceErr { AdvancementRule { @@ -96,7 +98,7 @@ pub enum HrmpWatermarkAcceptanceErr { }, } -/// An error returned by `check_outbound_hrmp` that indicates an acceptance criteria check +/// An error returned by [`check_outbound_hrmp`] that indicates an acceptance criteria check /// didn't pass. pub enum OutboundHrmpAcceptanceErr { MoreMessagesThanPermitted { diff --git a/runtime/parachains/src/ump.rs b/runtime/parachains/src/ump.rs index 258cf6513a51..03d52ebb2cd6 100644 --- a/runtime/parachains/src/ump.rs +++ b/runtime/parachains/src/ump.rs @@ -14,9 +14,11 @@ // You should have received a copy of the GNU General Public License // along with Polkadot. If not, see . -use crate::{configuration::{self, HostConfiguration}, initializer}; -use sp_std::prelude::*; -use sp_std::fmt; +use crate::{ + configuration::{self, HostConfiguration}, + initializer, +}; +use sp_std::{fmt, prelude::*}; use sp_std::collections::{btree_map::BTreeMap, vec_deque::VecDeque}; use frame_support::{decl_module, decl_storage, StorageMap, StorageValue, weights::Weight, traits::Get}; use primitives::v1::{Id as ParaId, UpwardMessage}; @@ -50,7 +52,7 @@ impl UmpSink for () { } } -/// An error returned by `check_upward_messages` that indicates a violation of one of acceptance +/// An error returned by [`check_upward_messages`] that indicates a violation of one of acceptance /// criteria rules. pub enum AcceptanceCheckErr { MoreMessagesThanPermitted { @@ -272,13 +274,10 @@ impl Module { v.extend(upward_messages.into_iter()) }); - ::RelayDispatchQueueSize::mutate( - ¶, - |(ref mut cnt, ref mut size)| { - *cnt += extra_cnt; - *size += extra_size; - }, - ); + ::RelayDispatchQueueSize::mutate(¶, |(ref mut cnt, ref mut size)| { + *cnt += extra_cnt; + *size += extra_size; + }); ::NeedsDispatch::mutate(|v| { if let Err(i) = v.binary_search(¶) { @@ -689,8 +688,7 @@ mod tests { // actually count the counts and sizes in queues and compare them to the bookkeeped version. 
for (para, queue) in ::RelayDispatchQueues::iter() { - let (expected_count, expected_size) = - ::RelayDispatchQueueSize::get(para); + let (expected_count, expected_size) = ::RelayDispatchQueueSize::get(para); let (actual_count, actual_size) = queue.into_iter().fold((0, 0), |(acc_count, acc_size), x| { (acc_count + 1, acc_size + x.len() as u32) @@ -720,9 +718,11 @@ mod tests { } // `NeedsDispatch` is always sorted. - assert!(::NeedsDispatch::get() - .windows(2) - .all(|xs| xs[0] <= xs[1])); + assert!( + ::NeedsDispatch::get() + .windows(2) + .all(|xs| xs[0] <= xs[1]) + ); } #[test] From 3800729204833f0115b85d86de82bfdc37af5bb4 Mon Sep 17 00:00:00 2001 From: Sergei Shulepov Date: Wed, 11 Nov 2020 17:43:53 +0100 Subject: [PATCH 11/16] Don't deprecate Router but rather make it a meta-project Co-authored-by: Bernhard Schuster --- roadmap/implementers-guide/src/glossary.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/roadmap/implementers-guide/src/glossary.md b/roadmap/implementers-guide/src/glossary.md index 706ba7c62f2e..2e5ac8fedacb 100644 --- a/roadmap/implementers-guide/src/glossary.md +++ b/roadmap/implementers-guide/src/glossary.md @@ -24,7 +24,7 @@ Here you can find definitions of a bunch of jargon, usually specific to the Polk - Parathread: A parachain which is scheduled on a pay-as-you-go basis. - Proof-of-Validity (PoV): A stateless-client proof that a parachain candidate is valid, with respect to some validation function. - Relay Parent: A block in the relay chain, referred to in a context where work is being done in the context of the state at this block. -- Router: The router module used to be a runtime module responsible for routing messages between paras and the relay chain. At some point it was split up into separate runtime modules: Dmp, Ump, Hrmp, each responsible for the respective part of message routing. 
+- Router: The router module is a meta module that consists of three runtime module responsible for routing messages between paras and the relay chain. The three separate separate runtime modules: Dmp, Ump, Hrmp, each responsible for the respective part of message routing. - Runtime: The relay-chain state machine. - Runtime Module: See Module. - Runtime API: A means for the node-side behavior to access structured information based on the state of a fork of the blockchain. From 2069f6e92e89e0ec2a3443aa8f25ca6990477bf9 Mon Sep 17 00:00:00 2001 From: Sergei Shulepov Date: Wed, 11 Nov 2020 17:55:11 +0100 Subject: [PATCH 12/16] Fix typos suggestion Co-authored-by: Bernhard Schuster --- roadmap/implementers-guide/src/glossary.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/roadmap/implementers-guide/src/glossary.md b/roadmap/implementers-guide/src/glossary.md index 2e5ac8fedacb..b8ec3a6ca78a 100644 --- a/roadmap/implementers-guide/src/glossary.md +++ b/roadmap/implementers-guide/src/glossary.md @@ -24,7 +24,7 @@ Here you can find definitions of a bunch of jargon, usually specific to the Polk - Parathread: A parachain which is scheduled on a pay-as-you-go basis. - Proof-of-Validity (PoV): A stateless-client proof that a parachain candidate is valid, with respect to some validation function. - Relay Parent: A block in the relay chain, referred to in a context where work is being done in the context of the state at this block. -- Router: The router module is a meta module that consists of three runtime module responsible for routing messages between paras and the relay chain. The three separate separate runtime modules: Dmp, Ump, Hrmp, each responsible for the respective part of message routing. +- Router: The router module is a meta module that consists of three runtime modules responsible for routing messages between paras and the relay chain. 
The three separate separate runtime modules are: Dmp, Ump, Hrmp, each responsible for the respective part of message routing. - Runtime: The relay-chain state machine. - Runtime Module: See Module. - Runtime API: A means for the node-side behavior to access structured information based on the state of a fork of the blockchain. From 64aa82c083f7da33bfb174a70a657697d120b818 Mon Sep 17 00:00:00 2001 From: Sergey Shulepov Date: Fri, 13 Nov 2020 17:09:40 +0100 Subject: [PATCH 13/16] Fix repetition in the impl guide --- roadmap/implementers-guide/src/glossary.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/roadmap/implementers-guide/src/glossary.md b/roadmap/implementers-guide/src/glossary.md index b8ec3a6ca78a..2dbe2ab14abe 100644 --- a/roadmap/implementers-guide/src/glossary.md +++ b/roadmap/implementers-guide/src/glossary.md @@ -24,7 +24,7 @@ Here you can find definitions of a bunch of jargon, usually specific to the Polk - Parathread: A parachain which is scheduled on a pay-as-you-go basis. - Proof-of-Validity (PoV): A stateless-client proof that a parachain candidate is valid, with respect to some validation function. - Relay Parent: A block in the relay chain, referred to in a context where work is being done in the context of the state at this block. -- Router: The router module is a meta module that consists of three runtime modules responsible for routing messages between paras and the relay chain. The three separate separate runtime modules are: Dmp, Ump, Hrmp, each responsible for the respective part of message routing. +- Router: The router module is a meta module that consists of three runtime modules responsible for routing messages between paras and the relay chain. The three separate runtime modules are: Dmp, Ump, Hrmp, each responsible for the respective part of message routing. - Runtime: The relay-chain state machine. - Runtime Module: See Module. 
- Runtime API: A means for the node-side behavior to access structured information based on the state of a fork of the blockchain. From d2052699675dcd3f0ce0fbe958fe1afd94f4d977 Mon Sep 17 00:00:00 2001 From: Sergey Shulepov Date: Fri, 13 Nov 2020 17:10:36 +0100 Subject: [PATCH 14/16] Clarify that processed_downward_messages has the u32 type --- roadmap/implementers-guide/src/runtime/dmp.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/roadmap/implementers-guide/src/runtime/dmp.md b/roadmap/implementers-guide/src/runtime/dmp.md index 071485a0aabd..c191c6a7a66b 100644 --- a/roadmap/implementers-guide/src/runtime/dmp.md +++ b/roadmap/implementers-guide/src/runtime/dmp.md @@ -35,13 +35,13 @@ No initialization routine runs for this module. Candidate Acceptance Function: -* `check_processed_downward_messages(P: ParaId, processed_downward_messages)`: +* `check_processed_downward_messages(P: ParaId, processed_downward_messages: u32)`: 1. Checks that `DownwardMessageQueues` for `P` is at least `processed_downward_messages` long. 1. Checks that `processed_downward_messages` is at least 1 if `DownwardMessageQueues` for `P` is not empty. Candidate Enactment: -* `prune_dmq(P: ParaId, processed_downward_messages)`: +* `prune_dmq(P: ParaId, processed_downward_messages: u32)`: 1. Remove the first `processed_downward_messages` from the `DownwardMessageQueues` of `P`. Utility routines. From 02f65ace9feb38120da4c881c415be47f9124996 Mon Sep 17 00:00:00 2001 From: Sergey Shulepov Date: Mon, 16 Nov 2020 12:15:20 +0100 Subject: [PATCH 15/16] Remove the router subdir. 
--- runtime/parachains/src/router/dmp.rs | 302 ------ runtime/parachains/src/router/hrmp.rs | 1345 ------------------------- runtime/parachains/src/router/ump.rs | 784 -------------- 3 files changed, 2431 deletions(-) delete mode 100644 runtime/parachains/src/router/dmp.rs delete mode 100644 runtime/parachains/src/router/hrmp.rs delete mode 100644 runtime/parachains/src/router/ump.rs diff --git a/runtime/parachains/src/router/dmp.rs b/runtime/parachains/src/router/dmp.rs deleted file mode 100644 index cc3163e5435c..000000000000 --- a/runtime/parachains/src/router/dmp.rs +++ /dev/null @@ -1,302 +0,0 @@ -// Copyright 2020 Parity Technologies (UK) Ltd. -// This file is part of Polkadot. - -// Polkadot is free software: you can redistribute it and/or modify -// it under the terms of the GNU General Public License as published by -// the Free Software Foundation, either version 3 of the License, or -// (at your option) any later version. - -// Polkadot is distributed in the hope that it will be useful, -// but WITHOUT ANY WARRANTY; without even the implied warranty of -// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -// GNU General Public License for more details. - -// You should have received a copy of the GNU General Public License -// along with Polkadot. If not, see . - -use super::{Trait, Module, Store}; -use crate::configuration::HostConfiguration; -use frame_support::{StorageMap, weights::Weight, traits::Get}; -use sp_std::{fmt, prelude::*}; -use sp_runtime::traits::{BlakeTwo256, Hash as HashT, SaturatedConversion}; -use primitives::v1::{Id as ParaId, DownwardMessage, InboundDownwardMessage, Hash}; - -/// An error sending a downward message. -#[cfg_attr(test, derive(Debug))] -pub enum QueueDownwardMessageError { - /// The message being sent exceeds the configured max message size. - ExceedsMaxMessageSize, -} - -/// An error returned by [`check_processed_downward_messages`] that indicates an acceptance check -/// didn't pass. 
-pub enum ProcessedDownwardMessagesAcceptanceErr { - /// If there are pending messages then `processed_downward_messages` should be at least 1, - AdvancementRule, - /// `processed_downward_messages` should not be greater than the number of pending messages. - Underflow { - processed_downward_messages: u32, - dmq_length: u32, - }, -} - -impl fmt::Debug for ProcessedDownwardMessagesAcceptanceErr { - fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { - use ProcessedDownwardMessagesAcceptanceErr::*; - match *self { - AdvancementRule => write!( - fmt, - "DMQ is not empty, but processed_downward_messages is 0", - ), - Underflow { - processed_downward_messages, - dmq_length, - } => write!( - fmt, - "processed_downward_messages = {}, but dmq_length is only {}", - processed_downward_messages, dmq_length, - ), - } - } -} - -/// Routines and getters related to downward message passing. -impl Module { - pub(crate) fn clean_dmp_after_outgoing(outgoing_para: ParaId) { - ::DownwardMessageQueues::remove(&outgoing_para); - ::DownwardMessageQueueHeads::remove(&outgoing_para); - } - - /// Enqueue a downward message to a specific recipient para. - /// - /// When encoded, the message should not exceed the `config.max_downward_message_size`. - /// Otherwise, the message won't be sent and `Err` will be returned. - /// - /// It is possible to send a downward message to a non-existent para. That, however, would lead - /// to a dangling storage. If the caller cannot statically prove that the recipient exists - /// then the caller should perform a runtime check. 
- pub fn queue_downward_message( - config: &HostConfiguration, - para: ParaId, - msg: DownwardMessage, - ) -> Result<(), QueueDownwardMessageError> { - let serialized_len = msg.len() as u32; - if serialized_len > config.max_downward_message_size { - return Err(QueueDownwardMessageError::ExceedsMaxMessageSize); - } - - let inbound = InboundDownwardMessage { - msg, - sent_at: >::block_number(), - }; - - // obtain the new link in the MQC and update the head. - ::DownwardMessageQueueHeads::mutate(para, |head| { - let new_head = - BlakeTwo256::hash_of(&(*head, inbound.sent_at, T::Hashing::hash_of(&inbound.msg))); - *head = new_head; - }); - - ::DownwardMessageQueues::mutate(para, |v| { - v.push(inbound); - }); - - Ok(()) - } - - /// Checks if the number of processed downward messages is valid. - pub(crate) fn check_processed_downward_messages( - para: ParaId, - processed_downward_messages: u32, - ) -> Result<(), ProcessedDownwardMessagesAcceptanceErr> { - let dmq_length = Self::dmq_length(para); - - if dmq_length > 0 && processed_downward_messages == 0 { - return Err(ProcessedDownwardMessagesAcceptanceErr::AdvancementRule); - } - if dmq_length < processed_downward_messages { - return Err(ProcessedDownwardMessagesAcceptanceErr::Underflow { - processed_downward_messages, - dmq_length, - }); - } - - Ok(()) - } - - /// Prunes the specified number of messages from the downward message queue of the given para. - pub(crate) fn prune_dmq(para: ParaId, processed_downward_messages: u32) -> Weight { - ::DownwardMessageQueues::mutate(para, |q| { - let processed_downward_messages = processed_downward_messages as usize; - if processed_downward_messages > q.len() { - // reaching this branch is unexpected due to the constraint established by - // `check_processed_downward_messages`. But better be safe than sorry. 
- q.clear(); - } else { - *q = q.split_off(processed_downward_messages); - } - }); - T::DbWeight::get().reads_writes(1, 1) - } - - /// Returns the Head of Message Queue Chain for the given para or `None` if there is none - /// associated with it. - pub(crate) fn dmq_mqc_head(para: ParaId) -> Hash { - ::DownwardMessageQueueHeads::get(¶) - } - - /// Returns the number of pending downward messages addressed to the given para. - /// - /// Returns 0 if the para doesn't have an associated downward message queue. - pub(crate) fn dmq_length(para: ParaId) -> u32 { - ::DownwardMessageQueues::decode_len(¶) - .unwrap_or(0) - .saturated_into::() - } - - /// Returns the downward message queue contents for the given para. - /// - /// The most recent messages are the latest in the vector. - pub(crate) fn dmq_contents(recipient: ParaId) -> Vec> { - ::DownwardMessageQueues::get(&recipient) - } -} - -#[cfg(test)] -mod tests { - use super::*; - use crate::mock::{Configuration, Router, new_test_ext}; - use crate::router::{ - OutgoingParas, - tests::{default_genesis_config, run_to_block}, - }; - use frame_support::StorageValue; - use codec::Encode; - - fn queue_downward_message( - para_id: ParaId, - msg: DownwardMessage, - ) -> Result<(), QueueDownwardMessageError> { - Router::queue_downward_message(&Configuration::config(), para_id, msg) - } - - #[test] - fn scheduled_cleanup_performed() { - let a = ParaId::from(1312); - let b = ParaId::from(228); - let c = ParaId::from(123); - - new_test_ext(default_genesis_config()).execute_with(|| { - run_to_block(1, None); - - // enqueue downward messages to A, B and C. - queue_downward_message(a, vec![1, 2, 3]).unwrap(); - queue_downward_message(b, vec![4, 5, 6]).unwrap(); - queue_downward_message(c, vec![7, 8, 9]).unwrap(); - - Router::schedule_para_cleanup(a); - - // run to block without session change. 
- run_to_block(2, None); - - assert!(!::DownwardMessageQueues::get(&a).is_empty()); - assert!(!::DownwardMessageQueues::get(&b).is_empty()); - assert!(!::DownwardMessageQueues::get(&c).is_empty()); - - Router::schedule_para_cleanup(b); - - // run to block changing the session. - run_to_block(3, Some(vec![3])); - - assert!(::DownwardMessageQueues::get(&a).is_empty()); - assert!(::DownwardMessageQueues::get(&b).is_empty()); - assert!(!::DownwardMessageQueues::get(&c).is_empty()); - - // verify that the outgoing paras are emptied. - assert!(OutgoingParas::get().is_empty()) - }); - } - - #[test] - fn dmq_length_and_head_updated_properly() { - let a = ParaId::from(1312); - let b = ParaId::from(228); - - new_test_ext(default_genesis_config()).execute_with(|| { - assert_eq!(Router::dmq_length(a), 0); - assert_eq!(Router::dmq_length(b), 0); - - queue_downward_message(a, vec![1, 2, 3]).unwrap(); - - assert_eq!(Router::dmq_length(a), 1); - assert_eq!(Router::dmq_length(b), 0); - assert!(!Router::dmq_mqc_head(a).is_zero()); - assert!(Router::dmq_mqc_head(b).is_zero()); - }); - } - - #[test] - fn check_processed_downward_messages() { - let a = ParaId::from(1312); - - new_test_ext(default_genesis_config()).execute_with(|| { - // processed_downward_messages=0 is allowed when the DMQ is empty. - assert!(Router::check_processed_downward_messages(a, 0).is_ok()); - - queue_downward_message(a, vec![1, 2, 3]).unwrap(); - queue_downward_message(a, vec![4, 5, 6]).unwrap(); - queue_downward_message(a, vec![7, 8, 9]).unwrap(); - - // 0 doesn't pass if the DMQ has msgs. 
- assert!(!Router::check_processed_downward_messages(a, 0).is_ok());
- // a candidate can consume up to 3 messages
- assert!(Router::check_processed_downward_messages(a, 1).is_ok());
- assert!(Router::check_processed_downward_messages(a, 2).is_ok());
- assert!(Router::check_processed_downward_messages(a, 3).is_ok());
- // there aren't 4 messages in the queue
- assert!(!Router::check_processed_downward_messages(a, 4).is_ok());
- });
- }
-
- #[test]
- fn dmq_pruning() {
- let a = ParaId::from(1312);
-
- new_test_ext(default_genesis_config()).execute_with(|| {
- assert_eq!(Router::dmq_length(a), 0);
-
- queue_downward_message(a, vec![1, 2, 3]).unwrap();
- queue_downward_message(a, vec![4, 5, 6]).unwrap();
- queue_downward_message(a, vec![7, 8, 9]).unwrap();
- assert_eq!(Router::dmq_length(a), 3);
-
- // pruning 0 elements shouldn't change anything.
- Router::prune_dmq(a, 0);
- assert_eq!(Router::dmq_length(a), 3);
-
- Router::prune_dmq(a, 2);
- assert_eq!(Router::dmq_length(a), 1);
- });
- }
-
- #[test]
- fn queue_downward_message_critical() {
- let a = ParaId::from(1312);
-
- let mut genesis = default_genesis_config();
- genesis.configuration.config.max_downward_message_size = 7;
-
- new_test_ext(genesis).execute_with(|| {
- let smol = [0; 3].to_vec();
- let big = [0; 8].to_vec();
-
- // still within limits
- assert_eq!(smol.encode().len(), 4);
- assert!(queue_downward_message(a, smol).is_ok());
-
- // that's too big
- assert_eq!(big.encode().len(), 9);
- assert!(queue_downward_message(a, big).is_err());
- });
- }
-}
diff --git a/runtime/parachains/src/router/hrmp.rs b/runtime/parachains/src/router/hrmp.rs
deleted file mode 100644
index 3bdd895cea8a..000000000000
--- a/runtime/parachains/src/router/hrmp.rs
+++ /dev/null
@@ -1,1345 +0,0 @@
-// Copyright 2020 Parity Technologies (UK) Ltd.
-// This file is part of Polkadot.
-
-// Polkadot is free software: you can redistribute it and/or modify
-// it under the terms of the GNU General Public License as published by
-// the Free Software Foundation, either version 3 of the License, or
-// (at your option) any later version.
-
-// Polkadot is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU General Public License for more details.
-
-// You should have received a copy of the GNU General Public License
-// along with Polkadot. If not, see .
-
-use super::{dmp, Error as DispatchError, Module, Store, Trait};
-use crate::{
- configuration::{self, HostConfiguration},
- paras,
-};
-use codec::{Decode, Encode};
-use frame_support::{ensure, traits::Get, weights::Weight, StorageMap, StorageValue};
-use primitives::v1::{
- Balance, Hash, HrmpChannelId, Id as ParaId, InboundHrmpMessage, OutboundHrmpMessage,
- SessionIndex,
-};
-use sp_runtime::traits::{BlakeTwo256, Hash as HashT};
-use sp_std::{mem, fmt, collections::{btree_map::BTreeMap, btree_set::BTreeSet}, prelude::*};
-
-/// A description of a request to open an HRMP channel.
-#[derive(Encode, Decode)]
-pub struct HrmpOpenChannelRequest {
- /// Indicates if this request was confirmed by the recipient.
- pub confirmed: bool,
- /// How many session boundaries ago this request was seen.
- pub age: SessionIndex,
- /// The amount that the sender supplied at the time of creation of this request.
- pub sender_deposit: Balance,
- /// The maximum message size that could be put into the channel.
- pub max_message_size: u32,
- /// The maximum number of messages that can be pending in the channel at once.
- pub max_capacity: u32,
- /// The maximum total size of the messages that can be pending in the channel at once.
- pub max_total_size: u32,
-}
-
-/// Metadata of an HRMP channel.
-#[derive(Encode, Decode)]
-#[cfg_attr(test, derive(Debug))]
-pub struct HrmpChannel {
- /// The amount that the sender supplied as a deposit when opening this channel.
- pub sender_deposit: Balance,
- /// The amount that the recipient supplied as a deposit when accepting the opening of this channel.
- pub recipient_deposit: Balance,
- /// The maximum number of messages that can be pending in the channel at once.
- pub max_capacity: u32,
- /// The maximum total size of the messages that can be pending in the channel at once.
- pub max_total_size: u32,
- /// The maximum message size that could be put into the channel.
- pub max_message_size: u32,
- /// The current number of messages pending in the channel.
- /// Invariant: should be less than or equal to `max_capacity`.
- pub msg_count: u32,
- /// The total size in bytes of all message payloads in the channel.
- /// Invariant: should be less than or equal to `max_total_size`.
- pub total_size: u32,
- /// The head of the Message Queue Chain for this channel. Each link in this chain has a form:
- /// `(prev_head, B, H(M))`, where
- /// - `prev_head`: is the previous value of `mqc_head` or zero if none.
- /// - `B`: is the [relay-chain] block number in which a message was appended.
- /// - `H(M)`: is the hash of the message being appended.
- /// This value is initialized to a special value that consists of all zeroes which indicates
- /// that no messages were previously added.
- pub mqc_head: Option,
-}
-
-/// An error returned by [`check_hrmp_watermark`] that indicates an acceptance criteria check
-/// didn't pass.
-pub enum HrmpWatermarkAcceptanceErr {
- AdvancementRule {
- new_watermark: BlockNumber,
- last_watermark: BlockNumber,
- },
- AheadRelayParent {
- new_watermark: BlockNumber,
- relay_chain_parent_number: BlockNumber,
- },
- LandsOnBlockWithNoMessages {
- new_watermark: BlockNumber,
- },
-}
-
-/// An error returned by [`check_outbound_hrmp`] that indicates an acceptance criteria check
-/// didn't pass.
-pub enum OutboundHrmpAcceptanceErr {
- MoreMessagesThanPermitted {
- sent: u32,
- permitted: u32,
- },
- NotSorted {
- idx: u32,
- },
- NoSuchChannel {
- idx: u32,
- channel_id: HrmpChannelId,
- },
- MaxMessageSizeExceeded {
- idx: u32,
- msg_size: u32,
- max_size: u32,
- },
- TotalSizeExceeded {
- idx: u32,
- total_size: u32,
- limit: u32,
- },
- CapacityExceeded {
- idx: u32,
- count: u32,
- limit: u32,
- },
-}
-
-impl fmt::Debug for HrmpWatermarkAcceptanceErr
-where
- BlockNumber: fmt::Debug,
-{
- fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
- use HrmpWatermarkAcceptanceErr::*;
- match self {
- AdvancementRule {
- new_watermark,
- last_watermark,
- } => write!(
- fmt,
- "the HRMP watermark is not advanced relative to the last watermark ({:?} > {:?})",
- new_watermark,
- last_watermark,
- ),
- AheadRelayParent {
- new_watermark,
- relay_chain_parent_number,
- } => write!(
- fmt,
- "the HRMP watermark is ahead of the relay-parent ({:?} > {:?})",
- new_watermark,
- relay_chain_parent_number,
- ),
- LandsOnBlockWithNoMessages { new_watermark } => write!(
- fmt,
- "the HRMP watermark ({:?}) doesn't land on a block with messages received",
- new_watermark,
- ),
- }
- }
-}
-
-impl fmt::Debug for OutboundHrmpAcceptanceErr {
- fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
- use OutboundHrmpAcceptanceErr::*;
- match self {
- MoreMessagesThanPermitted { sent, permitted } => write!(
- fmt,
- "more HRMP messages than permitted by config ({} > {})",
- sent,
- permitted,
- ),
- NotSorted { idx } => write!(
- fmt,
- "the HRMP messages are not sorted (first unsorted is at index {})",
- idx,
- ),
- NoSuchChannel { idx, channel_id } => write!(
- fmt,
- "the HRMP message at index {} is sent to a non-existent channel {:?}->{:?}",
- idx,
- channel_id.sender,
- channel_id.recipient,
- ),
- MaxMessageSizeExceeded {
- idx,
- msg_size,
- max_size,
- } => write!(
- fmt,
- "the HRMP message at index {} exceeds the negotiated channel maximum message size ({} > {})",
-
idx,
- msg_size,
- max_size,
- ),
- TotalSizeExceeded {
- idx,
- total_size,
- limit,
- } => write!(
- fmt,
- "sending the HRMP message at index {} would exceed the negotiated channel total size ({} > {})",
- idx,
- total_size,
- limit,
- ),
- CapacityExceeded { idx, count, limit } => write!(
- fmt,
- "sending the HRMP message at index {} would exceed the negotiated channel capacity ({} > {})",
- idx,
- count,
- limit,
- ),
- }
- }
-}
-
-/// Routines and getters related to HRMP.
-impl Module {
- /// Remove all storage entries associated with the given para.
- pub(super) fn clean_hrmp_after_outgoing(outgoing_para: ParaId) {
- ::HrmpOpenChannelRequestCount::remove(&outgoing_para);
- ::HrmpAcceptedChannelRequestCount::remove(&outgoing_para);
-
- // close all channels where the outgoing para acts as the recipient.
- for sender in ::HrmpIngressChannelsIndex::take(&outgoing_para) {
- Self::close_hrmp_channel(&HrmpChannelId {
- sender,
- recipient: outgoing_para.clone(),
- });
- }
- // close all channels where the outgoing para acts as the sender.
- for recipient in ::HrmpEgressChannelsIndex::take(&outgoing_para) {
- Self::close_hrmp_channel(&HrmpChannelId {
- sender: outgoing_para.clone(),
- recipient,
- });
- }
- }
-
- /// Iterate over all open channel requests and:
- ///
- /// - prune the stale requests
- /// - enact the confirmed requests
- pub(super) fn process_hrmp_open_channel_requests(config: &HostConfiguration) {
- let mut open_req_channels = ::HrmpOpenChannelRequestsList::get();
- if open_req_channels.is_empty() {
- return;
- }
-
- // iterate the vector starting from the end making our way to the beginning. This way we
- // can leverage `swap_remove` to efficiently remove an item during iteration.
- let mut idx = open_req_channels.len();
- loop {
- // bail if we've iterated over all items.
- if idx == 0 { - break; - } - - idx -= 1; - let channel_id = open_req_channels[idx].clone(); - let mut request = ::HrmpOpenChannelRequests::get(&channel_id).expect( - "can't be `None` due to the invariant that the list contains the same items as the set; qed", - ); - - if request.confirmed { - if >::is_valid_para(channel_id.sender) - && >::is_valid_para(channel_id.recipient) - { - ::HrmpChannels::insert( - &channel_id, - HrmpChannel { - sender_deposit: request.sender_deposit, - recipient_deposit: config.hrmp_recipient_deposit, - max_capacity: request.max_capacity, - max_total_size: request.max_total_size, - max_message_size: request.max_message_size, - msg_count: 0, - total_size: 0, - mqc_head: None, - }, - ); - - ::HrmpIngressChannelsIndex::mutate(&channel_id.recipient, |v| { - if let Err(i) = v.binary_search(&channel_id.sender) { - v.insert(i, channel_id.sender); - } - }); - ::HrmpEgressChannelsIndex::mutate(&channel_id.sender, |v| { - if let Err(i) = v.binary_search(&channel_id.recipient) { - v.insert(i, channel_id.recipient); - } - }); - } - - let new_open_channel_req_cnt = - ::HrmpOpenChannelRequestCount::get(&channel_id.sender) - .saturating_sub(1); - if new_open_channel_req_cnt != 0 { - ::HrmpOpenChannelRequestCount::insert( - &channel_id.sender, - new_open_channel_req_cnt, - ); - } else { - ::HrmpOpenChannelRequestCount::remove(&channel_id.sender); - } - - let new_accepted_channel_req_cnt = - ::HrmpAcceptedChannelRequestCount::get(&channel_id.recipient) - .saturating_sub(1); - if new_accepted_channel_req_cnt != 0 { - ::HrmpAcceptedChannelRequestCount::insert( - &channel_id.recipient, - new_accepted_channel_req_cnt, - ); - } else { - ::HrmpAcceptedChannelRequestCount::remove(&channel_id.recipient); - } - - let _ = open_req_channels.swap_remove(idx); - ::HrmpOpenChannelRequests::remove(&channel_id); - } else { - request.age += 1; - if request.age == config.hrmp_open_request_ttl { - // got stale - - ::HrmpOpenChannelRequestCount::mutate(&channel_id.sender, 
|v| {
- *v -= 1;
- });
-
- // TODO: return deposit https://github.com/paritytech/polkadot/issues/1907
-
- let _ = open_req_channels.swap_remove(idx);
- ::HrmpOpenChannelRequests::remove(&channel_id);
- }
- }
- }
-
- ::HrmpOpenChannelRequestsList::put(open_req_channels);
- }
-
- /// Iterate over all close channel requests unconditionally closing the channels.
- pub(super) fn process_hrmp_close_channel_requests() {
- let close_reqs = ::HrmpCloseChannelRequestsList::take();
- for condemned_ch_id in close_reqs {
- ::HrmpCloseChannelRequests::remove(&condemned_ch_id);
- Self::close_hrmp_channel(&condemned_ch_id);
-
- // clean up the indexes.
- ::HrmpEgressChannelsIndex::mutate(&condemned_ch_id.sender, |v| {
- if let Ok(i) = v.binary_search(&condemned_ch_id.recipient) {
- v.remove(i);
- }
- });
- ::HrmpIngressChannelsIndex::mutate(&condemned_ch_id.recipient, |v| {
- if let Ok(i) = v.binary_search(&condemned_ch_id.sender) {
- v.remove(i);
- }
- });
- }
- }
-
- /// Close and remove the designated HRMP channel.
- ///
- /// This includes returning the deposits. However, it doesn't include updating the ingress/egress
- /// indices.
- pub(super) fn close_hrmp_channel(channel_id: &HrmpChannelId) {
- // TODO: return deposit https://github.com/paritytech/polkadot/issues/1907
-
- ::HrmpChannels::remove(channel_id);
- ::HrmpChannelContents::remove(channel_id);
- }
-
- /// Check that the candidate of the given recipient controls the HRMP watermark properly.
- pub(crate) fn check_hrmp_watermark(
- recipient: ParaId,
- relay_chain_parent_number: T::BlockNumber,
- new_hrmp_watermark: T::BlockNumber,
- ) -> Result<(), HrmpWatermarkAcceptanceErr> {
- // First, check where the watermark CANNOT legally land.
- //
- // (a) For ensuring that messages are eventually delivered, a rule requires that each parablock's new
- // watermark be greater than the last one.
- // - // (b) However, a parachain cannot read into "the future", therefore the watermark should - // not be greater than the relay-chain context block which the parablock refers to. - if let Some(last_watermark) = ::HrmpWatermarks::get(&recipient) { - if new_hrmp_watermark <= last_watermark { - return Err(HrmpWatermarkAcceptanceErr::AdvancementRule { - new_watermark: new_hrmp_watermark, - last_watermark, - }); - } - } - if new_hrmp_watermark > relay_chain_parent_number { - return Err(HrmpWatermarkAcceptanceErr::AheadRelayParent { - new_watermark: new_hrmp_watermark, - relay_chain_parent_number, - }); - } - - // Second, check where the watermark CAN land. It's one of the following: - // - // (a) The relay parent block number. - // (b) A relay-chain block in which this para received at least one message. - if new_hrmp_watermark == relay_chain_parent_number { - Ok(()) - } else { - let digest = ::HrmpChannelDigests::get(&recipient); - if !digest - .binary_search_by_key(&new_hrmp_watermark, |(block_no, _)| *block_no) - .is_ok() - { - return Err(HrmpWatermarkAcceptanceErr::LandsOnBlockWithNoMessages { - new_watermark: new_hrmp_watermark, - }); - } - Ok(()) - } - } - - pub(crate) fn check_outbound_hrmp( - config: &HostConfiguration, - sender: ParaId, - out_hrmp_msgs: &[OutboundHrmpMessage], - ) -> Result<(), OutboundHrmpAcceptanceErr> { - if out_hrmp_msgs.len() as u32 > config.hrmp_max_message_num_per_candidate { - return Err(OutboundHrmpAcceptanceErr::MoreMessagesThanPermitted { - sent: out_hrmp_msgs.len() as u32, - permitted: config.hrmp_max_message_num_per_candidate, - }); - } - - let mut last_recipient = None::; - - for (idx, out_msg) in out_hrmp_msgs - .iter() - .enumerate() - .map(|(idx, out_msg)| (idx as u32, out_msg)) - { - match last_recipient { - // the messages must be sorted in ascending order and there must be no two messages sent - // to the same recipient. Thus we can check that every recipient is strictly greater than - // the previous one. 
- Some(last_recipient) if out_msg.recipient <= last_recipient => { - return Err(OutboundHrmpAcceptanceErr::NotSorted { idx }); - } - _ => last_recipient = Some(out_msg.recipient), - } - - let channel_id = HrmpChannelId { - sender, - recipient: out_msg.recipient, - }; - - let channel = match ::HrmpChannels::get(&channel_id) { - Some(channel) => channel, - None => { - return Err(OutboundHrmpAcceptanceErr::NoSuchChannel { channel_id, idx }); - } - }; - - let msg_size = out_msg.data.len() as u32; - if msg_size > channel.max_message_size { - return Err(OutboundHrmpAcceptanceErr::MaxMessageSizeExceeded { - idx, - msg_size, - max_size: channel.max_message_size, - }); - } - - let new_total_size = channel.total_size + out_msg.data.len() as u32; - if new_total_size > channel.max_total_size { - return Err(OutboundHrmpAcceptanceErr::TotalSizeExceeded { - idx, - total_size: new_total_size, - limit: channel.max_total_size, - }); - } - - let new_msg_count = channel.msg_count + 1; - if new_msg_count > channel.max_capacity { - return Err(OutboundHrmpAcceptanceErr::CapacityExceeded { - idx, - count: new_msg_count, - limit: channel.max_capacity, - }); - } - } - - Ok(()) - } - - pub(crate) fn prune_hrmp(recipient: ParaId, new_hrmp_watermark: T::BlockNumber) -> Weight { - let mut weight = 0; - - // sift through the incoming messages digest to collect the paras that sent at least one - // message to this parachain between the old and new watermarks. - let senders = ::HrmpChannelDigests::mutate(&recipient, |digest| { - let mut senders = BTreeSet::new(); - let mut leftover = Vec::with_capacity(digest.len()); - for (block_no, paras_sent_msg) in mem::replace(digest, Vec::new()) { - if block_no <= new_hrmp_watermark { - senders.extend(paras_sent_msg); - } else { - leftover.push((block_no, paras_sent_msg)); - } - } - *digest = leftover; - senders - }); - weight += T::DbWeight::get().reads_writes(1, 1); - - // having all senders we can trivially find out the channels which we need to prune. 
- let channels_to_prune = senders
- .into_iter()
- .map(|sender| HrmpChannelId { sender, recipient });
- for channel_id in channels_to_prune {
- // prune each channel up to the new watermark, keeping track of how many messages we
- // removed and their total byte size.
- let (mut pruned_cnt, mut pruned_size) = (0, 0);
-
- let contents = ::HrmpChannelContents::get(&channel_id);
- let mut leftover = Vec::with_capacity(contents.len());
- for msg in contents {
- if msg.sent_at <= new_hrmp_watermark {
- pruned_cnt += 1;
- pruned_size += msg.data.len();
- } else {
- leftover.push(msg);
- }
- }
- if !leftover.is_empty() {
- ::HrmpChannelContents::insert(&channel_id, leftover);
- } else {
- ::HrmpChannelContents::remove(&channel_id);
- }
-
- // update the channel metadata.
- ::HrmpChannels::mutate(&channel_id, |channel| {
- if let Some(ref mut channel) = channel {
- channel.msg_count -= pruned_cnt as u32;
- channel.total_size -= pruned_size as u32;
- }
- });
-
- weight += T::DbWeight::get().reads_writes(2, 2);
- }
-
- ::HrmpWatermarks::insert(&recipient, new_hrmp_watermark);
- weight += T::DbWeight::get().reads_writes(0, 1);
-
- weight
- }
-
- /// Process the outbound HRMP messages by putting them into the appropriate recipient queues.
- ///
- /// Returns the amount of weight consumed.
- pub(crate) fn queue_outbound_hrmp(
- sender: ParaId,
- out_hrmp_msgs: Vec>,
- ) -> Weight {
- let mut weight = 0;
- let now = >::block_number();
-
- for out_msg in out_hrmp_msgs {
- let channel_id = HrmpChannelId {
- sender,
- recipient: out_msg.recipient,
- };
-
- let mut channel = match ::HrmpChannels::get(&channel_id) {
- Some(channel) => channel,
- None => {
- // apparently, since the acceptance of this candidate the recipient was
- // offboarded and the channel no longer exists.
- continue;
- }
- };
-
- let inbound = InboundHrmpMessage {
- sent_at: now,
- data: out_msg.data,
- };
-
- // bookkeeping
- channel.msg_count += 1;
- channel.total_size += inbound.data.len() as u32;
-
- // compute the new MQC head of the channel
- let prev_head = channel.mqc_head.clone().unwrap_or(Default::default());
- let new_head = BlakeTwo256::hash_of(&(
- prev_head,
- inbound.sent_at,
- T::Hashing::hash_of(&inbound.data),
- ));
- channel.mqc_head = Some(new_head);
-
- ::HrmpChannels::insert(&channel_id, channel);
- ::HrmpChannelContents::append(&channel_id, inbound);
-
- // The digests are sorted in ascending order by block number. Assuming absence of
- // contextual execution, there are only two possible scenarios here:
- //
- // (a) It's the first time anybody sends a message to this recipient within this block.
- // In this case, the digest vector would be empty or the block number of the latest
- // entry is smaller than the current.
- //
- // (b) Somebody has already sent a message within the current block. That means that
- // the block number of the latest entry is equal to the current.
- //
- // Note that having the latest entry greater than the current block number is a logical
- // error.
- let mut recipient_digest = - ::HrmpChannelDigests::get(&channel_id.recipient); - if let Some(cur_block_digest) = recipient_digest - .last_mut() - .filter(|(block_no, _)| *block_no == now) - .map(|(_, ref mut d)| d) - { - cur_block_digest.push(sender); - } else { - recipient_digest.push((now, vec![sender])); - } - ::HrmpChannelDigests::insert(&channel_id.recipient, recipient_digest); - - weight += T::DbWeight::get().reads_writes(2, 2); - } - - weight - } - - pub(super) fn init_open_channel( - origin: ParaId, - recipient: ParaId, - proposed_max_capacity: u32, - proposed_max_message_size: u32, - ) -> Result<(), DispatchError> { - ensure!( - origin != recipient, - DispatchError::::OpenHrmpChannelToSelf - ); - ensure!( - >::is_valid_para(recipient), - DispatchError::::OpenHrmpChannelInvalidRecipient, - ); - - let config = >::config(); - ensure!( - proposed_max_capacity > 0, - DispatchError::::OpenHrmpChannelZeroCapacity, - ); - ensure!( - proposed_max_capacity <= config.hrmp_channel_max_capacity, - DispatchError::::OpenHrmpChannelCapacityExceedsLimit, - ); - ensure!( - proposed_max_message_size > 0, - DispatchError::::OpenHrmpChannelZeroMessageSize, - ); - ensure!( - proposed_max_message_size <= config.hrmp_channel_max_message_size, - DispatchError::::OpenHrmpChannelMessageSizeExceedsLimit, - ); - - let channel_id = HrmpChannelId { - sender: origin, - recipient, - }; - ensure!( - ::HrmpOpenChannelRequests::get(&channel_id).is_none(), - DispatchError::::OpenHrmpChannelAlreadyExists, - ); - ensure!( - ::HrmpChannels::get(&channel_id).is_none(), - DispatchError::::OpenHrmpChannelAlreadyRequested, - ); - - let egress_cnt = - ::HrmpEgressChannelsIndex::decode_len(&origin).unwrap_or(0) as u32; - let open_req_cnt = ::HrmpOpenChannelRequestCount::get(&origin); - let channel_num_limit = if >::is_parathread(origin) { - config.hrmp_max_parathread_outbound_channels - } else { - config.hrmp_max_parachain_outbound_channels - }; - ensure!( - egress_cnt + open_req_cnt < 
channel_num_limit,
- DispatchError::::OpenHrmpChannelLimitExceeded,
- );
-
- // TODO: Deposit https://github.com/paritytech/polkadot/issues/1907
-
- ::HrmpOpenChannelRequestCount::insert(&origin, open_req_cnt + 1);
- ::HrmpOpenChannelRequests::insert(
- &channel_id,
- HrmpOpenChannelRequest {
- confirmed: false,
- age: 0,
- sender_deposit: config.hrmp_sender_deposit,
- max_capacity: proposed_max_capacity,
- max_message_size: proposed_max_message_size,
- max_total_size: config.hrmp_channel_max_total_size,
- },
- );
- ::HrmpOpenChannelRequestsList::append(channel_id);
-
- let notification_bytes = {
- use xcm::v0::Xcm;
- use codec::Encode as _;
-
- Xcm::HrmpNewChannelOpenRequest {
- sender: u32::from(origin),
- max_capacity: proposed_max_capacity,
- max_message_size: proposed_max_message_size,
- }
- .encode()
- };
- if let Err(dmp::QueueDownwardMessageError::ExceedsMaxMessageSize) =
- Self::queue_downward_message(&config, recipient, notification_bytes)
- {
- // this should never happen unless the max downward message size is configured to a
- // ridiculously small number.
- debug_assert!(false);
- }
-
- Ok(())
- }
-
- pub(super) fn accept_open_channel(
- origin: ParaId,
- sender: ParaId,
- ) -> Result<(), DispatchError> {
- let channel_id = HrmpChannelId {
- sender,
- recipient: origin,
- };
- let mut channel_req = ::HrmpOpenChannelRequests::get(&channel_id)
- .ok_or(DispatchError::::AcceptHrmpChannelDoesntExist)?;
- ensure!(
- !channel_req.confirmed,
- DispatchError::::AcceptHrmpChannelAlreadyConfirmed,
- );
-
- // check if by accepting this open channel request, this parachain would exceed the
- // number of inbound channels.
- let config = >::config();
- let channel_num_limit = if >::is_parathread(origin) {
- config.hrmp_max_parathread_inbound_channels
- } else {
- config.hrmp_max_parachain_inbound_channels
- };
- let ingress_cnt =
- ::HrmpIngressChannelsIndex::decode_len(&origin).unwrap_or(0) as u32;
- let accepted_cnt = ::HrmpAcceptedChannelRequestCount::get(&origin);
- ensure!(
- ingress_cnt + accepted_cnt < channel_num_limit,
- DispatchError::::AcceptHrmpChannelLimitExceeded,
- );
-
- // TODO: Deposit https://github.com/paritytech/polkadot/issues/1907
-
- // persist the updated open channel request and then increment the number of accepted
- // channels.
- channel_req.confirmed = true;
- ::HrmpOpenChannelRequests::insert(&channel_id, channel_req);
- ::HrmpAcceptedChannelRequestCount::insert(&origin, accepted_cnt + 1);
-
- let notification_bytes = {
- use codec::Encode as _;
- use xcm::v0::Xcm;
-
- Xcm::HrmpChannelAccepted {
- recipient: u32::from(origin),
- }
- .encode()
- };
- if let Err(dmp::QueueDownwardMessageError::ExceedsMaxMessageSize) =
- Self::queue_downward_message(&config, sender, notification_bytes)
- {
- // this should never happen unless the max downward message size is configured to a
- // ridiculously small number.
- debug_assert!(false);
- }
-
- Ok(())
- }
-
- pub(super) fn close_channel(
- origin: ParaId,
- channel_id: HrmpChannelId,
- ) -> Result<(), DispatchError> {
- // check if the origin is allowed to close the channel.
- ensure!(
- origin == channel_id.sender || origin == channel_id.recipient,
- DispatchError::::CloseHrmpChannelUnauthorized,
- );
-
- // check that the channel requested to be closed does exist.
- ensure!(
- ::HrmpChannels::get(&channel_id).is_some(),
- DispatchError::::CloseHrmpChannelDoesntExist,
- );
-
- // check that there is no outstanding close request for this channel
- ensure!(
- ::HrmpCloseChannelRequests::get(&channel_id).is_none(),
- DispatchError::::CloseHrmpChannelAlreadyUnderway,
- );
-
- ::HrmpCloseChannelRequests::insert(&channel_id, ());
- ::HrmpCloseChannelRequestsList::append(channel_id.clone());
-
- let config = >::config();
- let notification_bytes = {
- use codec::Encode as _;
- use xcm::v0::Xcm;
-
- Xcm::HrmpChannelClosing {
- initiator: u32::from(origin),
- sender: u32::from(channel_id.sender),
- recipient: u32::from(channel_id.recipient),
- }
- .encode()
- };
- let opposite_party = if origin == channel_id.sender {
- channel_id.recipient
- } else {
- channel_id.sender
- };
- if let Err(dmp::QueueDownwardMessageError::ExceedsMaxMessageSize) =
- Self::queue_downward_message(&config, opposite_party, notification_bytes)
- {
- // this should never happen unless the max downward message size is configured to a
- // ridiculously small number.
- debug_assert!(false);
- }
-
- Ok(())
- }
-
- /// Returns the list of MQC heads for the inbound channels of the given recipient para paired
- /// with the sender para ids. This vector is sorted ascending by the para id and doesn't contain
- /// multiple entries with the same sender.
- pub(crate) fn hrmp_mqc_heads(recipient: ParaId) -> Vec<(ParaId, Hash)> {
- let sender_set = ::HrmpIngressChannelsIndex::get(&recipient);
-
- // The ingress channels vector is sorted, thus `mqc_heads` is sorted as well.
- let mut mqc_heads = Vec::with_capacity(sender_set.len()); - for sender in sender_set { - let channel_metadata = - ::HrmpChannels::get(&HrmpChannelId { sender, recipient }); - let mqc_head = channel_metadata - .and_then(|metadata| metadata.mqc_head) - .unwrap_or(Hash::default()); - mqc_heads.push((sender, mqc_head)); - } - - mqc_heads - } - - /// Returns contents of all channels addressed to the given recipient. Channels that have no - /// messages in them are also included. - pub(crate) fn inbound_hrmp_channels_contents( - recipient: ParaId, - ) -> BTreeMap>> { - let sender_set = ::HrmpIngressChannelsIndex::get(&recipient); - - let mut inbound_hrmp_channels_contents = BTreeMap::new(); - for sender in sender_set { - let channel_contents = - ::HrmpChannelContents::get(&HrmpChannelId { sender, recipient }); - inbound_hrmp_channels_contents.insert(sender, channel_contents); - } - - inbound_hrmp_channels_contents - } -} - -#[cfg(test)] -mod tests { - use super::*; - use crate::mock::{new_test_ext, Configuration, Paras, Router, System}; - use crate::router::tests::default_genesis_config; - use primitives::v1::BlockNumber; - use std::collections::{BTreeMap, HashSet}; - - pub(crate) fn run_to_block(to: BlockNumber, new_session: Option>) { - use frame_support::traits::{OnFinalize as _, OnInitialize as _}; - - while System::block_number() < to { - let b = System::block_number(); - - // NOTE: this is in reverse initialization order. - Router::initializer_finalize(); - Paras::initializer_finalize(); - - System::on_finalize(b); - - System::on_initialize(b + 1); - System::set_block_number(b + 1); - - if new_session.as_ref().map_or(false, |v| v.contains(&(b + 1))) { - // NOTE: this is in initialization order. - Paras::initializer_on_new_session(&Default::default()); - Router::initializer_on_new_session(&Default::default()); - } - - // NOTE: this is in initialization order. 
- Paras::initializer_initialize(b + 1); - Router::initializer_initialize(b + 1); - } - } - - struct GenesisConfigBuilder { - hrmp_channel_max_capacity: u32, - hrmp_channel_max_message_size: u32, - hrmp_max_parathread_outbound_channels: u32, - hrmp_max_parachain_outbound_channels: u32, - hrmp_max_parathread_inbound_channels: u32, - hrmp_max_parachain_inbound_channels: u32, - hrmp_max_message_num_per_candidate: u32, - hrmp_channel_max_total_size: u32, - } - - impl Default for GenesisConfigBuilder { - fn default() -> Self { - Self { - hrmp_channel_max_capacity: 2, - hrmp_channel_max_message_size: 8, - hrmp_max_parathread_outbound_channels: 1, - hrmp_max_parachain_outbound_channels: 2, - hrmp_max_parathread_inbound_channels: 1, - hrmp_max_parachain_inbound_channels: 2, - hrmp_max_message_num_per_candidate: 2, - hrmp_channel_max_total_size: 16, - } - } - } - - impl GenesisConfigBuilder { - fn build(self) -> crate::mock::GenesisConfig { - let mut genesis = default_genesis_config(); - let config = &mut genesis.configuration.config; - config.hrmp_channel_max_capacity = self.hrmp_channel_max_capacity; - config.hrmp_channel_max_message_size = self.hrmp_channel_max_message_size; - config.hrmp_max_parathread_outbound_channels = - self.hrmp_max_parathread_outbound_channels; - config.hrmp_max_parachain_outbound_channels = self.hrmp_max_parachain_outbound_channels; - config.hrmp_max_parathread_inbound_channels = self.hrmp_max_parathread_inbound_channels; - config.hrmp_max_parachain_inbound_channels = self.hrmp_max_parachain_inbound_channels; - config.hrmp_max_message_num_per_candidate = self.hrmp_max_message_num_per_candidate; - config.hrmp_channel_max_total_size = self.hrmp_channel_max_total_size; - genesis - } - } - - fn register_parachain(id: ParaId) { - Paras::schedule_para_initialize( - id, - crate::paras::ParaGenesisArgs { - parachain: true, - genesis_head: vec![1].into(), - validation_code: vec![1].into(), - }, - ); - } - - fn deregister_parachain(id: ParaId) { - 
-		Paras::schedule_para_cleanup(id);
-	}
-
-	fn channel_exists(sender: ParaId, recipient: ParaId) -> bool {
-		<Router as Store>::HrmpChannels::get(&HrmpChannelId { sender, recipient }).is_some()
-	}
-
-	fn assert_storage_consistency_exhaustive() {
-		use frame_support::IterableStorageMap;
-
-		assert_eq!(
-			<Router as Store>::HrmpOpenChannelRequests::iter()
-				.map(|(k, _)| k)
-				.collect::<HashSet<_>>(),
-			<Router as Store>::HrmpOpenChannelRequestsList::get()
-				.into_iter()
-				.collect::<HashSet<_>>(),
-		);
-
-		// verify that the set of keys in `HrmpOpenChannelRequestCount` corresponds to the set
-		// of _senders_ in `HrmpOpenChannelRequests`.
-		//
-		// having ensured that, we can go ahead and go over all counts and verify that they match.
-		assert_eq!(
-			<Router as Store>::HrmpOpenChannelRequestCount::iter()
-				.map(|(k, _)| k)
-				.collect::<HashSet<_>>(),
-			<Router as Store>::HrmpOpenChannelRequests::iter()
-				.map(|(k, _)| k.sender)
-				.collect::<HashSet<_>>(),
-		);
-		for (open_channel_initiator, expected_num) in
-			<Router as Store>::HrmpOpenChannelRequestCount::iter()
-		{
-			let actual_num = <Router as Store>::HrmpOpenChannelRequests::iter()
-				.filter(|(ch, _)| ch.sender == open_channel_initiator)
-				.count() as u32;
-			assert_eq!(expected_num, actual_num);
-		}
-
-		// The same as above, but for the accepted channel request count. Note that we are interested
-		// only in confirmed open requests.
-		assert_eq!(
-			<Router as Store>::HrmpAcceptedChannelRequestCount::iter()
-				.map(|(k, _)| k)
-				.collect::<HashSet<_>>(),
-			<Router as Store>::HrmpOpenChannelRequests::iter()
-				.filter(|(_, v)| v.confirmed)
-				.map(|(k, _)| k.recipient)
-				.collect::<HashSet<_>>(),
-		);
-		for (channel_recipient, expected_num) in
-			<Router as Store>::HrmpAcceptedChannelRequestCount::iter()
-		{
-			let actual_num = <Router as Store>::HrmpOpenChannelRequests::iter()
-				.filter(|(ch, v)| ch.recipient == channel_recipient && v.confirmed)
-				.count() as u32;
-			assert_eq!(expected_num, actual_num);
-		}
-
-		assert_eq!(
-			<Router as Store>::HrmpCloseChannelRequests::iter()
-				.map(|(k, _)| k)
-				.collect::<HashSet<_>>(),
-			<Router as Store>::HrmpCloseChannelRequestsList::get()
-				.into_iter()
-				.collect::<HashSet<_>>(),
-		);
-
-		// An HRMP watermark can be None for an onboarded parachain. However, an offboarded parachain
-		// cannot have an HRMP watermark: it should've been cleaned up.
-		assert_contains_only_onboarded(
-			<Router as Store>::HrmpWatermarks::iter().map(|(k, _)| k),
-			"HRMP watermarks should contain only onboarded paras",
-		);
-
-		// An entry in `HrmpChannels` indicates that the channel is open. Only open channels can
-		// have contents.
-		for (non_empty_channel, contents) in <Router as Store>::HrmpChannelContents::iter() {
-			assert!(<Router as Store>::HrmpChannels::contains_key(
-				&non_empty_channel
-			));
-
-			// pedantic check: there should be no empty vectors in storage, those should be modeled
-			// by a removed kv pair.
-			assert!(!contents.is_empty());
-		}
-
-		// Senders and recipients must be onboarded. Otherwise, all channels associated with them
-		// are removed.
-		assert_contains_only_onboarded(
-			<Router as Store>::HrmpChannels::iter().flat_map(|(k, _)| vec![k.sender, k.recipient]),
-			"senders and recipients in all channels should be onboarded",
-		);
-
-		// Check the docs for `HrmpIngressChannelsIndex` and `HrmpEgressChannelsIndex` in decl_storage
-		// to get an idea of what the channel mapping indexes are.
-		//
-		// Here, from the indexes
-		//
-		//     ingress          egress
-		//
-		//     a -> [x, y]      x -> [a, b]
-		//     b -> [x, z]      y -> [a]
-		//                      z -> [b]
-		//
-		// we derive a list of channels they represent.
-		//
-		//     (a, x)           (a, x)
-		//     (a, y)           (a, y)
-		//     (b, x)           (b, x)
-		//     (b, z)           (b, z)
-		//
-		// and then compare that to the channel list in `HrmpChannels`.
-		let channel_set_derived_from_ingress = <Router as Store>::HrmpIngressChannelsIndex::iter()
-			.flat_map(|(p, v)| v.into_iter().map(|i| (i, p)).collect::<Vec<_>>())
-			.collect::<HashSet<_>>();
-		let channel_set_derived_from_egress = <Router as Store>::HrmpEgressChannelsIndex::iter()
-			.flat_map(|(p, v)| v.into_iter().map(|e| (p, e)).collect::<Vec<_>>())
-			.collect::<HashSet<_>>();
-		let channel_set_ground_truth = <Router as Store>::HrmpChannels::iter()
-			.map(|(k, _)| (k.sender, k.recipient))
-			.collect::<HashSet<_>>();
-		assert_eq!(
-			channel_set_derived_from_ingress,
-			channel_set_derived_from_egress
-		);
-		assert_eq!(channel_set_derived_from_egress, channel_set_ground_truth);
-
-		<Router as Store>::HrmpIngressChannelsIndex::iter()
-			.map(|(_, v)| v)
-			.for_each(|v| assert_is_sorted(&v, "HrmpIngressChannelsIndex"));
-		<Router as Store>::HrmpEgressChannelsIndex::iter()
-			.map(|(_, v)| v)
-			.for_each(|v| assert_is_sorted(&v, "HrmpEgressChannelsIndex"));
-
-		assert_contains_only_onboarded(
-			<Router as Store>::HrmpChannelDigests::iter().map(|(k, _)| k),
-			"HRMP channel digests should contain only onboarded paras",
-		);
-		for (_digest_for_para, digest) in <Router as Store>::HrmpChannelDigests::iter() {
-			// Assert that items are in **strictly** ascending order. The strictness also implies
-			// there are no duplicates.
-			assert!(digest.windows(2).all(|xs| xs[0].0 < xs[1].0));
-
-			for (_, mut senders) in digest {
-				assert!(!senders.is_empty());
-
-				// check for duplicates. For that we sort the vector, then perform deduplication.
-				// if the vector stayed the same, there are no duplicates.
-				senders.sort();
-				let orig_senders = senders.clone();
-				senders.dedup();
-				assert_eq!(
-					orig_senders, senders,
-					"duplicates removed implies existence of duplicates"
-				);
-			}
-		}
-
-		fn assert_contains_only_onboarded(iter: impl Iterator<Item = ParaId>, cause: &str) {
-			for para in iter {
-				assert!(
-					Paras::is_valid_para(para),
-					"{}: {} para is offboarded",
-					cause,
-					para
-				);
-			}
-		}
-	}
-
-	fn assert_is_sorted<T: Ord>(slice: &[T], id: &str) {
-		assert!(
-			slice.windows(2).all(|xs| xs[0] <= xs[1]),
-			"{} supposed to be sorted",
-			id
-		);
-	}
-
-	#[test]
-	fn empty_state_consistent_state() {
-		new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
-			assert_storage_consistency_exhaustive();
-		});
-	}
-
-	#[test]
-	fn open_channel_works() {
-		let para_a = 1.into();
-		let para_b = 3.into();
-
-		new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
-			// We need both A & B to be registered and alive parachains.
-			register_parachain(para_a);
-			register_parachain(para_b);
-
-			run_to_block(5, Some(vec![5]));
-			Router::init_open_channel(para_a, para_b, 2, 8).unwrap();
-			assert_storage_consistency_exhaustive();
-
-			Router::accept_open_channel(para_b, para_a).unwrap();
-			assert_storage_consistency_exhaustive();
-
-			// Advance to block 6, but without a session change. That means that the channel has
-			// not been created yet.
-			run_to_block(6, None);
-			assert!(!channel_exists(para_a, para_b));
-			assert_storage_consistency_exhaustive();
-
-			// Now let the session change happen and thus open the channel.
-			run_to_block(8, Some(vec![8]));
-			assert!(channel_exists(para_a, para_b));
-		});
-	}
-
-	#[test]
-	fn close_channel_works() {
-		let para_a = 5.into();
-		let para_b = 2.into();
-
-		new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
-			register_parachain(para_a);
-			register_parachain(para_b);
-
-			run_to_block(5, Some(vec![5]));
-			Router::init_open_channel(para_a, para_b, 2, 8).unwrap();
-			Router::accept_open_channel(para_b, para_a).unwrap();
-
-			run_to_block(6, Some(vec![6]));
-			assert!(channel_exists(para_a, para_b));
-
-			// Close the channel. The effect is not immediate, but rather deferred to the next
-			// session change.
-			Router::close_channel(
-				para_b,
-				HrmpChannelId {
-					sender: para_a,
-					recipient: para_b,
-				},
-			)
-			.unwrap();
-			assert!(channel_exists(para_a, para_b));
-			assert_storage_consistency_exhaustive();
-
-			// After the session change the channel should be closed.
-			run_to_block(8, Some(vec![8]));
-			assert!(!channel_exists(para_a, para_b));
-			assert_storage_consistency_exhaustive();
-		});
-	}
-
-	#[test]
-	fn send_recv_messages() {
-		let para_a = 32.into();
-		let para_b = 64.into();
-
-		let mut genesis = GenesisConfigBuilder::default();
-		genesis.hrmp_channel_max_message_size = 20;
-		genesis.hrmp_channel_max_total_size = 20;
-		new_test_ext(genesis.build()).execute_with(|| {
-			register_parachain(para_a);
-			register_parachain(para_b);
-
-			run_to_block(5, Some(vec![5]));
-			Router::init_open_channel(para_a, para_b, 2, 20).unwrap();
-			Router::accept_open_channel(para_b, para_a).unwrap();
-
-			// On Block 6:
-			// A sends a message to B
-			run_to_block(6, Some(vec![6]));
-			assert!(channel_exists(para_a, para_b));
-			let msgs = vec![OutboundHrmpMessage {
-				recipient: para_b,
-				data: b"this is an emergency".to_vec(),
-			}];
-			let config = Configuration::config();
-			assert!(Router::check_outbound_hrmp(&config, para_a, &msgs).is_ok());
-			let _ = Router::queue_outbound_hrmp(para_a, msgs);
-			assert_storage_consistency_exhaustive();
-
-			// On Block 7:
-			// B receives the message sent by A. B sets the watermark to 6.
-			run_to_block(7, None);
-			assert!(Router::check_hrmp_watermark(para_b, 7, 6).is_ok());
-			let _ = Router::prune_hrmp(para_b, 6);
-			assert_storage_consistency_exhaustive();
-		});
-	}
-
-	#[test]
-	fn accept_incoming_request_and_offboard() {
-		let para_a = 32.into();
-		let para_b = 64.into();
-
-		new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
-			register_parachain(para_a);
-			register_parachain(para_b);
-
-			run_to_block(5, Some(vec![5]));
-			Router::init_open_channel(para_a, para_b, 2, 8).unwrap();
-			Router::accept_open_channel(para_b, para_a).unwrap();
-			deregister_parachain(para_a);
-
-			// On Block 6: session change. The channel should not be created.
-			run_to_block(6, Some(vec![6]));
-			assert!(!Paras::is_valid_para(para_a));
-			assert!(!channel_exists(para_a, para_b));
-			assert_storage_consistency_exhaustive();
-		});
-	}
-
-	#[test]
-	fn check_sent_messages() {
-		let para_a = 32.into();
-		let para_b = 64.into();
-		let para_c = 97.into();
-
-		new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
-			register_parachain(para_a);
-			register_parachain(para_b);
-			register_parachain(para_c);
-
-			run_to_block(5, Some(vec![5]));
-
-			// Open two channels to the same receiver, b:
-			// a -> b, c -> b
-			Router::init_open_channel(para_a, para_b, 2, 8).unwrap();
-			Router::accept_open_channel(para_b, para_a).unwrap();
-			Router::init_open_channel(para_c, para_b, 2, 8).unwrap();
-			Router::accept_open_channel(para_b, para_c).unwrap();
-
-			// On Block 6: session change.
-			run_to_block(6, Some(vec![6]));
-			assert!(Paras::is_valid_para(para_a));
-
-			let msgs = vec![OutboundHrmpMessage {
-				recipient: para_b,
-				data: b"knock".to_vec(),
-			}];
-			let config = Configuration::config();
-			assert!(Router::check_outbound_hrmp(&config, para_a, &msgs).is_ok());
-			let _ = Router::queue_outbound_hrmp(para_a, msgs.clone());
-
-			// Verify that the sent messages are there and that also the empty channels are present.
-			let mqc_heads = Router::hrmp_mqc_heads(para_b);
-			let contents = Router::inbound_hrmp_channels_contents(para_b);
-			assert_eq!(
-				contents,
-				vec![
-					(
-						para_a,
-						vec![InboundHrmpMessage {
-							sent_at: 6,
-							data: b"knock".to_vec(),
-						}]
-					),
-					(para_c, vec![])
-				]
-				.into_iter()
-				.collect::<BTreeMap<_, _>>(),
-			);
-			assert_eq!(
-				mqc_heads,
-				vec![
-					(
-						para_a,
-						hex_literal::hex!(
-							"3bba6404e59c91f51deb2ae78f1273ebe75896850713e13f8c0eba4b0996c483"
-						)
-						.into()
-					),
-					(para_c, Default::default())
-				],
-			);
-
-			assert_storage_consistency_exhaustive();
-		});
-	}
-}
diff --git a/runtime/parachains/src/router/ump.rs b/runtime/parachains/src/router/ump.rs
deleted file mode 100644
index 2bfdafbb6c34..000000000000
--- a/runtime/parachains/src/router/ump.rs
+++ /dev/null
@@ -1,784 +0,0 @@
-// Copyright 2020 Parity Technologies (UK) Ltd.
-// This file is part of Polkadot.
-
-// Polkadot is free software: you can redistribute it and/or modify
-// it under the terms of the GNU General Public License as published by
-// the Free Software Foundation, either version 3 of the License, or
-// (at your option) any later version.
-
-// Polkadot is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU General Public License for more details.
-
-// You should have received a copy of the GNU General Public License
-// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
-
-use super::{Trait, Module, Store};
-use crate::configuration::{self, HostConfiguration};
-use sp_std::{fmt, prelude::*};
-use sp_std::collections::{btree_map::BTreeMap, vec_deque::VecDeque};
-use frame_support::{StorageMap, StorageValue, weights::Weight, traits::Get};
-use primitives::v1::{Id as ParaId, UpwardMessage};
-
-/// All upward messages coming from parachains will be funneled into an implementation of this trait.
-///
-/// The message is opaque from the perspective of UMP. The message size can range from 0 to
-/// `config.max_upward_message_size`.
-///
-/// It's up to the implementation of this trait to decide what to do with a message as long as it
-/// returns the amount of weight consumed in the process of handling it. Ignoring a message is a valid
-/// strategy.
-///
-/// There are no guarantees on how much time it takes for the message sent by a candidate to end up
-/// in the sink after the candidate was enacted. That typically depends on the UMP traffic, the sizes
-/// of upward messages and the configuration of UMP.
-///
-/// It is possible that by the time the message is sunk the origin parachain was offboarded. It is
-/// up to the implementer to check that if it cares.
-pub trait UmpSink {
-	/// Process an incoming upward message and return the amount of weight it consumed.
-	///
-	/// See the trait docs for more details.
-	fn process_upward_message(origin: ParaId, msg: Vec<u8>) -> Weight;
-}
-
-/// An implementation of a sink that just swallows the message without consuming any weight.
-impl UmpSink for () {
-	fn process_upward_message(_: ParaId, _: Vec<u8>) -> Weight {
-		0
-	}
-}
-
-/// An error returned by [`check_upward_messages`] that indicates a violation of one of the
-/// acceptance criteria rules.
-pub enum AcceptanceCheckErr {
-	MoreMessagesThanPermitted {
-		sent: u32,
-		permitted: u32,
-	},
-	MessageSize {
-		idx: u32,
-		msg_size: u32,
-		max_size: u32,
-	},
-	CapacityExceeded {
-		count: u32,
-		limit: u32,
-	},
-	TotalSizeExceeded {
-		total_size: u32,
-		limit: u32,
-	},
-}
-
-impl fmt::Debug for AcceptanceCheckErr {
-	fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
-		match *self {
-			AcceptanceCheckErr::MoreMessagesThanPermitted { sent, permitted } => write!(
-				fmt,
-				"more upward messages than permitted by config ({} > {})",
-				sent,
-				permitted,
-			),
-			AcceptanceCheckErr::MessageSize {
-				idx,
-				msg_size,
-				max_size,
-			} => write!(
-				fmt,
-				"upward message idx {} larger than permitted by config ({} > {})",
-				idx,
-				msg_size,
-				max_size,
-			),
-			AcceptanceCheckErr::CapacityExceeded { count, limit } => write!(
-				fmt,
-				"the ump queue would have more items than permitted by config ({} > {})",
-				count,
-				limit,
-			),
-			AcceptanceCheckErr::TotalSizeExceeded { total_size, limit } => write!(
-				fmt,
-				"the ump queue would have grown past the max size permitted by config ({} > {})",
-				total_size,
-				limit,
-			),
-		}
-	}
-}
-
-/// Routines related to the upward message passing.
-impl<T: Trait> Module<T> {
-	pub(super) fn clean_ump_after_outgoing(outgoing_para: ParaId) {
-		<Self as Store>::RelayDispatchQueueSize::remove(&outgoing_para);
-		<Self as Store>::RelayDispatchQueues::remove(&outgoing_para);
-
-		// Remove the outgoing para from the `NeedsDispatch` list and from
-		// `NextDispatchRoundStartWith`.
-		//
-		// That's needed for maintaining the invariant that `NextDispatchRoundStartWith` points to an
-		// existing item in `NeedsDispatch`.
-		<Self as Store>::NeedsDispatch::mutate(|v| {
-			if let Ok(i) = v.binary_search(&outgoing_para) {
-				v.remove(i);
-			}
-		});
-		<Self as Store>::NextDispatchRoundStartWith::mutate(|v| {
-			*v = v.filter(|p| *p != outgoing_para)
-		});
-	}
-
-	/// Check that all the upward messages sent by a candidate pass the acceptance criteria. Returns
-	/// an error if any of the messages doesn't pass.
-	pub(crate) fn check_upward_messages(
-		config: &HostConfiguration<T::BlockNumber>,
-		para: ParaId,
-		upward_messages: &[UpwardMessage],
-	) -> Result<(), AcceptanceCheckErr> {
-		if upward_messages.len() as u32 > config.max_upward_message_num_per_candidate {
-			return Err(AcceptanceCheckErr::MoreMessagesThanPermitted {
-				sent: upward_messages.len() as u32,
-				permitted: config.max_upward_message_num_per_candidate,
-			});
-		}
-
-		let (mut para_queue_count, mut para_queue_size) =
-			<Self as Store>::RelayDispatchQueueSize::get(&para);
-
-		for (idx, msg) in upward_messages.into_iter().enumerate() {
-			let msg_size = msg.len() as u32;
-			if msg_size > config.max_upward_message_size {
-				return Err(AcceptanceCheckErr::MessageSize {
-					idx: idx as u32,
-					msg_size,
-					max_size: config.max_upward_message_size,
-				});
-			}
-			para_queue_count += 1;
-			para_queue_size += msg_size;
-		}
-
-		// make sure that the queue is not overfilled.
-		// we do it here only once since returning an error invalidates the whole relay-chain block.
-		if para_queue_count > config.max_upward_queue_count {
-			return Err(AcceptanceCheckErr::CapacityExceeded {
-				count: para_queue_count,
-				limit: config.max_upward_queue_count,
-			});
-		}
-		if para_queue_size > config.max_upward_queue_size {
-			return Err(AcceptanceCheckErr::TotalSizeExceeded {
-				total_size: para_queue_size,
-				limit: config.max_upward_queue_size,
-			});
-		}
-
-		Ok(())
-	}
-
-	/// Enacts all the upward messages sent by a candidate.
-	pub(crate) fn enact_upward_messages(
-		para: ParaId,
-		upward_messages: Vec<UpwardMessage>,
-	) -> Weight {
-		let mut weight = 0;
-
-		if !upward_messages.is_empty() {
-			let (extra_cnt, extra_size) = upward_messages
-				.iter()
-				.fold((0, 0), |(cnt, size), d| (cnt + 1, size + d.len() as u32));
-
-			<Self as Store>::RelayDispatchQueues::mutate(&para, |v| {
-				v.extend(upward_messages.into_iter())
-			});
-
-			<Self as Store>::RelayDispatchQueueSize::mutate(
-				&para,
-				|(ref mut cnt, ref mut size)| {
-					*cnt += extra_cnt;
-					*size += extra_size;
-				},
-			);
-
-			<Self as Store>::NeedsDispatch::mutate(|v| {
-				if let Err(i) = v.binary_search(&para) {
-					v.insert(i, para);
-				}
-			});
-
-			weight += T::DbWeight::get().reads_writes(3, 3);
-		}
-
-		weight
-	}
-
-	/// Devote some time into dispatching pending upward messages.
-	pub(crate) fn process_pending_upward_messages() {
-		let mut used_weight_so_far = 0;
-
-		let config = <configuration::Module<T>>::config();
-		let mut cursor = NeedsDispatchCursor::new::<T>();
-		let mut queue_cache = QueueCache::new();
-
-		while let Some(dispatchee) = cursor.peek() {
-			if used_weight_so_far >= config.preferred_dispatchable_upward_messages_step_weight {
-				// Then check whether we've reached or overshot the
-				// preferred weight for the dispatching stage.
-				//
-				// if so - bail.
-				break;
-			}
-
-			// dequeue the next message from the queue of the dispatchee
-			let (upward_message, became_empty) = queue_cache.dequeue::<T>(dispatchee);
-			if let Some(upward_message) = upward_message {
-				used_weight_so_far +=
-					T::UmpSink::process_upward_message(dispatchee, upward_message);
-			}
-
-			if became_empty {
-				// the queue is empty now - this para doesn't need attention anymore.
-				cursor.remove();
-			} else {
-				cursor.advance();
-			}
-		}
-
-		cursor.flush::<T>();
-		queue_cache.flush::<T>();
-	}
-}
-
-/// To avoid constantly fetching, deserializing and serializing the queues, they are cached.
-///
-/// After an item is dequeued from a queue for the first time, the queue is stored in this struct rather
-/// than being serialized and persisted.
-///
-/// This implementation works best when:
-///
-/// 1. the queues are shallow
-/// 2. the dispatcher makes more than one cycle
-///
-/// If the queues are deep and there are many of them, we would load and keep the queues for a long
-/// time, thus increasing the peak memory consumption of the wasm runtime. Under such conditions
-/// persisting queues might play better since it's unlikely that they are going to be requested once
-/// more.
-///
-/// On the other hand, the situation when deep queues exist and it takes more than one dispatcher
-/// cycle to traverse the queues is already sub-optimal and better be avoided.
-///
-/// This struct is not supposed to be dropped but rather to be consumed by [`flush`].
-struct QueueCache(BTreeMap<ParaId, QueueCacheEntry>);
-
-struct QueueCacheEntry {
-	queue: VecDeque<UpwardMessage>,
-	count: u32,
-	total_size: u32,
-}
-
-impl QueueCache {
-	fn new() -> Self {
-		Self(BTreeMap::new())
-	}
-
-	/// Dequeues one item from the upward message queue of the given para.
-	///
-	/// Returns `(upward_message, became_empty)`, where
-	///
-	/// - `upward_message` is a dequeued message or `None` if the queue _was_ empty.
-	/// - `became_empty` is true if the queue _became_ empty.
-	fn dequeue<T: Trait>(&mut self, para: ParaId) -> (Option<UpwardMessage>, bool) {
-		let cache_entry = self.0.entry(para).or_insert_with(|| {
-			let queue = <Module<T> as Store>::RelayDispatchQueues::get(&para);
-			let (count, total_size) = <Module<T> as Store>::RelayDispatchQueueSize::get(&para);
-			QueueCacheEntry {
-				queue,
-				count,
-				total_size,
-			}
-		});
-		let upward_message = cache_entry.queue.pop_front();
-		if let Some(ref msg) = upward_message {
-			cache_entry.count -= 1;
-			cache_entry.total_size -= msg.len() as u32;
-		}
-
-		let became_empty = cache_entry.queue.is_empty();
-		(upward_message, became_empty)
-	}
-
-	/// Flushes the updated queues into the storage.
-	fn flush<T: Trait>(self) {
-		// NOTE we use an explicit method here instead of a Drop impl because it has unwanted semantics
-		// within the runtime. It is dangerous to use because of double-panics, and flushing on a panic
-		// is not necessary either.
-		for (
-			para,
-			QueueCacheEntry {
-				queue,
-				count,
-				total_size,
-			},
-		) in self.0
-		{
-			if queue.is_empty() {
-				// remove the entries altogether.
-				<Module<T> as Store>::RelayDispatchQueues::remove(&para);
-				<Module<T> as Store>::RelayDispatchQueueSize::remove(&para);
-			} else {
-				<Module<T> as Store>::RelayDispatchQueues::insert(&para, queue);
-				<Module<T> as Store>::RelayDispatchQueueSize::insert(&para, (count, total_size));
-			}
-		}
-	}
-}
-
-/// A cursor that iterates over all entries in `NeedsDispatch`.
-///
-/// This cursor will start with the para indicated by the `NextDispatchRoundStartWith` storage entry.
-/// This cursor is cyclic, meaning that after reaching the end it will jump to the beginning. Unlike
-/// an iterator, this cursor allows removing items during the iteration.
-///
-/// Each iteration cycle *must be* concluded with a call to either `advance` or `remove`.
-///
-/// This struct is not supposed to be dropped but rather to be consumed by [`flush`].
-#[derive(Debug)]
-struct NeedsDispatchCursor {
-	needs_dispatch: Vec<ParaId>,
-	cur_idx: usize,
-}
-
-impl NeedsDispatchCursor {
-	fn new<T: Trait>() -> Self {
-		let needs_dispatch: Vec<ParaId> = <Module<T> as Store>::NeedsDispatch::get();
-		let start_with = <Module<T> as Store>::NextDispatchRoundStartWith::get();
-
-		let start_with_idx = match start_with {
-			Some(para) => match needs_dispatch.binary_search(&para) {
-				Ok(found_idx) => found_idx,
-				Err(_supposed_idx) => {
-					// well, that's weird, because we maintain an invariant that
-					// `NextDispatchRoundStartWith` must point into one of the items in
-					// `NeedsDispatch`.
-					//
-					// let's select 0 as the starting index as a safe bet.
-					debug_assert!(false);
-					0
-				}
-			},
-			None => 0,
-		};
-
-		Self {
-			needs_dispatch,
-			cur_idx: start_with_idx,
-		}
-	}
-
-	/// Returns the item the cursor points to.
-	fn peek(&self) -> Option<ParaId> {
-		self.needs_dispatch.get(self.cur_idx).cloned()
-	}
-
-	/// Moves the cursor to the next item.
-	fn advance(&mut self) {
-		if self.needs_dispatch.is_empty() {
-			return;
-		}
-		self.cur_idx = (self.cur_idx + 1) % self.needs_dispatch.len();
-	}
-
-	/// Removes the item under the cursor.
-	fn remove(&mut self) {
-		if self.needs_dispatch.is_empty() {
-			return;
-		}
-		let _ = self.needs_dispatch.remove(self.cur_idx);
-
-		// we might've removed the last element, and that doesn't necessarily mean that `needs_dispatch`
-		// became empty. Reposition the cursor to the beginning in this case.
-		if self.needs_dispatch.get(self.cur_idx).is_none() {
-			self.cur_idx = 0;
-		}
-	}
-
-	/// Flushes the dispatcher state into the persistent storage.
-	fn flush<T: Trait>(self) {
-		let next_one = self.peek();
-		<Module<T> as Store>::NextDispatchRoundStartWith::set(next_one);
-		<Module<T> as Store>::NeedsDispatch::put(self.needs_dispatch);
-	}
-}
-
-#[cfg(test)]
-pub(crate) mod mock_sink {
-	//! An implementation of a mock UMP sink that allows attaching a probe for mocking the weights
-	//! and checking the sent messages.
-	//!
-	//! The default behavior of the UMP sink is to ignore an incoming message and return 0 weight.
-	//!
-	//! A probe can be attached to the mock UMP sink. When attached, the mock sink will consult the
-	//! probe to check whether the received message was expected and what weight it should return.
-	//!
-	//! There are two rules on how to use a probe:
-	//!
-	//! 1. There can be only one active probe at a time. Creation of another probe while there is
-	//!    already an active one leads to a panic. The probe is scoped to the thread where it was created.
-	//!
-	//! 2. All messages expected by the probe must be received by the time of dropping it. Unreceived
-	//!    messages will lead to a panic while dropping the probe.
-
-	use super::{UmpSink, UpwardMessage, ParaId};
-	use std::cell::RefCell;
-	use std::collections::vec_deque::VecDeque;
-	use frame_support::weights::Weight;
-
-	#[derive(Debug)]
-	struct UmpExpectation {
-		expected_origin: ParaId,
-		expected_msg: UpwardMessage,
-		mock_weight: Weight,
-	}
-
-	std::thread_local! {
-		// `Some` here indicates that there is an active probe.
-		static HOOK: RefCell<Option<VecDeque<UmpExpectation>>> = RefCell::new(None);
-	}
-
-	pub struct MockUmpSink;
-	impl UmpSink for MockUmpSink {
-		fn process_upward_message(actual_origin: ParaId, actual_msg: Vec<u8>) -> Weight {
-			HOOK.with(|opt_hook| match &mut *opt_hook.borrow_mut() {
-				Some(hook) => {
-					let UmpExpectation {
-						expected_origin,
-						expected_msg,
-						mock_weight,
-					} = match hook.pop_front() {
-						Some(expectation) => expectation,
-						None => {
-							panic!(
-								"The probe is active but didn't expect the message:\n\n\t{:?}.",
-								actual_msg,
-							);
-						}
-					};
-					assert_eq!(expected_origin, actual_origin);
-					assert_eq!(expected_msg, actual_msg);
-					mock_weight
-				}
-				None => 0,
-			})
-		}
-	}
-
-	pub struct Probe {
-		_private: (),
-	}
-
-	impl Probe {
-		pub fn new() -> Self {
-			HOOK.with(|opt_hook| {
-				let prev = opt_hook.borrow_mut().replace(VecDeque::default());
-
-				// This can trigger if two probes were created during one session, which
-				// may be a bit strict, but it may save time figuring out what's wrong.
-				// If you land here and you do need two probes in one session, consider
-				// dropping the existing probe explicitly.
-				assert!(prev.is_none());
-			});
-			Self { _private: () }
-		}
-
-		/// Add an expected message.
-		///
-		/// The enqueued messages are processed in FIFO order.
-		pub fn assert_msg(
-			&mut self,
-			expected_origin: ParaId,
-			expected_msg: UpwardMessage,
-			mock_weight: Weight,
-		) {
-			HOOK.with(|opt_hook| {
-				opt_hook
-					.borrow_mut()
-					.as_mut()
-					.unwrap()
-					.push_back(UmpExpectation {
-						expected_origin,
-						expected_msg,
-						mock_weight,
-					})
-			});
-		}
-	}
-
-	impl Drop for Probe {
-		fn drop(&mut self) {
-			let _ = HOOK.try_with(|opt_hook| {
-				let prev = opt_hook.borrow_mut().take().expect(
-					"this probe was created and hasn't been yet destroyed;
-					the probe cannot be replaced;
-					there is only one probe at a time allowed;
-					thus it cannot be `None`;
-					qed",
-				);
-
-				if !prev.is_empty() {
-					// some messages are left unchecked. We should notify the developer about this.
-					// however, we do so only if the thread isn't already panicking. Otherwise, the
-					// developer would get a SIGILL or SIGABRT without a meaningful error message.
-					if !std::thread::panicking() {
-						panic!(
-							"the probe is dropped and not all expected messages arrived: {:?}",
-							prev
-						);
-					}
-				}
-			});
-			// an `Err` here signals that the thread local was already destroyed.
- } - } -} - -#[cfg(test)] -mod tests { - use super::*; - use super::mock_sink::Probe; - use crate::router::tests::default_genesis_config; - use crate::mock::{Configuration, Router, new_test_ext}; - use frame_support::IterableStorageMap; - use std::collections::HashSet; - - struct GenesisConfigBuilder { - max_upward_message_size: u32, - max_upward_message_num_per_candidate: u32, - max_upward_queue_count: u32, - max_upward_queue_size: u32, - preferred_dispatchable_upward_messages_step_weight: Weight, - } - - impl Default for GenesisConfigBuilder { - fn default() -> Self { - Self { - max_upward_message_size: 16, - max_upward_message_num_per_candidate: 2, - max_upward_queue_count: 4, - max_upward_queue_size: 64, - preferred_dispatchable_upward_messages_step_weight: 1000, - } - } - } - - impl GenesisConfigBuilder { - fn build(self) -> crate::mock::GenesisConfig { - let mut genesis = default_genesis_config(); - let config = &mut genesis.configuration.config; - - config.max_upward_message_size = self.max_upward_message_size; - config.max_upward_message_num_per_candidate = self.max_upward_message_num_per_candidate; - config.max_upward_queue_count = self.max_upward_queue_count; - config.max_upward_queue_size = self.max_upward_queue_size; - config.preferred_dispatchable_upward_messages_step_weight = - self.preferred_dispatchable_upward_messages_step_weight; - genesis - } - } - - fn queue_upward_msg(para: ParaId, msg: UpwardMessage) { - let msgs = vec![msg]; - assert!(Router::check_upward_messages(&Configuration::config(), para, &msgs).is_ok()); - let _ = Router::enact_upward_messages(para, msgs); - } - - fn assert_storage_consistency_exhaustive() { - // check that empty queues don't clutter the storage. - for (_para, queue) in ::RelayDispatchQueues::iter() { - assert!(!queue.is_empty()); - } - - // actually count the counts and sizes in queues and compare them to the bookkeeped version. 
- for (para, queue) in ::RelayDispatchQueues::iter() { - let (expected_count, expected_size) = - ::RelayDispatchQueueSize::get(para); - let (actual_count, actual_size) = - queue.into_iter().fold((0, 0), |(acc_count, acc_size), x| { - (acc_count + 1, acc_size + x.len() as u32) - }); - - assert_eq!(expected_count, actual_count); - assert_eq!(expected_size, actual_size); - } - - // since we wipe the empty queues the sets of paras in queue contents, queue sizes and - // need dispatch set should all be equal. - let queue_contents_set = ::RelayDispatchQueues::iter() - .map(|(k, _)| k) - .collect::>(); - let queue_sizes_set = ::RelayDispatchQueueSize::iter() - .map(|(k, _)| k) - .collect::>(); - let needs_dispatch_set = ::NeedsDispatch::get() - .into_iter() - .collect::>(); - assert_eq!(queue_contents_set, queue_sizes_set); - assert_eq!(queue_contents_set, needs_dispatch_set); - - // `NextDispatchRoundStartWith` should point into a para that is tracked. - if let Some(para) = ::NextDispatchRoundStartWith::get() { - assert!(queue_contents_set.contains(¶)); - } - - // `NeedsDispatch` is always sorted. 
-		assert!(<Router as Store>::NeedsDispatch::get()
-			.windows(2)
-			.all(|xs| xs[0] <= xs[1]));
-	}
-
-	#[test]
-	fn dispatch_empty() {
-		new_test_ext(default_genesis_config()).execute_with(|| {
-			assert_storage_consistency_exhaustive();
-
-			// make sure that the case with empty queues is handled properly
-			Router::process_pending_upward_messages();
-
-			assert_storage_consistency_exhaustive();
-		});
-	}
-
-	#[test]
-	fn dispatch_single_message() {
-		let a = ParaId::from(228);
-		let msg = vec![1, 2, 3];
-
-		new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
-			let mut probe = Probe::new();
-
-			probe.assert_msg(a, msg.clone(), 0);
-			queue_upward_msg(a, msg);
-
-			Router::process_pending_upward_messages();
-
-			assert_storage_consistency_exhaustive();
-		});
-	}
-
-	#[test]
-	fn dispatch_resume_after_exceeding_dispatch_stage_weight() {
-		let a = ParaId::from(128);
-		let c = ParaId::from(228);
-		let q = ParaId::from(911);
-
-		let a_msg_1 = vec![1, 2, 3];
-		let a_msg_2 = vec![3, 2, 1];
-		let c_msg_1 = vec![4, 5, 6];
-		let c_msg_2 = vec![9, 8, 7];
-		let q_msg = b"we are Q".to_vec();
-
-		new_test_ext(
-			GenesisConfigBuilder {
-				preferred_dispatchable_upward_messages_step_weight: 500,
-				..Default::default()
-			}
-			.build(),
-		)
-		.execute_with(|| {
-			queue_upward_msg(q, q_msg.clone());
-			queue_upward_msg(c, c_msg_1.clone());
-			queue_upward_msg(a, a_msg_1.clone());
-			queue_upward_msg(a, a_msg_2.clone());
-
-			assert_storage_consistency_exhaustive();
-
-			// we expect only two first messages to fit in the first iteration.
-			{
-				let mut probe = Probe::new();
-
-				probe.assert_msg(a, a_msg_1.clone(), 300);
-				probe.assert_msg(c, c_msg_1.clone(), 300);
-				Router::process_pending_upward_messages();
-				assert_storage_consistency_exhaustive();
-
-				drop(probe);
-			}
-
-			queue_upward_msg(c, c_msg_2.clone());
-			assert_storage_consistency_exhaustive();
-
-			// second iteration should process the second message.
-			{
-				let mut probe = Probe::new();
-
-				probe.assert_msg(q, q_msg.clone(), 500);
-				Router::process_pending_upward_messages();
-				assert_storage_consistency_exhaustive();
-
-				drop(probe);
-			}
-
-			// 3rd iteration.
-			{
-				let mut probe = Probe::new();
-
-				probe.assert_msg(a, a_msg_2.clone(), 100);
-				probe.assert_msg(c, c_msg_2.clone(), 100);
-				Router::process_pending_upward_messages();
-				assert_storage_consistency_exhaustive();
-
-				drop(probe);
-			}
-
-			// finally, make sure that the queue is empty.
-			{
-				let probe = Probe::new();
-
-				Router::process_pending_upward_messages();
-				assert_storage_consistency_exhaustive();
-
-				drop(probe);
-			}
-		});
-	}
-
-	#[test]
-	fn dispatch_correctly_handle_remove_of_latest() {
-		let a = ParaId::from(1991);
-		let b = ParaId::from(1999);
-
-		let a_msg_1 = vec![1, 2, 3];
-		let a_msg_2 = vec![3, 2, 1];
-		let b_msg_1 = vec![4, 5, 6];
-
-		new_test_ext(
-			GenesisConfigBuilder {
-				preferred_dispatchable_upward_messages_step_weight: 900,
-				..Default::default()
-			}
-			.build(),
-		)
-		.execute_with(|| {
-			// We want to test here an edge case, where we remove the queue with the highest
-			// para id (i.e. last in the needs_dispatch order).
-			//
-			// If the last entry was removed we should proceed execution, assuming we still have
-			// weight available.
-
-			queue_upward_msg(a, a_msg_1.clone());
-			queue_upward_msg(a, a_msg_2.clone());
-			queue_upward_msg(b, b_msg_1.clone());
-
-			{
-				let mut probe = Probe::new();
-
-				probe.assert_msg(a, a_msg_1.clone(), 300);
-				probe.assert_msg(b, b_msg_1.clone(), 300);
-				probe.assert_msg(a, a_msg_2.clone(), 300);
-
-				Router::process_pending_upward_messages();
-
-				drop(probe);
-			}
-		});
-	}
-}

From da1e56ede5b213265493feaaafaf86fc52eed794 Mon Sep 17 00:00:00 2001
From: Sergey Shulepov
Date: Mon, 16 Nov 2020 13:20:11 +0100
Subject: [PATCH 16/16] Deabbreviate DMP,UMP,HRMP

---
 roadmap/implementers-guide/src/runtime/dmp.md  | 2 +-
 roadmap/implementers-guide/src/runtime/hrmp.md | 2 +-
 roadmap/implementers-guide/src/runtime/ump.md  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/roadmap/implementers-guide/src/runtime/dmp.md b/roadmap/implementers-guide/src/runtime/dmp.md
index c191c6a7a66b..6f125ca46b5e 100644
--- a/roadmap/implementers-guide/src/runtime/dmp.md
+++ b/roadmap/implementers-guide/src/runtime/dmp.md
@@ -1,6 +1,6 @@
 # DMP Module
 
-A module responsible for DMP. See [Messaging Overview](../messaging.md) for more details.
+A module responsible for Downward Message Processing (DMP). See [Messaging Overview](../messaging.md) for more details.
 
 ## Storage
 
diff --git a/roadmap/implementers-guide/src/runtime/hrmp.md b/roadmap/implementers-guide/src/runtime/hrmp.md
index 80b87c920282..145a2f284530 100644
--- a/roadmap/implementers-guide/src/runtime/hrmp.md
+++ b/roadmap/implementers-guide/src/runtime/hrmp.md
@@ -1,6 +1,6 @@
 # HRMP Module
 
-A module responsible for HRMP. See [Messaging Overview](../messaging.md) for more details.
+A module responsible for Horizontally Relay-routed Message Passing (HRMP). See [Messaging Overview](../messaging.md) for more details.
 
 ## Storage
 
diff --git a/roadmap/implementers-guide/src/runtime/ump.md b/roadmap/implementers-guide/src/runtime/ump.md
index 1e5d742657b4..ff2e9e09b997 100644
--- a/roadmap/implementers-guide/src/runtime/ump.md
+++ b/roadmap/implementers-guide/src/runtime/ump.md
@@ -1,6 +1,6 @@
 # UMP Module
 
-A module responsible for UMP. See [Messaging Overview](../messaging.md) for more details.
+A module responsible for Upward Message Passing (UMP). See [Messaging Overview](../messaging.md) for more details.
 
 ## Storage
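For orientation: the tests removed in the Rust hunk above exercise a weight-bounded, round-robin dispatch loop over per-para upward-message queues, which remembers where to resume (`NextDispatchRoundStartWith`) once the preferred weight budget is exceeded. A minimal standalone sketch of that resume-after-budget behaviour is below. Note this is an illustration, not the runtime's actual implementation: `Dispatcher`, `process`, the fixed per-message weight, and all numbers are made up here, whereas the real module derives per-message weight from dispatch and reads its limits from the host configuration.

```rust
use std::collections::BTreeMap;

// NOTE: self-contained toy model for illustration only; `ParaId`, `Weight`
// and `Dispatcher` are stand-ins, not the runtime's types.
type ParaId = u32;
type Weight = u64;

/// Pending upward-message queues plus the round-robin resume cursor
/// (the analogue of `NextDispatchRoundStartWith` in the tests above).
struct Dispatcher {
	/// Pending messages per para; a `BTreeMap` keeps paras sorted ascending.
	queues: BTreeMap<ParaId, Vec<Vec<u8>>>,
	/// Para at (or after) which the next dispatch round resumes.
	next_start: Option<ParaId>,
}

impl Dispatcher {
	/// Dispatch messages round-robin over paras until the preferred weight
	/// budget is met or exceeded; returns the dispatched messages in order.
	fn process(
		&mut self,
		per_msg_weight: Weight,
		preferred_weight: Weight,
	) -> Vec<(ParaId, Vec<u8>)> {
		let mut used: Weight = 0;
		let mut out = Vec::new();
		let mut cursor = self.next_start.take();
		while !self.queues.is_empty() && used < preferred_weight {
			// Pick the first para at or after the cursor, wrapping around.
			let para = cursor
				.and_then(|c| self.queues.range(c..).next().map(|(&k, _)| k))
				.or_else(|| self.queues.keys().next().copied())
				.expect("queues is non-empty; qed");
			let queue = self.queues.get_mut(&para).expect("para was taken from the map; qed");
			out.push((para, queue.remove(0)));
			used += per_msg_weight;
			// Wipe emptied queues so they don't clutter the storage.
			if queue.is_empty() {
				self.queues.remove(&para);
			}
			cursor = Some(para + 1);
		}
		// Remember where the next round should resume, if anything is left.
		self.next_start = if self.queues.is_empty() { None } else { cursor };
		out
	}
}
```

With the message layout from `dispatch_resume_after_exceeding_dispatch_stage_weight` (paras 128, 228, 911) and a 500-weight budget at 300 per message, this model likewise dispatches only the first message of paras 128 and 228 in the first round and resumes past 228 in the next.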