ci: fix typos programmatically #1890

Merged · 2 commits · May 13, 2024
16 changes: 16 additions & 0 deletions .github/workflows/spellcheck.yml
@@ -0,0 +1,16 @@
name: Spellcheck

on:
pull_request:

jobs:
find-typos:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3

- name: Check spelling
uses: crate-ci/typos@master
with:
config: .typos.toml
6 changes: 6 additions & 0 deletions .typos.toml
@@ -0,0 +1,6 @@
[default]
extend-ignore-identifiers-re = [
"TRO",
"tro",
"Tro",
]
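
The ignore list above exists because some legitimate identifiers look like misspellings to a dictionary-based checker; `TRO` in its three casings is presumably the FuelVM transfer-coins-to-output opcode mnemonic, which `typos` would otherwise try to "correct". A rough sketch of how a regex-based identifier ignore list behaves, using the `regex` crate as a stand-in for the tool's internals (the names below are illustrative, not the tool's actual API):

```rust
use regex::RegexSet;

fn main() {
    // Mirrors the extend-ignore-identifiers-re entries in .typos.toml.
    let ignored = RegexSet::new(["TRO", "tro", "Tro"]).expect("patterns are valid");

    // Identifiers matching any pattern are exempt from spellchecking.
    assert!(ignored.is_match("TRO"));
    assert!(ignored.is_match("tro_opcode"));
    // Everything else is still checked, so real typos are caught.
    assert!(!ignored.is_match("transfer"));
}
```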
6 changes: 3 additions & 3 deletions CHANGELOG.md
@@ -188,11 +188,11 @@ and this project adheres to [Semantic Versioning](http://semver.org/).
- The `StatisticTable` table lives in the off-chain worker.
- Removed duplication of the `Database` from the `dap::ConcreteStorage` since it is already available from the VM.
- The executor return only produced `Changes` instead of the storage transaction, which simplifies the interaction between modules and port definition.
-- The logic related to the iteration over the storage is moved to the `fuel-core-storage` crate and is now reusable. It provides an `interator` method that duplicates the logic from `MemoryStore` on iterating over the `BTreeMap` and methods like `iter_all`, `iter_all_by_prefix`, etc. It was done in a separate revivable [commit](https://github.com/FuelLabs/fuel-core/pull/1694/commits/5b9bd78320e6f36d0650ec05698f12f7d1b3c7c9).
+- The logic related to the iteration over the storage is moved to the `fuel-core-storage` crate and is now reusable. It provides an `iterator` method that duplicates the logic from `MemoryStore` on iterating over the `BTreeMap` and methods like `iter_all`, `iter_all_by_prefix`, etc. It was done in a separate revivable [commit](https://github.com/FuelLabs/fuel-core/pull/1694/commits/5b9bd78320e6f36d0650ec05698f12f7d1b3c7c9).
- The `MemoryTransactionView` is fully replaced by the `StorageTransactionInner`.
- Removed `flush` method from the `Database` since it is not needed after https://github.com/FuelLabs/fuel-core/pull/1664.

-- [#1693](https://github.com/FuelLabs/fuel-core/pull/1693): The change separates the initial chain state from the chain config and stores them in separate files when generating a snapshot. The state snapshot can be generated in a new format where parquet is used for compression and indexing while postcard is used for encoding. This enables importing in a stream like fashion which reduces memory requirements. Json encoding is still supported to enable easy manual setup. However, parquet is prefered for large state files.
+- [#1693](https://github.com/FuelLabs/fuel-core/pull/1693): The change separates the initial chain state from the chain config and stores them in separate files when generating a snapshot. The state snapshot can be generated in a new format where parquet is used for compression and indexing while postcard is used for encoding. This enables importing in a stream like fashion which reduces memory requirements. Json encoding is still supported to enable easy manual setup. However, parquet is preferred for large state files.
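
For context on the postcard half of that format, a minimal round-trip sketch; the `Coin` type below is a hypothetical stand-in for the real snapshot entries, and the code assumes the `postcard` crate with its `alloc` feature plus `serde` derive:

```rust
use serde::{Deserialize, Serialize};

// Hypothetical stand-in for one snapshot entry; the real fuel-core
// state types are richer than this.
#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct Coin {
    owner: [u8; 32],
    amount: u64,
}

fn main() {
    let group = vec![
        Coin { owner: [1; 32], amount: 100 },
        Coin { owner: [2; 32], amount: 250 },
    ];
    // postcard gives a compact, non-self-describing encoding; batches of
    // entries encoded like this are what a streaming importer can read one
    // group at a time instead of deserializing the whole state at once.
    let bytes = postcard::to_allocvec(&group).expect("encoding should succeed");
    let decoded: Vec<Coin> = postcard::from_bytes(&bytes).expect("round-trip");
    assert_eq!(group, decoded);
}
```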

### Snapshot command

@@ -208,7 +208,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/).

Each item group in the genesis process is handled by a separate worker, allowing for parallel loading. Workers stream file contents in batches.

-A database transaction is committed every time an item group is succesfully loaded. Resumability is achieved by recording the last loaded group index within the same db tx. If loading is aborted, the remaining workers are shutdown. Upon restart, workers resume from the last processed group.
+A database transaction is committed every time an item group is successfully loaded. Resumability is achieved by recording the last loaded group index within the same db tx. If loading is aborted, the remaining workers are shutdown. Upon restart, workers resume from the last processed group.
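
A minimal sketch of that resumability scheme, with hypothetical names standing in for fuel-core's actual genesis types: because a group's data and the progress cursor are committed in the same transaction, a crash can never mark a group as loaded without its writes.

```rust
use std::collections::BTreeMap;

// Hypothetical in-memory stand-in for the database; a real implementation
// would commit both fields atomically in one storage transaction.
#[derive(Default)]
struct Db {
    committed: BTreeMap<usize, Vec<String>>, // group index -> loaded items
    last_loaded_group: Option<usize>,
}

impl Db {
    // The group's items and the new cursor land in one "transaction".
    fn commit_group(&mut self, index: usize, items: Vec<String>) {
        self.committed.insert(index, items);
        self.last_loaded_group = Some(index);
    }
}

fn load_groups(db: &mut Db, groups: &[Vec<String>]) {
    // On restart, skip every group up to and including the recorded cursor.
    let start = db.last_loaded_group.map_or(0, |i| i + 1);
    for (index, group) in groups.iter().enumerate().skip(start) {
        db.commit_group(index, group.clone());
    }
}

fn main() {
    let groups = vec![
        vec!["coin-0".to_string()],
        vec!["coin-1".to_string()],
        vec!["coin-2".to_string()],
    ];
    let mut db = Db::default();
    // Simulate an aborted run that only finished group 0.
    db.commit_group(0, groups[0].clone());
    // Resuming processes only groups 1 and 2.
    load_groups(&mut db, &groups);
    assert_eq!(db.last_loaded_group, Some(2));
    assert_eq!(db.committed.len(), 3);
}
```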

### Contract States and Balances

4 changes: 2 additions & 2 deletions Makefile.toml
@@ -2,9 +2,9 @@
# https://github.com/sagiegurari/cargo-make/blob/0.36.0/src/lib/descriptor/makefiles/stable.toml

# This is a configuration file for the cargo plugin `cargo-make`. We use this plugin because of it's handling around
-# cargo workspaces. Specifically, each task is run on workspace members indepedently, avoiding potential issues that
+# cargo workspaces. Specifically, each task is run on workspace members independently, avoiding potential issues that
# arise from feature unification (https://doc.rust-lang.org/cargo/reference/features.html#feature-unification).
-# Feature unification allows two unrelated crates with the same depedency to enable features on eachother.
+# Feature unification allows two unrelated crates with the same dependency to enable features on eachother.
# This is problematic when a crate is built independently (when publishing / being consumed from crates.io),
# and it implicitly depended on features enabled by other crates in the same workspace.
# While feature resolver v2 attempted to resolve this problem, it still comes up in certain scenarios.
2 changes: 1 addition & 1 deletion crates/client/assets/debugAdapterProtocol.json
@@ -3836,7 +3836,7 @@

"ExceptionPathSegment": {
"type": "object",
"description": "An ExceptionPathSegment represents a segment in a path that is used to match leafs or nodes in a tree of exceptions.\nIf a segment consists of more than one name, it matches the names provided if 'negate' is false or missing or\nit matches anything except the names provided if 'negate' is true.",
"description": "An ExceptionPathSegment represents a segment in a path that is used to match leaves or nodes in a tree of exceptions.\nIf a segment consists of more than one name, it matches the names provided if 'negate' is false or missing or\nit matches anything except the names provided if 'negate' is true.",
"properties": {
"negate": {
"type": "boolean",
2 changes: 1 addition & 1 deletion crates/fuel-core/src/database/balances.rs
@@ -20,7 +20,7 @@ use fuel_core_types::{
use itertools::Itertools;

pub trait BalancesInitializer {
-/// Initialize the balances of the contract from the all leafs.
+/// Initialize the balances of the contract from the all leaves.
/// This method is more performant than inserting balances one by one.
fn init_contract_balances<S>(
&mut self,
2 changes: 1 addition & 1 deletion crates/fuel-core/src/database/block.rs
@@ -118,7 +118,7 @@ impl Database {
let proof_index = message_merkle_metadata
.version()
.checked_sub(1)
-.ok_or(anyhow::anyhow!("The count of leafs - messages is zero"))?;
+.ok_or(anyhow::anyhow!("The count of leaves - messages is zero"))?;
let (_, proof_set) = tree
.prove(proof_index)
.map_err(|err| StorageError::Other(anyhow::anyhow!(err)))?;
2 changes: 1 addition & 1 deletion crates/fuel-core/src/graphql_api/api_service.rs
@@ -168,7 +168,7 @@ impl RunnableTask for Task {
}
}

-// Need a seperate Data Object for each Query endpoint, cannot be avoided
+// Need a separate Data Object for each Query endpoint, cannot be avoided
#[allow(clippy::too_many_arguments)]
pub fn new_service<OnChain, OffChain>(
genesis_block_height: BlockHeight,
@@ -440,7 +440,7 @@ mod tests {
}

#[test]
-fn succesfully_processed_batch_updates_the_genesis_progress() {
+fn successfully_processed_batch_updates_the_genesis_progress() {
// given
let data = TestData::new(2);
let db = GenesisDatabase::default();
2 changes: 1 addition & 1 deletion crates/fuel-core/src/state/rocks_db.rs
@@ -83,7 +83,7 @@ impl ShallowTempDir {
Self { path }
}

-/// Returns the path of teh directory.
+/// Returns the path of the directory.
pub fn path(&self) -> &PathBuf {
&self.path
}
2 changes: 1 addition & 1 deletion crates/services/consensus_module/poa/src/deadline_clock.rs
@@ -141,7 +141,7 @@ impl DeadlineClock {
}

/// Clears the timeout, so that now event is produced when it expires.
-/// If the event has alread occurred, it will not be removed.
+/// If the event has already occurred, it will not be removed.
pub async fn clear(&self) {
self.control
.send(ControlMessage::Clear)
4 changes: 2 additions & 2 deletions crates/services/consensus_module/poa/src/verifier/tests.rs
@@ -64,7 +64,7 @@ fn correct() -> Input {
let mut i = correct();
i.ch.prev_root = [3u8; 32].into();
i
} => matches Err(_) ; "genesis verify prev root mis-match should error"
} => matches Err(_) ; "genesis verify prev root mismatch should error"
)]
#[test_case(
{
@@ -78,7 +78,7 @@
let mut i = correct();
i.ch.generated.application_hash = [0u8; 32].into();
i
} => matches Err(_) ; "genesis verify application hash mis-match should error"
} => matches Err(_) ; "genesis verify application hash mismatch should error"
)]
#[test_case(
{
2 changes: 1 addition & 1 deletion crates/services/p2p/src/gossipsub/topics.rs
@@ -42,7 +42,7 @@ impl GossipsubTopics {
}
}

-/// Given a `GossipsubBroadcastRequest` retruns a `GossipTopic`
+/// Given a `GossipsubBroadcastRequest` returns a `GossipTopic`
/// which is broadcast over the network with the serialized inner value of `GossipsubBroadcastRequest`
pub fn get_gossipsub_topic(
&self,
4 changes: 2 additions & 2 deletions crates/services/p2p/src/p2p_service.rs
@@ -1404,7 +1404,7 @@ mod tests {
p2p_config.bootstrap_nodes = node_b.multiaddrs();
let mut node_c = build_service_from_config(p2p_config.clone()).await;

-// Node C does not connecto to Node A
+// Node C does not connect to Node A
// it should receive the propagated message from Node B if `GossipsubMessageAcceptance` is `Accept`
node_c
.swarm
@@ -1451,7 +1451,7 @@

// Node B received the correct message
// If we try to publish it again we will get `PublishError::Duplicate`
-// This asserts that our MessageId calculation is consistant irrespective of which Peer sends it
+// This asserts that our MessageId calculation is consistent irrespective of which Peer sends it
let broadcast_request = broadcast_request.clone();
matches!(node_b.publish_message(broadcast_request), Err(PublishError::Duplicate));

2 changes: 1 addition & 1 deletion crates/services/p2p/src/peer_manager.rs
@@ -254,7 +254,7 @@ impl PeerManager {
.choose(&mut range)
}

-/// Handles the first connnection established with a Peer
+/// Handles the first connection established with a Peer
fn handle_initial_connection(&mut self, peer_id: &PeerId) -> bool {
const HEARTBEAT_AVG_WINDOW: u32 = 10;

4 changes: 2 additions & 2 deletions crates/services/p2p/src/peer_manager/heartbeat_data.rs
@@ -56,10 +56,10 @@ impl HeartbeatData {

pub fn update(&mut self, block_height: BlockHeight) {
self.block_height = Some(block_height);
-let old_hearbeat = self.last_heartbeat;
+let old_heartbeat = self.last_heartbeat;
self.last_heartbeat = Instant::now();
self.last_heartbeat_sys = SystemTime::now();
-let new_duration = self.last_heartbeat.saturating_duration_since(old_hearbeat);
+let new_duration = self.last_heartbeat.saturating_duration_since(old_heartbeat);
self.add_new_duration(new_duration);
}
}
2 changes: 1 addition & 1 deletion crates/services/p2p/src/peer_report.rs
@@ -56,7 +56,7 @@ pub enum PeerReportEvent {
// `Behaviour` that reports events about peers
pub struct Behaviour {
pending_events: VecDeque<PeerReportEvent>,
-// regulary checks if reserved nodes are connected
+// regularly checks if reserved nodes are connected
health_check: Interval,
decay_interval: Interval,
}
4 changes: 2 additions & 2 deletions crates/services/producer/src/mocks.rs
@@ -66,8 +66,8 @@ impl Relayer for MockRelayer {
&self,
_height: &DaBlockHeight,
) -> anyhow::Result<DaBlockHeight> {
-let heighest = self.latest_block_height;
-Ok(heighest)
+let highest = self.latest_block_height;
+Ok(highest)
}

async fn get_cost_for_block(&self, height: &DaBlockHeight) -> anyhow::Result<u64> {
@@ -20,8 +20,8 @@ pub fn pack_ptr_and_len(ptr: u32, len: u32) -> u64 {
/// Unpacks an `u64` into the pointer and length.
pub fn unpack_ptr_and_len(val: u64) -> (u32, u32) {
let ptr = u32::try_from(val & (u32::MAX as u64))
.expect("It ony contains first 32 bytes; qed");
let len = u32::try_from(val >> 32).expect("It ony contains first 32 bytes; qed");
.expect("It only contains first 32 bytes; qed");
let len = u32::try_from(val >> 32).expect("It only contains first 32 bytes; qed");

(ptr, len)
}
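
A quick round-trip check of these two helpers. The body of `pack_ptr_and_len` sits outside this hunk, so the version below is reconstructed from the layout `unpack_ptr_and_len` implies (pointer in the low 32 bits, length in the high 32):

```rust
/// Reconstructed counterpart: length in the high half, pointer in the low half.
pub fn pack_ptr_and_len(ptr: u32, len: u32) -> u64 {
    (u64::from(len) << 32) | u64::from(ptr)
}

pub fn unpack_ptr_and_len(val: u64) -> (u32, u32) {
    let ptr = u32::try_from(val & (u32::MAX as u64))
        .expect("It only contains first 32 bytes; qed");
    let len = u32::try_from(val >> 32).expect("It only contains first 32 bytes; qed");

    (ptr, len)
}

fn main() {
    let (ptr, len) = (0xDEAD_BEEF_u32, 42_u32);
    // Packing then unpacking must return exactly the inputs.
    assert_eq!(unpack_ptr_and_len(pack_ptr_and_len(ptr, len)), (ptr, len));
}
```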
10 changes: 5 additions & 5 deletions crates/storage/src/test_helpers.rs
@@ -17,29 +17,29 @@ use crate::{
/// The trait is used to provide a generic mocked implementation for all possible `StorageInspect`,
/// `StorageMutate`, and `MerkleRootStorage` traits.
pub trait MockStorageMethods {
/// The mocked implementation fot the `StorageInspect<M>::get` method.
/// The mocked implementation for the `StorageInspect<M>::get` method.
fn get<M: Mappable + 'static>(
&self,
key: &M::Key,
) -> StorageResult<Option<std::borrow::Cow<'_, M::OwnedValue>>>;

/// The mocked implementation fot the `StorageInspect<M>::contains_key` method.
/// The mocked implementation for the `StorageInspect<M>::contains_key` method.
fn contains_key<M: Mappable + 'static>(&self, key: &M::Key) -> StorageResult<bool>;

/// The mocked implementation fot the `StorageMutate<M>::insert` method.
/// The mocked implementation for the `StorageMutate<M>::insert` method.
fn insert<M: Mappable + 'static>(
&mut self,
key: &M::Key,
value: &M::Value,
) -> StorageResult<Option<M::OwnedValue>>;

/// The mocked implementation fot the `StorageMutate<M>::remove` method.
/// The mocked implementation for the `StorageMutate<M>::remove` method.
fn remove<M: Mappable + 'static>(
&mut self,
key: &M::Key,
) -> StorageResult<Option<M::OwnedValue>>;

-/// The mocked implementation fot the `MerkleRootStorage<Key, M>::root` method.
+/// The mocked implementation for the `MerkleRootStorage<Key, M>::root` method.
fn root<Key: 'static, M: Mappable + 'static>(
&self,
key: &Key,
2 changes: 1 addition & 1 deletion deployment/Dockerfile
@@ -35,7 +35,7 @@ ENV BUILD_FEATURES=$FEATURES
COPY --from=planner /build/recipe.json recipe.json
RUN echo $CARGO_PROFILE_RELEASE_DEBUG
RUN echo $BUILD_FEATURES
-# Build our project dependecies, not our application!
+# Build our project dependencies, not our application!
RUN xx-cargo chef cook --release --no-default-features --features "${BUILD_FEATURES}" -p fuel-core-bin --recipe-path recipe.json
# Up to this point, if our dependency tree stays the same,
# all layers should be cached.
2 changes: 1 addition & 1 deletion deployment/e2e-client.Dockerfile
@@ -16,7 +16,7 @@ RUN cargo chef prepare --recipe-path recipe.json
FROM chef as builder
ENV CARGO_NET_GIT_FETCH_WITH_CLI=true
COPY --from=planner /build/recipe.json recipe.json
-# Build our project dependecies, not our application!
+# Build our project dependencies, not our application!
RUN cargo chef cook --release -p fuel-core-e2e-client --features p2p --recipe-path recipe.json
# Up to this point, if our dependency tree stays the same,
# all layers should be cached.
2 changes: 1 addition & 1 deletion docs/architecture.md
@@ -372,7 +372,7 @@ impl transaction_pool::ports::BlockImporter for Service<BlockImporter> {

#### Ports: fuel_core_executor::ports
```rust
-trait Database: IntepreterStorage
+trait Database: InterpreterStorage
+ StorageMut<Coins, Error = StorageError>
+ StorageMut<Messages, Error = StorageError>
+ StorageMut<Contracts, Error = StorageError>
2 changes: 1 addition & 1 deletion tests/tests/messages.rs
@@ -586,7 +586,7 @@ async fn can_get_message() {
..Default::default()
};

-// configure the messges
+// configure the messages
let state_config = StateConfig {
messages: vec![first_msg.clone()],
..Default::default()
4 changes: 2 additions & 2 deletions tests/tests/trigger_integration/interval.rs
@@ -83,7 +83,7 @@ async fn poa_interval_produces_empty_blocks_at_correct_rate() {
round_time_seconds <= secs_per_round
&& secs_per_round
<= round_time_seconds + 2 * (rounds as u64) / round_time_seconds,
"Round time not within treshold"
"Round time not within threshold"
);
}

@@ -167,7 +167,7 @@ async fn poa_interval_produces_nonempty_blocks_at_correct_rate() {
round_time_seconds <= secs_per_round
&& secs_per_round
<= round_time_seconds + 2 * (rounds as u64) / round_time_seconds,
"Round time not within treshold"
"Round time not within threshold"
);

// Make sure all txs got produced
2 changes: 1 addition & 1 deletion tests/tests/tx.rs
@@ -81,7 +81,7 @@ async fn dry_run_script() {
let tx_statuses = client.dry_run(&[tx.clone()]).await.unwrap();
let log = tx_statuses
.last()
.expect("Nonempty repsonse")
.expect("Nonempty response")
.result
.receipts();
assert_eq!(3, log.len());
2 changes: 1 addition & 1 deletion tests/tests/tx/utxo_validation.rs
@@ -171,7 +171,7 @@ async fn dry_run_override_utxo_validation() {
.unwrap();
let log = tx_statuses
.last()
.expect("Nonempty reponse")
.expect("Nonempty response")
.result
.receipts();
assert_eq!(2, log.len());