feat: proof-of-work mining engine with interrupt support #25
arunabha003 wants to merge 19 commits into StabilityNexus:main
Conversation
Walkthrough
This PR introduces the complete MiniChain blockchain implementation, adding core primitives including cryptographic identity and signing, transaction handling, block validation, proof-of-work consensus, state management, Merkle tree commitments, and genesis block initialization, along with comprehensive test coverage, build infrastructure, and CLI scaffolding.
Changes
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 31
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci.yml:
- Around line 15-18: Enable pip dependency caching in the GitHub Actions step
that uses actions/setup-python@v5 by adding the cache option to the step's with
block (e.g., set with: python-version: "3.11" and cache: "pip"); update the
workflow step referencing actions/setup-python@v5 so the runner caches pip
dependencies between CI runs to speed up subsequent jobs.
In @.gitignore:
- Around line 265-270: The Python cache/gitignore entries (__pycache__/,
*.py[cod], .pytest_cache/, .ruff_cache/, .venv/) are currently embedded inside
the LaTeX section; move these lines out of the LaTeX block and append them to
the project's gitignore additions at the end of the file (i.e., alongside the
other project-specific entries) so they live together and no longer interrupt
the LaTeX sections.
- Line 340: Remove the blanket "docs/" ignore entry so hand-authored
documentation sources are tracked; instead add ignores for generated output
(e.g., "docs/_build/", "docs/site/" or "docs/build/") or specific generated
files so only build artifacts are excluded while the "docs/" source tree remains
versioned.
In `@Makefile`:
- Line 3: Add a new Makefile target named "clean" that removes common build/test
artifacts to avoid stale-state issues (e.g., __pycache__ directories, *.pyc
files, .pytest_cache, build/ and dist/ outputs and optionally node_modules or
.venv artifacts); implement it as a phony target alongside the existing .PHONY
list (update .PHONY to include clean) and ensure the "clean" recipe uses
portable shell commands (rm -rf) to delete those directories and files so
developers can run make clean to reset local state.
In `@pyproject.toml`:
- Around line 38-39: The ruff config under [tool.ruff.lint] currently only
selects "E", "F", and "I", which omits pytest and try/except rule sets; update
the select list to include "PT" and "TRY" so ruff will catch PT006, PT017, and
TRY003 in tests and enforce pytest/try anti-patterns; modify the select value
referenced in the pyproject.toml diff to add "PT" and "TRY" alongside the
existing "E", "F", "I".
- Line 18: Replace the incorrect PyPI dependency "py-libp2p>=0.2.0" with the
actual package name "libp2p>=0.2.0" in pyproject.toml so installing
minichain[network] can resolve the package; locate the dependency entry
containing "py-libp2p>=0.2.0" and update it to "libp2p>=0.2.0".
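As a config fragment, the rule-set expansion the first pyproject.toml comment proposes would look like this (a sketch; the project's actual `[tool.ruff.lint]` table may carry other options that should be preserved):

```toml
[tool.ruff.lint]
# "PT" enables pytest-style rules (PT006, PT017); "TRY" enables
# try/except anti-pattern rules (TRY003).
select = ["E", "F", "I", "PT", "TRY"]
```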
In `@README.md`:
- Around line 7-15: Update the README "Current Status" section to reflect
post-Issue#1 additions by listing the new modules and test files introduced
(e.g., merkle.py, serialization.py, genesis.py, consensus.py and their
corresponding tests under tests/), and refresh the repository layout example to
include src/minichain/{merkle.py, serialization.py, genesis.py, consensus.py,
...} and tests/* so contributors see an accurate project structure; ensure
wording mentions CI and Makefile remain and optionally note any key
functionality provided by those new modules for clarity.
In `@src/minichain/__init__.py`:
- Line 4: Replace the hardcoded __version__ string with dynamic retrieval from
package metadata: import importlib.metadata (or importlib_metadata for older
Pythons) and set __version__ = importlib.metadata.version("minichain") inside a
try/except that falls back to a sensible default (e.g., "0.0.0" or an
environment variable) and catches PackageNotFoundError to avoid import-time
failures; update the module-level symbol __version__ accordingly so the package
version is derived from pyproject metadata rather than duplicated.
In `@src/minichain/block.py`:
- Around line 25-30: Rename the ambiguous instance method hash() on BlockHeader
and Block to a clearer name (e.g., compute_hash() or header_hash()) and update
the accompanying hash_hex() methods to call the new name; ensure the new method
still returns blake2b_digest(serialize_block_header(self)) (for BlockHeader) and
the same digest logic for Block, and update any other call sites that use
header.hash(), block.hash(), or their hash_hex() helpers to use the new method
name so callers who expect Python's built-in hash(header) are not confused.
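For illustration, a toy header with the renamed method (the serialization here is a stand-in for `serialize_block_header`, not MiniChain's real format):

```python
import hashlib
from dataclasses import dataclass


def blake2b_digest(data: bytes) -> bytes:
    # Stand-in for the module's blake2b_digest helper.
    return hashlib.blake2b(data).digest()


@dataclass
class BlockHeader:
    previous_hash: str
    nonce: int

    def compute_hash(self) -> bytes:
        # Renamed from hash() so it cannot be confused with builtins.hash().
        payload = f"{self.previous_hash}:{self.nonce}".encode()
        return blake2b_digest(payload)

    def hash_hex(self) -> str:
        return self.compute_hash().hex()
```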
In `@src/minichain/consensus.py`:
- Around line 51-58: The loop repeatedly calls
dataclasses.replace(header_template, nonce=nonce) and stop_event.is_set() which
cause per-iteration allocations and a locking round-trip; change the mining loop
to reuse a single mutable header/serialized buffer and update only the nonce
field before hashing (e.g., keep a prebuilt BlockHeader-like byte template or a
single header object and modify its nonce property rather than calling replace)
and perform the stop_event.is_set() check only once every N nonces (e.g., 1_000
or 10_000) to batch the lock acquisition, while preserving the existing behavior
of raising MiningInterrupted and returning (nonce, digest) when
hash_to_int(candidate.hash()) <= candidate.difficulty_target.
- Line 60: Define a dedicated exception class (e.g., MiningExhausted(Exception))
in src/minichain/consensus.py and replace the raise RuntimeError("No valid nonce
found within nonce range") in the mining/pow function (the place that currently
raises that RuntimeError) with raise MiningExhausted(...) so callers can catch
nonce exhaustion explicitly; also update the function/method docstring (and any
local tests or callers that currently catch RuntimeError) to reference
MiningExhausted instead of RuntimeError.
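The two consensus.py comments can be sketched together as a toy loop (the byte-level serialization is illustrative, not MiniChain's real header format; `MiningInterrupted` and `MiningExhausted` follow the names the review uses):

```python
import hashlib
import threading


class MiningInterrupted(Exception):
    pass


class MiningExhausted(Exception):
    """No nonce in [0, max_nonce) met the difficulty target."""


STOP_CHECK_INTERVAL = 10_000  # batch the stop_event lock round-trip


def mine(prefix: bytes, target: int, stop_event: threading.Event, max_nonce: int):
    for nonce in range(max_nonce):
        # Check the event once per batch instead of once per nonce.
        if nonce % STOP_CHECK_INTERVAL == 0 and stop_event.is_set():
            raise MiningInterrupted("mining interrupted")
        # Only the nonce suffix changes between iterations.
        digest = hashlib.blake2b(prefix + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") <= target:
            return nonce, digest
    raise MiningExhausted("No valid nonce found within nonce range")
```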
In `@src/minichain/crypto.py`:
- Around line 21-24: The guard in _require_nacl() is brittle because it checks
for the name "blake2b" in globals(); instead, create and use a dedicated
sentinel (e.g., _NACL_AVAILABLE) that's set when the PyNaCl import succeeds and
cleared/False on failure, then change _require_nacl() to test that sentinel and
raise RuntimeError from _NACL_IMPORT_ERROR if it's not truthy; update any import
logic that currently sets blake2b to also set _NACL_AVAILABLE so the check is
unambiguous.
In `@src/minichain/genesis.py`:
- Around line 20-38: GenesisConfig is declared frozen but initial_balances is a
mutable dict that can be mutated after construction; update GenesisConfig to
make initial_balances truly immutable and run validation at construction: in
__post_init__ call self.validate() and replace/copy the incoming
initial_balances with an immutable representation (e.g., use
types.MappingProxyType or convert to an immutable tuple/ frozendict equivalent)
so callers cannot mutate balances after instantiation; ensure the immutable
field name remains initial_balances (or expose a read-only view) so existing
consumers still access the same symbol.
- Around line 14-17: Extract the duplicate helper _is_lower_hex into a shared
utility module (e.g., create minichain/utils.py or minichain/validation.py) and
export it so both genesis.py and transaction.py import it; specifically, remove
the local _is_lower_hex definitions from src/minichain/genesis.py and
src/minichain/transaction.py, add the single implementation of
_is_lower_hex(expected_length param signature unchanged) to the new shared
module, update both files to import _is_lower_hex from that module, and run
tests/linters to ensure no unresolved references remain.
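The frozen-but-immutable pattern from the first genesis.py comment can be sketched like this (field and method names follow the review; `validate` here is a trivial stand-in for the real checks):

```python
from dataclasses import dataclass
from types import MappingProxyType


@dataclass(frozen=True)
class GenesisConfig:
    initial_balances: "MappingProxyType[str, int]"

    def __post_init__(self) -> None:
        # Copy then wrap, so the caller's dict can no longer mutate our state;
        # object.__setattr__ is needed because the dataclass is frozen.
        object.__setattr__(
            self, "initial_balances", MappingProxyType(dict(self.initial_balances))
        )
        self.validate()

    def validate(self) -> None:
        # Stand-in for the real validation logic.
        if any(v < 0 for v in self.initial_balances.values()):
            raise ValueError("negative balance")
```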
In `@src/minichain/serialization.py`:
- Around line 47-56: serialize_canonical currently builds canonical_map using
the provided field_order but then passes sort_keys=True to json.dumps which
alphabetically reorders keys and nullifies the intended insertion order; update
the function so the JSON preserves the insertion order by removing
sort_keys=True from the json.dumps call (keep ensure_ascii, separators, etc.),
and add a short comment in serialize_canonical noting that Python 3.7+ preserves
dict insertion order so field_order controls JSON key order.
- Line 6: Replace the deprecated typing.Mapping import in
src/minichain/serialization.py by importing Mapping from collections.abc: update
the import line that currently reads "from typing import Any, Mapping" so that
Any remains from typing and Mapping is imported from collections.abc (e.g., keep
"Any" from typing and add "from collections.abc import Mapping"); adjust only
the import—no other code changes needed since annotations are from __future__.
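The insertion-order point from the first serialization.py comment can be demonstrated in isolation (field names here are illustrative, not MiniChain's actual schema):

```python
import json

# Python 3.7+ dicts preserve insertion order, so building canonical_map in
# field_order sequence is sufficient; sort_keys=True would re-sort it alphabetically.
field_order = ("sender", "recipient", "amount")
payload = {"recipient": "bob", "amount": 5, "sender": "alice"}
canonical_map = {name: payload[name] for name in field_order}
encoded = json.dumps(canonical_map, ensure_ascii=True, separators=(",", ":"))
```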
In `@src/minichain/state.py`:
- Around line 49-63: The current apply_transaction calls get_account for both
sender and recipient up front, which creates phantom entries in self.accounts
when later validation (nonce/balance) fails; update State.apply_transaction (and
the module-level wrapper that calls it) to only fetch the sender via get_account
to perform nonce and balance checks and defer calling
get_account(transaction.recipient) until after those validations succeed (or
alternatively create a local snapshot/rollback around the whole
apply_transaction call), ensuring no persistent accounts are created on failed
validation; reference symbols: get_account, State.apply_transaction,
self.accounts, sender, recipient.
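A simplified sketch of the deferred-fetch ordering (Account/State are toy stand-ins; MiniChain's real fields and validation differ):

```python
from dataclasses import dataclass, field


@dataclass
class Account:
    balance: int = 0
    nonce: int = 0


@dataclass
class State:
    accounts: dict = field(default_factory=dict)

    def get_account(self, address: str) -> Account:
        # Creates-on-read, which is why call ordering matters below.
        return self.accounts.setdefault(address, Account())

    def apply_transaction(self, sender: str, recipient: str, amount: int, nonce: int) -> None:
        sender_acct = self.get_account(sender)
        if nonce != sender_acct.nonce:
            raise ValueError("bad nonce")
        if sender_acct.balance < amount:
            raise ValueError("insufficient balance")
        # Only touch the recipient after all validation has passed, so a
        # failed transaction leaves no phantom recipient account behind.
        recipient_acct = self.get_account(recipient)
        sender_acct.balance -= amount
        sender_acct.nonce += 1
        recipient_acct.balance += amount
```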
In `@src/minichain/transaction.py`:
- Around line 56-63: transaction_id() currently changes after signing because it
appends signature/public_key only when non-empty; make it fail fast for unsigned
transactions by checking Transaction.transaction_id (function transaction_id)
for presence of both self.signature and self.public_key and raising a clear
exception (e.g., ValueError or custom UnsignedTransactionError) if either is
missing, so callers cannot inadvertently compute a noncanonical ID; update the
function to call self.signing_bytes() only after the presence check and keep
using blake2b_digest for the final hash.
- Around line 97-100: The bare "except Exception" in the try/except around
deserialize_verify_key(self.public_key) triggers BLE001; to satisfy the linter
while preserving the defensive behavior, add a trailing "# noqa: BLE001" comment
to the except line and include a short inline comment explaining the intent
(that deserialize_verify_key may raise multiple types and this method must
return False on any failure). Keep the existing except block and return False,
but annotate it with the noqa and intent note so readers and tooling understand
why we catch all exceptions for deserialize_verify_key in this verification
method.
- Around line 80-86: The sign() method currently types its signing_key as object
but dereferences signing_key.verify_key and passes signing_key to
sign_message(), so change the type hint to SigningKey (imported from
minichain.crypto) to reflect the required contract; update the function
signature in def sign(self, signing_key: SigningKey) -> None and add the
appropriate import for SigningKey at the top of the module, ensuring existing
calls still pass a SigningKey-compatible object and no other behavior changes to
sign_message or serialize_verify_key are needed.
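The fail-fast check from the first transaction.py comment, sketched with `hashlib` standing in for `blake2b_digest` and the review's suggested exception name (the function signature is illustrative, not the real method):

```python
import hashlib


class UnsignedTransactionError(ValueError):
    """Raised when an ID is requested for a transaction with no signature."""


def transaction_id(signing_bytes: bytes, signature: bytes, public_key: bytes) -> str:
    # Fail fast: an unsigned transaction has no canonical ID.
    if not signature or not public_key:
        raise UnsignedTransactionError("sign the transaction before computing its id")
    return hashlib.blake2b(signing_bytes + signature + public_key).hexdigest()
```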
In `@tests/test_block.py`:
- Around line 49-51: The test test_block_hash_is_deterministic currently
compares block.hash() to itself which is vacuous; change it to construct two
independent Block instances with identical inputs (use _make_block or a new
helper to build blocks from explicit deterministic inputs/fixtures) and assert
that block1.hash() == block2.hash(); ensure any randomness (signing keys,
nonces, transactions) is fixed by using a seeded RNG or a fixed signing
key/fixture so both blocks are created deterministically (refer to _make_block
and the Block.hash method to locate the code).
In `@tests/test_consensus.py`:
- Around line 54-59: Replace the manual try/except around mine_block_header with
pytest.raises: import pytest if not present, then use with
pytest.raises(MiningInterrupted, match="interrupted"): mine_block_header(header,
max_nonce=1_000_000, stop_event=stop). Apply the same replacement for the
similar block that tests mining interruption at the other location; reference
the MiningInterrupted exception, mine_block_header call, and the stop stop_event
variable to locate the test cases.
- Around line 37-46: The test manually reconstructs a BlockHeader by copying
every field into mined_header which is verbose and brittle; replace that manual
rebuild with dataclasses.replace to clone header and update nonce (i.e., use
dataclasses.replace(header, nonce=nonce)) so new fields on BlockHeader won't be
forgotten and the subsequent is_valid_pow(mined_header) assertion remains the
same.
- Around line 1-9: Add a pytest.importorskip("nacl") guard at the top of the
test_consensus.py module before importing PyNaCl-dependent symbols so the module
is skipped when PyNaCl is not installed; specifically, add the importorskip call
before the existing imports that pull in BlockHeader, is_valid_pow, and
mine_block_header (these call BlockHeader.hash() -> blake2b_digest() ->
_require_nacl()), ensuring tests that call is_valid_pow() or mine_block_header()
are skipped rather than erroring at runtime.
In `@tests/test_crypto.py`:
- Line 41: The unpacked variable signing_key from generate_key_pair() is unused
and flagged by static analysis; rename it to begin with an underscore (e.g.,
_signing_key or just _) in the assignment (signing_key, verify_key =
generate_key_pair()) so the linter recognizes it as intentionally unused while
keeping verify_key as-is.
In `@tests/test_genesis.py`:
- Around line 60-85: The tests test_genesis_requires_empty_state and
test_genesis_block_rejects_wrong_previous_hash use manual try/except/else
patterns to assert ValueError; replace those with pytest.raises(match=...) for
explicit, robust assertions and remove the manual else raise; add import pytest
at the top of the file, and in each test wrap the apply_genesis_block(...) call
with with pytest.raises(ValueError, match="empty state") and with
pytest.raises(ValueError, match="previous_hash") respectively to match the
expected error messages.
In `@tests/test_scaffold.py`:
- Line 1: Update the module docstring at the top of tests/test_scaffold.py (the
top-level string literal "Scaffolding checks for Issue `#1`.") to either reference
the correct issue number ("Issue `#8`") or remove the stale issue reference
entirely; edit the module-level docstring so it accurately reflects the current
PR target.
- Around line 23-26: The test_component_modules_are_importable test uses
importlib.import_module which never returns None, so the assertion "assert
imported is not None" is redundant; update the test by removing that
tautological assertion and either rely on importlib.import_module raising an
exception to fail the test or replace the check with a meaningful assertion such
as isinstance(imported, types.ModuleType) or verifying a known attribute on the
module (reference: test_component_modules_are_importable, COMPONENT_MODULES,
importlib.import_module).
In `@tests/test_serialization.py`:
- Around line 68-69: Replace the string-style names passed to
pytest.mark.parametrize with a tuple/list of identifiers: change the decorator
call pytest.mark.parametrize("payload,serializer,expected", ...) to use a tuple
or list like pytest.mark.parametrize(("payload", "serializer", "expected"), ...)
so the parameter names are provided as an iterable of identifiers rather than a
single string; update the decorator in the test_serialization test where
pytest.mark.parametrize is used to supply the three parameters.
In `@tests/test_state.py`:
- Around line 154-157: The test currently uses
State.get_account(recipient_address) which creates and registers an empty
Account as a side effect; change the post-rollback assertions to check the
absence of the recipient in the internal map instead of calling get_account
(e.g., assert recipient_address not in state.accounts) and keep the checks for
sender via State.get_account(sender_address). This ensures the rollback truly
removed the recipient without inadvertently inserting it via State.get_account.
- Around line 53-54: Replace the two-step discard pattern that assigns
recipient_key then immediately overrides it with "_" by assigning the discard
inline from generate_key_pair(); specifically, change the occurrences where you
currently have "recipient_key, recipient_verify = generate_key_pair()" followed
by "_ = recipient_key" to a single inline discard assignment "_,
recipient_verify = generate_key_pair()" (apply the same replacement for the four
occurrences using generate_key_pair() that assign
recipient_key/recipient_verify).
```yaml
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: "3.11"
```
🧹 Nitpick | 🔵 Trivial
Consider enabling pip dependency caching to speed up CI runs.
actions/setup-python@v5 natively supports pip caching via a single option:
⚡ Proposed optimization
```diff
 - name: Setup Python
   uses: actions/setup-python@v5
   with:
     python-version: "3.11"
+    cache: "pip"
```
```
# Python caches and virtualenvs
__pycache__/
*.py[cod]
.pytest_cache/
.ruff_cache/
.venv/
```
🧹 Nitpick | 🔵 Trivial
Python cache entries are embedded mid-file inside the LaTeX section.
The entries are functionally correct, but placing them between the "easy-todo" and "xcolor" LaTeX sections makes the file harder to read. Consider relocating them to the end of the file alongside the other project-specific additions (after line 338).
```
#*Notes.bib

docs/
```
docs/ blanket ignore will prevent source documentation from being tracked.
Ignoring the entire docs/ directory means any hand-authored or Sphinx/MkDocs source files placed there will silently never be committed. The conventional approach is to ignore only the generated build output rather than the whole directory.
🛠️ Proposed fix
```diff
-docs/
+docs/_build/
+docs/site/
```
```makefile
PYTHON ?= python3

.PHONY: install dev-install test lint format start-node
```
🧹 Nitpick | 🔵 Trivial
Consider adding a clean target for local development hygiene.
A clean target clears build and test caches that can cause confusing stale-state issues (e.g., cached .pyc files or leftover .pytest_cache):
🧹 Proposed addition
```diff
-.PHONY: install dev-install test lint format start-node
+.PHONY: install dev-install test lint format start-node clean
+clean:
+	find . -type d -name __pycache__ -exec rm -rf {} +
+	rm -rf .pytest_cache .ruff_cache dist *.egg-info
```
🧰 Tools
🪛 checkmake (0.2.2)
[warning] 3-3: Missing required phony target "all"
(minphony)
[warning] 3-3: Missing required phony target "clean"
(minphony)
```toml
[project.optional-dependencies]
network = [
    "py-libp2p>=0.2.0",
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Verify whether a package named py-libp2p (vs libp2p) exists on PyPI.
curl -sf https://pypi.org/pypi/py-libp2p/json | jq '.info.name,.info.version' || echo "py-libp2p not found on PyPI"
curl -sf https://pypi.org/pypi/libp2p/json | jq '.info.name,.info.version'
```
Repository: StabilityNexus/MiniChain
Length of output: 84
Change py-libp2p to libp2p — the PyPI package name is libp2p, not py-libp2p.
The package py-libp2p does not exist on PyPI. Installing minichain[network] will fail with "No matching distribution found for py-libp2p>=0.2.0".
Proposed fix
```diff
 network = [
-    "py-libp2p>=0.2.0",
+    "libp2p>=0.2.0",
 ]
```
```python
"""Scaffolding checks for Issue #1."""
```
Stale issue reference in module docstring.
The docstring reads "Scaffolding checks for Issue #1." but this PR targets Issue #8. Update or remove the issue reference.
```python
def test_component_modules_are_importable() -> None:
    for module in COMPONENT_MODULES:
        imported = importlib.import_module(f"minichain.{module}")
        assert imported is not None
```
🧹 Nitpick | 🔵 Trivial
assert imported is not None is tautological.
importlib.import_module either returns a module object or raises ImportError — it never returns None. The assertion always holds when the import succeeds and is therefore meaningless. Simply relying on the exception propagation (which fails the test) is sufficient.
🔧 Proposed fix
def test_component_modules_are_importable() -> None:
for module in COMPONENT_MODULES:
- imported = importlib.import_module(f"minichain.{module}")
- assert imported is not None
+ importlib.import_module(f"minichain.{module}")🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_scaffold.py` around lines 23 - 26, The
test_component_modules_are_importable test uses importlib.import_module which
never returns None, so the assertion "assert imported is not None" is redundant;
update the test by removing that tautological assertion and either rely on
importlib.import_module raising an exception to fail the test or replace the
check with a meaningful assertion such as isinstance(imported, types.ModuleType)
or verifying a known attribute on the module (reference:
test_component_modules_are_importable, COMPONENT_MODULES,
importlib.import_module).
```python
@pytest.mark.parametrize(
    "payload,serializer,expected",
```
🧹 Nitpick | 🔵 Trivial
Use a tuple (or list) for the pytest.mark.parametrize names argument (PT006).
The string form is syntactically accepted but the idiomatic form preferred by ruff and pytest tooling is a tuple.
♻️ Proposed fix
```diff
 @pytest.mark.parametrize(
-    "payload,serializer,expected",
+    ("payload", "serializer", "expected"),
     [
```
🧰 Tools
🪛 Ruff (0.15.1)
[warning] 69-69: Wrong type passed to first argument of pytest.mark.parametrize; expected tuple
Use a tuple for the first argument
(PT006)
```python
recipient_key, recipient_verify = generate_key_pair()
_ = recipient_key
```
🧹 Nitpick | 🔵 Trivial
Simplify the discarded signing-key variable.
The two-step recipient_key, recipient_verify = ...; _ = recipient_key appears four times. Inline the discard directly.
♻️ Proposed fix (example for one occurrence)
```diff
-    recipient_key, recipient_verify = generate_key_pair()
-    _ = recipient_key
+    _, recipient_verify = generate_key_pair()
```
Also applies to: 75-76, 95-96, 113-114
```python
assert state.get_account(sender_address).balance == 100
assert state.get_account(sender_address).nonce == 0
assert state.get_account(recipient_address).balance == 0
assert state.get_account(recipient_address).nonce == 0
```
🧹 Nitpick | 🔵 Trivial
Post-rollback assertion silently creates the recipient account as a side effect.
State.get_account creates and registers a new empty Account when the address is absent. Calling state.get_account(recipient_address) to assert balance == 0 after rollback actually inserts that address into state.accounts, which is a side effect that masks whether the rollback truly removed it. A more precise assertion would be:
```python
assert recipient_address not in state.accounts
```
Addressed Issues:
Part of #8
Depends on #24
Summary
Validation
python -m ruff check src tests
python -m pytest
Checklist
AI Usage Disclosure
Check one of the checkboxes below:
I have used the following AI models and tools: TODO
We encourage contributors to use AI tools responsibly when creating Pull Requests. While AI can be a valuable aid, it is essential to ensure that your contributions meet the task requirements, build successfully, include relevant tests, and pass all linters. Submissions that do not meet these standards may be closed without warning to maintain the quality and integrity of the project. Please take the time to understand the changes you are proposing and their impact.
Summary by CodeRabbit
Release Notes
New Features
Documentation
Tests
Chores