Conversation
> # Summary
> [summary]: #summary
>
> An in-memory, self-contained simulator for NanoContract blueprints that lets developers write, test, and iterate on nano contracts without running a full Hathor node. The simulator uses the same blueprint environment that the full node uses — all of which live in `hathorlib`. This gives near-perfect fidelity with zero maintenance divergence, while providing a high-level Python API that makes test authoring straightforward.
> This gives near-perfect fidelity

Why not complete fidelity? Add details on what differs.
```python
sim = Simulator()
sim.register_blueprint(SimpleCounter)

alice = sim.create_address("alice")
contract_id = sim.create_contract(SimpleCounter, caller=alice)
```
Should we register the blueprint automatically on the first create_contract that uses it? It would be simpler for the dev.
Sure, the main idea was that each call to the simulator would map to a transaction on the network.
I still believe we should at least have a group of methods that "translate" to a tx on the network, but having utility methods that combine several of them is also an improvement for DX.
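A utility wrapper along these lines could keep the one-call-per-tx primitives while auto-registering on first use. This is only an illustrative sketch; `Simulator`, `register_blueprint`, and `create_contract` internals are assumed, not taken from `hathorlib`:

```python
# Hypothetical sketch: low-level methods map 1:1 to simulated transactions,
# while create_contract() auto-registers the blueprint on first use for DX.
class Simulator:
    def __init__(self):
        self._blueprints = set()
        self._tx_log = []  # each low-level call appends one simulated tx

    def register_blueprint(self, blueprint):
        """Low-level: corresponds to an on-chain blueprint registration tx."""
        if blueprint not in self._blueprints:
            self._blueprints.add(blueprint)
            self._tx_log.append(("register_blueprint", blueprint.__name__))

    def create_contract(self, blueprint, caller=None):
        """Utility: registers the blueprint automatically if needed."""
        if blueprint not in self._blueprints:
            self.register_blueprint(blueprint)
        self._tx_log.append(("create_contract", blueprint.__name__))
        return f"contract:{blueprint.__name__}:{len(self._tx_log)}"


class SimpleCounter:
    """Stand-in blueprint for the sketch."""


sim = Simulator()
# No explicit register_blueprint() call needed before the first create_contract.
cid = sim.create_contract(SimpleCounter, caller="alice")
```

The tx log still records both simulated transactions, so the "every call is a tx" model is preserved underneath the convenience method.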
> | `_runner` | `Runner \= None` | Lazily created |
> | `_seed` | `bytes \= None` | RNG seed for determinism |
My bad, I normally use macros to handle these tables. I tried to write `bytes \| None` with the pipe escaped so it wouldn't count as a table separator, but I mistyped an equals sign.
> - **Time advancement.** The simulator currently uses a fixed timestamp. Should it provide a `sim.advance_time(seconds)` method for testing time-dependent logic?
> - **Event inspection.** Events emitted via `syscall.emit_event()` are currently not captured for test assertions. Should the simulator collect them and expose a `sim.get_events(contract_id)` method?
> - **Error reporting.** When a method call fails, the error message includes the Python traceback but not the simulated call stack. Should the simulator produce richer error diagnostics that show the full contract call chain?
I would say yes to all three, but they can be improved incrementally.
Yes, I believe these (and more) can be added in the future to improve and expand the test coverage and DX.
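A minimal sketch of what the first two open questions could look like on the simulator surface. All names here (`advance_time`, `get_events`, the internal `_record_event` hook) are hypothetical placeholders for the discussion, not an existing API:

```python
class SimulatorSketch:
    """Hypothetical surface for the time-advancement and event-inspection ideas."""

    def __init__(self, start_timestamp: int = 1_600_000_000):
        self._now = start_timestamp        # fixed today; mutable in this sketch
        self._events: dict[str, list] = {}  # contract_id -> emitted events

    def advance_time(self, seconds: int) -> int:
        """Move the simulated clock forward for time-dependent logic."""
        self._now += seconds
        return self._now

    def _record_event(self, contract_id: str, event) -> None:
        """Would be invoked from the syscall.emit_event() hook."""
        self._events.setdefault(contract_id, []).append(event)

    def get_events(self, contract_id: str) -> list:
        """Expose captured events for test assertions."""
        return list(self._events.get(contract_id, []))


sim = SimulatorSketch()
t1 = sim.advance_time(3600)
sim._record_event("counter-1", {"name": "incremented", "value": 1})
events = sim.get_events("counter-1")
```

Keeping `get_events` as a copy (rather than the internal list) avoids tests mutating simulator state by accident.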
> # Future possibilities
> [future-possibilities]: #future-possibilities
>
> - **Integration with fullnode simulator.** We could use the fullnode simulator to create simulations that test actual network scenarios, not only blueprint execution.
I don't think that's relevant for blueprint devs.
Well, for more advanced use cases a dev may want to know how the contract will behave, especially under very specific conditions (e.g. re-orgs, split brains, etc.), which can only be tested with the fullnode simulator.
```python
from hathorlib.simulator import Simulator

sim = Simulator()
sim.register_blueprint(SimpleCounter)
```
The existing test scaffold in hathor-core supports registering blueprints with either `_register_blueprint_class` (analogous to what you're doing here) or `register_blueprint_file`. The difference is that registering a class allows using Python components that are not necessarily available in the actual nano runtime. By registering a file, we can enforce syntax verification, for example.
Thanks, I will look into this
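To make the class-vs-file distinction concrete, here is a hedged sketch of why file-based registration enables source-level checks. `register_blueprint_file_sketch` is a made-up name; the real `register_blueprint_file` in hathor-core may do much more than this, but even the minimal version can reject syntactically invalid blueprint source, which class registration never sees:

```python
import ast
import os
import tempfile


def register_blueprint_file_sketch(path: str) -> ast.Module:
    """Hypothetical: registering from a file lets the simulator run
    source-level checks that class registration would bypass."""
    with open(path) as f:
        source = f.read()
    # Syntax verification: raises SyntaxError on invalid blueprint source.
    return ast.parse(source, filename=path)


# Demo with a throwaway blueprint file.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("class SimpleCounter:\n    count = 0\n")
    path = f.name

tree = register_blueprint_file_sketch(path)
os.unlink(path)
```

Further checks (restricting imports, rejecting disallowed builtins) could walk the same AST before the blueprint is accepted.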