Mixed Python/Mojo project for experimenting with Mojo kernels in a fast.ai / PyTorch training loop, roughly following the "faster-ai-mojo" specification.
This project uses pixi to manage both Python and Mojo (via the Modular "max-nightly" channel).
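The Mojo toolchain comes from the `max` package on Modular's `max-nightly` conda channel. A minimal `pixi.toml` along these lines is assumed below; the channel URL, package names, and versions are illustrative, not the repository's actual manifest:

```toml
# Hypothetical pixi.toml sketch -- channel/package names are assumptions; check
# the manifest shipped with this repository for the real values.
[project]
name = "faster-ai-mojo"
channels = ["conda-forge", "https://conda.modular.com/max-nightly"]
platforms = ["linux-64", "osx-arm64"]

[dependencies]
python = ">=3.11"
max = "*"        # Mojo compiler and runtime from the max-nightly channel
pytorch = "*"
fastai = "*"
```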
- Install pixi.
- From this directory, create the environment: `pixi install`
- Open a shell in the environment: `pixi shell`
- (Optional) Check Mojo is available: `pixi run mojo-version`
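The `pixi run …` tasks used in this README would be declared in the `[tasks]` table of `pixi.toml`. The mapping below is an assumption based on the file layout described further down; check the actual manifest for the real definitions:

```toml
# Hypothetical [tasks] table -- the script paths are assumptions.
[tasks]
mojo-version   = "mojo --version"
gradient-check = "python python/experiments/gradient_check.py"
performance    = "python python/experiments/performance_compare.py"
train          = "python python/experiments/train_compare.py"
```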
If you have built an `operations.mojopkg` containing the Mojo dense kernels exported for Python, ensure it is importable as `import operations` inside the pixi environment (for example by installing it or adjusting `PYTHONPATH`).
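One way to sanity-check that the package is visible from Python is a small import probe; the `build/` directory below is only a placeholder for wherever your `operations.mojopkg` (or its Python bridge) actually lives:

```python
# Import probe for the Mojo-backed `operations` module (run inside `pixi shell`).
# CANDIDATE_DIR is a placeholder -- point it at the directory holding your build.
import sys
from pathlib import Path

CANDIDATE_DIR = Path("build")  # assumption; adjust to your layout

try:
    import operations
except ImportError:
    sys.path.insert(0, str(CANDIDATE_DIR))
    try:
        import operations
    except ImportError:
        operations = None

if operations is None:
    print("`operations` not importable; python/mojo_mlp.py will fall back to pure PyTorch.")
else:
    print("Mojo `operations` module found:", operations)
```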
- `mojo/dense.mojo` – Mojo implementation of dense forward/backward kernels.
- `python/simple_mlp.py` – Baseline PyTorch MLP.
- `python/mojo_mlp.py` – MLP that is intended to call Mojo kernels via `MojoDense`.
- `python/data.py` – fast.ai MNIST DataLoaders helper.
- `python/experiments/gradient_check.py` – Autograd gradient check.
- `python/experiments/performance_compare.py` – `torch.profiler` comparison.
- `python/experiments/train_compare.py` – Full training + accuracy comparison.
All commands below assume you are inside `pixi shell`.

- Gradient check (see the `gradcheck` sketch after this list): `pixi run gradient-check`
- Performance comparison (emits `mojo_trace.json` and `simple_trace.json`): `pixi run performance`
- Training + accuracy comparison on MNIST: `pixi run train`
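For reference, an autograd gradient check like the one `python/experiments/gradient_check.py` performs boils down to `torch.autograd.gradcheck`. The sketch below uses a plain PyTorch dense layer as the function under test; the real script would pass the Mojo-backed layer instead (its exact symbol is not shown here):

```python
# Minimal gradient-check sketch: gradcheck compares analytical gradients against
# finite-difference estimates, so it needs double-precision inputs.
import torch
from torch.autograd import gradcheck

def dense(x, w, b):
    # Stand-in for the dense layer under test (swap in the Mojo-backed layer).
    return x @ w + b

x = torch.randn(4, 8, dtype=torch.double, requires_grad=True)
w = torch.randn(8, 3, dtype=torch.double, requires_grad=True)
b = torch.randn(3, dtype=torch.double, requires_grad=True)

assert gradcheck(dense, (x, w, b), eps=1e-6, atol=1e-4)
print("gradient check passed")
```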
At present, the Python side falls back to pure PyTorch math if the Mojo `operations` module is not available. To connect the real Mojo kernels, wire up `operations.dense_forward` and `operations.dense_backward` inside `python/mojo_mlp.py` using the appropriate MAX tensor bridge API for your installation.
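A minimal sketch of that wiring, assuming `operations.dense_forward(x, w, b)` returns the output activations and `operations.dense_backward(grad_out, x, w)` returns the gradients for `x`, `w`, and `b` (both signatures are assumptions, and any tensor conversion through the MAX bridge is omitted), with the pure-PyTorch fallback kept for when the module is missing:

```python
# Sketch of a MojoDense autograd bridge with a pure-PyTorch fallback.
# The operations.dense_forward/dense_backward signatures are assumptions; adapt
# them, and any MAX tensor conversion, to what your dense.mojo actually exports.
import torch

try:
    import operations  # Mojo kernels exported for Python
except ImportError:
    operations = None

class MojoDenseFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, w, b):
        ctx.save_for_backward(x, w)
        if operations is not None:
            return operations.dense_forward(x, w, b)  # assumed signature
        return x @ w + b  # fallback: plain PyTorch math

    @staticmethod
    def backward(ctx, grad_out):
        x, w = ctx.saved_tensors
        if operations is not None:
            return operations.dense_backward(grad_out, x, w)  # assumed: (dx, dw, db)
        dx = grad_out @ w.t()
        dw = x.t() @ grad_out
        db = grad_out.sum(dim=0)
        return dx, dw, db

class MojoDense(torch.nn.Module):
    """Dense layer that routes through the Mojo kernels when available."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(in_features, out_features) * 0.01)
        self.bias = torch.nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        return MojoDenseFunction.apply(x, self.weight, self.bias)
```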