ML — live-coded autograd, four showcases
Pick a tab to swap models. Each showcase is a complete cljrs
program — model, dataset, loss, training loop, all editable. Hold
Alt (or Ctrl) and drag any number to scrub it live.
The GPU toggle re-routes matmul to a wgpu compute
shader on native; on the web it transparently falls back to CPU.
Stack: a tiny reverse-mode autograd over dense f32 tensors in
cljrs-ml, exposed as builtins (ml/matmul, ml/matmul-gpu, ml/relu,
ml/sigmoid, ml/softmax, ml/conv1d-valid, ml/cross-entropy,
ml/adam-step!, ml/argmax, ml/one-hot, ...). Forward, backward, and
the optimizer step all run in Rust; everything else lives in the
editor.
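As a rough sketch of how those builtins compose in the editor — note that ml/tensor, ml/mse, ml/backward!, and ml/sgd-step! are assumed names for this illustration; only the builtins listed above are confirmed:

```clojure
;; Hypothetical sketch -- ml/tensor, ml/mse, ml/backward! and
;; ml/sgd-step! are assumed; the ml/* builtins above are confirmed.
(def W (ml/tensor [[0.1 -0.2] [0.3 0.4]]))   ; 2x2 weight, gradient-tracked
(def x (ml/tensor [[1.0 2.0]]))              ; 1x2 input row
(def y (ml/tensor [[0.5 -0.5]]))             ; 1x2 target row

(let [h    (ml/relu (ml/matmul x W))  ; forward pass runs in Rust
      loss (ml/mse h y)]              ; scalar loss node
  (ml/backward! loss)                 ; reverse-mode sweep, also Rust
  (ml/sgd-step! W 0.01))              ; in-place parameter update
```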
How the showcases work
- Function fit — a 1→H→1 MLP fits a noisy sin(2x). Loss is MSE; the optimizer is plain SGD.
- 2D classifier — a two-layer MLP carves a decision boundary out of the two-moons dataset. Trained with cross-entropy + Adam; the canvas paints the model's confidence underneath the points.
- MNIST-tiny — 8×8 hand-curated digits (200 train / 50 test, embedded in source) classified by a 64→32→10 MLP with cross-entropy + Adam. Stage shows test predictions; correct answers go green, mistakes go red.
- Autoencoder — a 64→16→64 net reconstructs the same 8×8 digits. Stage shows three side-by-side inputs and their reconstructions over time.
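The function-fit loop might look roughly like this in the editor. This is a sketch: ml/mse, ml/backward!, and ml/sgd-step! are assumed names, and xs, ys, W1, W2 stand for the dataset and layer weights defined elsewhere in the program:

```clojure
;; Sketch of the function-fit training loop (assumed helper names:
;; ml/mse, ml/backward!, ml/sgd-step!). xs is the noisy sample of
;; sin(2x), ys the targets; W1/W2 are the 1->H and H->1 weights.
(dotimes [epoch 200]
  (let [h    (ml/relu (ml/matmul xs W1))  ; hidden layer, 1 -> H
        pred (ml/matmul h W2)             ; output layer, H -> 1
        loss (ml/mse pred ys)]            ; mean squared error
    (ml/backward! loss)                   ; reverse-mode gradients
    (ml/sgd-step! W1 0.01)                ; plain SGD; the learning
    (ml/sgd-step! W2 0.01)))              ; rate is scrubbable live
```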
Use the GPU checkbox above to switch the showcase's
matmul implementation between CPU and a wgpu
compute kernel. Every showcase is built on a thin
(matmul a b) macro that dispatches on a
USE-GPU? atom, which the host JS updates each time you toggle.
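A plausible shape for that macro — the exact definition isn't shown in the text, so this is a sketch; only ml/matmul, ml/matmul-gpu, and the USE-GPU? atom are taken from the source:

```clojure
;; Sketch: pick the matmul kernel at call time based on the USE-GPU?
;; atom, which the host JS flips whenever the checkbox changes.
(def USE-GPU? (atom false))

(defmacro matmul [a b]
  `(if @USE-GPU?
     (ml/matmul-gpu ~a ~b)   ; wgpu compute shader on native
     (ml/matmul ~a ~b)))     ; CPU path (and the web fallback)
```

Dispatching inside the macro expansion (rather than rebinding a var) means every call site re-checks the atom, so toggling mid-training takes effect on the very next matmul.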