ML — live-coded autograd, four showcases

Pick a tab to swap models. Each showcase is a complete cljrs program — model, dataset, loss, and training loop, all editable. Hold Alt (or Ctrl) and drag any number to scrub it live. On native, the GPU toggle re-routes matmul to a wgpu compute shader; on the web it transparently falls back to the CPU.

Stack: a tiny reverse-mode autograd over dense f32 tensors in cljrs-ml, exposed as builtins (ml/matmul, ml/matmul-gpu, ml/relu, ml/sigmoid, ml/softmax, ml/conv1d-valid, ml/cross-entropy, ml/adam-step!, ml/argmax, ml/one-hot, ...). The forward pass, backward pass, and optimizer step all run in Rust; everything else lives in the editor.
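As a hedged sketch of how those builtins might compose in the editor — the argument order and the shape of the weight tensors are assumptions, not confirmed signatures:

```clojure
;; Hypothetical two-layer MLP forward pass built from the listed builtins.
;; x: input batch, w1/w2: weight tensors (construction elided — the
;; builtin list above truncates with "...").
(defn forward [x w1 w2]
  (-> x
      (ml/matmul w1)   ; dense layer 1
      ml/relu          ; nonlinearity
      (ml/matmul w2)   ; dense layer 2
      ml/softmax))     ; class probabilities
```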


How the showcases work

Use the GPU checkbox above to switch the showcase's matmul implementation between the CPU and a wgpu compute kernel. Every showcase is built on a thin (matmul a b) macro that dispatches on a USE-GPU? atom, which the page's JavaScript sets each time you toggle the checkbox.
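A minimal sketch of what that dispatch macro could look like — the atom name and deref style are drawn from the description above, not the showcase's actual source:

```clojure
;; USE-GPU? is flipped by the host page's JS whenever the checkbox toggles.
(def USE-GPU? (atom false))

;; Thin dispatch macro: expand to the GPU builtin when the atom is set,
;; otherwise fall back to the CPU matmul builtin.
(defmacro matmul [a b]
  `(if @USE-GPU?
     (ml/matmul-gpu ~a ~b)
     (ml/matmul ~a ~b)))
```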