
Neural Network Compiler & Deployment

From model definition to in-browser inference. Compile, optimize, and deploy neural networks with a full compiler stack.

1. Define — model DSL or ONNX JSON
2. Parse — build the computation-graph IR
3. Optimize — 7 compilation passes
4. Codegen — WebGPU / WASM / JS
5. Deploy — in-browser inference
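The five stages above compose into a single compile function. A minimal sketch in TypeScript, assuming illustrative names (`parse`, `optimize`, `codegen`, `compile`) and toy stage bodies rather than the project's real API:

```typescript
// Hypothetical pipeline sketch: each stage is a pure function from one
// representation to the next. Types and stage logic are simplified stand-ins.

type ModelSource = { kind: "dsl" | "onnx-json"; text: string };
type GraphIR = { nodes: string[] };
type Artifact = { backend: "webgpu" | "wasm" | "js"; code: string };

function parse(src: ModelSource): GraphIR {
  // Build a toy computation-graph IR: one node per whitespace-separated op.
  return { nodes: src.text.split(/\s+/).filter(Boolean) };
}

function optimize(ir: GraphIR): GraphIR {
  // Stand-in for the 7 passes; here we just deduplicate node names.
  return { nodes: [...new Set(ir.nodes)] };
}

function codegen(ir: GraphIR, backend: Artifact["backend"]): Artifact {
  // Emit a placeholder "program" for the chosen backend.
  return { backend, code: ir.nodes.join(";") };
}

function compile(src: ModelSource, backend: Artifact["backend"]): Artifact {
  return codegen(optimize(parse(src)), backend);
}

const art = compile({ kind: "dsl", text: "matmul relu matmul" }, "js");
console.log(art.code); // "matmul;relu" — the duplicate op was removed
```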

Multi-Level IR

High-level graph IR for optimization, low-level scheduled IR for code generation. Immutable data structures for full pipeline history.
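One way to read "immutable data structures for full pipeline history": every pass returns a new graph value instead of mutating the old one, so each intermediate snapshot stays inspectable. A sketch under that assumption, with invented names (`Graph`, `withNode`):

```typescript
// Immutable graph IR sketch: frozen objects, and an update helper that
// returns a new graph rather than mutating in place.

interface IRNode {
  readonly id: string;
  readonly op: string;
  readonly inputs: readonly string[];
}
interface Graph {
  readonly nodes: readonly IRNode[];
}

function withNode(g: Graph, n: IRNode): Graph {
  // Copy-on-write: the old graph is untouched and remains a valid snapshot.
  return Object.freeze({ nodes: Object.freeze([...g.nodes, n]) });
}

const g0: Graph = Object.freeze({ nodes: [] });
const g1 = withNode(g0, { id: "a", op: "input", inputs: [] });
const g2 = withNode(g1, { id: "b", op: "relu", inputs: ["a"] });

// g0, g1, g2 are distinct snapshots — the history of every step survives.
console.log(g0.nodes.length, g1.nodes.length, g2.nodes.length); // 0 1 2
```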

7 Optimization Passes

Shape inference, constant folding, dead code elimination, operator fusion, quantization, layout optimization, and memory planning.
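To make one of these passes concrete, here is a minimal constant-folding sketch: any op whose inputs are all compile-time constants is replaced by a constant node. The node shape and `constantFold` name are assumptions for illustration, not the project's pass interface:

```typescript
// Toy constant folding over a flat node list, supporting only "const"
// and "add" ops. Real passes would walk the full graph IR.

type FoldNode =
  | { id: string; op: "const"; value: number }
  | { id: string; op: "add"; inputs: [string, string] };

function constantFold(nodes: FoldNode[]): FoldNode[] {
  const consts = new Map<string, number>();
  for (const n of nodes) if (n.op === "const") consts.set(n.id, n.value);
  return nodes.map((n) => {
    if (n.op === "add") {
      const a = consts.get(n.inputs[0]);
      const b = consts.get(n.inputs[1]);
      // Fold only when both operands are known constants.
      if (a !== undefined && b !== undefined) {
        return { id: n.id, op: "const", value: a + b };
      }
    }
    return n;
  });
}

const folded = constantFold([
  { id: "x", op: "const", value: 2 },
  { id: "y", op: "const", value: 3 },
  { id: "z", op: "add", inputs: ["x", "y"] },
]);
console.log(folded[2]); // { id: "z", op: "const", value: 5 }
```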

3 Code Generation Backends

Generate WebGPU WGSL compute shaders, WASM dispatch schedules, or pure JavaScript for maximum compatibility.
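As a flavor of what the WebGPU backend's output looks like, here is a hedged sketch of emitting a WGSL compute shader string for a single elementwise op. The template and the `emitElementwiseWGSL` helper are illustrative, not the project's actual codegen:

```typescript
// Emit a WGSL compute shader for one elementwise op over n floats.
// Codegen here is plain string templating.

function emitElementwiseWGSL(op: "relu" | "neg", n: number): string {
  const expr = op === "relu" ? "max(x, 0.0)" : "-x";
  return `
@group(0) @binding(0) var<storage, read> input : array<f32>;
@group(0) @binding(1) var<storage, read_write> output : array<f32>;
@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) gid : vec3<u32>) {
  if (gid.x >= ${n}u) { return; } // guard the final partial workgroup
  let x = input[gid.x];
  output[gid.x] = ${expr};
}`.trim();
}

const src = emitElementwiseWGSL("relu", 1024);
console.log(src.includes("max(x, 0.0)")); // true
```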

In-Browser Inference

Run compiled models directly in the browser with WebGPU acceleration, WASM fallback, and JS reference execution.
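The fallback chain — WebGPU first, then WASM, then the pure-JS reference path — can be sketched as a small selection function. Feature detection is passed in as a plain object here (`pickBackend` and its parameter are invented names); in a browser the flags would come from checking `navigator.gpu` and the `WebAssembly` global:

```typescript
// Choose the best available execution backend, in preference order.

type Backend = "webgpu" | "wasm" | "js";

function pickBackend(env: { hasWebGPU: boolean; hasWasm: boolean }): Backend {
  if (env.hasWebGPU) return "webgpu"; // WGSL compute shaders
  if (env.hasWasm) return "wasm";     // WASM dispatch schedule
  return "js";                        // pure-JS reference execution
}

console.log(pickBackend({ hasWebGPU: false, hasWasm: true })); // "wasm"
```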

Built with TypeScript, Next.js, D3.js, WebGPU, and Turborepo.