This is a 2D binary classification demo using a Multi-Layer Perceptron (MLP) implemented in Rust and compiled to WebAssembly (WASM). WebAssembly offers better performance than JavaScript and makes it possible to compile a complete neural network training runtime that runs entirely in the browser.
Tech Stack:
Rust: Core neural network implementation
WebAssembly: Runtime compilation target
Vue.js: Frontend framework
Plotting: Rust Plotters library with the Canvas backend
Both the neural network training and the plot generation are handled in WebAssembly for performance and ease of use. Currently, the JS client steps through the training process and updates the plot in real time. Future versions will allow training in the background with a web worker.
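For orientation, the sketch below shows one way such a Rust/WASM boundary could be exposed with wasm-bindgen so the JS client can drive training step by step. The names (Trainer, step) are illustrative assumptions, not this demo's actual API.

```rust
// Hypothetical sketch of a wasm-bindgen boundary for step-wise training.
// Names and fields are illustrative, not taken from the demo's code.
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub struct Trainer {
    weights: Vec<f64>, // flattened layer weights (illustrative)
    lr: f64,           // learning rate
}

#[wasm_bindgen]
impl Trainer {
    #[wasm_bindgen(constructor)]
    pub fn new(lr: f64) -> Trainer {
        Trainer { weights: vec![0.0; 16], lr }
    }

    /// Run one training step and return the current loss,
    /// so the JS client can update the plot in real time.
    pub fn step(&mut self) -> f64 {
        // ... forward pass, backpropagation, weight update ...
        0.0
    }
}
```

The JS side would then construct a Trainer once and call step() in a loop (or later from a web worker), redrawing the plot after each call.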
Pro tip: Set a random seed for reproducible results. The URL updates with your parameters so you can share configurations.
The mask defines which regions in the 2D space (from -1 to 1 on both axes) are classified as "inside" (1) or "outside" (0).
Basic primitives:
circle(r,x,y)
— Circle at position (x,y) with radius r
rec(x1,y1,x2,y2)
— Rectangle with opposite corners at (x1,y1) and (x2,y2)
!
— Negation operator (flips inside/outside)
&
— Logical AND operator (combines shapes)
Examples:
circle(0.5,0,0)
— Simple circle at origin
circle(0.5,0,0)&!circle(0.25,0,0)
— Ring shape (circle with hole)
circle(0.75,0,0)&!rec(-0.25,-0.25,0.25,0.25)
— Circle with square hole
rec(-0.5,-0.5,0.5,0.5)&!circle(0.3,0,0)
— Square with circular hole
Important: Operations are evaluated from left to right. For complex masks, build them incrementally to understand how they combine.
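For intuition, here is a small Rust sketch of how the ring mask circle(0.5,0,0)&!circle(0.25,0,0) could be evaluated for a single point. It only illustrates the semantics described above; the shapes are hard-coded rather than parsed, and none of it is taken from the demo's actual implementation.

```rust
// Illustrative evaluation of the mask semantics (not the demo's parser).

/// True if (x, y) lies inside a circle of radius r centered at (cx, cy).
fn circle(r: f64, cx: f64, cy: f64, x: f64, y: f64) -> bool {
    (x - cx).powi(2) + (y - cy).powi(2) <= r * r
}

/// True if (x, y) lies inside the rectangle with opposite corners (x1, y1) and (x2, y2).
fn rec(x1: f64, y1: f64, x2: f64, y2: f64, x: f64, y: f64) -> bool {
    let (lo_x, hi_x) = (x1.min(x2), x1.max(x2));
    let (lo_y, hi_y) = (y1.min(y2), y1.max(y2));
    (lo_x..=hi_x).contains(&x) && (lo_y..=hi_y).contains(&y)
}

fn main() {
    // circle(0.5,0,0)&!circle(0.25,0,0): a ring (circle with a hole).
    let label = |x: f64, y: f64| -> u8 {
        let inside = circle(0.5, 0.0, 0.0, x, y) && !circle(0.25, 0.0, 0.0, x, y);
        if inside { 1 } else { 0 }
    };
    // Points in the ring are labeled 1, points in the hole or outside are 0.
    println!("{} {} {}", label(0.4, 0.0), label(0.1, 0.0), label(0.9, 0.0)); // 1 0 0
    let _ = rec(-0.5, -0.5, 0.5, 0.5, 0.0, 0.0); // rec works the same way
}
```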
This implementation uses a standard feedforward neural network.
Mathematical foundation:
Forward Pass:
z = Wx + b, a = f(z)
Activation Functions:
ReLU: f(x) = max(0, x)
Sigmoid: σ(x) = 1 / (1 + e^(-x))
Loss Functions:
MSE: (1/n)∑(y - ŷ)²
BCE: -(y·log(ŷ) + (1−y)·log(1−ŷ))
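To make the formulas above concrete, here is a small, self-contained Rust sketch of a single dense layer's forward pass together with the two activations and two losses listed. It is illustrative only; the shapes and function names are assumptions and do not mirror the demo's internal data structures.

```rust
// Illustrative implementation of the formulas above for one dense layer.

/// Forward pass for one layer: z = W·x + b, a = f(z).
fn forward(w: &[Vec<f64>], x: &[f64], b: &[f64], f: fn(f64) -> f64) -> Vec<f64> {
    w.iter()
        .zip(b)
        .map(|(row, bi)| f(row.iter().zip(x).map(|(wi, xi)| wi * xi).sum::<f64>() + bi))
        .collect()
}

/// ReLU: f(x) = max(0, x)
fn relu(x: f64) -> f64 { x.max(0.0) }

/// Sigmoid: σ(x) = 1 / (1 + e^(-x))
fn sigmoid(x: f64) -> f64 { 1.0 / (1.0 + (-x).exp()) }

/// MSE: (1/n)·Σ(y - ŷ)²
fn mse(y: &[f64], y_hat: &[f64]) -> f64 {
    y.iter().zip(y_hat).map(|(a, b)| (a - b).powi(2)).sum::<f64>() / y.len() as f64
}

/// BCE: -(y·log(ŷ) + (1 − y)·log(1 − ŷ)), averaged over the outputs.
fn bce(y: &[f64], y_hat: &[f64]) -> f64 {
    y.iter()
        .zip(y_hat)
        .map(|(t, p)| -(t * p.ln() + (1.0 - t) * (1.0 - p).ln()))
        .sum::<f64>() / y.len() as f64
}

fn main() {
    let w = vec![vec![0.5, -0.25], vec![0.75, 0.1]]; // 2x2 weight matrix
    let b = vec![0.1, -0.2];
    let x = vec![0.4, -0.6];

    let hidden = forward(&w, &x, &b, relu);       // hidden layer with ReLU
    let out = forward(&w, &hidden, &b, sigmoid);  // output layer with sigmoid

    let target = vec![1.0, 0.0];
    println!("MSE = {:.4}, BCE = {:.4}", mse(&target, &out), bce(&target, &out));
}
```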
Multilayer Perceptrons (MLPs) have a rich history dating back to the 1940s. The theoretical foundations were first established in 1943 when Warren McCulloch and Walter Pitts introduced the concept of artificial neurons. The perceptron, a single-layer neural network, was invented by Frank Rosenblatt in 1958, but it had significant limitations.
The breakthrough for MLPs came in 1986 when David Rumelhart, Geoffrey Hinton, and Ronald Williams published their seminal paper on backpropagation, solving the training problem for networks with hidden layers. Despite this advance, MLPs fell out of favor in the 1990s due to computational limitations and the rise of support vector machines.
MLPs were dramatically revitalized in the mid-2000s with the advent of deep learning. The combination of increased computational power, larger datasets, and algorithmic improvements allowed for training deeper networks. Today, MLPs serve as the foundation for more complex neural network architectures and remain fundamental building blocks in modern machine learning systems.