
What is this?

This is a 2D binary classification demo using a Multi-Layer Perceptron (MLP) implemented in Rust and compiled to WebAssembly (WASM). WebAssembly offers near-native performance compared to plain JavaScript and makes it possible to compile a complete neural network training runtime that runs entirely in the browser.

Tech Stack:

Rust: Core neural network implementation
WebAssembly: Runtime compilation target
Vue.js: Frontend framework
Plotters: Rust plotting library with a canvas backend

Both the neural network training and the plot generation are handled in WebAssembly for performance and ease of use. Currently, the JS client steps through the training process and updates the plot in real time. Future versions will allow training in the background with a web worker.
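
To give a sense of what that stepping interface can look like on the Rust side, here is a minimal sketch assuming wasm-bindgen is used to expose a trainer object to JavaScript; the names (Trainer, step) are illustrative and not this demo's actual exports:

  use wasm_bindgen::prelude::*;

  #[wasm_bindgen]
  pub struct Trainer {
      epoch: u32,
      loss: f64,
  }

  #[wasm_bindgen]
  impl Trainer {
      #[wasm_bindgen(constructor)]
      pub fn new() -> Trainer {
          Trainer { epoch: 0, loss: f64::INFINITY }
      }

      /// Run one training step (e.g. one epoch of mini-batch gradient
      /// descent) and return the current loss so the JS caller can
      /// update the loss curve and decision boundary between steps.
      pub fn step(&mut self) -> f64 {
          self.epoch += 1;
          // ... forward pass, backpropagation, and parameter updates
          // for the MLP would happen here ...
          self.loss
      }
  }

From the JS side, the client could construct the trainer once and call step() in a requestAnimationFrame or setInterval loop, redrawing the plot after each call.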

How to Use This Tool

  1. Configure the network: Set layer sizes (e.g., "2,5,5,1" means 2 input neurons, two hidden layers with 5 neurons each, and 1 output neuron; see the sketch after this list)
  2. Adjust hyperparameters: Set learning rate, epochs, batch size, and loss function
  3. Define your mask: Create a custom region using the mask syntax (see mask instructions)
  4. Run training: Click the "Run Training" button to start
  5. Monitor progress: Watch the loss curve and decision boundary evolve in real time
  6. Stop anytime: Use the stop button if you want to interrupt training
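
As a concrete reading of the layer-size string from step 1, this small sketch maps a spec like "2,5,5,1" to the weight matrix shape each layer needs; the function name is hypothetical and not part of the demo:

  // Parse a layer-size string into per-layer weight matrix shapes
  // (rows = output neurons, cols = input neurons).
  fn layer_shapes(spec: &str) -> Vec<(usize, usize)> {
      let sizes: Vec<usize> = spec
          .split(',')
          .map(|s| s.trim().parse().expect("layer sizes must be integers"))
          .collect();
      sizes.windows(2).map(|w| (w[1], w[0])).collect()
  }

  // layer_shapes("2,5,5,1") == [(5, 2), (5, 5), (1, 5)]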

Pro tip: Set a random seed for reproducible results. The URL updates with your parameters so you can share configurations.

Mask Syntax Guide

The mask defines which regions in the 2D space (from -1 to 1 on both axes) are classified as "inside" (1) or "outside" (0).

Basic primitives:

  • circle(r,x,y) — Circle at position (x,y) with radius r
  • rec(x1,y1,x2,y2) — Rectangle with opposite corners at (x1,y1) and (x2,y2)
  • ! — Negation operator (flips inside/outside)
  • & — Logical AND operator (combines shapes)

Examples:

  • circle(0.5,0,0) — Simple circle at origin
  • circle(0.5,0,0)&!circle(0.25,0,0) — Ring shape (circle with hole)
  • circle(0.75,0,0)&!rec(-0.25,-0.25,0.25,0.25) — Circle with square hole
  • rec(-0.5,-0.5,0.5,0.5)&!circle(0.3,0,0) — Square with circular hole

Important: Operations are evaluated from left to right. For complex masks, build them incrementally to understand how they combine.
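
To illustrate one way the left-to-right rule can be read, the sketch below evaluates a chain of '&'-joined, optionally negated primitives at a single point; the types and names are hypothetical and not the demo's actual parser:

  // Hypothetical representation of a parsed mask: a chain of '&'-joined
  // terms, each an optionally negated primitive.
  enum Shape {
      Circle { r: f64, x: f64, y: f64 },
      Rect { x1: f64, y1: f64, x2: f64, y2: f64 },
  }

  struct Term {
      shape: Shape,
      negated: bool, // true when the term is prefixed with '!'
  }

  /// Label a point 1 ("inside") only if every term in the chain accepts it,
  /// checking the terms in the order they were written.
  fn label(terms: &[Term], px: f64, py: f64) -> u8 {
      let inside_all = terms.iter().all(|t| {
          let inside = match t.shape {
              Shape::Circle { r, x, y } => (px - x).powi(2) + (py - y).powi(2) <= r * r,
              Shape::Rect { x1, y1, x2, y2 } => {
                  px >= x1.min(x2) && px <= x1.max(x2)
                      && py >= y1.min(y2) && py <= y1.max(y2)
              }
          };
          inside != t.negated // '!' flips inside/outside
      });
      if inside_all { 1 } else { 0 }
  }

Under this reading, the ring example circle(0.5,0,0)&!circle(0.25,0,0) labels a point 1 only when it falls inside the outer circle and outside the inner one.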

Neural Network Architecture

This implementation uses a standard feedforward neural network with:

  • Configurable layer sizes
  • ReLU activation for hidden layers
  • Sigmoid activation for output layer
  • Mini-batch gradient descent optimization
  • Choice between MSE and BCE loss functions

Mathematical foundation:

Forward Pass:
z = Wx + b, a = f(z)

Activation Functions:
ReLU: f(x) = max(0, x)
Sigmoid: σ(x) = 1 / (1 + e⁻ˣ)

Loss Functions:
MSE: (1/n)∑(y - ŷ)²
BCE: -(y·log(ŷ) + (1−y)·log(1−ŷ))
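
To make these formulas concrete, here is a small self-contained sketch of one dense layer plus the two loss functions for a single example; it mirrors the equations above rather than this project's actual code:

  // ReLU and sigmoid activations, as defined above.
  fn relu(x: f64) -> f64 { x.max(0.0) }
  fn sigmoid(x: f64) -> f64 { 1.0 / (1.0 + (-x).exp()) }

  /// One dense layer for a single example: z = Wx + b, a = f(z).
  fn dense(w: &[Vec<f64>], b: &[f64], x: &[f64], f: fn(f64) -> f64) -> Vec<f64> {
      w.iter()
          .zip(b)
          .map(|(row, bias)| {
              let z: f64 = row.iter().zip(x).map(|(wi, xi)| wi * xi).sum::<f64>() + bias;
              f(z)
          })
          .collect()
  }

  /// Mean squared error over targets y and predictions ŷ.
  fn mse(y: &[f64], y_hat: &[f64]) -> f64 {
      y.iter().zip(y_hat).map(|(yi, pi)| (yi - pi).powi(2)).sum::<f64>() / y.len() as f64
  }

  /// Binary cross-entropy for a single target y and prediction ŷ.
  fn bce(y: f64, y_hat: f64) -> f64 {
      -(y * y_hat.ln() + (1.0 - y) * (1.0 - y_hat).ln())
  }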

Other Resources

History of MLP Research

Multilayer Perceptrons (MLPs) have a rich history dating back to the 1940s. The theoretical foundations were first established in 1943 when Warren McCulloch and Walter Pitts introduced the concept of artificial neurons. The perceptron, a single-layer neural network, was invented by Frank Rosenblatt in 1958, but it had significant limitations.

The breakthrough for MLPs came in 1986 when David Rumelhart, Geoffrey Hinton, and Ronald Williams published their seminal paper on backpropagation, solving the training problem for networks with hidden layers. Despite this advance, MLPs fell out of favor in the 1990s due to computational limitations and the rise of support vector machines.

MLPs were dramatically revitalized in the mid-2000s with the advent of deep learning. The combination of increased computational power, larger datasets, and algorithmic improvements allowed for training deeper networks. Today, MLPs serve as the foundation for more complex neural network architectures and remain fundamental building blocks in modern machine learning systems.