
Neuron Sandbox

This is an interactive demonstration of a perceptron, the simplest form of artificial neural network. The perceptron was invented by Frank Rosenblatt in 1958 and forms the foundation for understanding modern deep learning.

Attribution

This demo is based on the original Neuron Sandbox by Angela Chen, Neel Pawar, and David S. Touretzky at Carnegie Mellon University. The original work was supported by National Science Foundation awards DRL-2049029 and IIS-2112633.

Example challenge (AND): output 1 only when BOTH inputs are 1. With w₁ = 1, w₂ = 1, and threshold θ = 1.5, every output matches the desired value:

  x₁  x₂  Σ   Output  Desired
   0   0  0     0       0
   0   1  1     0       0
   1   0  1     0       0
   1   1  2     1       1

How the Perceptron Works

A perceptron takes multiple inputs, multiplies each by a weight, sums them up, and produces an output based on whether that sum exceeds a threshold.

The Computation

For inputs x₁, x₂, ..., xₙ with weights w₁, w₂, ..., wₙ and threshold θ:

  1. Weighted sum: Σ = w₁x₁ + w₂x₂ + ... + wₙxₙ
  2. Activation: Output = 1 if Σ > θ, otherwise Output = 0
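The two steps above can be sketched as a short function (a minimal sketch; the function name and the example inputs are illustrative, not part of the demo):

```python
def perceptron(inputs, weights, theta):
    """Weighted sum followed by a hard threshold."""
    sigma = sum(w * x for w, x in zip(weights, inputs))  # Step 1: Σ = Σ wᵢxᵢ
    return 1 if sigma > theta else 0                     # Step 2: fire only if Σ > θ

# AND gate settings from the demo: w₁ = 1, w₂ = 1, θ = 1.5
print(perceptron([1, 1], [1, 1], 1.5))  # 1 (both inputs active: Σ = 2 > 1.5)
print(perceptron([1, 0], [1, 1], 1.5))  # 0 (Σ = 1, which is not > 1.5)
```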

Threshold vs. Bias

The perceptron can be expressed in two equivalent ways:

  • Threshold form: Output = 1 if Σ > θ
  • Bias form: Output = 1 if Σ + b > 0 (where bias b = −θ)

The bias form is more common in modern neural networks. Toggle "Show as Bias" in the demo to see this representation.
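The equivalence of the two forms is easy to check exhaustively for a two-input gate (a minimal sketch; the function names are illustrative):

```python
def threshold_form(inputs, weights, theta):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > theta else 0

def bias_form(inputs, weights, bias):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# With b = -θ the two forms agree on every input pattern (AND: θ = 1.5, b = -1.5)
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert threshold_form(x, [1, 1], 1.5) == bias_form(x, [1, 1], -1.5)
print("threshold and bias forms agree")
```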

How to Use the Demo

  1. Select a problem from the dropdown to load different logic gates
  2. Adjust the weights (w₁, w₂) and threshold (θ) using the sliders
  3. Hover over rows in the table to see the step-by-step computation
  4. Try to make all outputs match the desired values (green = correct, red = incorrect)

Logic Gates as Perceptrons

AND Gate

  • Output 1 only when both inputs are 1
  • Solution: Set weights to positive values, threshold high enough that both inputs are needed
  • Example: w₁ = 1, w₂ = 1, θ = 1.5

OR Gate

  • Output 1 when at least one input is 1
  • Solution: Set weights to positive values, threshold low enough that one input is sufficient
  • Example: w₁ = 1, w₂ = 1, θ = 0.5

NOT Gate (Inverter)

  • Output the opposite of the input
  • Solution: Use a negative weight to flip the input
  • Example: w₁ = −1, θ = −0.5 (single input; 0 gives Σ = 0 > −0.5 → 1, and 1 gives Σ = −1 → 0)

NAND Gate

  • Output 0 only when both inputs are 1 (opposite of AND)
  • Solution: Use negative weights
  • Example: w₁ = −1, w₂ = −1, θ = −1.5
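The gate settings above can be checked exhaustively against each gate's truth table (a minimal sketch; the helper and table names are illustrative):

```python
def perceptron(inputs, weights, theta):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > theta else 0

# (weights, threshold, truth function) for each two-input gate
gates = {
    "AND":  ([1, 1],   1.5,  lambda a, b: a & b),
    "OR":   ([1, 1],   0.5,  lambda a, b: a | b),
    "NAND": ([-1, -1], -1.5, lambda a, b: 1 - (a & b)),
}
for name, (w, theta, truth) in gates.items():
    for a in (0, 1):
        for b in (0, 1):
            assert perceptron([a, b], w, theta) == truth(a, b), name

# NOT uses a single input with a negative weight and a negative threshold
assert perceptron([0], [-1], -0.5) == 1
assert perceptron([1], [-1], -0.5) == 0
print("all gate settings check out")
```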

XOR Gate - The Impossible Problem!

  • Output 1 when inputs are different
  • Cannot be solved with a single perceptron!
  • This limitation, famously discussed by Minsky & Papert (1969), contributed to the first "AI Winter"

Why XOR is Impossible

The perceptron creates a linear decision boundary - a straight line that separates 1s from 0s. For XOR:

  • (0,0) should output 0
  • (0,1) should output 1
  • (1,0) should output 1
  • (1,1) should output 0

Try drawing a single straight line that separates {(0,1), (1,0)} from {(0,0), (1,1)} - it's geometrically impossible!

This is why we need multi-layer neural networks (with hidden layers) to solve non-linearly separable problems.
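One way to see this is that XOR can be composed from the gates above: XOR(a, b) = AND(OR(a, b), NAND(a, b)). Two perceptrons form a hidden layer and a third combines them (a minimal sketch using the gate settings from this page; the function names are illustrative):

```python
def perceptron(inputs, weights, theta):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > theta else 0

def xor(a, b):
    # Hidden layer: one OR unit and one NAND unit
    h1 = perceptron([a, b], [1, 1], 0.5)     # OR
    h2 = perceptron([a, b], [-1, -1], -1.5)  # NAND
    # Output layer: AND of the two hidden units
    return perceptron([h1, h2], [1, 1], 1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 1 exactly when the inputs differ
```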

The Learning Rule

Perceptrons can learn their weights automatically using the Perceptron Learning Rule:

  1. Start with random weights
  2. For each training example:
    • If output is correct: do nothing
    • If output is 0 but should be 1: increase weights for active inputs
    • If output is 1 but should be 0: decrease weights for active inputs
  3. Repeat until all outputs are correct (for problems that are not linearly separable, such as XOR, no weight setting exists and the rule never settles, so in practice a pass limit is used)

This simple rule is guaranteed to find a solution if one exists (for linearly separable problems).
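The rule above can be sketched on the AND problem (a minimal sketch: the threshold is folded in as a learnable bias b = −θ, and the learning rate, starting values, and epoch cap are illustrative choices, not part of the original rule's statement):

```python
def train_and():
    """Perceptron learning rule on the AND truth table."""
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0]
    b = 0.0
    lr = 0.5
    for _ in range(20):                      # epoch cap; AND converges quickly
        errors = 0
        for (x1, x2), desired in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = desired - out              # +1: raise weights, -1: lower them
            if err:
                w[0] += lr * err * x1        # only active inputs (xᵢ = 1) change
                w[1] += lr * err * x2
                b += lr * err
                errors += 1
        if errors == 0:                      # a full pass with no mistakes: done
            break
    return w, b

w, b = train_and()
print("learned weights:", w, "bias:", b)
```

Because AND is linearly separable, the loop always reaches an error-free pass; running the same loop on the XOR table instead would never terminate on its own, which is why the epoch cap matters.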

Why This Matters for Cognitive Science

  1. Neural Inspiration: Perceptrons are loosely inspired by biological neurons
  2. Learning from Examples: They demonstrate how systems can learn from data
  3. Limitations of Simple Models: XOR shows why complex cognition requires more sophisticated architectures
  4. Foundation for Deep Learning: Modern neural networks are built from layers of perceptron-like units

Historical Context

  • 1943: McCulloch & Pitts propose the first mathematical model of a neuron
  • 1958: Frank Rosenblatt invents the perceptron
  • 1969: Minsky & Papert publish "Perceptrons," highlighting limitations like XOR
  • 1986: Backpropagation algorithm enables training of multi-layer networks
  • 2012+: Deep learning revolution, building on these foundations

References

  • Rosenblatt, F. (1958). "The perceptron: A probabilistic model for information storage and organization in the brain." Psychological Review, 65(6), 386-408.
  • Minsky, M., & Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. MIT Press.
  • Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). "Learning representations by back-propagating errors." Nature, 323(6088), 533-536.

Source Code

The original Neuron Sandbox source code is available on GitHub at https://github.com/touretzkyds/NeuronSandbox.