Neuron Sandbox
This is an interactive demonstration of a perceptron, the simplest form of artificial neural network. The perceptron was invented by Frank Rosenblatt in 1958 and forms the foundation for understanding modern deep learning.
Attribution
This demo is based on the original Neuron Sandbox by Angela Chen, Neel Pawar, and David S. Touretzky at Carnegie Mellon University. The original work was supported by National Science Foundation awards DRL-2049029 and IIS-2112633.
| x₁ | x₂ | Σ | Output | Desired |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 0 | 0 |
| 1 | 0 | 1 | 0 | 0 |
| 1 | 1 | 2 | 1 | 1 |
How the Perceptron Works
A perceptron takes multiple inputs, multiplies each by a weight, sums them up, and produces an output based on whether that sum exceeds a threshold.
The Computation
For inputs x₁ and x₂ with weights w₁ and w₂ and threshold θ:
- Weighted Sum: Σ = w₁x₁ + w₂x₂
- Activation: Output = 1 if Σ ≥ θ, otherwise Output = 0
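The two steps above can be sketched in a few lines of Python. The weight and threshold values here are illustrative; with w₁ = w₂ = 1 and θ = 2 they reproduce the AND table shown earlier:

```python
def perceptron(x1, x2, w1, w2, theta):
    """Two-input perceptron in threshold form."""
    s = w1 * x1 + w2 * x2          # weighted sum
    return 1 if s >= theta else 0  # step activation

# Reproduce the truth table: only (1, 1) reaches the threshold of 2
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, perceptron(x1, x2, 1, 1, 2))
```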
Threshold vs. Bias
The perceptron can be expressed in two equivalent ways:
- Threshold form: Output = 1 if w₁x₁ + w₂x₂ ≥ θ
- Bias form: Output = 1 if w₁x₁ + w₂x₂ + b ≥ 0 (where bias b = −θ)
The bias form is more common in modern neural networks. Toggle "Show as Bias" in the demo to see this representation.
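A quick sketch to confirm the equivalence of the two forms. Moving θ to the left-hand side as b = −θ changes the notation but not a single output:

```python
def threshold_form(x1, x2, w1, w2, theta):
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

def bias_form(x1, x2, w1, w2, b):
    return 1 if w1 * x1 + w2 * x2 + b >= 0 else 0

# With b = -theta, both forms agree on every input
theta = 2
for x1 in (0, 1):
    for x2 in (0, 1):
        assert threshold_form(x1, x2, 1, 1, theta) == bias_form(x1, x2, 1, 1, -theta)
```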
How to Use the Demo
- Select a problem from the dropdown to load different logic gates
- Adjust the weights (w₁, w₂) and threshold (θ) using the sliders
- Hover over rows in the table to see the step-by-step computation
- Try to make all outputs match the desired values (green = correct, red = incorrect)
Logic Gates as Perceptrons
AND Gate
- Output 1 only when both inputs are 1
- Solution: Set weights to positive values, threshold high enough that both inputs are needed
- Example: w₁ = 1, w₂ = 1, θ = 2
OR Gate
- Output 1 when at least one input is 1
- Solution: Set weights to positive values, threshold low enough that one input is sufficient
- Example: w₁ = 1, w₂ = 1, θ = 1
NOT Gate (Inverter)
- Output the opposite of the input
- Solution: Use a negative weight to flip the input
- Example: w₁ = −1, θ = 0
NAND Gate
- Output 0 only when both inputs are 1 (opposite of AND)
- Solution: Use negative weights
- Example: w₁ = −1, w₂ = −1, θ = −1
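Workable threshold-form settings for the two-input gates can be checked mechanically against their truth tables. The (w₁, w₂, θ) values below are one valid choice per gate, not the only one (any settings that put the decision boundary in the right place work):

```python
def gate_output(x1, x2, w1, w2, theta):
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

# (w1, w2, theta) -> desired outputs for inputs (0,0), (0,1), (1,0), (1,1)
gates = {
    "AND":  ((1, 1, 2),    [0, 0, 0, 1]),
    "OR":   ((1, 1, 1),    [0, 1, 1, 1]),
    "NAND": ((-1, -1, -1), [1, 1, 1, 0]),
}
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
for name, ((w1, w2, theta), desired) in gates.items():
    outputs = [gate_output(x1, x2, w1, w2, theta) for x1, x2 in inputs]
    assert outputs == desired, name
```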
XOR Gate - The Impossible Problem!
- Output 1 when inputs are different
- Cannot be solved with a single perceptron!
- This limitation, famously discussed by Minsky & Papert (1969), contributed to the first "AI Winter"
Why XOR is Impossible
The perceptron creates a linear decision boundary - a straight line that separates 1s from 0s. For XOR:
- (0,0) should output 0
- (0,1) should output 1
- (1,0) should output 1
- (1,1) should output 0
Try drawing a single straight line that separates {(0,1), (1,0)} from {(0,0), (1,1)} - it's geometrically impossible!
This is why we need multi-layer neural networks (with hidden layers) to solve non-linearly separable problems.
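As a sketch of the multi-layer fix: XOR can be composed from the gates above, since XOR(x₁, x₂) = AND(OR(x₁, x₂), NAND(x₁, x₂)). Two hidden perceptron units feed one output unit:

```python
def unit(x1, x2, w1, w2, theta):
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

def xor(x1, x2):
    h1 = unit(x1, x2, 1, 1, 1)     # hidden unit 1: OR
    h2 = unit(x1, x2, -1, -1, -1)  # hidden unit 2: NAND
    return unit(h1, h2, 1, 1, 2)   # output unit: AND of the hidden units

# XOR truth table: 1 exactly when the inputs differ
assert [xor(*p) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```

No single unit here solves XOR; the hidden layer first remaps the inputs so that the output unit's straight-line boundary suffices.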
The Learning Rule
Perceptrons can learn their weights automatically using the Perceptron Learning Rule:
- Start with random weights
- For each training example:
- If output is correct: do nothing
- If output is 0 but should be 1: increase weights for active inputs
- If output is 1 but should be 0: decrease weights for active inputs
- Repeat until all outputs are correct (on a non-separable problem like XOR, the rule never settles)
This simple rule is guaranteed to find a solution if one exists (for linearly separable problems).
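The steps above can be sketched in bias form. The learning rate and epoch cap are illustrative choices; training on AND converges in a handful of passes:

```python
def train_perceptron(examples, epochs=20, lr=1.0):
    """Perceptron learning rule on examples of the form [((x1, x2), target), ...]."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in examples:
            out = 1 if w1 * x1 + w2 * x2 + b >= 0 else 0
            err = target - out        # +1: output too low; -1: too high; 0: correct
            if err != 0:
                errors += 1
                w1 += lr * err * x1   # only active inputs (x = 1) move their weight
                w2 += lr * err * x2
                b  += lr * err        # bias acts like a weight on a constant input of 1
        if errors == 0:               # a full pass with no mistakes: done
            break
    return w1, w2, b

and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(and_examples)
```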
Why This Matters for Cognitive Science
- Neural Inspiration: Perceptrons are loosely inspired by biological neurons
- Learning from Examples: They demonstrate how systems can learn from data
- Limitations of Simple Models: XOR shows why complex cognition requires more sophisticated architectures
- Foundation for Deep Learning: Modern neural networks are built from layers of perceptron-like units
Historical Context
- 1943: McCulloch & Pitts propose the first mathematical model of a neuron
- 1958: Frank Rosenblatt invents the perceptron
- 1969: Minsky & Papert publish "Perceptrons," highlighting limitations like XOR
- 1986: Backpropagation algorithm enables training of multi-layer networks
- 2012+: Deep learning revolution, building on these foundations
References
- Rosenblatt, F. (1958). "The perceptron: A probabilistic model for information storage and organization in the brain." Psychological Review, 65(6), 386-408.
- Minsky, M., & Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. MIT Press.
- Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). "Learning representations by back-propagating errors." Nature, 323(6088), 533-536.
Source Code
The original Neuron Sandbox source code is available on GitHub at https://github.com/touretzkyds/NeuronSandbox.