Demonstration of a 1-layer back-propagation network
Below is a demonstration of a simple one-layer back-propagation network that learns to classify a set of patterns.
The network consists of an input layer with 20 units laid out in a 4 by 5 grid, and 4 output units. Each output unit receives connections from the whole input grid. Hence, the weights on the connections from the input layer to each output unit are also displayed in a 4 by 5 grid. Each output unit also has a "bias weight", shown immediately to the left of the unit's activation, which determines its resting level of activation.
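As a concrete sketch, the network's parameters could be represented as follows in Python (the names, shapes, and random initialization are illustrative assumptions, not taken from the demo's source):

```python
import numpy as np

N_INPUTS = 20   # the 4 x 5 input grid, flattened to a vector
N_OUTPUTS = 4   # one output unit per category

rng = np.random.default_rng(0)

# One weight from every input pixel to every output unit,
# plus one bias weight per output unit.
weights = rng.normal(scale=0.1, size=(N_OUTPUTS, N_INPUTS))
biases = np.zeros(N_OUTPUTS)

# Each output unit's row of weights can be viewed as a 4 x 5 grid,
# matching the weight display described below.
weight_grids = weights.reshape(N_OUTPUTS, 4, 5)
```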
Activation display: The states of the input units are shown using a color code: white means inactive (0) and blue means active (1). The states of the output units are shown both numerically and via a color code ranging from white (fully off) to bright blue (fully on).
Weight display: The connection weights are displayed in two colors: blue squares for positive weights and red squares for negative weights. The intensity (opacity) of the color indicates the magnitude of the weight. For example, a fully opaque red square represents a very large negative weight, while a faint red square represents a weak negative one.
Total Error: The display also reports the total error, summed over all 40 training patterns.
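For concreteness, assuming the usual sum-squared error measure (the demo does not state its exact error formula), the total error could be computed as follows; the array shapes are illustrative:

```python
import numpy as np

def total_error(outputs, targets):
    """Sum-squared error over the whole training set.

    outputs: (40, 4) array of actual output activations,
             one row per training pattern.
    targets: (40, 4) array of desired output activations.
    """
    return float(np.sum((targets - outputs) ** 2))
```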
Training patterns: The model is trained on a set of 40 patterns: 10 examples of each of the four categories, shown in the bottom region of the display.
Testing the network: Any one of the training patterns may be presented to the network for testing by simply clicking on it. Alternatively, you can enter your own test pattern by clicking on Clear Input and then toggling pixels on the input grid by clicking on them.
Activation rule: When the network is presented with a training or testing pattern, the states of the input layer units are set to the corresponding values. The output layer units are then activated by summing their weighted inputs and passing the result through a nonlinear sigmoid function.
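In code, the activation rule might look like the sketch below (the demo's exact sigmoid gain and implementation are not specified, so the standard logistic function is assumed):

```python
import numpy as np

def sigmoid(x):
    """Standard logistic function; squashes net input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(weights, biases, pattern):
    """Compute the output activations for one input pattern.

    weights: (4, 20) array, one row of weights per output unit.
    biases:  (4,) array of bias weights.
    pattern: (20,) array of 0/1 input states (the flattened 4 x 5 grid).
    """
    net_input = weights @ pattern + biases  # weighted sum for each output unit
    return sigmoid(net_input)               # nonlinear squashing
```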
Learning: The network can be trained by clicking on Run Continuous Backprop; learning will continue until you click the button again.
The network updates its weights by a supervised learning procedure called error back-propagation.
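Because the network has no hidden layer, back-propagation here reduces to the delta rule for logistic units: each weight changes in proportion to the output error, the derivative of the sigmoid, and the input it connects from. A minimal sketch, assuming one-hot targets and an illustrative learning rate:

```python
import numpy as np

def train_step(weights, biases, pattern, target, lr=0.1):
    """One back-propagation update for a single training pattern.

    pattern: (20,) input vector; target: (4,) desired outputs,
    e.g. one-hot over the four categories (an assumption).
    """
    output = 1.0 / (1.0 + np.exp(-(weights @ pattern + biases)))
    # Error signal times the sigmoid's derivative, output * (1 - output).
    delta = (target - output) * output * (1.0 - output)
    weights += lr * np.outer(delta, pattern)  # adjust each connection weight
    biases += lr * delta                      # the bias acts like a weight
                                              # from an always-on input
    return weights, biases
```

Clicking Run Continuous Backprop corresponds to applying updates like this repeatedly across all 40 training patterns.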