
2.4.3 Backpropagation Processing Unit

The backpropagation processing unit is a modified form of the linear perceptron: its activation function is nonlinear and smoothed out around the threshold point. The suggested activation function is the sigmoid function mentioned previously. With the sigmoid function we obtain not only the output of the neuron but also, through the slope of the sigmoid, a measure of how close the input is to the threshold point. Mathematically, we can derive the slope from equation (2.5) as follows:
$\displaystyle \frac{d}{dx}f(x)$ $\displaystyle =$ $\displaystyle \frac{exp(-x)}{(1 + exp(-x))^{2}}$ (2.6)
  $\displaystyle =$ $\displaystyle \frac{1}{1+exp(-x)} \frac{exp(-x)}{1+exp(-x)}$ (2.7)
  $\displaystyle =$ $\displaystyle \frac{1}{1+exp(-x)} \left[1 - \frac{1}{1+exp(-x)}\right]$ (2.8)
  $\displaystyle =$ $\displaystyle f(x) \left[1 - f(x) \right].$ (2.9)

This derivative will be the key quantity for the weight adjustments in the forthcoming discussion.
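The identity in equation (2.9) makes the slope cheap to compute once the neuron's output is known, since no separate differentiation is needed. The following is a minimal Python sketch of this idea; the function names sigmoid and sigmoid_slope are illustrative choices, not from the text, and NumPy is assumed only for the exponential.

import numpy as np

def sigmoid(x):
    # Equation (2.5): f(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_slope(x):
    # Equation (2.9): f'(x) = f(x) * (1 - f(x)),
    # reusing the neuron's output instead of re-differentiating.
    fx = sigmoid(x)
    return fx * (1.0 - fx)

# The slope peaks at the threshold point x = 0 (f'(0) = 0.25)
# and shrinks toward 0 as |x| grows.
for x in (-4.0, 0.0, 4.0):
    print(f"x = {x:+.1f}   f(x) = {sigmoid(x):.4f}   f'(x) = {sigmoid_slope(x):.4f}")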

