Learning rule

May 27 • General

Widrow-Hoff Learning rule

The Widrow-Hoff learning rule is very similar to the perceptron learning rule; however, the two have different origins.

Units with linear activation functions are called linear units. A network with a single linear unit is called an Adaline (adaptive linear neuron); that is, in an Adaline the input-output relationship is linear. Adaline uses bipolar values for its input signals and its target output. The weights between the input and the output are adjustable, and the net has only one output unit. The Adaline network may be trained using the delta learning rule, which is also called the least mean square (LMS) rule or the Widrow-Hoff rule. This learning rule minimizes the mean squared error between the activation and the target value.
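The Adaline described above can be sketched in a few lines of Python. The function name, learning rate, epoch count, and the bipolar AND example below are illustrative assumptions, not details from the article:

```python
# Minimal Adaline sketch: linear unit, bipolar inputs/targets,
# trained with the delta (LMS / Widrow-Hoff) rule.

def train_adaline(samples, lr=0.1, epochs=20):
    """samples: list of (inputs, target) pairs with bipolar values."""
    n = len(samples[0][0])
    w = [0.0] * n          # adjustable weights, initialised to zero
    b = 0.0                # bias
    for _ in range(epochs):
        for x, t in samples:
            y_in = b + sum(wi * xi for wi, xi in zip(w, x))  # net input
            err = t - y_in                                   # target minus net input
            # LMS update: w_i <- w_i + lr * (t - y_in) * x_i
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Hypothetical example: learn the bipolar AND function
data = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = train_adaline(data)
print(w, b)
```

Because LMS minimizes mean squared error rather than classification error, the trained net input only approximates the bipolar targets, but its sign matches them on this example.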

 

Delta Learning rule

  1. The perceptron learning rule originates from the Hebbian assumption, while the delta rule is derived from the gradient-descent method (it can be generalised to more than one layer).
  2. The delta rule updates the weights on the connections so as to minimize the difference between the net input to the output unit and the target value.
  3. The major aim is to minimize the error over all training patterns. This is done by reducing the error for each pattern, one at a time.
  4. The delta rule for adjusting the ith weight (i = 1 to n) is Δwi = α(t − yin)xi, where α is the learning rate, t the target output, yin the net input to the output unit, and xi the ith input.

 
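A single delta-rule update can be worked through numerically. The weights, input pattern, target, and learning rate below are made-up illustrative values:

```python
# One delta-rule step on a single pattern (illustrative values).
lr = 0.1               # learning rate (alpha)
w = [0.2, -0.4]        # current weights
b = 0.0                # bias
x = [1, -1]            # bipolar input pattern
t = 1                  # target output

y_in = b + sum(wi * xi for wi, xi in zip(w, x))   # net input: 0.2 + 0.4 = 0.6
delta = lr * (t - y_in)                           # 0.1 * (1 - 0.6) = 0.04
w = [wi + delta * xi for wi, xi in zip(w, x)]     # updated weights
b = b + delta                                     # updated bias
print(w, b)
```

Each update moves the net input a small step toward the target, which is what drives the mean squared error down over repeated presentations.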

Hebb Learning rule

It is an algorithm developed for training pattern-association nets.

The Hebb learning rule is widely used for finding the weights of an associative neural net. The training vector pairs are denoted s:t. The algorithm steps are given below:

Step 0: Set all the initial weights to zero:

wij = 0

Step 1: For each training input-target output vector pair s:t, perform Steps 2-4.

Step 2: Activate the input layer units with the current training input:

xi = si (for i = 1 to n)

Step 3: Activate the output layer units with the current target output:

yj = tj (for j = 1 to m)

Step 4: Adjust the weights:

wij(new) = wij(old) + xiyj (for i = 1 to n, j = 1 to m)
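Steps 0-4 above can be sketched as a short Python routine. The function name and the bipolar training pairs are hypothetical examples, not from the article:

```python
# Hebb rule for a pattern-association net, following Steps 0-4 above.

def hebb_train(pairs, n, m):
    """pairs: list of (s, t) bipolar vector pairs; returns the n x m weight matrix."""
    # Step 0: set all initial weights to zero
    w = [[0.0] * m for _ in range(n)]
    # Step 1: loop over the training pairs s:t
    for s, t in pairs:
        # Steps 2-3: activate input and output units
        x, y = s, t
        # Step 4: Hebbian update  wij(new) = wij(old) + xi * yj
        for i in range(n):
            for j in range(m):
                w[i][j] += x[i] * y[j]
    return w

# Hypothetical example: associate two bipolar patterns with one-unit targets
pairs = [([1, 1, -1], [1]), ([-1, 1, 1], [-1])]
w = hebb_train(pairs, n=3, m=1)
print(w)
```

Because each pair simply adds its outer product xi·yj into the weights, the final matrix is the sum of the outer products of all training pairs.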

