Back Propagation



The back-propagation learning algorithm is one of the most important developments in neural networks. It has reawakened the scientific and engineering community to the modelling and processing of numerous quantitative phenomena using neural networks.

This learning algorithm is applied to multilayer feed-forward networks consisting of processing elements with continuous, differentiable activation functions. Networks trained with the back-propagation algorithm are called back-propagation networks (BPNs).
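For example, the logistic sigmoid satisfies this requirement, since it is continuous and differentiable everywhere and its derivative is cheap to compute from the function's own output:

    f(x) = 1 / (1 + e^(-x)),    f'(x) = f(x) · (1 - f(x))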

Architecture:

A back-propagation neural network is a multilayer feed-forward network consisting of an input layer, a hidden layer, and an output layer. The neurons in the hidden and output layers have biases, which are connections from units whose activation is always 1. The bias terms also act as weights. During the back-propagation phase of learning, error signals are sent in the reverse direction.

The training algorithm of the BPN is as follows:

1. Initialize the weights.

2. For each training input vector, perform steps 3-5.

3. Set the activations of the input units.

4. Calculate the net input to each hidden unit and its output.

5. Compute the output of the output-layer units.

(Use sigmoidal activation functions for calculating the outputs; a code sketch of these steps follows.)
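As a minimal sketch of these steps (not code from the article), the network below has one hidden layer and is trained with gradient descent; the layer sizes, the interval for the random starting weights, the learning rate value, and the XOR training set are all illustrative assumptions:

    import numpy as np

    def sigmoid(x):
        # Sigmoidal activation function used by the hidden and output units.
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_deriv(y):
        # Derivative of the sigmoid, written in terms of its output y = sigmoid(x).
        return y * (1.0 - y)

    rng = np.random.default_rng(0)

    # Step 1: initialize the weights (and biases) at small random values.
    n_in, n_hidden, n_out = 2, 3, 1
    V = rng.uniform(-0.5, 0.5, (n_in, n_hidden))    # input -> hidden weights
    b_v = rng.uniform(-0.5, 0.5, n_hidden)          # hidden biases (units with activation 1)
    W = rng.uniform(-0.5, 0.5, (n_hidden, n_out))   # hidden -> output weights
    b_w = rng.uniform(-0.5, 0.5, n_out)             # output biases

    eta = 0.5                                       # learning rate (assumed value)

    # XOR is used here purely as a stand-in training set.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    for epoch in range(5000):
        # Step 2: perform steps 3-5 for each training vector.
        for x, t in zip(X, T):
            # Step 3: set the activations of the input units (the vector x).
            # Step 4: net input to the hidden units and their output.
            z = sigmoid(x @ V + b_v)
            # Step 5: output of the output-layer units.
            y = sigmoid(z @ W + b_w)

            # Back-propagation phase: send the error signals in reverse.
            delta_out = (t - y) * sigmoid_deriv(y)
            delta_hid = (delta_out @ W.T) * sigmoid_deriv(z)

            # Gradient-descent weight (and bias) updates.
            W += eta * np.outer(z, delta_out)
            b_w += eta * delta_out
            V += eta * np.outer(x, delta_hid)
            b_v += eta * delta_hid

After training, the forward pass sigmoid(sigmoid(X @ V + b_v) @ W + b_w) approximates the XOR targets.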

Learning factors:

The training of a back-propagation network depends on the choice of various parameters. Its convergence likewise depends on several important learning factors: the initial weights, the learning rate, the update rule, the size and nature of the training set, and the architecture.

Initial weights:

The final solution reached by a multilayer feed-forward network may be affected by its initial weights, which are therefore initialized at small random values.
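As a minimal sketch (the interval [-0.5, 0.5] is an assumed convention, not one given in the article):

    import numpy as np

    # Small random starting weights for a hypothetical layer
    # with 2 inputs and 3 hidden units.
    rng = np.random.default_rng()
    V = rng.uniform(-0.5, 0.5, size=(2, 3))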

Learning rate:

The learning rate affects the convergence of the BPN. A large learning rate may speed up convergence but can cause overshooting, while a small learning rate has the opposite effect: stable but slow convergence.
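In symbols, the plain gradient-descent update scales the error gradient by the learning rate η:

    Δw = -η · ∂E/∂w

so a larger η takes a proportionally larger step at each iteration, approaching the minimum faster but risking stepping past it.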

Momentum factor:

Gradient descent is very slow if the learning rate is small and oscillates widely if it is large. One efficient and commonly used method that allows a large learning rate without oscillations is to add a momentum term to the normal gradient-descent update.
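As a sketch of the standard momentum modification (the symbol α for the momentum factor is the usual convention, not notation from this article), the current weight change reuses a fraction of the previous one:

    Δw(t) = -η · ∂E/∂w + α · Δw(t-1),    with 0 ≤ α < 1

Successive steps in a consistent direction accumulate, while oscillating components tend to cancel, so a larger learning rate can be tolerated.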


4 Responses to Back Propagation

  1. Mitali Panda says:

Neural networks help find the best possible solution to a given problem through their train-and-test procedure. They contain small elements called neurons, much like the brain, each of which works with a sigmoid function of the form 1/(1 + e^(-x)). Weights are applied to the network's connections, so a unit computes f = 1/(1 + e^(-w·x)). It is a great procedure for finding the path with the least error.

  2. Shilpa Ranjan says:

Good article on the topic of back propagation. This is one of the important concepts in engineering papers. The article describes it briefly along with its learning factors!

  3. Rachita Mishra says:

    Backpropagation, an abbreviation for “backward propagation of errors”, is a common method of training artificial neural networks. From a desired output, the network learns from many inputs, similar to the way a child learns to identify a dog from examples of dogs.

  4. patlakshi Jha says:

This article covers back propagation, which is one of the most important concepts. Many of us are not aware of it, so this article helps in understanding the idea behind it.
