Dynamically Driven Recurrent Networks


Introduction:

Recurrent networks are neural networks with one or more feedback loops. There are two functional uses of recurrent networks:

  • Associative memories and
  • Input-output mapping networks.

In an input-output mapping network, the input space is mapped onto an output space. For such an application, the recurrent network responds temporally to an externally applied input signal; recurrent networks used in this way are therefore referred to as dynamically driven recurrent networks.

Recurrent network architectures:

The architectural layout of a recurrent network can take many different forms. Four specific network architectures are as follows-

1. Input-Output Recurrent Model

The model has a single input that is applied to a tapped-delay-line memory of q units, and a single output that is fed back to the input through another tapped-delay-line memory, also of q units. The contents of these two tapped-delay-line memories feed the input layer of a multilayer perceptron. Let u(n) denote the present value of the model input; the corresponding value of the model output is denoted by y(n + 1), that is, the output is ahead of the input by one time unit.
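To make the data flow concrete, here is a minimal numpy sketch of this model; the sizes, the random weights, and the `step` helper are illustrative choices, not part of the original description:

```python
import numpy as np

# Minimal sketch of the input-output recurrent model: two tapped-delay-line
# memories of q units each feed a one-hidden-layer perceptron, and the
# prediction y(n + 1) is fed back into the output memory at the next step.
# (Sizes and weights are arbitrary; this is illustrative, not reference code.)

q, hidden = 3, 8
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(hidden, 2 * q))   # input layer sees 2q taps
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=hidden)

u_taps = np.zeros(q)   # u(n), u(n - 1), ..., u(n - q + 1)
y_taps = np.zeros(q)   # y(n), y(n - 1), ..., y(n - q + 1)

def step(u_n):
    """Advance the model one time step and return y(n + 1)."""
    global u_taps, y_taps
    u_taps = np.roll(u_taps, 1); u_taps[0] = u_n
    z = np.concatenate([u_taps, y_taps])             # contents of both memories
    y_next = W2 @ np.tanh(W1 @ z + b1)               # multilayer perceptron
    y_taps = np.roll(y_taps, 1); y_taps[0] = y_next  # feedback path
    return y_next

for n in range(5):
    print(step(np.sin(0.3 * n)))
```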

2. State-Space Model

In the state-space model, the hidden neurons define the state of the network. The output of the hidden layer is fed back to the input layer through a bank of unit delays. The input layer consists of the feedback nodes together with the source nodes, and it is through the source nodes that the network is connected to the external environment. The order of the model is determined by the number of unit delays used to feed the output of the hidden layer back to the input layer.
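A correspondingly minimal sketch follows; the concrete update x(n + 1) = tanh(Wa x(n) + Wb u(n)) with a linear read-out is one common choice, and all sizes here are assumptions for illustration:

```python
import numpy as np

# State-space model sketch: the state x is the hidden-layer output, carried
# between calls through what corresponds to the bank of unit delays.
#   x(n + 1) = tanh(Wa x(n) + Wb u(n))
#   y(n + 1) = C x(n + 1)
# The order of the model equals the number of unit delays, i.e. len(x).

state_dim, in_dim = 4, 1
rng = np.random.default_rng(1)
Wa = rng.normal(scale=0.5, size=(state_dim, state_dim))
Wb = rng.normal(scale=0.5, size=(state_dim, in_dim))
C  = rng.normal(scale=0.5, size=(1, state_dim))

x = np.zeros(state_dim)

def step(u_n):
    """Feed u(n) through the network and return y(n + 1)."""
    global x
    x = np.tanh(Wa @ x + Wb @ u_n)   # feedback nodes + source nodes
    return C @ x

for n in range(3):
    print(step(np.array([np.sin(0.3 * n)])))
```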

3. Recurrent Multilayer Perceptron

A recurrent multilayer perceptron (RMLP) has one or more hidden layers, basically for the same reasons that static multilayer perceptrons with multiple hidden layers are often more effective and parsimonious than those using a single hidden layer. Each computation layer of an RMLP has feedback around it.

4. Second-Order Network

Second-order recurrent networks are built from second-order neurons, whose net input combines state and input signals multiplicatively rather than as a simple weighted sum.
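Concretely, a second-order neuron k forms its net input from products of a state signal and an input signal; a small sketch with arbitrary sizes and weights:

```python
import numpy as np

# Second-order state update sketch: the weight w[k, i, j] couples state x_i(n)
# with input u_j(n), so neuron k computes
#   x_k(n + 1) = sigmoid(b_k + sum_{i, j} w[k, i, j] * x_i(n) * u_j(n)).
# Sizes and weights are arbitrary illustrations.

rng = np.random.default_rng(2)
K, J = 3, 2                           # state neurons and input lines
w = rng.normal(scale=0.5, size=(K, K, J))
b = np.zeros(K)
x = rng.uniform(size=K)               # initial state

def step(u_n):
    global x
    s = b + np.einsum('kij,i,j->k', w, x, u_n)   # multiplicative (second-order) terms
    x = 1.0 / (1.0 + np.exp(-s))
    return x

print(step(np.array([1.0, 0.0])))
```

Because each update maps a (state, input) pair to a next state, this form is a natural fit for representing the state transitions of finite-state automata.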

Nonlinear autoregressive with exogenous inputs (NARX) model:

Consider a recurrent network with a single input and a single output, described by the state-space model above. We wish to modify it into an input-output model that is an equivalent representation of the recurrent network. The output y(n + q) is expressible in terms of the state x(n) and the vector of inputs u_q(n). Hence there exists a nonlinear mapping F: R^(2q) → R whereby the present value of the output y(n + 1) is uniquely defined in terms of its past values y(n), ..., y(n − q + 1) and the present and past values of the input u(n), ..., u(n − q + 1). The input-output recurrent model sketched earlier realizes exactly this mapping.

Computational power of recurrent networks:

Recurrent networks, exemplified by the state-space model and the NARX model, have an inherent ability to simulate finite-state automata. The computational power of a recurrent network is embodied in two main theorems:

Theorem I- All Turing machines may be simulated by fully connected recurrent networks built on neurons with sigmoid activation functions.

Theorem II- NARX networks with one layer of hidden neurons with bounded, one-sided saturated activation functions and a linear output neuron can simulate fully connected recurrent networks with bounded, one-sided saturated activation functions, except for a linear slowdown.

Learning algorithms:

A recurrent network can be trained in one of two modes, which are as follows-

I. Epochwise training- For a given epoch, the recurrent network starts running from some initial state and runs until it reaches a new state, at which point the training is stopped and the network is reset to an initial state for the next epoch. The initial state need not be the same for each epoch of training.

II. Continuous training- This mode is suitable for situations where no reset states are available and/or on-line learning is required. Its distinguishing feature is that the network learns while signal processing is being performed by the network. A toy contrast of the two modes is sketched below.
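In the following sketch, the one-neuron "network" and the crude depth-1 gradient update are illustrative placeholders, not the article's algorithm; only the reset-versus-no-reset structure of the two loops is the point:

```python
import numpy as np

# Toy contrast of the two training modes. The one-neuron "network" and the
# crude depth-1 gradient update are illustrative placeholders only.

class TinyRNN:
    def __init__(self):
        self.w_in, self.w_rec, self.x = 0.5, 0.1, 0.0
    def reset_state(self, x0=0.0):
        self.x = x0
    def step(self, u_n):
        self.x = np.tanh(self.w_in * u_n + self.w_rec * self.x)
        return self.x
    def update(self, err, u_n, lr=0.05):
        # instantaneous (depth-1) gradient step on the input weight
        self.w_in += lr * err * (1.0 - self.x ** 2) * u_n

rng = np.random.default_rng(4)
u = rng.normal(size=200)
d = 0.5 * u                          # toy target signal

net = TinyRNN()

# I. Epochwise training: reset the state at the start of each epoch.
for epoch in range(3):
    net.reset_state()
    for u_n, d_n in zip(u, d):
        net.update(d_n - net.step(u_n), u_n)

# II. Continuous training: no resets; learning proceeds while the signal flows.
for u_n, d_n in zip(u, d):
    net.update(d_n - net.step(u_n), u_n)
print("trained w_in:", net.w_in)
```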

Back propagation through time:

The back-propagation-through-time (BPTT) algorithm is an extension of the standard back-propagation algorithm for training a recurrent network. It is derived by unfolding the temporal operation of the network into a layered feedforward network whose topology grows by one layer at every time step.
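The following numpy sketch makes the unfolding explicit for the simple state-space model above: the forward pass stores every state of the unfolded network, and the backward pass walks through the time steps as if they were layers. The sizes, dummy data, and squared-error cost are illustrative assumptions:

```python
import numpy as np

# Sketch of epochwise BPTT for the simple state-space model
#   x(n + 1) = tanh(Wa x(n) + Wb u(n)),  y(n + 1) = C x(n + 1).
# Forward pass stores every state of the unfolded network; backward pass
# treats each time step as one layer of a feedforward network.

rng = np.random.default_rng(5)
S, T = 4, 10                          # state size and epoch length
Wa = rng.normal(scale=0.3, size=(S, S))
Wb = rng.normal(scale=0.3, size=(S, 1))
C  = rng.normal(scale=0.3, size=(1, S))

u = rng.normal(size=(T, 1))           # dummy input sequence
d = rng.normal(size=(T, 1))           # dummy desired responses

# Forward pass through the unfolded network.
xs, errs = [np.zeros(S)], []
for n in range(T):
    xs.append(np.tanh(Wa @ xs[-1] + Wb @ u[n]))
    errs.append(C @ xs[-1] - d[n])    # output error at step n

# Backward pass through time, one "layer" per step.
dWa, dWb, dC = np.zeros_like(Wa), np.zeros_like(Wb), np.zeros_like(C)
dx = np.zeros(S)                      # gradient arriving from the future
for n in reversed(range(T)):
    dC += np.outer(errs[n], xs[n + 1])
    dx = dx + C.T @ errs[n]           # local error plus future error
    ds = dx * (1.0 - xs[n + 1] ** 2)  # back through tanh
    dWa += np.outer(ds, xs[n])
    dWb += np.outer(ds, u[n])
    dx = Wa.T @ ds                    # one step further back in time

for W, dW in ((Wa, dWa), (Wb, dWb), (C, dC)):
    W -= 0.01 * dW                    # one gradient-descent step
```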

Real-time recurrent learning:

Another learning algorithm is real-time recurrent learning (RTRL). Here, adjustments are made to the synaptic weights of a fully connected recurrent network in real time, that is, while the network continues to perform its signal-processing function.
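A compact RTRL sketch for a small fully connected network x(n + 1) = tanh(W [x(n); u(n)]) follows. The sensitivities are carried forward in time, so the weights can be updated while the network runs; the sizes, toy target, and learning rate are assumptions for the sketch:

```python
import numpy as np

# RTRL sketch for a small fully connected recurrent network
#   x(n + 1) = tanh(W [x(n); u(n)]).
# The sensitivities P[k, i, j] = d x_k / d W[i, j] are propagated forward in
# time, so the weights can be adjusted while the network keeps processing.

rng = np.random.default_rng(6)
S, I = 3, 1                           # state and input dimensions
W = rng.normal(scale=0.3, size=(S, S + I))
x = np.zeros(S)
P = np.zeros((S, S, S + I))           # sensitivities start at zero

lr = 0.05
for n in range(100):
    u_n = np.array([np.sin(0.2 * n)])
    d_n = np.sin(0.2 * (n + 1))       # toy target for neuron 0

    z = np.concatenate([x, u_n])
    x_new = np.tanh(W @ z)
    phi = 1.0 - x_new ** 2            # tanh'(net input)

    # Forward sensitivity update: explicit term plus recurrent propagation.
    P_new = np.empty_like(P)
    for i in range(S):
        for j in range(S + I):
            rec = W[:, :S] @ P[:, i, j]   # influence through the recurrent state
            rec[i] += z[j]                # explicit dependence of neuron i on W[i, j]
            P_new[:, i, j] = phi * rec
    P, x = P_new, x_new

    # Real-time weight update from the instantaneous error at neuron 0.
    W += lr * (d_n - x[0]) * P[0]     # P[0] = d x_0 / d W
```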

Kalman filter:

Features of Kalman filters include:

i) The theory is formulated in terms of state-space concepts, providing efficient utilization of the information contained in the input data.

ii) Estimation of the state is computed recursively; that is, each updated estimate of the state is computed from the previous estimate and the data currently available, so only the previous estimate requires storage.

Decoupled Kalman filter-

Our primary interest in the Kalman filter is to exploit its unique properties to perform supervised training of a recurrent network. The important issue is how to proceed with this approach in a computationally feasible manner without compromising the application of Kalman filter theory. The answer lies in using a decoupled form of the extended Kalman filter, in which the computational complexity is tailored to the requirements of a particular application and to the available computational resources.
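As a rough illustration of the decoupling idea, here is a DEKF-style update for a tiny static network with one weight group per layer. The article's use case is recurrent networks, where the derivatives would come from BPTT or RTRL, so treat everything below (the grouping, noise levels, and data) as assumptions made for the sketch:

```python
import numpy as np

# DEKF-style training sketch on a tiny static net y = w2 . tanh(W1 u), with one
# weight group per layer. Each group keeps its own covariance matrix, so the
# cost scales with the group sizes instead of the full weight vector.

rng = np.random.default_rng(7)
in_dim, hid = 2, 3
W1 = rng.normal(scale=0.3, size=(hid, in_dim))
w2 = rng.normal(scale=0.3, size=hid)

P = [np.eye(hid * in_dim), np.eye(hid)]   # one covariance matrix per group
R, Q = 0.1, 1e-4                          # measurement / process noise (assumed)

for n in range(200):
    u = rng.normal(size=in_dim)
    d = np.sin(u.sum())                   # toy target

    h = np.tanh(W1 @ u)
    err = d - w2 @ h

    # Per-group derivatives of the output w.r.t. the group's weights.
    H1 = (w2 * (1.0 - h ** 2))[:, None] * u[None, :]
    Hs = [H1.ravel(), h]

    # Global scaling factor shared by all groups (scalar output here).
    gamma = 1.0 / (R + sum(Hi @ Pk @ Hi for Hi, Pk in zip(Hs, P)))

    for k, (Hi, Wref) in enumerate(zip(Hs, (W1, w2))):
        K = P[k] @ Hi * gamma                  # Kalman gain for group k
        Wref += (K * err).reshape(Wref.shape)  # innovation-driven weight update
        P[k] = P[k] - np.outer(K, Hi @ P[k]) + Q * np.eye(Hi.size)
```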

Vanishing gradients in recurrent networks:

The vanishing-gradients problem pertains to training the network to produce a desired response at the current time that depends on input data in the distant past. It makes the learning of long-term dependencies by gradient-based training algorithms difficult, if not virtually impossible, in certain cases. To latch information reliably, a recurrent network must be able to store state information for an arbitrary duration and do so in the presence of noise; information latching refers to the long-term storage of definite bits of information in the state variables of the recurrent network.
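A short numerical illustration of the effect: the Jacobian of the state at time n with respect to the state at time 0 is a product of per-step Jacobians, and with contractive recurrent weights and sigmoidal nonlinearities its norm decays exponentially. The random network below is purely illustrative:

```python
import numpy as np

# Numerical illustration of vanishing gradients: the Jacobian d x(n) / d x(0)
# is a product of per-step Jacobians diag(1 - x(k)^2) Wa, so with contractive
# recurrent weights its norm shrinks exponentially with n.

rng = np.random.default_rng(8)
S = 10
Wa = rng.normal(scale=0.3 / np.sqrt(S), size=(S, S))  # contractive by construction

x = rng.normal(size=S)
J = np.eye(S)                            # d x(n) / d x(0), starts as identity
for n in range(1, 31):
    x = np.tanh(Wa @ x)
    J = np.diag(1.0 - x ** 2) @ Wa @ J   # chain rule: one more step back
    if n % 10 == 0:
        print(n, np.linalg.norm(J, 2))   # spectral norm decays toward zero
```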

System identification:

System identification is the experimental approach to modeling a process or a plant whose parameters are unknown. It involves the following steps: experimental planning, the selection of a model structure, parameter estimation, and model validation. The procedure of system identification is iterative in nature.
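A toy end-to-end run of the procedure (the plant, the chosen ARX model structure, and all signals are illustrative assumptions): excite the unknown plant, estimate the parameters by least squares, then check the one-step-ahead error:

```python
import numpy as np

# Toy system-identification loop: excite an "unknown" plant, fit the ARX model
# structure y(n) = a*y(n-1) + b*u(n-1) by least squares, then validate.

rng = np.random.default_rng(9)

def plant(y_prev, u_prev):                    # unknown to the identifier
    return 0.7 * y_prev + 0.3 * u_prev + 0.01 * rng.normal()

# Experimental planning: a persistently exciting input signal.
u = rng.normal(size=300)
y = np.zeros(301)
for n in range(300):
    y[n + 1] = plant(y[n], u[n])

# Parameter estimation for the chosen model structure.
X = np.column_stack([y[:-1], u])              # regressors [y(n-1), u(n-1)]
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print("estimated (a, b):", theta)             # should approach (0.7, 0.3)

# Model validation: one-step-ahead prediction error.
print("rms error:", np.sqrt(np.mean((X @ theta - y[1:]) ** 2)))
```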

Model reference adaptive control:

A control strategy well suited to the use of neural networks is model reference adaptive control (MRAC), where the implicit assumption is that the designer is sufficiently familiar with the system under consideration. In such a system, adaptivity is used to account for the fact that the dynamics of the plant are unknown. The controller and the plant form a closed-loop feedback system, thereby constituting an externally recurrent network.

Related Questions and Answers:

Q1. What are the architectural layouts of a recurrent network?

Ans- The architectural layouts of a recurrent network are as follows- the Input-Output Recurrent Model, the State-Space Model, the Recurrent Multilayer Perceptron, and the Second-Order Network.

Q2. Define the state of a dynamical system.

Ans- The state of a dynamical system is formally defined as a set of quantities that summarizes all the information about the past behavior of the system that is needed to uniquely describe its future behavior, except for the purely external effects arising from the applied input (excitation).

Q3. What are the two modes of training a recurrent network?

Ans- The two modes of training a recurrent network are epochwise training and continuous training.

Q4. Write the uses of recurrent networks.

Ans- Two functional uses of recurrent networks are-

i)Associative memories

ii) Input-output mapping networks


