An associative memory stores a set of patterns as memories. When the associative memory is presented with a key pattern, it responds by producing the stored pattern that most closely resembles or relates to the key pattern. Recall is thus through association with the key pattern, using the memorized information. Memories of this type are also called content-addressable memories (CAM). A CAM can be viewed as associating data with addresses; it can also be viewed as a data correlator, in which the input data are correlated with the data stored in the CAM. An associative memory performs a parallel search within a stored data file. Two types of associative memory can be distinguished: autoassociative memory and heteroassociative memory.
Autoassociative memory network:
Here, the training input and the target output are the same. The determination of the weights is called storing the vectors. A stored vector can be retrieved from a distorted input if the input is sufficiently similar to it. The network's performance is based on its ability to reproduce a stored pattern from a noisy input. The weights on the diagonal can be set to zero.
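Storing a single bipolar vector with the Hebb rule amounts to an outer product of the pattern with itself, with the diagonal then zeroed as noted above. A minimal NumPy sketch (the 4-component pattern is an illustrative assumption):

```python
import numpy as np

# Store one bipolar pattern s with the Hebb rule: W = s^T s.
s = np.array([1, 1, -1, -1])   # example training pattern (assumed)
W = np.outer(s, s)             # Hebbian outer-product weight matrix
np.fill_diagonal(W, 0)         # set the diagonal weights to zero
print(W)
```

To store several patterns, the outer products of each pattern with itself are summed into the same weight matrix.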
An autoassociative memory network is used to determine whether an input vector is “known” or “unknown”. The network recognizes a known vector by producing a pattern of activation on its output units that is the same as one of the vectors stored in it.
The application algorithm is as follows:
Step 0: Initialize the weights.
Step 1: Perform steps 2 to 4 for each testing input vector.
Step 2: Set the activations of the input units equal to the input vector.
Step 3: Calculate the net input to each output unit, yinj = ∑xi.wij
Step 4: Apply activation function,
yj = f(yinj) = 1, if yinj > 0; -1, if yinj <= 0
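Steps 2 to 4 above can be sketched in NumPy as a single recall function; the stored pattern and the noisy test vector are illustrative assumptions:

```python
import numpy as np

def recall(W, x):
    """Steps 2-4: net input y_in = x.W, then bipolar activation."""
    y_in = x @ W                       # step 3: net input to output units
    return np.where(y_in > 0, 1, -1)   # step 4: f(y_in)=1 if >0 else -1

# Store one pattern via the Hebb rule with a zeroed diagonal.
s = np.array([1, 1, -1, -1])
W = np.outer(s, s)
np.fill_diagonal(W, 0)

print(recall(W, s))                    # known vector is reproduced exactly
noisy = np.array([1, -1, -1, -1])      # one component flipped
print(recall(W, noisy))                # the stored pattern is recovered
```

With one flipped component the net input still points toward the stored pattern, so the noisy input is corrected; a sufficiently dissimilar input would not be recognized.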
Heteroassociative memory network:
Here, the training input and the target output are different. The weights are determined in such a way that the net stores a set of pattern associations. Each association is a pair of training input and target output vectors (s(p), t(p)), with p = 1, …, P. Each vector s(p) has n components and each vector t(p) has m components. The weights are determined using either the Hebb rule or the delta rule. A heteroassociative memory is not an iterative memory network.
Step 0: Initialize the weights using the Hebb rule or the delta rule.
Step 1: Perform steps 2 to 4 for each input vector.
Step 2: Set the activations of the input units equal to the current input vector, xi.
Step 3: Calculate the net input to the output units,
yinj = ∑xi.wij
Step 4: Apply the activation function,
yj = f(yinj) = 1, if yinj > 0; 0, if yinj = 0; -1, if yinj < 0
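A heteroassociative net trained with the Hebb rule forms the weight matrix as the sum of outer products s(p)^T t(p); recall then follows steps 2 to 4 with the three-valued activation above. A minimal sketch with assumed patterns (n = 4 inputs, m = 2 outputs, P = 2 associations):

```python
import numpy as np

# Illustrative training pairs (s(p), t(p)); values are assumptions.
S = np.array([[1, -1, -1, -1],
              [1,  1, -1, -1]])   # input vectors s(p), n = 4
T = np.array([[1, -1],
              [1,  1]])           # target vectors t(p), m = 2

# Hebb rule: W = sum over p of outer(s(p), t(p)), i.e. S^T T.
W = S.T @ T                       # n x m weight matrix

def recall(x):
    y_in = x @ W                  # step 3: net input to the output units
    # step 4: f(y_in) = 1 if > 0, 0 if = 0, -1 if < 0
    return np.where(y_in > 0, 1, np.where(y_in < 0, -1, 0))

print(recall(S[0]))               # reproduces target t(1)
print(recall(S[1]))               # reproduces target t(2)
```

Each training input maps to its paired target rather than to itself, which is what distinguishes this net from the autoassociative case.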