Distributed associative memory for use in scene analysis

J Austin* and T J Stonham

A distributed associative memory system which is ideal for scene analysis is described. Recall of associated patterns using incomplete originals is made possible by the use of a distributed storage mechanism and a novel recall procedure. The memory is shown to store associations between patterns more efficiently than a conventional file store. The paper describes the memory structure, the recall process and its storage abilities, as well as an example of its implementation in hardware.

Keywords: scene analysis, pattern recognition, associative memory

The ability to memorize two patterns so that the presentation of one pattern elicits the recall of the other has applications in many areas. The memory described in this paper was developed for use in scene analysis, where it was required to recall a complete image of an object that may be occluded within a scene containing many other objects. Parallel associative memories are ideal in this application. They show distinct speed advantages over conventional memories when implemented in dedicated hardware, as well as providing a 'fail-soft' ability due to the inherent distributed storage of data. Previous approaches to these devices have suffered from an inability to accurately recover data at high speed. Furthermore, accurate control of storage capacity has been a major problem. The device described here overcomes these difficulties by using a K-tuple preprocessor and a novel recall strategy. This paper begins by describing the basic associative memory structure, followed by a discussion of the K-tuple preprocessing operations, storage efficiency and implementation. It is assumed that an image of a scene may be obtained from an imaging device, digitized, and then thresholded to a binary image. This image is then represented as a linear array or pattern of binary digits that is fed to the associative memory. The aim of the memory device is to recognize an object represented as a pattern of bits in an input pattern, and recall a complete description of that object. The input pattern may contain noise due to any number of factors, including occluding objects.

Department of Electronic and Electrical Engineering, Brunel University, Uxbridge, Middlesex UB8 3PH, UK
*Department of Computer Science, York University, Heslington, York YO1 5DD, UK. All communications to J Austin

0262-8856/87 $03.00 © 1987 Butterworth & Co. (Publishers) Ltd

image and vision computing vol 5 no 4 november 1987

MEMORY STRUCTURE AND TEACHING AN ASSOCIATION

The memory structure is shown schematically in Figure 1. The two patterns to be associated are the key pattern a and the teach pattern b. In the example in Figure 1, a and b consist of arrays of eight and six binary elements respectively. By the process of teaching, pattern a and pattern b are associated such that during recall pattern a can be applied to the memory causing the output of pattern b. For the purpose of explanation, the horizontal and vertical lines can be seen as wires which are set to logical 1 when connected to a pattern element at 1. Similarly, if the pattern element is at 0, then the wire is at logical 0. During teaching (association of a with b) the key and teach patterns are applied. These set up a particular set of activated wires. Modification of the memory takes place by forming binary links between the point of intersection of two active wires. Figure 1 illustrates the binary links formed by the association between two patterns. This may be stated formally as

M_ij = t_i · k_j

for all i and j, where i is the maximum dimension of t and j is the maximum dimension of k.

Figure 1. Memory matrix showing association between the key array a and the teach array b, causing the formation of the links shown. Array c is the recall array

M is the binary matrix representing the links in Figure 1; t and k are the two binary patterns to be associated, i.e. the teach and key patterns respectively.
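The teaching step (set a link wherever an active teach wire crosses an active key wire) can be sketched in NumPy. This is an illustrative model of the rule above, not the authors' implementation; the pattern values are invented.

```python
import numpy as np

def teach(M, key, cls):
    """Form binary links M_ij = 1 wherever teach bit i and key bit j are both 1.

    M is the (teach elements x key elements) binary link matrix; links
    accumulate over successive associations via logical OR.
    """
    M |= np.outer(cls, key)  # outer product of two 0/1 vectors, OR-ed into M
    return M

# Example: associate an 8-element key with a 6-element teach pattern,
# as in Figure 1
key = np.array([1, 0, 1, 0, 0, 1, 0, 0], dtype=np.uint8)
cls = np.array([0, 1, 0, 0, 1, 0], dtype=np.uint8)
M = np.zeros((6, 8), dtype=np.uint8)
teach(M, key, cls)
```

Repeated calls to `teach` with further pattern pairs superimpose more links onto the same matrix, which is what makes the storage distributed.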

RECALL OF ASSOCIATED PATTERNS

Figure 2 illustrates a memory taught on the patterns given in the last example. In addition, the memory has been taught on some more patterns. Recall of an associated pattern takes place by first presenting the key pattern on array a, as shown in Figure 2. This pattern activates the horizontal wires where a logical 1 occurs in array a. When a horizontal wire set at logical 1 meets a link formed during teaching, it passes a logical 1 to the associated vertical wire. The output of each vertical wire is the summation of all these occurrences and forms an element of the response pattern, as shown in array c in Figure 2. This may be expressed as

r_j = Σ_i M_ji · k_i

for all j, where k and M are as defined above and r is the recalled pattern. The response pattern now contains a set of values representing the recalled associated pattern. This pattern must be thresholded in some way to recover, as a binary pattern, the original association taught.
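The recall sums can be sketched the same way (again an illustrative model with invented patterns): each output element counts the links hit by the active key wires.

```python
import numpy as np

# Binary link matrix from a previous association (teach x key), plus the key
key = np.array([1, 0, 1, 0, 0, 1, 0, 0], dtype=np.uint8)
cls = np.array([0, 1, 0, 0, 1, 0], dtype=np.uint8)
M = np.zeros((6, 8), dtype=np.uint8)
M |= np.outer(cls, key)

# Recall: r_j = sum_i M_ji * k_i -- each output wire sums the links hit
# by the active key wires
r = M @ key
```

Here the elements of `r` that were taught at logical 1 respond with the full count of active key bits (3), and all other elements respond with 0, so thresholding recovers the taught pattern.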

THRESHOLDING THE RESPONSE PATTERN TO OBTAIN COMPLETE RECALL

A fundamental problem of this type of memory is how the output pattern may be thresholded to recover the



Figure 2. Memory matrix taught on a selection of patterns including those in Figure 1. Arrays c, d show the recall values when the memory is tested on the key pattern shown. Array c is the recalled pattern before thresholding; array d is the recalled pattern after thresholding at 2. The array e is the recalled pattern after testing on the key pattern after changing element i to logical 1. Array f contains the recalled pattern after thresholding at 3

original pattern. Many approaches have been suggested. Wilshaw1 uses the number of logical 1s in the key pattern as a threshold in his associative memory. In the example above the recalled pattern is shown in array c in Figure 2. To obtain the correct recall pattern a threshold of 2 would be needed (the original associated pattern is shown in array b in Figure 1). Wilshaw's method predicts the threshold value successfully. The method fails, however, if the key pattern used in recall differs from the key pattern used in teaching, e.g. if the key pattern is occluded by another pattern. This may be illustrated by altering the key pattern given in the above recall example. Considering Figure 2, element i in the key may be changed to logical 1. Upon testing the memory, this change produces the recall pattern shown in array e. The recalled pattern now needs thresholding. According to Wilshaw's method the threshold should be 3. However, if this is used the resultant recall pattern, shown in array f, is incorrect (compare the correct pattern in array d with this). This problem has also been observed by Gardener-Medwin2 in a paper investigating biological implementations of associative memories. If Wilshaw's thresholding process were used in the scene analysis system, it would not be possible to recall a complete description of occluded objects.

Figure 3. Two memory matrix units in combination. The first stage consists of a matrix similar to that in Figures 1 and 2, the inputs being arrays a and b (key and class respectively); the output is array c. The second stage of the associative memory has as its inputs arrays e and d and its output array f. (Note that this two-stage memory is shown having been taught on different patterns than the example of Figure 2)

An approach taken by Lansner and Ekeberg3 on a similar binary associative memory used an iterative process to recall the pattern. This resulted in a good recall ability, but it was slow because it needed a number of test phases to obtain total recall. To allow accurate recall of associated patterns from incomplete examples the following method was used. First, the key pattern is associated with a specific teach pattern, called the 'class' pattern. The characteristics of the class pattern are set so that it may be recalled with ease. Specifically, the class pattern has only N bits set to 1, randomly placed; i.e. if N = 3, 0010011 or 0100101 would be valid teach patterns. The N-bit class pattern allows accurate recovery of the class during recall on a key pattern that contains noise by using a new threshold procedure. This process consists of selecting the N highest valued elements in the recall pattern and setting them to 1; all other elements are set to zero. In the example given in Figure 2, the memory was taught on a pattern that contained two elements at logical 1 and thus N = 2 in this case. Array e shows recall using a corrupted key pattern. If the new threshold procedure is used, selecting the two highest elements and setting only these to logical 1 results in a perfect recall, where previously recall failed. The output from the memory can be regarded as an identifier or 'class' for the input patterns. This threshold process is simple to apply, and needs no complex numerical calculation. Furthermore it allows the storage capacity of the memory to be accurately set, as will be shown below. We are now able to associate an input key pattern with an internal class pattern. In normal practice we need to be able to associate two particular patterns,


the key pattern and the teach pattern, as in the original example in Figure 1. This may be achieved by using a two-stage associative memory. The first stage of the memory associates the key pattern and the class pattern, as described above, where the key pattern can cause recollection of the class pattern. The second stage of the associative memory associates the class pattern with the teach pattern. This second stage is in the same form as the first. Both stages are shown in Figure 3. The first stage is made up of arrays a, b and c, where a is the key array input, b is the class array input and c is the class array output. The second stage, to the right of the first, is made up of arrays d, e and f. Array d takes a class pattern (from array c after thresholding) and array e takes the teach pattern. The output array f contains the teach pattern after recall. The operation of the second stage of the associative memory is exactly the same as the first. During teaching the class pattern is placed in array d, and the teach pattern is placed in array e. An association is formed between these two by setting links as before. Recall is accomplished by placing the class pattern (derived from a recall in the first memory) into array d. The horizontal wires are activated as before, and the total number of active wires that intersect with a link is summed for each column. This is shown in Figure 3. The process of recall forms the values shown in array f. These values then need thresholding again to produce the original pattern. In this memory the threshold may be as described by Wilshaw1, i.e. the sum of active elements in the class pattern, which is 2 in this case. Application of this threshold results in the pattern in


array g. There is no need to use the new threshold procedure in this case, because the class pattern, placed in array d, is assumed to be free from errors. To summarize, the key pattern and the teach pattern are to be associated. To make thresholding a trivial process the key pattern is first associated with a class pattern containing N elements at logical 1. The class pattern is then further associated with the original input teach pattern, and thereby an association of the teach pattern and the key pattern is established. The recalled class pattern is thresholded by selecting the N highest responding elements; the teach pattern is thresholded at exactly N. The value of N is selected for optimal storage and is discussed below. Using the two-stage memory and threshold process described above, this memory may be used to recall complete descriptions of occluded objects. This is accomplished by first associating an image of a complete object with itself, so that later presentation of a possibly incomplete view of that object will allow recall of a complete description. The thresholding method makes recall particularly insensitive to other objects in the input image.
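The two-stage scheme, with the N-highest threshold on the class response and the exact-N threshold on the teach response, can be sketched as follows. The sizes and random patterns below are illustrative only, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_n(v, n):
    """N-highest threshold: set the n largest responses to 1, the rest to 0."""
    out = np.zeros_like(v, dtype=np.uint8)
    out[np.argsort(v)[-n:]] = 1  # indices of the n largest responses
    return out

# Two-stage memory: key -> class (stage 1), class -> teach (stage 2)
R, H, N = 64, 16, 3              # key size, class size, class bits set to 1
key = (rng.random(R) < 0.2).astype(np.uint8)
teach = (rng.random(R) < 0.2).astype(np.uint8)
cls = np.zeros(H, dtype=np.uint8)
cls[rng.choice(H, N, replace=False)] = 1   # N-bit class pattern

M1 = np.outer(cls, key).astype(np.uint8)   # stage 1 links (class x key)
M2 = np.outer(teach, cls).astype(np.uint8) # stage 2 links (teach x class)

# Recall: stage 1 thresholded by selecting the N highest responses,
# stage 2 thresholded at exactly N (the Wilshaw threshold)
cls_out = top_n(M1 @ key, N)
teach_out = (M2 @ cls_out >= N).astype(np.uint8)
```

Because the class pattern handed to stage 2 is already error-free, the simple fixed threshold of N suffices there; only stage 1 needs the N-highest rule.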

CONTROL OF STORAGE IN THE ASSOCIATIVE MEMORY

In a single-stage associative memory, where the key pattern is directly associated with the teach pattern, the memory requirement would be PQ bits, where P and Q are the number of elements in the key and teach patterns respectively. This assumes it takes one bit of storage to record the presence of one link (see 'Hardware realization' below). Thus, to associate two patterns of 1024 pixels square a memory of 1024^4 bit would be needed, which is of the order of 10^12 bit. This is an impossible memory requirement, but it may be dramatically reduced in the two-stage memory described in this paper. For the two-stage memory, the number of bits required to store all the links is given by

Number of bits required = QC + PC

where P and Q are as above, and C is the number of elements in the intermediate class pattern. It will be evident that the class pattern may be very small, and thus the number of bits needed to associate two patterns may be similarly very small. Furthermore, the use of the class pattern enables the size and storage ability of the memory to be easily altered, without altering the resolution of the images to be associated. The size of the class pattern obviously affects the ability of the memory to recall a pattern reliably. Its effect is discussed in detail in the next section.
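A quick check of the arithmetic above, using the class size of 96 bits taken from the example later in the paper:

```python
# Storage comparison for associating two 1024 x 1024 binary images
P = Q = 1024 * 1024          # elements in the key and teach patterns
C = 96                       # elements in the intermediate class pattern

direct = P * Q               # single-stage: one link per key/teach cross-point
two_stage = Q * C + P * C    # QC links in stage 1, PC links in stage 2

print(direct)     # ~10^12 bits
print(two_stage)  # ~2 x 10^8 bits
```

The two-stage layout cuts the link storage by more than three orders of magnitude for this image size.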

STORAGE CAPACITY OF THE MEMORY

The two-stage associative memory has a finite storage ability, but unlike a conventional listing memory, recall and storage do not fail suddenly as soon as its rated capacity is reached; instead they degrade gradually as the rated capacity is approached. The process is probabilistic, not deterministic, i.e. there is a probability of recalling each associated pattern perfectly. When designing a memory to store a given number of associations, it is necessary to give the required recall reliability in the form of a probability of failure, P. If a memory is set up to store k associations, then P will only be reached after k patterns have been associated. The following equation4 gives the value of P after T patterns have been associated by the memory:

P = 1 −

(1 − [1 − (1 − ZN/RH)^T]^Z)^(H−N)    (1)

where T is the number of patterns taught, R is the number of elements in the key pattern, Z is the number of bits set to 1 in the key pattern on every association, H is the number of elements in the class pattern, N is the number of elements set to 1 in the class pattern on every association, and P is the probability of getting any error on recall. The equation relates to the first stage of the associative memory, which recalls the class from the key pattern. The definition of a recall error in this case is that the memory fails to recover the class pattern perfectly. The derivation of Equation (1) is explained in Appendix 1. The following equation gives the number of patterns that may be associated by the memory before the probability of a single-bit error between the taught and recalled pattern becomes maximum. It is most useful since it does not directly involve probabilities.

T = ln(1 − (1/2)^(1/Z)) / ln(1 − ZN/RH)    (2)

Both equations relate to associations between patterns that have a random distribution of bits set to 1. As with Equation (1), Equation (2) also relates to the first stage of the associative memory. Both may also be used for the second associative memory by substituting references to 'class' with 'teach' and references to 'key' with 'class' in the parameter list given above. Equation (2) is derived in Appendix 2. Using Equation (2), a memory may be easily designed to give a particular storage capacity. Figure 4 illustrates the storage properties of the first memory when different numbers of bits are set to 1 in the class pattern (N). The other memory parameters are:

Key pattern size (R) = 896
Bits set to 1 in the key (Z) = 22
Class pattern size (H) = 96 bit

Two plots are shown, one giving the predicted storage capacity of the memory using Equation (2) and the other derived from a simulation of the associative memory where each link was 1 bit of storage and each pattern was randomly generated. In this simulation, associations between unique, randomly generated patterns were taught into the memory until the recall resulted in a pattern with a single-bit error.

Figure 4. Storage properties of the first memory when different numbers of bits are set to 1 in the class pattern, showing predicted and simulated storage capacities

The graph shows how the storage capacity drops off as the number of class points set to 1 is increased. Optimal storage occurs when the number of class points set to 1 is small. These results may be compared to the storage ability of a conventional file system, although no account is made of recall reliability in such a system. In this example a storage matrix of 96 × 896 links is used, which is equivalent to 86 016 bit of storage. For each pair of associated patterns a conventional storage system would require 96 + 896 bit of storage (i.e. 96 for the class, 896 for the key). Using the equivalent storage as in the associative memory, a conventional memory would store 86 associations. However, considering Figure 4, the associative memory is able to store many times this amount. It may be pointed out that to achieve such good storage ability in the associative memory the example uses a very sparse key pattern, i.e. only 22 bits set to 1 in 896 pixels. If the key pattern contains a high percentage of bits set to 1 then the storage capacity is very low. However, the preprocessor described below is used so that a sparse key pattern is always presented to the associative memory, regardless of the number of bits set to 1 in the original image. The second associative memory, which recalls the teach pattern from the class pattern, has different properties to those of the first memory. Simply, maximum storage is found for a particular value of N (N is the number of points set to 1 in the class). For values of N greater or less than this optimal value fewer associations can be stored. Figure 5 illustrates the storage ability of the memory for the optimal value of N, in this case N = 9. The size of the teach image is 896 elements, with 22 bits randomly set to 1 in each association. The value of N holds over a large range of teach pattern sizes and class array sizes.
Again a plot is given for the predicted values from Equation (2) and simulated values. The other parameters are set to H = 896; N = 22; Z = 9; variable R.

Figure 5. Storage ability of the memory, against class array size, for the optimal value of N (N = 9 in this case)

Figure 6. Ability of the associative memory to recall a noisy version of an original pattern, plotted against the number of patterns taught

Before discussing the relationship between the storage abilities of the first and second stages of the associative memory, it is worth looking at the ability of the associative memory to recall a noisy version of an original pattern. This is shown in Figure 6. The parameters for this example are R = 896; Z = 125 (average);

H = 32; N = 4

It is important to note that K-tuple preprocessing was performed on the images in this test, with K = 4. A description of preprocessing is given below. For this graph the memory was taught on an initial random pattern. The ability of the memory to recall the same pattern after subsequent patterns had been taught into the memory was then assessed. For each data point in the graph, the percentage of noise that


needed to be added to the initial pattern before recall failed is given after T associations have been taught into the memory. For the parameters of the memory given above, the predicted storage ability before failure is 318. This is in agreement with the simulation, which achieved a storage ability of 297 before failure (the maximum number of patterns that can be taught before failure). However, the graph shows that at the memory's rated capacity its ability to recall from noisy patterns is low. This must be taken into account when the memory is used in scene analysis. To achieve good noise immunity the memory must be run below its rated storage capacity, as given by Equation (2).

The relationship between the storage abilities of the first and second stages of the associative memory may now be considered. It is evident by comparing Figures 4 and 5 that the storage ability is far worse in the second memory than in the first, for the same size of class array. Thus, at the point at which the second memory has reached its rated capacity, the first memory will still be able to store many more patterns. As shown above, however, a memory which has not reached its rated storage capacity is much more tolerant to noise in the input pattern than one that has reached its storage capacity. Furthermore, it may be noticed that the second memory receives an error-free class pattern to recall the teach pattern, whereas the first memory recalls the class pattern from a noisy example of the original key pattern. Thus the first memory has a large tolerance to noise and is ideal for when recall from noisy examples is needed. The second memory, which has a low tolerance to noise, always receives a noise-free class pattern.

In summary, equations have been given that characterize the storage behaviour of the two stages of the associative memory. They illustrate how the recall success is probabilistically based, and how optimal storage may be achieved in practical applications.
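The capacity experiment described in this section (teach random pairs until a recall error appears) can be sketched at reduced scale. The sizes below are chosen for speed and are much smaller than the paper's R = 896, H = 96; recall is counted as successful only when every taught class element responds more strongly than every untaught one, so that the N-max threshold recovers the class unambiguously.

```python
import numpy as np

rng = np.random.default_rng(1)
R, Z, H, N = 128, 8, 32, 2   # scaled-down key/class sizes (illustrative)

def rand_pattern(size, ones):
    p = np.zeros(size, dtype=np.uint8)
    p[rng.choice(size, ones, replace=False)] = 1
    return p

def recall_ok(M, key, cls):
    r = M.astype(np.int32) @ key
    # taught class elements must beat the strongest untaught element
    return r[cls == 1].min() > r[cls == 0].max()

M = np.zeros((H, R), dtype=np.uint8)
taught, capacity = [], 0
for _ in range(1000):
    key, cls = rand_pattern(R, Z), rand_pattern(H, N)
    M |= np.outer(cls, key)       # one bit of storage per link
    taught.append((key, cls))
    # stop at the first recall error over all previously taught pairs
    if any(not recall_ok(M, k, c) for k, c in taught):
        break
    capacity += 1
print(capacity)
```

As the matrix fills, spurious links raise the responses of untaught class elements until one of them collides with a taught element, which is the gradual, probabilistic failure the text describes.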

USE OF A K-TUPLE PREPROCESSOR

It was pointed out above that the input key image may be preprocessed to produce a sparsely filled 'key' or 'teach' image for application to the associative memory. To do this a preprocessing technique may be used. The basic method is illustrated in Figure 7. This shows an input array I and an output array O. A set of binary decoders, D_i, are used to transform I onto O. The pattern to be processed is placed in I and the output pattern O is fed to the associative memory. The truth table of the decoders is also shown. The decoders have two effects. First they encode the pattern on the input array to a much more sparse pattern on the output array O. Second, they maintain a constant number of bits set to 1 in the array presented to the associative memory. This second effect means that the storage properties of the associative memory may be accurately predicted for all types of input pattern4. This process has one other major benefit in that the pattern recognition abilities of the associative memory are greatly enhanced, and nonlinearly separable pattern classes may be successfully classified5. The value of K in the name 'K-tuple preprocess' relates to the number of inputs each decoder receives (i.e. four in the example in Figure 7), and is normally constant for all decoders.

Figure 7. Logical K-tuple preprocessing of an input vector: a, input array I and output array O; b, state table

An evaluation has been made of the abilities of an associative memory when coupled to the binary and grey-scale K-tuple preprocessing operations4. These investigations involved the recognition of simple 2D objects, and have shown the device to be capable of complete recall of occluded objects.
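A decoder bank of this kind can be sketched as a 1-of-2^K encoder; this is an illustrative model (the paper's state table is not reproduced here).

```python
import numpy as np

def k_tuple_encode(image_bits, k):
    """1-of-2^K encoding: each group of K input bits drives one decoder
    whose 2^K outputs contain exactly one 1 (the addressed line).

    The output is therefore always sparse, with exactly m/K bits set
    out of (m/K) * 2^K, whatever the input looks like.
    """
    groups = np.asarray(image_bits, dtype=np.uint8).reshape(-1, k)
    out = np.zeros((groups.shape[0], 2 ** k), dtype=np.uint8)
    # interpret each K-tuple as a binary address
    addr = groups @ (1 << np.arange(k - 1, -1, -1))
    out[np.arange(groups.shape[0]), addr] = 1
    return out.ravel()

bits = [0, 1, 1, 0,  1, 1, 1, 1]   # two 4-tuples
code = k_tuple_encode(bits, 4)
```

With K = 4, eight input bits become 32 output bits of which exactly two are set, giving the constant, sparse activity the storage equations assume.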

USE OF THE MEMORY IN SCENE ANALYSIS

To illustrate the application of the associative memory in scene analysis, a simple example is shown in Figure 8. This shows how an image of a silhouette of an occluding square and a triangle may be segmented. The associative memory described above was used, incorporating the K-tuple preprocess. The input images were binary and of size 64 × 64 pixels. The K-tuple size was 4. The memory used a class size of 32 with four bits set to 1 in each new class taught. The memory was taught to recall a complete description of the images a and c in Figure 8. Four views of shape a, and three of shape c, were taught with the processing window (i.e. the key image + K-tuple processing) centred at the points marked. Images e and f show the result of recognizing image b, which is made up by superimposing the images a and c. The input window to the associative memory was centred at point 1 in image b, then tested to recall the associated image at this point. This caused the recall of image a. The output of the memory was then subtracted from the input image, which resulted in image e. This simple example illustrates how the memory may be used to determine the position and shape of an occluding object. The same process as just described was also performed with the processing window placed at point 2 in image b. This resulted in image c being recalled, producing image f after subtraction. Examples g and h show how the system performs in noisy conditions. To produce these images the same process as above was applied, but after adding random noise to image b. Image d was produced by placing the processing window at position 3 in a noisy version of image b. These examples serve to illustrate the memory's use in scene analysis. For a detailed investigation into both binary and grey-scale image analysis see Reference 4.

Figure 8. Patterns a and c are taught into the recognition system by placing the centre of the processing window (size 64 × 64 pixels) at each of the points marked with a cross, teaching a different class pattern at each point. Image b shows a typical image used in recognition and pattern recall. Images d-h show recall results under a variety of conditions
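The recognize-and-subtract step can be illustrated on toy binary masks (the shapes below are invented stand-ins, not the 64 × 64 images of Figure 8): the recalled complete object is removed from the scene, leaving the occluded object for the next pass.

```python
import numpy as np

# Toy 8 x 8 scene: a 'square' occluding a 'triangle' (illustrative masks only)
square = np.zeros((8, 8), dtype=np.uint8)
square[1:5, 1:5] = 1
triangle = np.zeros((8, 8), dtype=np.uint8)
for r in range(4):
    triangle[3 + r, 3:4 + r] = 1
scene = square | triangle

# Suppose the associative memory, keyed on the scene, recalls the complete
# square; subtracting the recall isolates the remaining (occluded) object
recalled = square
residual = scene & ~recalled
```

Repeating the recall on `residual` would then identify the triangle, even though part of it was hidden in the original scene.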

HARDWARE REALIZATION

Implementation of the associative memory is simplified when it is combined with the K-tuple preprocessor. It will be recognized that the combination of one decoder and the associated four storage elements shown in Figure 7 forms the essential element of a random access memory (RAM). In practice, RAMs may be much larger than this; for example, the Motorola MCM6287 RAM contains 64K singly addressable bits in one package6. Using a device such as this, the K-tuple size may be between 1 and 16. The idea of implementing the K-tuple process using RAMs was first used by Aleksander7 in the Wisard pattern recognition machine. This system is able to classify objects at television frame rates. However, it does not have an associative capability. To implement the first stage of the associative memory described here, many RAMs will be required. A typical hardware configuration is shown in Figure 9. This illustrates the row (r) and column (C) arrays, which correspond to the key and class arrays respectively in the first stage of the associative memory shown in Figure 3. The row array r contains m elements broken into m/K blocks, where K is the K-tuple size. Each block of K lines forms the address lines to every RAM in each row of RAMs. The column inputs C feed the data input lines of every RAM in each column of RAMs. Teaching an association between a class and a key pattern is simple. The key pattern is placed in array r, causing one location in every RAM to be addressed. The class pattern is placed in array C, placing a logical 1 or 0 at the data inputs of each RAM. When a write is asserted to all RAMs using the R/W lines, the data present at the data inputs of each RAM will be written to the addressed location. The writing of a logical 1 at the addressed location is exactly equivalent to the formation of a link between a row and a column wire (as in Figures 1, 2 and 3). This operation takes very little time to accomplish; if MCM6287 RAMs are used it is of the order of 50 ns.

Figure 9. Hardware implementation of the associative memory


Using the implementation in Figure 9, recall takes slightly longer than writing, due to a semiparallel operation. Recall only involves the use of the key array to generate a class response, which may be thresholded subsequently. To allow summation of the number of links that were made during teaching, recall involves the use of a set S of counters. These counters are all clocked by a central clock which causes a counter to increment if a logical 1 appears on its data input (Din). The recall operation starts by setting the counters to zero. The key pattern is then placed into the row array r, thus activating the address lines to each RAM, as during teaching. This causes one bit to be addressed in each RAM. Following this, all the RAMs in the first row of RAMs are sent a read signal. This causes the contents of each addressed location to be placed on the data outputs of each RAM. Each counter, S, thus receives data from only one RAM in each column of RAMs, via an OR gate. The counters are then clocked to increment if their particular data input is at logical 1. Following this the next row of RAMs is read and the counters clocked again. This is repeated until all the rows of RAMs have been read. The state of the counters then represents the raw recalled class pattern, which then requires thresholding to recover the binary class pattern. This process effectively counts the number of links (bits set to 1) between the row and the column wires. Notice that the time taken for a recall is approximately p times the time for teaching an association, where p is the number of rows of RAMs and is equal to m/K, where m is the number of elements in the key pattern and K is the K-tuple size or number of address lines per RAM. However, for a given key array size the recall time is independent of the class array size. To implement the second stage of the associative memory a more conventional memory architecture is required.
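The row-at-a-time read and counter clocking can be modelled as follows; the RAM contents and sizes here are invented purely for illustration.

```python
import numpy as np

K = 2                       # address lines per RAM (the K-tuple size)
m, n_cols = 8, 4            # key elements and class (column) elements
rows = m // K               # rows of RAMs, read one at a time

rng = np.random.default_rng(2)
# ram[row][col] models a 2^K x 1 bit RAM: one location per possible K-tuple
ram = rng.integers(0, 2, size=(rows, n_cols, 2 ** K), dtype=np.uint8)

key = np.array([1, 0, 0, 1, 1, 1, 0, 0], dtype=np.uint8)
addr = key.reshape(rows, K) @ (1 << np.arange(K - 1, -1, -1))

counters = np.zeros(n_cols, dtype=int)
for r in range(rows):                     # one read cycle per row of RAMs
    data_out = ram[r, :, addr[r]]         # addressed bit from every column
    counters += data_out                  # clock: increment where Din = 1
print(counters)                           # raw class response, pre-threshold
```

The loop makes the p = m/K read cycles explicit: recall time grows with the number of RAM rows but not with the number of columns, matching the timing argument in the text.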
Only the functional aspects of the architecture will therefore be discussed. The main aspects of the architecture are shown in Figure 10. The memory is implemented as a set of H words of R bit in length, where H is the number of elements in the class pattern, and R is the number of elements in the teach pattern. During teaching, the H-bit class pattern and the teach pattern are supplied as discussed above. Then each of the R-bit words that is related to a class element at 1 is extracted from the memory. Each word is then logically ORed with the teach pattern and replaced in its original position in the memory.

Figure 10. Implementation of the second stage, showing the teach pattern register, class register and summing register

Testing or recalling an association from a class pattern recalled from the first memory stage takes place as follows. First a class pattern is supplied. Each of the R-bit words in the memory that is associated with a class element at logical 1 is then extracted from the memory. All the words extracted are added together into a summing register. The contents of the summing register are thresholded as described above to generate the original pattern. It may be pointed out that the fabrication of the memory in VLSI would be quite straightforward, apart from the problem of getting the images to be associated on and off the chip. An advantage of the two-stage associative memory relates to the failure of individual memory elements on a chip during production and in subsequent use. Because recall depends on a number of memory elements voting for each output, the loss of one memory element would be in no way fatal to recall or storage of associations. This 'fail soft' ability is advantageous in a number of applications.
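The word-organized second stage (OR on teach, sum-and-threshold on recall) can be sketched as follows, with illustrative sizes and patterns.

```python
import numpy as np

H, R = 8, 12                   # class elements (words) and teach elements (word width)
words = np.zeros((H, R), dtype=np.uint8)   # one R-bit word per class element

def teach(cls, pattern):
    # OR the teach pattern into every word whose class element is 1
    for i in np.flatnonzero(cls):
        words[i] |= pattern

def recall(cls):
    # sum the words selected by the class bits, then threshold at N,
    # the number of class elements set to 1 (the Wilshaw threshold)
    n = int(cls.sum())
    total = words[np.flatnonzero(cls)].sum(axis=0)
    return (total >= n).astype(np.uint8)

cls = np.array([1, 0, 0, 1, 0, 0, 0, 0], dtype=np.uint8)
pat = np.array([0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1], dtype=np.uint8)
teach(cls, pat)
print(recall(cls))
```

Because the class pattern arriving from the first stage is error-free, every selected word contributes the full teach pattern and the fixed threshold of N recovers it exactly.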

SUMMARY

This paper has described a new form of distributed associative memory which may be used in situations where the pattern to be recalled may be noisy or incomplete. Results have been given to illustrate the storage efficiency of the memory, which has been shown to store data with an efficiency at least as good as that of a conventional file store. The system has an inherent 'fail soft' capability in that each output bit is voted for by a number of storage elements, so that the loss of one storage element is insignificant. Implementational aspects of the memory have been discussed, illustrating how the memory may be constructed from readily available devices.

REFERENCES

1 Willshaw, D J in Hinton, G E and Anderson, J A (eds) Parallel models of associative memory Erlbaum, Hillsdale, NJ, USA (1981)
2 Gardner-Medwin, A R 'The recall of events through the learning of associations between their parts' Proc. Roy. Soc. Lond. B Vol 194 (1976) pp 375-402
3 Lansner, A and Ekeberg, Ö 'Reliability and speed of recall in an associative network' IEEE Trans. PAMI Vol 7 No 4 (July 1985) pp 490-498
4 Austin, J 'The design and application of associative memories for scene analysis' PhD thesis, Dept. of Electrical Engineering, Brunel University, UK (August 1986)
5 Stonham, T J 'Practical pattern recognition' in Aleksander, I et al. (eds) Advanced digital information systems Prentice-Hall, Englewood Cliffs, NJ, USA (1985)
6 Motorola product preview, MCM 6287, 64 kbit static random access memory Motorola, Austin, TX, USA (1986)
7 Aleksander, I and Stonham, T J 'Guide to pattern recognition using random-access memories' Comput. Digital Tech. Vol 2 No 1 (Feb 1979) pp 29-40

APPENDIX 1: DERIVATION OF EQUATION (1)

Consider a single memory matrix like the one shown in Figure 1. First derive S_T, the number of links that have been made in the memory matrix after T patterns have been taught. The memory consists of a matrix of crosspoints R by H in size. To simplify, consider first the case where R = 1 and H > 1, making the matrix a 1D array of H elements. Every time an association is made, N links are made in the matrix. As successive patterns are associated into the matrix, a smaller number of new links is set each time, because some of the new links coincide with links already made. The number of links made after T patterns have been taught is derived as follows. Let S_T be the total number of links present after T associations and P_T the number of new links made in the (T+1)th association. In general

S_T = S_{T-1} + P_{T-1}    (3)

P_T = N(1 - S_T/H)    (4)

After one association has been made, S_1 = N and, from Equation (4), P_1 = N(1 - N/H). After the second association, from Equation (3),

S_2 = N + N - N^2/H = 2N - N^2/H

After the third association

S_3 = 3N - 3N^2/H + N^3/H^2

After the fourth association

S_4 = 4N - 6N^2/H + 4N^3/H^2 - N^4/H^3

It can be seen that S_T is progressing as a series with binomial coefficients:

S_T = \sum_{r=1}^{T} (-1)^{r-1} C(T,r) N^r / H^{r-1},  where C(T,r) = T!/(r!(T-r)!)

This gives, for a 1D matrix, the number of links made after T associations have been taught. This now needs to be extended to a 2D matrix (R > 1). The equation for P_T is now

P_T = N(1 - S_T/(HR))

so that

S_T = \sum_{r=1}^{T} (-1)^{r-1} C(T,r) N^r / (HR)^{r-1}

This equation can be reduced to a polynomial and simplified: the recurrence S_T = S_{T-1} + N(1 - S_{T-1}/(HR)) follows through to give

S_T = HR[1 - (1 - N/(HR))^T]    (1)

APPENDIX 2: DERIVATION OF EQUATION (2)

The probability of finding a crosspoint in the memory matrix where a link has been made may now be expressed as

P(link) = S_T/(HR) = 1 - (1 - N/(HR))^T

The probability of one element of the class array going above threshold when it should not may now be calculated. For this we assume a simple threshold process in which the response of the memory is thresholded at a value equal to the number of elements at logical 1 in the key pattern, I; i.e. a class element is set to 1 when its response equals I. For a class element's response to equal I, all I points set to 1 in the key pattern must coincide with links set to 1 in the memory. Thus the probability p(C) of one class recall element going above threshold erroneously is

p(C) = P(link)^I

and the probability of this not happening is 1 - p(C). The probability of no class recall element erroneously going above threshold is therefore

[1 - P(link)^I]^H

and thus the probability of any class recall element erroneously going above threshold is

P = 1 - [1 - P(link)^I]^H

Substituting for P(link) from above:

P = 1 - {1 - [1 - (1 - N/(HR))^T]^I}^H    (2)

Now, the probability of getting one and only one element of the class recall array erroneously above threshold is given by the second term of the binomial expansion of [p(C) + (1 - p(C))]^H, which is

e_1 = C(H,1) p(C) [1 - p(C)]^{H-1}

Since C(H,1) = H, substituting for p(C) gives

e_1 = H[1 - (1 - N/(HR))^T]^I {1 - [1 - (1 - N/(HR))^T]^I}^{H-1}

This equation can now be used to find when it is most likely that one and only one class element goes above threshold, which occurs when e_1 is a maximum. (When this function is at its maximum, the probability of other numbers of class errors is smaller.) To obtain the maximum, first differentiate e_1 with respect to T. Writing a = 1 - N/(HR), b = I and c = H - 1, so that e_1 = H(1 - a^T)^b[1 - (1 - a^T)^b]^c,

de_1/dT = -Hb[1 - (c+1)(1 - a^T)^b][1 - (1 - a^T)^b]^{c-1}(1 - a^T)^{b-1} a^T ln a

Now find the maximum, where de_1/dT = 0. This requires (1 - a^T)^b = 1/(c+1) = 1/H, which reduces eventually to

T = ln(1 - H^{-1/I}) / ln(1 - N/(HR))

image and vision computing vol 5 no 4 november 1987