
EUROMICRO JOURNAL 4 (1978) 275-282 © EUROMICRO and North-Holland Publishing Company

An efficient hardware tool for structured information management
R. Beaufils and J. P. Sansonnet
Laboratoire "Langages et Systèmes Informatiques", E.R.A.-C.N.R.S. 298, Université Paul Sabatier, 118, route de Narbonne, F 31077 Toulouse Cedex, France

During the last years, the growth of addressing needs, particularly in data base design, has been an important cause of software heaviness. The complexity and the diversity of the access methods have not permitted the addressing functions to be wired. This has led to the use of software processes which have essentially two inconveniences: important access delays, and complexity for the users (and for compilers and interpreters too) who have to use these processes. This study, carried out at the Paul Sabatier laboratory "Langages et Systèmes Informatiques", presents a hardware tool which is able to deal efficiently with information whatever its type of logical structure, answering a real need of current systems. This tool has been derived from the definition of an associative addressing type. The study has led us to build a Data Structure Processor, the logical structure of which shows strength, rapidity and simplicity of use for several applications.

0. INTRODUCTION

We differentiate two types of "data manipulation":
- "data processing", which is an operation on the data: an arithmetical or logical operation according to the information type;
- "data management", which essentially concerns the addressing function, i.e., only the organization of the information, not its substance.

The first example of a developed data management function is the hardware stack of the Burroughs 5000 and 6000 series. These machines are able to execute a Polish string directly thanks to the stack, which drives the execution and provides the operands. Other tools have been wired, like the FIFO queues which manage the pipeline in the IBM 91 and the associative tables used in the management of paging in virtual memory systems.

The appliances which have appeared in a number of computers to increase the rapidity of data management show the necessity of studying the mechanisms relative to information organization.

We present here a tool able to manage information organized as structures such as: simple variables, indexed variables, trees, dynamic variables (stacks, FIFO queues). The access time is constant whatever structure type is considered (e.g., access to an indexed variable or to a simple variable takes the same time). These two properties make the Data Structure Processor (DSP) a hardware tool which is "dynamically rewireable" and which can be adapted to each problem. The user then has at his disposal one (or several) hardware stacks like the one of the B 5500, and also other structures. The DSP has a language which is simple to use and very efficient.

1. CHOICE OF ASSOCIATIVE ADDRESSING

In the earliest computers the addressing function was executed according to a direct method (access by absolute address) or an indirect method (through a memory word which constitutes an indirect relay). A specific tool for data manipulation was only realized when base addressing appeared. It is a simple adding function which has for A-input a base address register and for B-input the displacement in the based space. Depending on the power of the machine, this tool was or was not effectively separated from the data path.

1.1. Direct addressing (DA)
Definition: Let E be a finite set of items (e.g., a set of memory words or a set of operators); a bijection b is established between E and I ⊂ ℕ. The direct addressing function DA is defined as:

DA : I → E   (I ⊂ ℕ: set of addresses; E: set of items)
DA(n) = b⁻¹(n) ; n ∈ I

It is this function (b⁻¹) which is wired in a random access memory. The total order relation ≤ on the set of integers ℕ therefore extends to the set of items, which becomes a sequence. This presents an advantage for the implementation of sequential lists, but has two inconveniences:
• sequence modification: it is very difficult to intercalate or to suppress a sequence component;
• dynamic list implementation: e.g., the problems of queue advance and stack management are always badly resolved.

The access time of the procedures employed is important. Their diversity makes any unspecialized wiring impossible. It is therefore difficult to reduce this important access time in this way.
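As a small illustration of the first inconvenience, the following sketch (an assumption of this presentation, not taken from the paper) shows that with direct addressing a sequence component can only be intercalated by displacing every word that follows it:

# Sketch of the "sequence modification" inconvenience of direct addressing:
# items are reached only through their position n (DA(n) = b^-1(n)), so
# intercalating a component forces every following word to be moved,
# i.e. the bijection b must be rebuilt. Illustrative assumption only.

memory = ["item1", "item2", "item4", "item5"]    # a sequence at addresses 0..3

def intercalate(sequence, n, item):
    # insert item at address n: all words from address n onwards are displaced
    return sequence[:n] + [item] + sequence[n:]

memory = intercalate(memory, 2, "item3")
print(memory)      # ['item1', 'item2', 'item3', 'item4', 'item5']
print(memory[2])   # DA(2) = 'item3'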


1.2. Associative addressing (A2)

1.2.1. Definition. Let us imagine a finite set E of items and a family of attributes {Ti}, i ∈ [1,p] ⊂ ℕ, together with a family of functions {fi}, i ∈ [1,p] ⊂ ℕ, such that

fi(x) = ti ; x ∈ E ; ti ∈ Ti ; ∀i ∈ [1,p].

Let T = T1 × T2 × ... × Tp be the cartesian product of the attributes. The addressing function A2 is then defined by:

A2 : T → ℘(E)   (T: set of attribute tuples; ℘(E): subsets of the set of items)
∀t ∈ T : A2(t) = ⋂ (i ∈ J) fi⁻¹[pri(t)]

where pri(t) = ti (projection) and J is the set of attributes significant for the user.

Use of "don't care": each Ti contains a special attribute, noted ∅ and called "don't care". This attribute means that the corresponding Ti is unimportant for the user:

ti = ∅ : i ∉ J ;   ti ≠ ∅ : i ∈ J.

EXAMPLE: Let E be a set of persons, E = {John, Mary, Jane, Peter}, and p = 3:

T1 specifies sex:                 T1 = {male, female, ∅}
T2 specifies colour of the hair:  T2 = {brown, fair, red, ∅}
T3 specifies stature:             T3 = {little, tall, ∅}

The table of the functions (fi) is built by observation of the various persons and their classification according to the Ti attributes:

          f1(sex)    f2(hair-colour)    f3(stature)
John :    male       brown              little
Mary :    female     fair               tall
Jane :    female     red                little
Peter:    male       brown              little

Let us try to compute the following elements:

t0 = (female, red, little)
t1 = (male, brown, little)
t2 = (female, brown, tall)
t3 = (male, ∅, little)

For t0:
pr1(t0) = female ;  f1⁻¹(female) = {Mary, Jane} ⊂ E
pr2(t0) = red ;     f2⁻¹(red) = {Jane} ⊂ E
pr3(t0) = little ;  f3⁻¹(little) = {John, Jane, Peter} ⊂ E
J = {1, 2, 3}
A2(t0) = {Mary, Jane} ∩ {Jane} ∩ {John, Jane, Peter} = {Jane} ⊂ E
Jane is the only person corresponding with t0 (i.e., little red-haired women).

Likewise, A2(t1) = {John, Peter} ⊂ E: there are two little brown-haired men; and A2(t2) = ∅: there is no tall woman with brown hair.

Use of "don't care":
pr1(t3) = male ;    f1⁻¹(male) = {John, Peter} ⊂ E
pr2(t3) = ∅ (don't care)
pr3(t3) = little ;  f3⁻¹(little) = {John, Jane, Peter} ⊂ E
Only the components different from ∅ are significant: J = {1, 3}
A2(t3) = {John, Peter} ∩ {John, Jane, Peter} = {John, Peter} ⊂ E
There are two little men (without any hypothesis about the colour of the hair).
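A minimal software sketch of this A2 function, using the attribute tables of the example above (the Python representation, dictionaries and the value None standing for the "don't care" attribute, is an assumption of the illustration; it is not the hardware mechanism described later):

# A software sketch of the associative addressing function A2 for the
# example above. The "don't care" attribute is modelled here as None.
# This only illustrates the definition, not the DSP hardware.

E = {"John", "Mary", "Jane", "Peter"}

# Table of the (f_i) functions: classification of each item of E
# according to the attributes T1 (sex), T2 (hair colour), T3 (stature).
f = [
    {"John": "male", "Mary": "female", "Jane": "female", "Peter": "male"},
    {"John": "brown", "Mary": "fair", "Jane": "red", "Peter": "brown"},
    {"John": "little", "Mary": "tall", "Jane": "little", "Peter": "little"},
]

def inverse(fi, ti):
    """f_i^{-1}(t_i): the subset of E whose i-th attribute equals t_i."""
    return {x for x, value in fi.items() if value == ti}

def A2(t):
    """A2(t) = intersection, over the significant attributes, of f_i^{-1}(pr_i(t))."""
    result = set(E)                      # neutral element of the intersection
    for fi, ti in zip(f, t):
        if ti is not None:               # t_i = "don't care" is skipped (i not in J)
            result &= inverse(fi, ti)
    return result

print(A2(("female", "red", "little")))   # {'Jane'}
print(A2(("male", "brown", "little")))   # {'John', 'Peter'}
print(A2(("female", "brown", "tall")))   # set()
print(A2(("male", None, "little")))      # {'John', 'Peter'}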

1.2.2. Applications. There are two different structure types: static and dynamic. We are going to give some examples to show the best way to use associative addressing to reach the data. We take, as a representation of the variables in the memory, a word set E, on which we define a finite family of structures:

{Si}, i ∈ K ⊂ ℕ, with Si ⊂ E for all i ∈ K.

The aim is to distinguish the structures (coded by i). We have to establish an attribute T0 and a function

f0 : {Si}, i ∈ K → T0,

which gives a "name" to each structure. Because the function f0 is not obligatorily injective, we can obtain synonyms.

• Implementation of simple variables. We call a simple variable a structure which has only one component. In this case it is not necessary to create a new attribute, and the structure name is the component name.

• Implementation of vectors. We call a vector a sequence of variables Si = {xj}, j ∈ J ⊂ ℕ. The attribute T0 is no longer sufficient: we must have a new attribute T1 to differentiate the vector components. We can write T1 = J ∪ {∅} and, ∀j ∈ J, f1(xj) = j.
Note: we can thus reach a whole vector with t = (t0, ∅).

• Implementation of arrays. We say that a set Si is a p-dimensional array if there is a bijection B from Si onto the cartesian product of a family of integer intervals, ∏ (k ∈ [1,p]) Ik. A component (i1, i2, ..., ip) of this cartesian product is called an index list. We build the family of attributes {Tj}, j ∈ [1,p], with Tj = Ij ∪ {∅} for all j, and the family of functions f0 ∪ {fj}, j ∈ [1,p], with fj(x) = prj(B(x)) for all j. Then

A2(t) = ⋂ (j ∈ J) fj⁻¹[prj(t)].

Notes: we can thus reach sub-arrays, which is not possible with direct addressing, which allows us to reach only one component. There is no difference in the passage from vectors to arrays; only the number of attributes changes.

• Implementation of dynamic structures. A dynamic structure is characterized by a series of static structures {Sn}, n ∈ ℕ, with Sn ∈ ℘(E) for all n. From Sn two functions are realized:
  • addressing (thanks to the attribute Tp+1);
  • generation of Sn+1 (thanks to a new function fp+1).
We have, therefore, to define Tp+1 and fp+1.

Example: stacks.
Tp+1 = ℕ ∪ {∅}, with a family of functions {fⁿp+1}, n ∈ ℕ, defined for every x ∈ E:
  • addressing of the current generation (stack top): tp+1 = 0;
  • to stack: fⁿp+1(x) = fⁿ⁻¹p+1(x) + 1 ; n ∈ ℕ;
  • to pop: fⁿp+1(x) = fⁿ⁻¹p+1(x) - 1 ; n ∈ ℕ.
The generations are numbered, 0 being the most recent (a software sketch of this mechanism is given below, at the end of this section).

We see that we can represent classic structures with the help of associative addressing. We can, therefore, invent other structures by the definition of new attributes Ti and new functions fi.

Conclusion: it is not necessary to establish an analytic form for the functions (fi). This allows us to envisage singular types of structures. There is a single, simple access function, i.e., the one which consults the table of functions (fi). It is independent of the number of functions (i.e., of the attributes employed). It can therefore be wired, thus reducing the access time. This access function allows us to realize the hardware implementation of high level languages.
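The stack mechanism just described can be sketched in software as follows (a minimal model under the assumptions of this illustration: each component of the structure carries its generation number as an attribute, and stacking or popping renumbers all the components):

# Minimal sketch of a stack represented through associative addressing:
# every component of the structure named "stack" carries a generation
# attribute, and generation 0 is the top. Stacking increments the
# generation of every existing component, popping decrements it.

class AssociativeStack:
    def __init__(self, name):
        self.name = name
        self.words = []          # [generation, value] pairs, stored in any order

    def push(self, value):
        for word in self.words:  # f(n) = f(n-1) + 1 for every component
            word[0] += 1
        self.words.append([0, value])

    def top(self):
        # associative access with the key (generation = 0, name)
        return next(value for generation, value in self.words if generation == 0)

    def pop(self):
        value = self.top()
        self.words = [[g - 1, v] for g, v in self.words if g != 0]
        return value

s = AssociativeStack("stack")
for v in ("D", "C", "B", "A"):
    s.push(v)
print(s.top())   # A
print(s.pop())   # A
print(s.top())   # B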

2. THE D.S.P. STRUCTURE

2.1. General structure
The DSP is connected, on one side, to the main memory, with which it exchanges data, and, on the other side, to the CPU, with which it exchanges orders and data. It is composed of three components (see fig. 1):
1. A local memory (LM): it contains the structured information. Through its organization it assumes the associative addressing function.
2. An operator (OP): it is used to manage the information and to execute elementary operations on data when needed. It is more precisely composed of:
   - an arithmetical and logical unit (ALU);
   - a binary chain manipulator (BCM), which allows CONCATENATION or EXTRACTION of data;
   - a comparator (COMP).
3. A control unit, which interprets the orders that are sent to it and directs the data path composed of the local memory and the operator.

Fig. 1. DSP general structure.

2.2. "Compounded variable" definition
Because this tool has to deal with highly structured data, it seemed attractive to define elements having a highly active nature.

"Compounded variables": to a classic variable, VAR, we associate an addressing system, AD, which allows us to determine to what data structure it belongs, and a descriptor list, DES, that gives semantic information about the variable (necessary to the operator using the system). Thus we obtain a compounded variable:

   AD | VAR | DES

The set of compounded variables is gathered in the local memory, the structure of which is shown in fig. 2.

Fig. 2. DSP local memory logical organization.

The local memory is an associative one. In order to gain access to this memory we have to establish an associative key constituted by AD. We can then read or write the memory part, i.e., VAR and DES. The association key, AD, is split into parts, with a function attached to each one:
M. is used to manage the stacks and the queues;
N. indicates the name of the compounded variable used;
I. is used to manage indexed variables.
The descriptor, DES, is also split into parts:
LG. gives the bit-length of the associated variable VAR;
PT. helps to point into the variable during string manipulations;
AUX. stays at the disposal of the user.

"M.static variables" access. We call an "M.static variable" one that does not depend on the management field M of the structure in the key AD. These are: simple variables, vectors, matrices, and tree structures composed of the components here above. For a simple variable only the name field, N, is used; we gain access to the variable by association on this field. For vectors (or matrices) a supplementary field, I, which contains the index (or indices), is used; we gain access to the component instantly by providing the name and the index (indices).

Let us imagine the matrix

MAT(2,2) = | A  B |
           | C  D |

We can store it in LM in the following form:

M    N       I      VAR    DES
-    "mat"   1,1    A      -
-    "mat"   2,2    D      -
-    "mat"   1,2    B      -
-    "mat"   2,1    C      -
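A small software sketch of such a local memory of compounded variables (the class names and the loop-based lookup are assumptions of the illustration; the field names M, N, I, LG, PT and AUX follow the text, and the lookup merely simulates the associative access performed by the hardware):

# Sketch of the DSP local memory as a table of compounded variables.
# The field names follow the text (M, N, I for the key AD; LG, PT, AUX
# for the descriptor DES); the lookup loop is only a software stand-in
# for the associative access described in the paper.

from dataclasses import dataclass

@dataclass
class CompoundedVariable:
    M: object            # management field (stacks, queues); unused here
    N: str               # name of the compounded variable
    I: tuple             # index (or indices) for vectors and matrices
    VAR: object          # the value itself
    LG: int = 0          # bit-length of VAR
    PT: int = 0          # pointer used during string manipulations
    AUX: object = None   # left at the disposal of the user

class LocalMemory:
    def __init__(self):
        self.words = []

    def write(self, word):
        self.words.append(word)

    def read(self, N, I=()):
        # associative access on the key (N, I); M is not used for
        # M.static variables such as matrices
        return next(w.VAR for w in self.words if w.N == N and w.I == I)

lm = LocalMemory()
for (i, j), value in {(1, 1): "A", (1, 2): "B", (2, 1): "C", (2, 2): "D"}.items():
    lm.write(CompoundedVariable(M=None, N="mat", I=(i, j), VAR=value, LG=8))

print(lm.read("mat", (2, 1)))   # C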
"M.dynamic variables" access. We shall call an "M.dynamic variable" one which depends on the management field, M, in the key AD. We find among these variables: stacks, FIFO queues, and the variables managed with an LRU algorithm.

Stacks management. The place held by a stack in local memory is not fixed. We represent a stack by its name and we number the generations, giving the number 0 to the latest, i.e., to the top.

Example: a stack named "stack" holding four components A, B, C, D (numbered by generation, 0 being the top) can be implemented in LM like this:

M    N         I    VAR    DES
2    "stack"   -    B      -
0    "stack"   -    A      -    ← stack top
1    "stack"   -    C      -
3    "stack"   -    D      -

We obtain the stack top by providing the key:
AD = 0 "stack" - ;
To stack:
1. store the word in LM, giving it the key AD = 0 "stack" - ;
2. execute +1 on M for all the words whose name is "stack" in the LM.
To pop:
1. read the word in LM by providing the key AD = 0 "stack" - ;
2. execute -1 on M for all the components whose name is "stack" in the LM.
Phases 1 and 2 can be realized at the same time; this makes access to an LM stack as quick as access to a hardware stack.

2.3. Definition of the primitives
The fields LG, PT and AUX are used to define the primitives. The operator does not just carry out operations on variables: it also makes use of the descriptors and keeps them up to date. This allows us to extend the notion of operation to that of primitive.

A primitive takes place in three phases:
1. Reading of the operands and of their descriptors, placed in the input registers of the operator.
2. Parallel execution of: the operation itself; the computation of the new descriptors associated with the result; the bringing up to date of the old descriptors (if necessary).
3. Sending the operator output registers, which contain the result and its descriptors, to LM.
See fig. 3.

Fig. 3. (Primitive execution in the operator: the operands A, B and their descriptors A.DESⁿ, B.DESⁿ are read, C ← A OPC B is computed, and the new descriptors C.DESⁿ⁺¹, A.DESⁿ⁺¹ are produced.)

The primitive execution is done by very simple operators, such as adders, extractors and concatenators. Some of these are doubled to allow the parallel execution of the primitive.

The primitives present two points of interest:
(1) They allow the realization of complex functions, such as concatenation or extraction, in one cycle. This is not possible with a classic operator.
(2) We can define functions compounded of several instructions executable in parallel. This notably reduces the DSP answer time, and allows us to use a high level language.

Example: CONCATENATION. Suppose:
X, a 12 bit string (X.LG = 12);
Y, an 8 bit string (Y.LG = 8).
CONC is the primitive which realizes the concatenation of two bit strings. We write Z := X CONC Y; this represents the subroutine:
Z ← X || Y
Z.LG ← X.LG + Y.LG
It corresponds to the wiring diagram of fig. 4.

Fig. 4. (Wiring of CONC: the BCM computes Zⁿ⁺¹ := X || Y while an adder computes Z.LGⁿ⁺¹ := X.LGⁿ + Y.LGⁿ.)

We can realize the scheme which corresponds to each primitive with the help of the executives placed in the DSP local memory. This system is very simple and allows the realization of an efficient operator.

3. DSP TYPICAL APPLICATIONS

The first example we give is relative to the adaptation of the DSP to a computer. The adjunction of an operations box able to provide hardware tools to manage arrays, stacks and so on can notably increase the computer's performance.

3.1. Implementation on a small or a medium computer


Fig. 5. The DSP support structure. (The CPU sends requests to the DSP, which is connected to the main memory.)

The computer processing unit sends requests to the DSP for three reasons:
(1) Sending data to the DSP: it is necessary to have a connection between the processing unit registers and the DSP input.
(2) Data demands on the DSP: it must send data to the CPU registers.
(3) Data modification commands: this operation, internal to the DSP, does not need any connection with the CPU.
In all cases we have a call/answer type of connection. From the machine structure so defined we are going to give an example of a request language, which can be implemented on the DSP, to show its strength and its simplicity of use. The language defines the DSP for the user. It contains three instruction types:
(1) data storage in the DSP;
(2) data demands to the DSP;
(3) demands for a primitive execution by the DSP.
All the primitives have symbolic names and are determined when the DSP is built. There is, nevertheless, in the language, the possibility of defining new primitive functions. The request language is not compiled. It is assembled in the same way as machine languages, in this form:

OPC , [ad1] , [ad2] , [ad3]

in which OPC designates a three-address primitive and the [adi] constitute the parameters of this primitive. It is then ready to be interpreted by the DSP control unit. To do this we use the executives stored in the control unit memory.

We shall use as an example the grammar of fig. 6, which defines the syntax. The definition of the identifiers is classic. The integers are positive or negative. The <register n°> depends on the number of machine registers. The lists given for <address> and <primitive name> are not exhaustive; they must be determined during the implementation. This is made possible by using a system of executives in the DSP control unit.

<request> ::= READ <address> INTO R <register n°> ;
            | WRITE R <register n°> INTO <address> ;
            | <primitive name> <variable> [,<variable>] [,<variable>] ;
<variable> ::= <ident> [<specifications list>]
<specifications list> ::= .<ident> | <specifications list>.<ident>
<management> ::= PUSH | POP | LINK | UNLINK | READ | WRITE ...
<indexed address> ::= <ident> ( <index list> )
<index list> ::= <index> | <index list> , <index>
<index> ::= <integer> | <ident> | <indexed address>
<primitive name> ::= <management> | ADD | SUB | OR | AND | SUBSTR | CAT | ...

Fig. 6.

Examples:
• request of reading and storage in the CPU registers: READ MAT(2,2) INTO R 6 ;
• for the inverse operation: WRITE R 4 INTO V(3) ;

• operation request to the DSP (incrementation): ADD, I, I, 1 ; for I := I + 1 ;
• comparison of two vector components: GT, BOOL, V(2), X(3) ;
  BOOL ← "TRUE"  if V(2) > X(3)
  BOOL ← "FALSE" if V(2) ≤ X(3)

The <address> allows us to describe the access method to an M.dynamic component:

to stack:                PUSH P
to unstack:              POP P
and, without any stack operation,
to read the stack top:   READ P
to write the stack top:  WRITE P

An <address> is composed of a specification part (if we have a tree) and is ended by a leaf indication. The following address expressions are valid:
TREE.A
TREE.B.POP STACK
TREE.B.READ STACK
TREE.B.V(1)
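To make the request format concrete, here is a small sketch of how such requests might be assembled into the (OPC, [ad1], [ad2], [ad3]) form; only the three-address format and the primitive names come from the text, while the parser and the opcode table are assumptions made for this illustration:

# Illustrative sketch: assembling DSP requests of the form
# OPC , [ad1] , [ad2] , [ad3]. Only the request format and the primitive
# names come from the paper; the parsing itself is an assumption made
# for this example.

PRIMITIVES = {"ADD", "SUB", "OR", "AND", "GT", "PUSH", "POP", "READ", "WRITE"}

def assemble(request: str):
    """Turn 'ADD, I, I, 1 ;' into ('ADD', ['I', 'I', '1'])."""
    body = request.strip().rstrip(";").strip()
    opc, *operands = [part.strip() for part in body.split(",")]
    if opc not in PRIMITIVES:
        raise ValueError("unknown primitive: " + opc)
    if len(operands) > 3:
        raise ValueError("a primitive takes at most three addresses")
    return opc, operands

print(assemble("ADD, I, I, 1 ;"))          # ('ADD', ['I', 'I', '1'])
print(assemble("GT, BOOL, V(2), X(3) ;"))  # ('GT', ['BOOL', 'V(2)', 'X(3)'])
print(assemble("POP, P ;"))                # ('POP', ['P'])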

3.2. The "programming logical cache" mode
In the programming logical cache working mode, the processing unit is no longer connected to the main memory and all the memory requests have to be executed by means of the DSP. It is a particular way of working which presents the same theoretical problems as the first example, but it offers interesting aspects: the memory connected to the DSP behaves in an intelligent way, because it admits a high level request language. This language is similar to the one which we have described above, particularly for the "address" part. The processing unit makes its requests to the DSP, which then behaves as a "logical cache" of the main memory. See fig. 7.

Fig. 7. Programming logical cache mode.

Using such a method could be interesting for the manipulation of a large information flow, particularly files and data bases. In this case the DSP memory which is employed as a logical cache cannot contain all the information needed. It is necessary to carry out moves of information between the DSP, in which there is associative addressing, and the main memory, in which there is direct addressing. The simplest solution is to use the association key as the main memory address and vice versa. But as soon as the information is even slightly structured, the associative keys become extremely long. This imposes a prohibitive memory capacity, only a low percentage of which is employed. There are various techniques which help us to find a solution to this problem, such as the virtual space folding used in data bases, or hash coding. Let us take the last method as an example. The best way to apply a hash coding function, h, to a key which has the form

t = (t0 = "name", t1, t2, ..., tp)

is to associate a main memory address by using only the "name" projection:

h(t) = h(pr0(t)), with pr0(t) = "name".

All the variables which have the same "name" will therefore be synonyms in the h-meaning (this will be the case for all the components of a vector or of a stack). If n is the number of bits at our disposal for coding the attribute t0 = "name", we can code 2ⁿ variables. They constitute for the user the virtual associative memory (VAM). The real (or physical) associative memory, RAM, does not obligatorily have the same size as the VAM. See fig. 8.

Fig. 8. (Correspondence between the virtual associative memory, the real associative memory (DSP local memory) and the main memory.)

Component storage in main memory:
1. From t, by h, we obtain h(t) = a, a VAM address.
2. We chain the variable at the head of the synonym list Sa.
3. We store the pointer in the word VAM(a).

Component research in main memory:
1. From t we obtain h(t) = a, a VAM address.
2. In VAM(a) we have a pointer on Sa.
3. We investigate the list Sa to determine the item which possesses the wanted attributes (ti), i = 1, ..., p.

Note: a variable x stored among the synonyms Sa has the general form t = ("name" = a, t1, t2, ..., tp); the stored word carries this key and a pointer PT on the Sa list.
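A minimal software sketch of this hash-coded correspondence (the hash function, the list representation and the attribute matching are assumptions of the illustration; only the scheme itself, hashing the "name" projection and chaining the synonyms, comes from the text):

# Sketch of the correspondence between associative keys and main memory:
# the key t = ("name", t1, ..., tp) is hashed on its "name" projection
# only; all the variables with the same name are synonyms chained in a
# list Sa reached through the word VAM(a). Illustrative assumptions only.

VAM_BITS = 8                       # n bits available to code the attribute t0
VAM_SIZE = 2 ** VAM_BITS
vam = [None] * VAM_SIZE            # each word holds the head of a synonym list

def h(name: str) -> int:
    """Hash coding applied to the "name" projection pr0(t) only."""
    return sum(ord(c) for c in name) % VAM_SIZE

def store(t, value):
    """Component storage: chain (t, value) at the head of the synonym list Sa."""
    a = h(t[0])
    vam[a] = {"key": t, "value": value, "next": vam[a]}   # PT plays the role of "next"

def search(t):
    """Component research: walk the synonym list Sa looking for the attributes of t."""
    node = vam[h(t[0])]
    while node is not None:
        if node["key"] == t:
            return node["value"]
        node = node["next"]
    return None

# All the components of the vector "v" are synonyms in the h-meaning.
store(("v", 1), "first component")
store(("v", 2), "second component")
print(search(("v", 2)))   # second component
print(search(("v", 3)))   # None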

4. CONCLUSION

We have shown that it is possible to logically define functional structures adapted to the management of structured information, and that specific solutions can be found to this problem thanks to the use of associative addressing. The substitution of associative addressing for direct addressing makes the following evident:
- instant access to the information, whatever its logical structure;
- the facility of implementing new hardware structures by a simple extension of the table of functions (fi);
- simplicity of use through:
  • a symbolic language which is directly interpreted,
  • a strong language which uses flexible primitives.
The examples of applications we have taken show the efficiency of a tool such as the DSP, and the necessity of developing hardware techniques for information management. The theoretical aspects attached to associative addressing and its applications are now being studied; in particular, the construction of a prototype is already possible thanks to the use of CCD shift registers. The development of LSI techniques, whether CAMs or PLAs, must allow the effective construction of a DSP. We hope to continue the work in this direction, with the realization of intelligent control units, using the DSP, which will permit the building of new emulator types.

ACKNOWLEDGEMENTS
This work was sponsored by the "Ministère des Armées" under contract DRME # 74-425 and carried out in the Laboratory of Professor R. Beaufils (CNRS Associated Team ERA 298 "Langages et Systèmes Informatiques"). The authors wish to thank D. Litaize for his valuable advice.
