A new parallel algorithm for parsing arithmetic infix expressions

Parallel Computing 4 (1987) 291-304, North-Holland

Y.N. SRIKANT and Priti SHANKAR
School of Automation, Indian Institute of Science, Bangalore 560 012, India

Received February 1985; revised October 1985

Abstract. A new parallel algorithm for transforming an arithmetic infix expression into a parse tree is presented. The technique is based on a result due to Fischer (1980) which enables the construction of the parse tree by appropriately scanning the vector of precedence values associated with the elements of the expression. The algorithm presented here is suitable for execution on a shared memory model of an SIMD machine with no read/write conflicts permitted. It uses O(n) processors and has a time complexity of O(log²n), where n is the expression length. Parallel algorithms for generating code for an SIMD machine are also presented.

Keywords. Arithmetic infix expression, parse tree, SIMD computer, parallel parsing algorithm, parallel programming, parallel code generation.

1. Introduction

The development of new techniques for parallel parsing [4,5,9] and for efficient parallel evaluation of expressions [2] has received considerable attention. An elegant parallel algorithm for the construction of the syntax tree from an arithmetic infix expression, suitable for implementation on vector computers, has been presented in [5]. In this paper, we adapt this algorithm for use on an SIMD shared memory machine [6] with no read and write conflicts permitted, and establish a time complexity bound of O(log²n) on O(n) processors. The infix expressions considered are arithmetic expressions with operators +, -, *, / and ** (exponentiation), and with operands consisting of constants, simple and array variables, with certain constraints on array variable usage. Apart from the parallel syntax tree construction algorithm, we also present parallel techniques for code generation. Section 2 presents our parsing algorithm and establishes its time complexity. Section 3 contains the code generation algorithms. Finally, Section 4 concludes the paper with suggestions for further work.

2. A new parsing algorithm for syntax tree construction

Fischer [5] has formally described an arithmetic infix grammar, and presented necessary and sufficient conditions for checking well-formedness of strings with respect to this grammar. The implementation of these tests on an SIMD machine is described in [13] and requires O(log n) time on O(n) processors.


2.1. Basic results

Once the expression has been checked for well-formedness, the next step is the construction of the syntax tree. Fischer has presented a formula for associating with each symbol in the expression a unique precedence value. For this purpose, the operators are grouped into priority classes. For our purposes the classes, in increasing order of priority, are

OP0 = { $ }  (endmarkers),
OP1 = { +, - },
OP2 = { ident, neg }  (unary plus and unary minus),
OP3 = { *, / },
OP4 = { ** }.

The operators in classes 0, 1, 3, 4 are binary, while those in class 2 are unary. The classes 0, 1, 3 correspond to left associative operators and the classes 2, 4 correspond to right associative ones. Each symbol may be assigned a unique precedence value computed using the formula [5]

PREC(i) = SYMB_i.POS * SYMB_i.ASSOC + 2 * (length(SYMB) + 1) * SYMB_i.CLASS + 2 * (length(SYMB) + 1) * (q + 2) * SYMB_i.NEST

where

SYMB_i.POS = index of the symbol in the vector SYMB, which is the input string with all parentheses removed,
SYMB_i.ASSOC = associativity of the symbol (-1 = left associative, 1 = right associative),
SYMB_i.CLASS = priority class of the symbol (an integer in the range 0 to q + 1), ¹
SYMB_i.NEST = nesting level of the symbol,
length(SYMB) = length of the input string after all parentheses have been removed,
q = number of operator classes.
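As an illustration only (this sketch is ours, not from [5] or [13]; it assumes q = 5 for the classes listed above and treats identifiers as right associative members of class q + 1, which is consistent with the worked example of Fig. 2), the precedence assignment can be written down directly:

```python
# Illustrative sketch of the precedence assignment above (our code and names).
CLASS = {'$': 0, '+': 1, '-': 1, 'ident': 2, 'neg': 2, '*': 3, '/': 3, '**': 4}
ASSOC = {0: -1, 1: -1, 2: 1, 3: -1, 4: 1}    # -1 = left assoc, 1 = right assoc
Q = 5                                        # number of operator classes (assumed)

def prec_vector(tokens):
    """tokens: the INPUT vector, e.g. ['$', 'a', '+', 'b', '*', '(', 'c', '-', 'd', ')', '$'].
    Returns SYMB (parentheses removed) and the PREC vector."""
    symb, nest, depth = [], [], 0
    for t in tokens:                         # strip parentheses, record nesting levels
        if t == '(':
            depth += 1
        elif t == ')':
            depth -= 1
        else:
            symb.append(t)
            nest.append(depth)
    n = len(symb)
    prec = []
    for i, t in enumerate(symb):
        if t in CLASS:                       # operator or endmarker
            cls, assoc = CLASS[t], ASSOC[CLASS[t]]
        else:                                # operand: class q + 1 (see footnote),
            cls, assoc = Q + 1, 1            # assumed right associative here
        prec.append((i + 1) * assoc          # positions are 1-based in the paper
                    + 2 * (n + 1) * cls
                    + 2 * (n + 1) * (Q + 2) * nest[i])
    return symb, prec
```

By Lemma 2.1 (and Lemma 2.3 below), the minimum PREC value within any (sub)expression then identifies the root of the corresponding subtree.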

Fischer [5] has presented an algorithm which converts the vector of precedence values associated with an expression into a reduced syntax tree (i.e. one with no parentheses as leaves and only operators as internal nodes). The basis for his algorithm (as also for ours in the following subsection) is the pair of lemmas below:

Lemma 2.1. Let some node SYMB_i occur in a subtree rooted by the operator SYMB_j in the syntax tree for some arithmetic expression. Then PREC(i) > PREC(j).

Let SR(i, PREC) be the largest set of consecutive integers, starting from position i in PREC, with values greater than PREC(i), moving right. Similarly define SL(i, PREC), moving left. Let R(i, PREC) be the index of the minimum value in the set SR(i, PREC) if the set is not empty; if it is empty, define R(i, PREC) to be zero. Similarly define L(i, PREC) with respect to SL(i, PREC). Finally, define R((i1, i2, ..., im), PREC) = (R(i1, PREC), R(i2, PREC), ..., R(im, PREC)), and similarly for L.

Lemma 2.2. (i) If SYMB_i is an operator other than $, then R(i, PREC) (L(i, PREC)) is the index of the node that roots the right (left) subtree of that operator if the subtree is non-null, and 0 otherwise. (ii) If SYMB_i = $ is the jth $ in SYMB and i < length(SYMB), then R(i, PREC) is the root of the jth expression in SYMB.

¹ Identifiers are assigned a priority of q + 1.
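For reference, the scans R and L can be stated directly in sequential form (this sketch, including the function names, is ours and is only a specification of the definitions above, not the parallel method developed in Section 2.2; positions are 1-based as in the paper):

```python
def R_scan(prec, i):
    """R(i, PREC): index of the minimum value in the maximal run of positions
    immediately to the right of i whose values exceed prec[i-1]; 0 if the run
    is empty.  Sequential reference version only."""
    best, j = 0, i + 1
    while j <= len(prec) and prec[j - 1] > prec[i - 1]:
        if best == 0 or prec[j - 1] < prec[best - 1]:
            best = j
        j += 1
    return best

def L_scan(prec, i):
    """L(i, PREC): the symmetric scan moving left from position i."""
    best, j = 0, i - 1
    while j >= 1 and prec[j - 1] > prec[i - 1]:
        if best == 0 or prec[j - 1] < prec[best - 1]:
            best = j
        j -= 1
    return best
```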


As a consequence of the above, we observe that in order to convert the vector PREC of precedence values into a syntax tree, only the R and L values need to be computed. The algorithm to be presented next achieves this.

2.2. Syntax tree construction

We first state a few results (without proof, as they follow directly from the definition of the function PREC and Lemmas 2.1 and 2.2).

Lemma 2.3. The element with minimum precedence value in any expression (subexpression) forms the root of the parse tree associated with the expression (subexpression).

Lemma 2.4. If an expression properly nests a parenthesized subexpression, the minimum precedence value in the expression will never lie within the subexpression.

Lemma 2.5. The operators to the immediate left and right of a parenthesized subexpression will have precedence values less than any value within the subexpression.

Lemma 2.6. An element outside a parenthesized subexpression can have as a direct descendant only the element with minimum precedence value within the subexpression.

As a consequence of Lemmas 2.3 and 2.4 above, all nested subexpressions may be bypassed altogether in the search for the root of the subtree at a given nesting level. Further, Lemma 2.5 assures us that the direct descendants of any element within a subexpression will always lie within the subexpression (as the right or left scan from any point within the subexpression terminates, at the latest, at the element just outside the right or left parenthesis). Thus, subtrees corresponding to parenthesized subexpressions can be constructed independently of the elements outside their delimiting parentheses. Finally, Lemma 2.6 allows us to conclude that each such subtree is connected to the rest of the tree only through its root. Thus, only the precedence value of the root of a subtree for a parenthesized subexpression is necessary for the computation of the subtree corresponding to the immediate outer nesting level. The above facts lead to an efficient algorithm, whereby the subtrees for expressions at every nesting level can be computed in parallel if all the elements at a single level are linked together, each subexpression immediately nested within a level being replaced by a single precedence value (viz. that associated with the root of the corresponding subtree). The initial steps of our algorithm (based on the discussion above) are given below.

Algorithm Parallel-Parse.
Input: INPUT = the input vector containing the arithmetic infix expression, SYMB = the vector of symbols corresponding to the input expression with all parentheses removed, PREC = the vector of precedence values associated with SYMB; both these vectors can have length at most n.
Output: R and L, vectors of length at most n containing the indices of the right and left children of elements in SYMB.

Step 1. Extract the left and right parentheses from INPUT and sort the set of left and right parentheses according to nesting level. Furnish each left parenthesis (l.p.) with the index of its matching right parenthesis (r.p.) and vice versa.

Step 2. For each element in INPUT, establish a link to the next element on its right and to the previous element on its left. The right (left) link is set to -1 for the rightmost (leftmost) element.
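The paper performs the parenthesis sorting of Step 1 in parallel (it is charged O(log²n) time in Theorem 2.14); purely as an illustration of the information Steps 1 and 2 produce, a sequential sketch (with our own names) is:

```python
def match_parens(input_tokens):
    """Step 1 (sketch): pair up parentheses and record the nesting level of
    every element of INPUT.  Returns a map index-of-'(' <-> index-of-')'
    (both directions) and the vector of nesting levels."""
    match, stack, level, depth = {}, [], [], 0
    for i, t in enumerate(input_tokens):
        if t == '(':
            depth += 1
            stack.append(i)
        level.append(depth)
        if t == ')':
            j = stack.pop()
            match[j], match[i] = i, j
            depth -= 1
    return match, level

def initial_links(input_tokens):
    """Step 2 (sketch): doubly linked list over INPUT; -1 marks the two ends."""
    n = len(input_tokens)
    right = [i + 1 if i + 1 < n else -1 for i in range(n)]
    left = [i - 1 for i in range(n)]        # left[0] is already -1
    return left, right
```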


Fig. 1. Minimum precedence value computation. (After the parallel traversal, the minimum precedence is available in the l.p.)

Step 3. Establish links across parenthesized subexpressions using the information obtained in Step 1.

Example 2.7. Let INPUT = $ ( ( ( A + B ) ) + C ) + D $. After Step 3 the right links connect the symbols at each nesting level, bypassing the parenthesized subexpressions nested within that level. (One can similarly deduce the positions of the left links.) Note that the second l.p. and r.p. are redundant. At the end of this step all symbols at a given nesting level have been linked.

Step 4. Find the minimum precedence value at each nesting level, and its index in SYMB. All nesting levels are considered in parallel, and a parallel linked-list traversal algorithm is used at each level. At each step in this traversal, every element compares itself with the one to which it is currently linked. The minimum precedence value is finally stored in the first element. Figure 1 illustrates the method. The minimum precedence value is first stored in the left parenthesis corresponding to the subexpression. It is then copied into the matching right parenthesis. The position of the minimum precedence value is also easily computed while computing the value itself.

Step 5. Remove the redundant l.p.'s (r.p.'s) by establishing links across them to the right (left) (for details refer to the algorithm in [13]).

Step 6. Store the minimum precedence value (and the corresponding index) of a parenthesized subexpression (which value is currently in the left and right parentheses by Step 4) into the operator just outside the left or right parenthesis. Thus, each parenthesized subexpression is 'represented' by the precedence value of the root of its subtree. Reestablish all links to link elements at the same level.

We first illustrate the above steps with a simple example.
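Before turning to the example, the recursive-doubling traversal of Step 4 can be pictured with the following sketch (ours; a sequential simulation in which the inner loop stands for "for all elements in parallel", so that O(log n) rounds suffice on the SIMD machine):

```python
def level_minimum(prec, right_link, first):
    """Sketch of Step 4 for one nesting level.
    prec[i]       -- precedence value of element i (use a large dummy for parentheses)
    right_link[i] -- next element at the same level, or -1 at the end of the level
    first         -- first element of the level (e.g. the l.p. of the subexpression)
    Returns (minimum value, its index)."""
    n = len(prec)
    best_val, best_idx, link = list(prec), list(range(n)), list(right_link)
    for _ in range(max(1, n.bit_length())):          # O(log n) pointer-jumping rounds
        new_val, new_idx, new_link = list(best_val), list(best_idx), list(link)
        for i in range(n):                           # "for all i in parallel"
            j = link[i]
            if j != -1:
                if best_val[j] < best_val[i]:        # compare with the linked element
                    new_val[i], new_idx[i] = best_val[j], best_idx[j]
                new_link[i] = link[j]                # jump the pointer
        best_val, best_idx, link = new_val, new_idx, new_link
    return best_val[first], best_idx[first]
```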

Example 2.8 uses INPUT = $ neg A * B * ( C ** D ** E + F - G ) - H $. The vectors INPUT, POS and PREC are (parentheses carry no POS or PREC entry):

INPUT  POS  PREC
$      1    -1
neg    2    78
A      3    231
*      4    110
B      5    233
*      6    108
(      -    -
C      7    501
**     8    426
D      9    503
**     10   428
E      11   505
+      12   292
F      13   507
-      14   290
G      15   509
)      -    -
-      16   60
H      17   245
$      18   -18

Step 1. After bypassing the parenthesized subexpression, the expression is divided into the following segments:

nesting level 0:  $ neg A * B * - H $
nesting level 1:  ( C ** D ** E + F - G )

Step 2. After computing the minimum precedence value at each level, it is stored in the l.p. and r.p.; for level 1 this minimum is 290, at index 14, while the level 0 minimum (60, at index 16) is held in the first element.

Step 3. The minimum precedence value is copied from the l.p. (r.p.) into the operator next to it, here into the * at index 6 and the - at index 16.

Fig. 2. Computation and storage of minimum precedences at each level.

Now the only element in the parenthesized subexpression that enters the computation of the syntax tree at level 0 corresponds to a precedence value of 290, and this value (and its position) has been stored in the operators * and - as shown above. Hence the computation of the syntax trees at level 0 and level 1 may be carried out in parallel.

Example 2.8. See Fig. 2.

We are now in a position to compute the syntax tree for all nesting levels in parallel.

Definition 2.9. A subexpression of an expression is said to be homogeneous if it contains operators belonging to only a single priority class and no nested subexpressions.

The results stated below lead to the construction of the syntax subtrees at each level. As these follow directly from the results of Subsection 2.1, they are stated without proof.

Lemma 2.10. The sequence of precedence values of the operators in a homogeneous expression (or subexpression) increases (decreases) monotonically from left to right if the operator is right (left) associative.

Lemma 2.11. Let e1 be a homogeneous subexpression (in operator class OP_i) of an expression e, such that the precedence of operators in OP_i is greater than all other operator precedences in the expression e. Then the direct descendants of any element within e1 in the syntax tree will always lie within e1. Further, the subtree corresponding to e1 is connected to the rest of the tree only through its root.

As a consequence of Lemma 2.10 we note that the subtree corresponding to a homogeneous subexpression has a standard form, corresponding to Fig. 3 (Fig. 4) if the operator is left (right) associative. As we shall prove later, such subtrees can be constructed in constant time. Once such subtrees are constructed, Lemma 2.11 allows us to treat the corresponding subexpressions in a manner similar to parenthesized subexpressions, representing each such subtree by its root for computing the syntax subtrees for the homogeneous subexpressions in operators of the immediately lower priority class. Thus homogeneous subexpressions in a single operator class at all levels may be considered in parallel. The algorithm Parallel-Parse may thus be completed using the following steps:

Step 7. Set the R and L positions corresponding to all identifier symbols to 0 (as these correspond to leaves in the syntax tree).

Step 8. Starting with the homogeneous expressions in the highest operator class and going down to the lowest operator class, construct standard subtrees for each homogeneous expression in a single class in parallel (the method of constructing the standard subtrees for the homogeneous expressions is simple and will not be repeated here since it is detailed in Lemma 2.13). Each time the processing of a new operator class is completed, mark all nodes in the standard subtrees created in the previous step as inactive, except the roots, which enter the computations for the next priority level, and remove them from the linked list. Such marking can be done by inspecting all elements in parallel. An element is marked inactive if all its immediate children have been computed and the element itself has become the child of some other element. There are several book-keeping details which are not mentioned here for the sake of brevity. The complete algorithm is provided in [13].

[Figs. 3 and 4 show the standard forms: a chain of operator (op) nodes with identifier (ID) leaves, leaning left for a left associative class and right for a right associative class.]

Fig. 3. Tree structure for an expression involving left associative binary operators of a single class.

Fig. 4. Tree structure for an expression involving right associative binary operators of a single class.

We now illustrate Steps 7 and 8, continuing with the same example used to illustrate the previous steps.

Example 2.8 (continued). Step 4. We start with homogeneous subexpressions with operators in priority class 4. There is only one such subexpression, C ** D ** E, and it appears at level 1. Thus the following subtree is created, as the operator class is right associative (indices are shown within parentheses):

     **(8)
    /     \
  C(7)   **(10)
        /      \
      D(9)    E(11)

Step 5. The symbols C, D, E, and the second ** are marked inactive and removed from the list. The remaining active elements are

level 0:  $ neg A * B * - H $
level 1:  **(8) + F - G

where, at level 0, the second * and the - both have (14, 290) stored as the index and precedence value of the root of the subtree corresponding to the nested subexpression, and the first $ holds (16, 60).

Step 6. The next operator class is class 3. There is one homogeneous subexpression, A * B *, in this class at level 0 (the root of the subtree corresponding to the parenthesized subexpression is stored in the second *). * is left associative. The subtree created is

      *(6)
     /
   *(4)
  /    \
A(3)    B(5)

Step 7. The identifiers A, B, and the first * are marked inactive and removed from the list. The remaining symbols are

level 0:  $ neg *(6) - H $     (the first $ holds (16, 60); *(6) and - hold (14, 290))
level 1:  **(8) + F - G

Step 8. We now come to operator class 2, viz. neg (unary). The subtree created is

neg(2)
     \
     *(6)

The subtrees created till this point are

neg(2)
     \
     *(6)
     /
   *(4)
  /    \
A(3)    B(5)

     **(8)
    /     \
  C(7)   **(10)
        /      \
      D(9)    E(11)

Step 9. The operator * (index 6) is made inactive and is removed from the list. The lists now become

level 0:  $ neg - H $       (the first $ holds (16, 60))
level 1:  **(8) + F - G     (the ** holds (14, 290))

Step 10. The final operator class is of priority 1. The homogeneous subexpressions are - H at level 0 and + F - G at level 1. The subtrees created at this step are

    -(16)                 -(14)
   /     \               /     \
neg(2)   H(17)        +(12)    G(15)
                     /     \
                  **(8)    F(13)

Step 11. The elements neg, H, F, G, - (index 14), and ** (index 8) are marked inactive and removed. The remaining lists are

level 0:  $ - $
level 1:  nil

with (16, 60) still stored in the first $.

Step 12. The first $ already has the index 16 stored in it, and the element (-) corresponding to this value is the root of the whole expression. The syntax tree for the whole expression is complete at this point and is shown in Fig. 5. The L and R vectors are shown in Fig. 6.

[Fig. 5 shows the complete syntax tree: the root -(16) has children neg(2) and H(17); neg(2) has child *(6); *(6) has children *(4) and -(14); *(4) has children A(3) and B(5); -(14) has children +(12) and G(15); +(12) has children **(8) and F(13); **(8) has children C(7) and **(10); **(10) has children D(9) and E(11).]

Fig. 5. Syntax tree for Example 2.8.

INPUT  POS  PREC  L   R
$      1    -1    -   16
neg    2    78    0   6
A      3    231   0   0
*      4    110   3   5
B      5    233   0   0
*      6    108   4   14
(      -    -     -   -
C      7    501   0   0
**     8    426   7   10
D      9    503   0   0
**     10   428   9   11
E      11   505   0   0
+      12   292   8   13
F      13   507   0   0
-      14   290   12  15
G      15   509   0   0
)      -    -     -   -
-      16   60    2   17
H      17   245   0   0
$      18   -18   -   -

Fig. 6. L and R vectors for Example 2.8.

2.3. Complexity estimates

We first provide complexity estimates for some of the operations described in the algorithm.

Lemma 2.12. The minima of the precedence values of all subexpressions at each level in the expression can be found in O(log n) time.

Proof. The algorithm for finding the minimum is just a simulation of the recursive doubling scheme [7] performed in parallel for all levels. This has a known time complexity of O(log n). □

Lemma 2.13. Every operator in a homogeneous subexpression can determine its children in constant time.

Proof. We have to consider four cases corresponding to whether the operator is unary or binary, left or right associative. The arguments for left and right associative operators are symmetrical, so we consider only the right associative case.

Case 1: Right associative binary. The last operator in the homogeneous subexpression has to be treated separately. It has as its right child the element immediately to its right. It can determine this as soon as it knows that it is the last operator. An operator is the last one in a homogeneous subexpression if the second element on its right is (i) an operator of lower class than itself, or (ii) a right parenthesis. For all other operators within the subexpression, the right child is the next operator to the right. This is because of two facts:
(a) the right scan process from the operator (if it were executed) would always terminate at the element to the immediate right of the homogeneous subexpression,


(b) the minimum of the precedences of the elements covered by the right scan would be the precedence of the next operator, as the sequence monotonically increases to the right.
Thus, no right scan is necessary as the operator to the immediate right is always the right child, which can be found in two steps. The left child of every operator is the identifier (or the root of the subtree for the nested subexpression) to its left, and this can be found in one step.

Case 2: Right associative unary. The well-formedness of the expression guarantees that (a) no binary operator can follow a unary operator, and (b) no well-formed expression (which includes identifiers) can appear between two unary operators. Thus the homogeneous subexpression consists of consecutive unary operators terminated by a well-formed expression, which for purposes of our argument may be considered to be an identifier. Each unary operator has (by arguments similar to those of the previous case) as its right child the unary operator to its immediate right; the last unary operator has as its child the identifier at the right end. Thus the right child of every operator is determined in one step. Hence the lemma. □

Theorem 2.14. Algorithm Parallel-Parse constructs the parse tree from an expression of length n in O(log²n) time, using O(n) processors.

Proof. We recapitulate the steps of the algorithm below:

Step 1. Sorting of parentheses. This requires O(log²n) time [8].

Step 2. Establishing links across parenthesized subexpressions and linking all elements at the same level. This requires constant time.

Step 3. Computation of the root of the subtree for each parenthesized subexpression, i.e., the minimum precedence value at each level. This takes O(log n) time.

Step 4. Bypassing redundant left and right parentheses and compressing the vector. This consumes O(log n) time.

Step 5. Computation of left and right children. All homogeneous subexpressions for a given priority class are treated in parallel. The construction of the subtree for each homogeneous subexpression takes constant time (by Lemma 2.13). Once the subtree is constructed, all the elements corresponding to the nodes in the subtree are removed except for the root (which behaves like a leaf for the next class). Removal takes O(log n) time. When subtrees corresponding to homogeneous subexpressions for every operator class have been constructed, the tree is complete. Since there is a fixed number of priority classes, the time taken for this step is O(log n).

Summing up the times taken for the above steps, we obtain O(log²n) + O(1) + O(log n) + O(log n) + O(log n) = O(log²n), which gives the result. Also no step requires more than O(n) processors. □
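Returning to Lemma 2.13, the constant-time child determination for the binary cases can be pictured as follows (an illustrative sketch with our own names and a simplifying convention; the paper works directly on the linked SYMB vector, in which the element 'immediately to the right/left' may be the stored root of a bypassed subexpression rather than the adjacent position):

```python
def children_in_homogeneous_run(ops, right_assoc):
    """ops: positions of the operators of one homogeneous subexpression, left
    to right; operands (identifiers or roots of already built subtrees) are
    assumed to sit at the adjacent positions op-1 and op+1.
    Returns {operator position: (left child, right child)}."""
    out = {}
    for idx, op in enumerate(ops):
        if right_assoc:
            # left child: the operand to the immediate left; right child: the next
            # operator, except for the last operator, which takes the operand to
            # its immediate right (cf. Case 1 of the proof)
            left = op - 1
            right = ops[idx + 1] if idx + 1 < len(ops) else op + 1
        else:
            # mirror image for a left associative class: right child is the operand
            # to the immediate right; left child is the previous operator, except
            # for the first operator, which takes the operand to its immediate left
            right = op + 1
            left = ops[idx - 1] if idx > 0 else op - 1
        out[op] = (left, right)
    return out
```

For the run C ** D ** E of Example 2.8 (operators at positions 8 and 10), children_in_homogeneous_run([8, 10], right_assoc=True) gives {8: (7, 10), 10: (9, 11)}, in agreement with Fig. 6.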

3. Code generation

In this section, the generation of parallel code suitable for execution on an SIMD (shared memory model) machine is considered. We describe a scheme for the generation of an intermediate form for parallel code called vector quadruples, each of which may correspond to more than one machine instruction.

3.1. Vector quadruples

Vector quadruples are similar to the usual quadruples [1] except that the operands and the result are vectors.


Example 3.1. Consider the vector quadruple (+, A, B, C) where A = [100, 107, 150], B = [115, 145, 170], C = [124, 175, 164] are all vectors of length 3, stored starting from locations 1, 10 and 20 respectively. The vectors A and B contain the addresses of the operands of the + operator. After addition, the results are to be stored in the addresses contained in the vector C.

We use two kinds of vector quadruples in our algorithm. One of the form (SOP, A, B, C), where the operands are vectors of addresses, and the other of the form (AOP, A, B, C), where the operands are vectors of values. The latter type of quadruple is used while generating code for expressions involving array operands. We list the various quadruples below:
(i) (SOP, A, B, C), (AOP, A, B, C) where OP is any one of {+, -, *, /, **},
(ii) (SOP, A, , C), (AOP, A, , C) where OP is any one of {negate, move},
(iii) (Imove, A, , C) where A is a vector of constants and C a vector of addresses; this is an immediate mode instruction,
(iv) (Broadcast, N, V, C) where N = number of processors to be enabled during broadcasting (we assume that processors 0 to (N - 1) would be enabled for receiving the broadcast value), V = address of the value to be broadcast, and C = vector into which the value received due to broadcasting is to be deposited.
(Other types of broadcast instructions are also possible but we do not need them here.)
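To fix the intended semantics, an S-type quadruple can be modelled as below (this model, the flat memory and the sequential loop are ours; on the SIMD machine each elementwise operation is performed by a separate processor in one parallel step):

```python
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul,
       '/': operator.truediv, '**': operator.pow}

def execute_s_quadruple(quad, memory):
    """quad = (op, A, B, C) with A, B, C equal-length lists of addresses into
    'memory'; the elementwise results are stored at the addresses in C."""
    op, A, B, C = quad
    for a, b, c in zip(A, B, C):             # "for all elements in parallel"
        memory[c] = OPS[op](memory[a], memory[b])

# Example 3.1 in this toy model (addresses as dictionary keys):
memory = {100: 2, 107: 3, 150: 4, 115: 10, 145: 20, 170: 30,
          124: None, 175: None, 164: None}
execute_s_quadruple(('+', [100, 107, 150], [115, 145, 170], [124, 175, 164]),
                    memory)
# memory[124], memory[175], memory[164] now hold 2+10, 3+20, 4+30
```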

3.2. Examples of code generation

Before presenting the algorithms for code generation, we give examples of the kind of parallel code generated. First we consider expressions containing only scalars.

Example 3.2. Refer to Figs. 7 and 8.

[Fig. 7 shows the syntax tree: the root + (index 8, temporary T2) has left subtree * (4, T2) over a (1) + (2, T2) b (3) and c (5) - (6, T7) 3 (7, T7), and right subtree * (12, T10) over d (9) + (10, T10) e (11) and 5 (13, T13) - (14, T13) f (15).]

Fig. 7. Syntax tree for the expression (a + b) * (c - 3) + (d + e) * (5 - f). (The indices of the elements in the SYMB vector are shown on the left of the element; the temporaries required at various nodes are shown in parentheses on the right of the element.)

(Imove, [3, 5], , [T7, T13]);        move constants into temporaries
(S+, [a, d], [b, e], [T2, T10]);     add a & b, d & e
(S-, [c, T13], [T7, f], [T7, T13]);  subtract 3 from c, f from 5
(S*, [T2, T10], [T7, T13], [T2, T10]);  multiply
(S+, [T2], [T10], [T2]);             add

Fig. 8. Code generated for the expression in Fig. 7. (The vector operands are shown directly in the instructions for the purposes of clarity, instead of their addresses.)


Next we provide an example of the code generated for expressions containing array operands. We do not allow parts of arrays to be used in expressions (for example, if A is a 3-dimensional array, then A[I, J] is a single-dimensional array; this is not permitted, the whole of A has to be used), because the semantic analysis and the code generation become more complex. This will be the subject of future work [10]. We assume that the number of dimensions, base address and the size of the array are stored with the array identifier (supplied by the lexical analyzer, say). We also assume that arrays are FORTRAN-like, i.e., the index of the array starts with 1.
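For concreteness, the per-array information assumed above might be pictured as a symbol-table entry of the following (hypothetical) shape; the field names are ours:

```python
from dataclasses import dataclass

@dataclass
class ArrayInfo:
    """Hypothetical symbol-table entry for an array identifier: the attributes
    the code generator is assumed to receive from the lexical analyzer."""
    ndims: int          # number of dimensions
    base_address: int   # base address of the (whole) array
    size: int           # total number of elements; indexing starts at 1

# e.g. a 3 x 4 array stored from address 1000:
c_info = ArrayInfo(ndims=2, base_address=1000, size=12)
```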

Example 3.3. See Figs. 9 and 10.

[Fig. 9 shows the syntax tree: the root * (index 4, temporary T2) has left subtree a (1) + (2, T2) b (3) and right subtree c (5) + (6, T6) 5 (7, T7).]

Fig. 9. Syntax tree for the expression (a + b) * (c + 5), where a, b and c are all arrays. (Two array operations at the same level cannot, in general, be performed in parallel: one or both operands may be arrays. The temporaries T2 and T6 are array temporaries and T7 is a scalar temporary.)

(Imove, [5], , [T7]);     move the constant 5 into T7
(A+, a, b, T2);           add arrays a & b
(Broadcast, Nc, T7, T6);  Nc = size of c
(A+, c, T6, T6);          add 5 (from T6) to c
(A*, T2, T6, T2);         multiply

Fig. 10. Code generated for the expression in Fig. 9.

3.3. Code generation algorithm

In this section, an algorithm for the generation of vector quadruples for arithmetic expressions is informally described. The syntax tree is assumed to be available. We do not consider the semantic type checking of the expression, which can be easily performed during a bottom-up tree traversal by checking the types of the nodes and their children. The detailed algorithms for top-down and bottom-up traversals are given in [14].

The code generation procedure uses an algorithm Temporary-Allocator, which is supplied the description of the syntax tree in terms of the vectors SYMB, L and R, and produces as output a vector RESULT which stores the names of the temporaries needed. Clearly, the operators in SYMB need temporaries to store results; our code generation scheme also assigns temporaries to constants. The detailed algorithm is provided in [14]; we only present an informal description here. Initially all constants are assigned temporaries. Next, a bottom-up traversal of the tree assigns temporaries level by level, whenever required, in parallel. A new temporary (of either vector or scalar type, as dictated by the operation) is assigned to an internal node only if neither of its children has been allocated a temporary of the same type. If this is not the case, then whenever there is a choice, the temporary corresponding to the left child is used. The output of this algorithm for the expressions in Examples 3.2 and 3.3 is shown in Figs. 11 and 12 respectively.

RESULT = [a, T2, b, T2, c, T7, T7, T2, d, T10, e, T10, T13, T13, f]; all the temporaries are scalar temporaries.

Fig. 11. The RESULT vector for the expression in Fig. 7.

RESULT = [a, T2, b, T2, c, T6, T7]; T2 and T6 are vector temporaries and T7 is a scalar temporary.

Fig. 12. The RESULT vector for the expression in Fig. 9.
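A sequential sketch of the allocation rule just described is given below; the code, names and the scalar/vector flags are ours, and the paper's version performs the level-by-level passes in parallel:

```python
def allocate_temporaries(symb, left, right, is_constant, is_array_op):
    """Sketch of the Temporary-Allocator rule described above (our names).
    symb, left, right -- 1-based vectors describing the syntax tree (0 = no child)
    is_constant       -- mapping: node index -> True for constant leaves
    is_array_op       -- mapping: node index -> True if the node produces an array
    Returns RESULT: identifiers keep their own name, other nodes a temporary."""
    n = len(symb)
    result = [None] * (n + 1)               # 1-based; result[0] unused
    kind = [None] * (n + 1)                 # 'scalar', 'vector', or None
    counter = [0]

    def fresh():
        counter[0] += 1
        return 'T%d' % counter[0]

    def visit(i):                           # bottom-up (post-order) traversal;
        if i == 0 or result[i] is not None: # the paper does this level-wise in parallel
            return
        l, r = left[i - 1], right[i - 1]
        visit(l); visit(r)
        if l == 0 and r == 0:               # leaf
            if is_constant.get(i):          # constants get temporaries up front
                result[i], kind[i] = fresh(), 'scalar'
            else:                           # identifier: addressed by its own name
                result[i] = symb[i - 1]
            return
        want = 'vector' if is_array_op.get(i) else 'scalar'
        for child in (l, r):                # reuse a child's temporary of the same
            if child and kind[child] == want:   # type, preferring the left child
                result[i], kind[i] = result[child], want
                return
        result[i], kind[i] = fresh(), want  # otherwise allocate a new temporary

    for i in range(1, n + 1):
        if symb[i - 1] != '$':
            visit(i)
    return result[1:]
```

On the tree of Fig. 7 this reproduces the reuse pattern of Fig. 11, although the sketch numbers its temporaries differently.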

Once temporaries have been allocated, the algorithm Code-Generator produces vector quadruples as output, when supplied with the vectors SYMB, L, R, RESULT and the variable SIZE containing the size of the arrays used in the expression. The code generator initially generates a vector quadruple to load all constants into their assigned temporaries. Following this, the nodes are examined level-wise. For scalar operators a single vector quadruple is generated for all nodes labelled by the same operator. Thus for the case involving only scalar operators, the number of vector quadruples generated at a level is equal to the number of operator types at that level. The relevant addresses for the operands and the result are extracted from RESULT. For array expressions, each array operator node is processed separately, as a separate instruction has to be generated for each array operator node. The temporaries required are again obtained from the vector RESULT. The case when both operands are arrays is simple, since a single quadruple needs to be generated (see Example 3.3). When one of the operands is a scalar and the other an array, a broadcast instruction is to be generated (to create a vector temporary of the size of the array, all of whose elements contain the value of the scalar), followed by an array add quadruple (see Example 3.3). Both the Temporary-Allocator and Code-Generator algorithms run in O(n) time on O(n) processors. A detailed algorithm for code generation is provided in [14].
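The following sketch (ours, and sequential; the grouping of scalar nodes at a level into one quadruple mirrors the description above, while the bookkeeping details of [14] are omitted) suggests how the level-wise pass could emit quadruples for the case of binary scalar operators only:

```python
from collections import defaultdict

def generate_scalar_quadruples(symb, left, right, result, level, is_constant):
    """Sketch of Code-Generator for expressions whose operators are binary and
    whose operands are scalars.  symb/left/right are 1-based vectors as before;
    result is the RESULT vector of Temporary-Allocator (0-based, aligned with
    symb); level maps a node index to its height in the tree (leaves = 0);
    is_constant marks constant leaves.  All names are ours."""
    n = len(symb)
    quads = []
    # one immediate-mode quadruple loads every constant into its temporary
    consts = [i for i in range(1, n + 1) if is_constant.get(i)]
    if consts:
        quads.append(('Imove', [symb[i - 1] for i in consts], None,
                      [result[i - 1] for i in consts]))
    # operator nodes are examined level by level; at each level a single
    # vector quadruple is emitted per operator type
    by_level = defaultdict(lambda: defaultdict(list))
    for i in range(1, n + 1):
        if symb[i - 1] != '$' and left[i - 1] != 0 and right[i - 1] != 0:
            by_level[level[i]][symb[i - 1]].append(i)
    for lev in sorted(by_level):
        for op, nodes in by_level[lev].items():
            quads.append(('S' + op,
                          [result[left[m - 1] - 1] for m in nodes],
                          [result[right[m - 1] - 1] for m in nodes],
                          [result[m - 1] for m in nodes]))
    return quads
```

Applied to the tree of Fig. 7 (with the inner + and - nodes at level 1, the * nodes at level 2 and the root + at level 3), this yields the five quadruples of Fig. 8.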

4. Conclusions

A new parallel algorithm for the construction of a parse tree from an arithmetic infix expression has been presented. The algorithm is based on a result due to Fischer [5] and is appropriate for a shared memory model SIMD computer [3,4,6-8] with no read and write conflicts permitted. It requires O(log²n) time using O(n) processors, where n is the expression length. The parsing algorithm presented is powerful enough to handle function calls, relational and logical operators, and array indexing operations. The technique described by Dekel and Sahni [4] derives the parse tree via the postfix form and is a modification of the sequential infix-to-postfix conversion algorithm. The time complexity of their algorithm is the same as that of the algorithm presented here. A new parallel intermediate code called vector quadruples has been introduced. An algorithm to generate vector quadruples for arithmetic expressions containing simple and array variables has been described. This algorithm uses O(n) processors, runs in O(n) time, and is based on parallel bottom-up traversal of the syntax tree of an arithmetic expression. Future research efforts will be directed towards the development of parallel algorithms for parsing, semantic analysis and code generation for block structured languages [10,11,12,14].

Acknowledgment We thank the anonymous referee whose suggestions considerably improved the presentation of the paper.


References

[1] A.V. Aho and J.D. Ullman, Principles of Compiler Design (Addison-Wesley, Reading, MA, 1978).
[2] R.P. Brent, The parallel evaluation of general arithmetic expressions, J. ACM 21 (2) (1974) 201-206.
[3] E. Dekel and S. Sahni, Binary trees and parallel scheduling algorithms, in: CONPAR 81, Lecture Notes in Computer Science 111 (Springer, New York, 1981) 480-492.
[4] E. Dekel and S. Sahni, Parallel generation of postfix and tree forms, ACM Trans. Programming Languages and Systems 5 (3) (1983) 300-317.
[5] C.N. Fischer, On parsing and compiling arithmetic expressions on vector computers, ACM Trans. Programming Languages and Systems 2 (2) (1980) 203-224.
[6] M.J. Flynn, Very-high-speed computing systems, Proc. IEEE 54 (1966) 1901-1909.
[7] P.M. Kogge and H.S. Stone, A parallel algorithm for the efficient solution of a general class of recurrence equations, IEEE Trans. Comput. 22 (1973) 786-793.
[8] F.P. Preparata, New parallel-sorting schemes, IEEE Trans. Comput. 27 (7) (1978) 669-673.
[9] R.M. Schell, Jr., Methods for constructing parallel compilers for use in a multiprocessor environment, Ph.D. Thesis, Computer Science Department, University of Illinois, Urbana, IL, 1979.
[10] Y.N. Srikant and P. Shankar, Parallel algorithms for code generation on SIMD computers, in preparation.
[11] Y.N. Srikant and P. Shankar, Parallel algorithms for restructuring arithmetic expressions for parallel evaluation, submitted for publication.
[12] Y.N. Srikant and P. Shankar, A sorting and counting approach to parallel parsing, to appear.
[13] Y.N. Srikant and P. Shankar, A new parallel algorithm for parsing arithmetic infix expressions, Technical Report, School of Automation, Indian Institute of Science, Bangalore, 1985.
[14] Y.N. Srikant, Ph.D. Thesis.