INFORMATION SCIENCES 41, 77-94 (1989)

Towards a Unified Model for Performance Evaluation of Concurrency Control

AHMED K. ELMAGARMID and ABDELSALAM A. HELAL

Department of Computer Sciences, Purdue University, West Lafayette, Indiana 47907

ABSTRACT

Research in the area of concurrency-control performance evaluation has been extensive in the past few years. Unfortunately, conclusions arrived at by many of the researchers were incomparable and at times contradictory, because of the different assumptions and performance models used. Therefore, a unified and complete performance model is needed. We propose a framework for concurrency-control performance evaluation in single-site databases. Areas of performance studies are classified, and a performance model is suggested. Finally, we give a complete example of a performance study for the dynamic two-phase locking algorithm using this framework. Two effects were studied: those of granularity and the read-write mix.

1. INTRODUCTION

Despite the very large number of concurrency-control algorithms for both single-site and distributed databases, there does not exist any formal, quantitative, and general method for analyzing and comparing their performance. The reason for this stems from the following facts stated in [22]:

(1) The existence of an enormous number of algorithms, which tends to spread the research effort.

(2) The difficulty of capturing the logical characterization of the algorithms by traditional theoretical modeling: concurrency-control algorithms seem to require an analysis of queueing systems where the service-time distribution of any server depends on the state of other servers. Such problems are not known to have closed-form solutions for queueing networks of general topology.


(3) The incompatibility of assumptions stated by various authors, and also the difficulty in testing their validity due to the lack of commercial implementations of many algorithms.

(4) Lack of clarity about what each researcher did, especially in cases of simulation-based studies.

However, a number of performance studies were conducted in the last few years. Different assumptions and performance models were used in each of these studies, thereby leading to incomparable and in many cases contradictory results. Therefore, future performance studies should be based on a unified framework.

In this paper, a framework for concurrency-control performance evaluation in single-site databases is proposed. In Section 2, an overview of database systems and performance studies is given, followed by the proposed performance-evaluation model in Section 3. In Section 4, an example of a performance study of dynamic two-phase locking using the proposed framework is given, and finally some concluding remarks are discussed in Section 5.

2. OVERVIEW

2.1. DATABASES AND CONCURRENCY CONTROL

A single-site database system (Figure 1) consists of three components: the transaction component, the database management system (DBMS) component, and the database component. Each component has a set of logical and/or physical parameters which constitute the overall system characteristics. User transactions interact with the DBMS by issuing read and write requests for items stored in the database. The concurrent execution of transactions can result in data misuse (lost updates and dirty reads [8]). Therefore, database systems include a subsystem called the concurrency controller to control concurrent accesses to the shared database. A concurrency-control algorithm (CCA) can be designed using one of three approaches: locking [16, 21], timestamping [5, 6, 18, 25], or the optimistic approach [3, 4, 15]. All of these approaches can be implemented to allow for multiple versions of the database [4, 11, 18]. It is assumed that the reader is familiar with at least one algorithm in each of these approaches.

Fig. 1. Components of a single-site database.

2.2. OVERVIEW OF PERFORMANCE STUDIES

There are five areas to which performance studies should be directed. In the remainder of this section, each area is explained in detail.

System Throughput and Transaction Response Time

These are two indices of special importance. Minimizing the average transaction response time is of special interest to the user, whereas maximizing the system throughput is of special interest to the system manager. Most researchers have directed their efforts to this area.

Algorithm Efficiency and Asymptotic Behavior

These studies investigate the useless percentile of storage, CPU, I/O, and (for the distributed case) intersite communications. This is called algorithm overhead, and it can be viewed as a cost factor which affects the system throughput and the transaction response time. Many researchers have directed their research to this area [9, 19, 20].

The Effect of Varying the Transaction Component Parameters

The transaction component parameters affecting system throughput, transaction response time, and algorithm efficiency are listed below:

(1) Number of transactions in the system (load or degree of multiprogramming).
(2) Transaction size in terms of the number of read and write operations.
(3) CPU resource requirements.
(4) Read-write mix (ratio of query to update).
(5) Locality of requests.


(6) Read-write operations sequence (either interleaved or clustered into a read step followed by a write step).
(7) Time between requests.
(8) Transaction semantics (for a formal definition of transaction semantics, see [7]).

Each of the above listed parameters has an impact on the overall system performance. Most researchers have investigated the impact of parameters (1), (2), and (3). In these studies, the degree of multiprogramming ranged from 5 to 500 transactions. Both small and large transaction classes were used to represent different transaction sizes [10]. The effect of the fourth parameter (read-write mix) was studied in [11, 14, 17]. There is no reported study addressing the effect of (5), (6), (7), and (8).

The Effect of Varying the DBMS Component Parameters

The DBMS component parameters affecting system throughput, transaction response time, and algorithm efficiency are listed below:

(1) The concurrency-control algorithm.
(2) The recovery technique.
(3) Other modules such as the data access method, the security module, and the query optimization module.

The CCA has a great impact on performance. Most of the research in this area has been focused on two-phase locking, basic timestamp ordering, serial validation, and multiversion algorithms. Recovery techniques affect the way CCAs are designed and consequently the overall system performance. Although the effect of the various recovery techniques can be studied separately, the effect of each technique on the CCA should be investigated. The effect of the two-phase commit on the dynamic two-phase locking and the serial validation algorithms was studied in [14].

The Effect of Varying the Database Component Parameters

The database component parameters affecting system throughput, transaction response time, and algorithm efficiency are listed below:

(1) The data model (relational, hierarchical, network, etc.).
(2) The physical database size (given in terms of physical units called items).
(3) The granularity, as defined in Section 4.2 (a granule may contain more than one item).


Like the transaction semantics, the data model has no direct effect on the performance, but a CCA can be designed to utilize a specific data model for better performance. Granularity is a critical parameter which dramatically affects the overall system performance, especially when the system is loaded. Ries and Stonebraker [19, 20] have investigated the effect of "locking" granularity on the performance of a DBMS. However, granularity should also be studied for other, nonlocking algorithms.

3. THE PERFORMANCE EVALUATION MODEL

In this section, a model for evaluating the performance of concurrency-control algorithms is given. The model consists of four components: the transaction-workload description, the system structure and parameters, the concurrency-control algorithm, and the performance indices. In the remainder of this section, these components are described and a set of model assumptions are stated.

3.1. TRANSACTION-WORKLOAD DESCRIPTION

The transaction workload can be specified using the following parameters:

(1) The number of transactions in the system (degree of multiprogramming) or, in the case of open-system modeling, the average arrival rate and the distribution of the arrival process (usually assumed to be Poisson).
(2) The transaction size, which is the number of read and write operations in a transaction. This can be either deterministic (fixed size, with the possibility of multiple transaction classes) or probabilistic (usually uniformly distributed over [min_size .. max_size]).
(3) The amount of CPU time required by a transaction. This can be either neglected or assumed to be exponentially distributed with some mean.
(4) The read-write mix, which can be either deterministic [e.g., 80% read and 20% write] or probabilistic [e.g., prob(next request is read) = 0.2]. It is possible to have different transaction classes that range from pure retrieval to a high level of update.
(5) The sequence of read-write operations, which can be either interleaved or clustered.
(6) The distribution of the time between access requests. This is not known, but can be assumed uniform.


(7) The transaction semantics, which can be described by redefining the read and write operations as pairs (P, Operation), where P is some predicate that, when it evaluates to True, causes the read or write operation to be performed. We give no specification for the predicate P. (A sketch of a workload generator driven by these parameters follows this list.)
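As an illustration of how these parameters pin down a workload, the following sketch (ours, not the paper's; all names and default values are illustrative assumptions) draws transactions according to parameters (2)-(5):

```python
# Hypothetical workload generator covering parameters (2)-(5) above.
import random

class WorkloadGenerator:
    def __init__(self, min_size=5, max_size=15, read_prob=0.8,
                 mean_cpu=0.01, interleaved=True, seed=0):
        self.min_size, self.max_size = min_size, max_size  # (2) size range
        self.read_prob = read_prob      # (4) probabilistic read-write mix
        self.mean_cpu = mean_cpu        # (3) mean of exponential CPU demand
        self.interleaved = interleaved  # (5) operation sequence
        self.rng = random.Random(seed)

    def next_transaction(self, db_size):
        size = self.rng.randint(self.min_size, self.max_size)
        ops = [('r' if self.rng.random() < self.read_prob else 'w',
                self.rng.randrange(db_size)) for _ in range(size)]
        if not self.interleaved:
            ops.sort(key=lambda op: op[0])  # cluster reads before writes
        cpu = self.rng.expovariate(1.0 / self.mean_cpu)
        return {'ops': ops, 'cpu': cpu}
```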

3.2. THE SYSTEM STRUCTURE

The system structure is depicted in Figure 2. This structure can be used in modeling both open and closed systems. When a new transaction arrives, its workspace is initialized and it is put on the concurrency-controller queue (CCQ). The concurrency controller (CC) then serves the transaction by (1) allowing it into the database, (2) blocking it in the block queue (BQ), (3) aborting it, or (4) committing it. In the case of a nonblocking CCA, the BQ is not used. If the transaction must wait for an unavailable resource, it is put on the BQ. When the resource becomes available, the transaction is dequeued and put back on the CCQ. Once allowed into the system, a transaction with a read request is placed on the query queue (QQ). In the case of a write request, however, no disk write operation takes place, and the new value is updated only in the transaction's workspace. When a transaction finishes, the CC puts it on the update queue (UQ) so that all its updates may be committed. The UQ has higher priority than the QQ. If a transaction is aborted, the CC resets or modifies its workspace, possibly delays it, and then resubmits it to the system through the transaction initiator (TI). When a new transaction arrives at the TI, the latter submits a small setup transaction with zero or one read operation. This setup transaction is guaranteed to commit with no restarts, and it represents a setup cost for newly arriving or restarted transactions. When the setup transaction commits, the original transaction is considered by the CC.
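The following minimal sketch (an illustration under our own assumptions, not the authors' simulator) captures the queue topology just described, including the UQ-before-QQ priority rule:

```python
# Hypothetical rendering of the Figure 2 queues as plain FIFO deques.
from collections import deque

class SystemStructure:
    def __init__(self):
        self.ccq = deque()  # concurrency-controller queue
        self.bq = deque()   # block queue (unused for nonblocking CCAs)
        self.qq = deque()   # query queue: disk reads of admitted transactions
        self.uq = deque()   # update queue: commit-time writes

    def next_disk_request(self):
        # The UQ has higher priority than the QQ.
        if self.uq:
            return self.uq.popleft()
        return self.qq.popleft() if self.qq else None

    def unblock(self, txn):
        # When its resource becomes available, a blocked transaction
        # is dequeued from the BQ and put back on the CCQ.
        self.bq.remove(txn)
        self.ccq.append(txn)
```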

3.3. THE CONCURRENCY-CONTROL ALGORITHM

The concurrency-control algorithm is specified by a piece of actual code. Since this code is shared by different transactions, it is considered a critical section and is modeled by a single-server queueing model with FIFO discipline and with service time determined by one of two methods. First, a counter can be used to count the statements actually executed while calling this code. This count gives an exact service time for the transaction being served, which can then be used as a predicted service time for the next transaction to be served. Second, using the first method, a simulation can be conducted to compute the average service rate, which can then be used in subsequent experiments.
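A minimal sketch of the first method, under the assumption that the CC code reports each executed statement through a counter callback (the interface and names are ours):

```python
# Hypothetical statement counter for the CC critical section.
STMT_TIME = 100e-9  # one instruction = 100 ns, as assumed in Section 4.1(b)

class InstrumentedCC:
    def __init__(self):
        self.count = 0
        self.last_service_time = 0.0  # predictor for the next transaction

    def step(self, n=1):
        # Called once per statement executed inside the CC code.
        self.count += n

    def serve(self, request, cc_code):
        self.count = 0
        result = cc_code(request, self.step)  # CC code invokes step() as it runs
        self.last_service_time = self.count * STMT_TIME
        return result
```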

Fig. 2. The system structure.

3.4. THE PERFORMANCE INDICES

Following are some general indices that should be measured in any performance study:

(1) Average transaction response time (RT).
(2) System throughput (TP).
(3) Average degree of concurrency (DC).
(4) Conflict rate (CR).
(5) Maximum number of times a transaction was restarted (MXRST).
(6) Useless I/O and CPU percentile.

There are also certain indices applicable only to specific CCAs. An example is the deadlock rate in the two-phase locking algorithm. To define the degree of concurrency (DC), let T_n be the duration of time through which the number of active transactions in the system is n. The degree of concurrency is then given by

$$ DC = \frac{1}{T_s} \sum_{n=1}^{N} n\,T_n, $$

where T_s is the system time (elapsed time) at which the DC is measured (clearly, $T_s = \sum_{n=0}^{N} T_n$) and N is the number of transactions in the system. Clearly, 0 ≤ DC ≤ N.
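As a worked example of this formula, the sketch below computes DC from measured durations T_n (a simple illustration under the stated definitions):

```python
# DC = (1/T_s) * sum_n n * T_n, with T_s = sum_n T_n.
def degree_of_concurrency(durations):
    """durations[n] = T_n, the total time with n transactions active."""
    t_s = sum(durations.values())  # elapsed system time
    return sum(n * t_n for n, t_n in durations.items()) / t_s

# Example: 2 s with 0 active, 3 s with 2 active, 5 s with 4 active:
print(degree_of_concurrency({0: 2.0, 2: 3.0, 4: 5.0}))  # (0 + 6 + 20)/10 = 2.6
```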
3.5. MODEL ASSUMPTIONS

(1) Writes are deferred. Thus, a write operation is treated as a nondisk operation and is committed on transaction completion. Commitment begins at transaction completion by issuing 1 + |writeset| disk operations in two phases (one in the first phase and |writeset| in the second phase; see the one-line sketch after this list).

(2) Resources are viewed as finite. The physical database is stored on one disk unit with some service rate. The case of more than one disk unit can equivalently be studied as one disk unit with a higher service rate. Also, there is only one CPU, with a fixed speed, in the system.

(3) Elements of the readset and the writeset of the same transaction are distinct.

(4) A transaction workspace consists of two parts: the transaction definition and the transaction data section. When a transaction is restarted, only its data section is reset.
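For assumption (1), the commit-time disk cost is mechanical enough to state as a one-liner (our paraphrase of the rule above):

```python
# Deferred writes: commitment issues 1 + |writeset| disk operations,
# one in the first phase and |writeset| in the second phase.
def commit_disk_ops(writeset):
    return 1 + len(writeset)
```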


In the following section, we use the suggested model to perform two case studies on the dynamic two-phase locking algorithm.

4. PERFORMANCE STUDY OF DYNAMIC TWO-PHASE LOCKING (2PL)

Dynamic 2PL was simulated by writing an actual algorithm implementation and then dividing the implementation into disjoint parts. These parts are then viewed and treated as events in an event-driven simulator that follows the performance model described in Section 3 (see the Appendix for a description of the dynamic-2PL algorithm). The parameters stated by the model are specified, and the effects of the granularity and the read-write mix on the performance are studied.
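A minimal event-driven skeleton consistent with this description might look as follows (the event representation and handler interface are our assumptions, not the authors' implementation):

```python
# Hypothetical event loop: each disjoint part of the 2PL implementation is
# registered as a handler; handlers return follow-up events to schedule.
import heapq, itertools

def simulate(initial_events, handlers, until=1000.0):
    seq = itertools.count()  # tie-breaker for events at equal times
    heap = [(t, next(seq), kind, data) for t, kind, data in initial_events]
    heapq.heapify(heap)
    clock = 0.0
    while heap and clock < until:
        clock, _, kind, data = heapq.heappop(heap)
        for t, k, d in handlers[kind](clock, data):
            heapq.heappush(heap, (t, next(seq), k, d))
    return clock
```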

4.1. MODEL SPECIFICATION

(a) Following the model, the transaction-workload description is given the following specifications:

(1) We assumed closed-system simulation in which the degree of multiprogramming was varied from 10 to 40 transactions.
(2) The transaction size was modeled deterministically, and was chosen large to get worst-case results for a system with mixed-size transactions (in this study, a transaction accesses 2.5% of the whole database, while usually it accesses less than 1%).
(3) Zero CPU time is required by a transaction (a transaction performs no computations on the values read).
(4) The read-write mix was modeled deterministically, and experiments for mixes of 100:0, 90:10, 80:20, ..., 10:90, 0:100 were conducted.
(5) The sequence of the read-write operations was considered interleaved.
(6) There is no time between access requests; as soon as an operation is complete, another operation, if any, is submitted.
(7) Transaction semantics have not been modeled at all.

(b) The system structure and parameters: The system structure depicted in Figure 2 utilizes a disk average service rate of 20 accesses/second and a CPU instruction execution time of 100 nanoseconds.

(c) The concurrency-control algorithm: The 2PL algorithm is described by a set of procedures written in pseudocode, as listed in the Appendix (note that the pseudocode does not describe the simulation of the entire system).


(d) Performance indices: Although most of the performance indices listed in Section 3 were measured, only results for system throughput and average transaction response time are presented in this paper.

4.2. GRANULARITY EFFECT

Granularity is a very sensitive parameter that dramatically affects the degree of concurrency and hence the performance. Before getting into the experimental results, we formally define granularity. The database physically consists of M data items and is viewed by the DBMS as consisting of G granules, such that M is a multiple of G. Thus, each granule contains M/G database items. The granularity is referred to by G. When a transaction requests an item i, the DBMS hashes it to the granule

$$ g = \left\lfloor \frac{(i-1)\,G}{M} \right\rfloor + 1. $$

Clearly, two items may be mapped into the same granule. When G is large, the granularity is fine; otherwise it is coarse. In this study, we assumed large-size transactions to obtain worst-case results for a system with mixed-size transactions. The database size M was 640 items, and the transaction size L was 15 items (about 2.5% of the database). The investigated granularities were 80, 160, 320, and 640 granules. The degree of multiprogramming T was studied for 10, 20, 30, and 40 concurrent transactions. According to Yao and Tay [23, 26], we cannot vary the granularity without affecting the effective transaction size. The expected value of the effective transaction size K as a function of G is given by the following formula stated by [26]:

$$ K = G\left[\,1 - \frac{\binom{M - M/G}{L}}{\binom{M}{L}}\right], $$

where G is the granularity, M is the number of database items, and L is the transaction size in terms of items. However, for the values of M and L chosen in this study, G has almost no effect on K. This is shown in Figure 3. (A short numerical sketch of the hashing and effective-size formulas follows the observations below.)

Figures 4 and 5 show the response time (RT) and throughput (TP) versus G for different degrees of multiprogramming (T) of sizes 10, 20, 30, and 40 transactions, and with the read-write mix fixed at 50:50. We made the following observations about RT from Figure 4:

(1) For small loads (T = 10), the effect of granularity is negligible.
(2) Coarse granularities (80, 160) highly impair the RT, especially when the system is loaded (T = 30, 40).
(3) The effect of doubling G from 320 to 640 is almost negligible. Thus, for large-size transactions, the RT incurred by G = M is almost the same as that incurred by G = M/2.
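As a numerical check on the two formulas of this subsection (the item-to-granule hash and Yao's effective transaction size), here is a short sketch (our illustration) for the study's values M = 640 and L = 15:

```python
# Granule hashing and Yao's effective transaction size K(G) [26].
from math import comb

def granule(i, G, M=640):
    return (i - 1) * G // M + 1  # g = floor((i-1)G/M) + 1

def effective_size(G, M=640, L=15):
    # K = G * [1 - C(M - M/G, L) / C(M, L)]
    return G * (1 - comb(M - M // G, L) / comb(M, L))

for G in (80, 160, 320, 640):
    print(G, round(effective_size(G), 2))  # K stays close to L = 15 for all G
```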

Fig. 3. Effective transaction size versus granularity for M = 640 items and L = 15 items.

From Figure 5 we made the following observations about TP:

(1) For heavy loads (T = 40), TP increases linearly with G.
(2) For heavy loads (T = 30, 40), TP diminishes for very coarse granularities (G = 80).
(3) For G < 160, doubling the load reduces the TP by approximately half. This means that under coarse granularity, TP is highly sensitive to the system load.

4.3. EFFECT OF READ-WRITE MIX

The read-write mix associated with transactions contributes highly to data contention. In this study, instead of defining a few classes of transactions with different read-write mixes (e.g., query and update), we performed a set of experiments and measured the effect of read-write mixes of 100:0, 90:10, ..., 0:100 on the response time and throughput. The experiments were repeated for different degrees of multiprogramming T, and the granularity was fixed at 320 granules. Figures 6 and 7 show the experimental results. From Figure 6 we can observe the following about RT:

(1) The read-write mix has a great impact on the RT, especially when the system is loaded (T = 30, 40).
(2) For small loads (T = 10), the read-write mix has a negligible effect on RT.


(3) Starting from the 100%-read (query) case, increasing the write percentage first impairs the RT till a read-write mix of 50:50, after which the RT improves and reaches its minimum value at the pure-update case.

From Figure 7 we can observe the following about TP:

(1) The read-write mix has a great impact on TP for all loads.
(2) Pure query transactions result in maximum TP.
(3) Starting from the 100%-read case, increasing the write percentage first impairs the TP till a read-write mix of 50:50, after which TP improves.

The third observation on both RT and TP can be restated as follows: Query-dominant and update-dominant transactions result in better performance than transactions with nearly equal read and write percentages. This result is explained below.

First: For query-dominant transactions (R > 50, W < 50), conflicting transactions are blocked after accessing most of their requests, due to data contention. When unresolvable conflicts occur, these mostly finished transactions are restarted, thus increasing the useless I/O percentile.

Second: For update-dominant transactions (R < 50, W > 50), since writes are deferred till commit time, no disk write operation takes place during transaction execution, and hence most of a transaction's disk access operations will not have been done by the time it is restarted. Thus, we have the case of restarting a transaction that has accessed only a small fraction of its requests. This results in a small useless I/O percentage. On the other hand, for update-dominant transactions, the unresolvable-conflict rate increases, resulting in many restarts. However, the restart operation neither contributes a large useless I/O percentage nor incurs a high setup cost.

5. CONCLUSION

In this paper, we have classified the research studies in the area of concurrency-control performance evaluation into five major areas, three of which reflect the effect of the database environment on the performance. Also, we proposed a framework for performing these studies. Using the proposed framework, we presented an example of a performance study for the dynamic two-phase locking algorithm. Two effects were studied: those of granularity and read-write mix. We have shown that for large-size transactions, the performance incurred by a granularity of M and a granularity of M/2 is almost the same. Also, we found that under the assumption that writes are deferred, query-dominant and update-dominant transactions result in better performance than transactions with nearly equal read and write percentages.

Fig. 4. Response time (RT) versus granularity for read/write mix 50:50.

Fig. 5. Throughput (TP) versus granularity for read/write mix 50:50.

Fig. 6. Response time (RT) versus read/write mix for granularity G = 320.

Fig. 7. Throughput (TP) versus read/write mix for granularity G = 320.

APPENDIX. THE DYNAMIC TWO-PHASE LOCKING ALGORITHM

Procedure 2PL;
Type
    lock request : record
        i {requesting granule}      : database item;
        j {requesting transaction}  : transaction;
        k {access mode}             : (read, write)
    end;
    database item : 1..M;
    transaction   : 1..N;
Var
    Lijk   : lock request;
    victim : transaction;
    failed : boolean;
begin
    accept(Lijk);
    try to lock(Lijk, failed);
    if failed then try to block(Lijk, failed);
    if failed then
        begin
            choose(victim);
            restart(victim)
        end
end.

Procedure try to lock(Lijk, failed);
begin
    if granule i is unlocked then
        begin
            lock item i with a lock of type k;
            put j on i's hold set;
            failed := FALSE
        end
    else if (i is already 'read' locked and k is also 'read'
             and the wait queue of i is empty) then
        begin
            put j on i's hold set;
            failed := FALSE
        end
    else
        failed := TRUE
end;

Procedure try to block(Lijk, failed);
begin
    for all transactions T holding a lock on i do
        begin
            insert an edge (j, T) in the waits-for graph;
            if a cycle results then
                begin
                    delete all edges from node j;
                    failed := TRUE;
                    exit
                end
        end;
    put j on i's wait queue;
    failed := FALSE
end;

Procedure restart(victim);
begin
    for all items i do
        if victim belongs to i's hold set then
            begin
                remove victim from the hold set of item i;
                mark the wait queue of i
            end
        else if victim is in i's wait queue then
            begin
                delete victim from the wait queue of i;
                mark the wait queue of i
            end;
    for all the marked wait queues do
        begin
            determine the front part of the wait queue
                that should no longer be waiting;
            delete this part;
            insert it in front of the CCQ
        end;
    delete node victim from the waits-for graph;
    reset the workspace of the victim;
    put victim on the CCQ
end;
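The cycle test in "try to block" can be made concrete with a small executable sketch (ours; the paper gives only the pseudocode above):

```python
# Waits-for-graph check: insert edges j -> T for every holder T of granule i,
# then fail (and undo, as in the pseudocode) if a cycle through j results.
def creates_cycle(graph, j, holders):
    """graph maps a transaction to the set of transactions it waits for."""
    graph.setdefault(j, set()).update(holders)
    stack, seen = list(graph[j]), set()
    while stack:                  # depth-first search from j's successors
        node = stack.pop()
        if node == j:             # a path back to j: deadlock
            graph[j].clear()      # delete all edges from node j
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return False
```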

REFERENCES

1. R. Agrawal and D. DeWitt, Integrated Concurrency Control and Recovery Mechanisms: Design and Performance Evaluation, TR 497, Computer Science Dept., Univ. of Wisconsin-Madison, Feb. 1983.
2. R. Agrawal, Multiprocessor Database Machines: Design and Performance Evaluation, Ph.D. Thesis, Computer Science Dept., Univ. of Wisconsin-Madison, 1983.
3. D. Badal, Correctness of concurrency control and implications in distributed databases, in Proceedings of COMPSAC '79, Chicago, 1979.
4. R. Bayer, H. Heller, and A. Reiser, Parallelism and recovery in database systems, ACM Trans. Database Systems, June 1980.
5. P. Bernstein and N. Goodman, Timestamp-based algorithms for concurrency control in distributed database systems, in Proceedings of the 6th International Conference on Very Large Databases, Oct. 1980.
6. P. Bernstein and N. Goodman, Concurrency control in distributed database systems, ACM Comput. Surveys, June 1981.
7. B. Bhargava, Concurrency control and reliability in distributed database management systems, in Handbook of Software Engineering, North-Holland, 1984.
8. M. Carey, Modeling and Evaluation of Database Concurrency Control Algorithms, Ph.D. Thesis, Computer Science Division, Univ. of California, Berkeley, Sept. 1983.
9. M. Carey, An abstract model of database concurrency control algorithms, in Proceedings of the ACM SIGMOD International Conference on Management of Data, San Jose, Calif., May 1983.
10. M. Carey and M. Stonebraker, The performance of concurrency control algorithms for database management systems, in Proceedings of the Tenth International Conference on Very Large Databases, Singapore, Aug. 1984.
11. M. Carey and W. Muhanna, The performance of multiversion concurrency control algorithms, ACM Trans. Comput. Systems, to appear.
12. P. Franaszek and J. Robinson, Limitations of concurrency in transaction processing, ACM Trans. Database Systems, Mar. 1985.
13. E. Gelenbe and K. Sevcik, Analysis of update synchronization for multiple copy databases, in Proceedings of the 3rd Berkeley Workshop on Distributed Databases and Computer Networks, Aug. 1978.
14. A. Helal, Performance Analysis of Concurrency Control Algorithms in Database Systems, M.Sc. Thesis, Computer Science Dept., Alexandria Univ., Egypt, July 1985.
15. H. Kung and J. Robinson, On optimistic methods for concurrency control, ACM Trans. Database Systems, June 1981.
16. D. Menasce and R. Muntz, Locking and deadlock detection in distributed databases, in Proceedings of the 3rd Berkeley Workshop on Distributed Data Management and Computer Networks, Aug. 1978.
17. H. M. Nagi, A. A. Helal, and A. K. Elmagarmid, Optimistic vs. pessimistic concurrency control algorithms: A comparative study, in Proceedings of the International Conference on Parallel Processing, Aug. 1986.
18. D. Reed, Naming and Synchronization in a Decentralized Computer System, Ph.D. Thesis, Dept. of Electrical Engineering and Computer Science, MIT, 1978.
19. D. Ries and M. Stonebraker, Effects of locking granularity in a database management system, ACM Trans. Database Systems, Sept. 1977.
20. D. Ries and M. Stonebraker, Locking granularity revisited, ACM Trans. Database Systems, June 1979.
21. D. Rosenkrantz, R. Stearns, and P. Lewis, System level concurrency control for distributed database systems, ACM Trans. Database Systems, June 1978.
22. O. Shmueli, P. Spirakis, and N. Goodman, A Methodology for Concurrency Control Performance Evaluation, TR-33-82, Aiken Computation Lab., Harvard Univ., Aug. 1982.
23. Y. Tay, A Mean Value Performance Model for Locking in Databases, Ph.D. Thesis, Computer Science Dept., Harvard Univ., Feb. 1984.
24. Y. Tay and R. Suri, Choice and performance in locking for databases, in Proceedings of the Tenth International Conference on Very Large Databases, Singapore, Aug. 1984.
25. R. Thomas, A majority consensus approach to concurrency control for multiple copy databases, ACM Trans. Database Systems, June 1979.
26. S. B. Yao, Approximating block accesses in database organization, Comm. ACM, Apr. 1977.

Received 26 March 1987; revised 16 October 1987