A load-balanced distributed parallel mining algorithm


Expert Systems with Applications 37 (2010) 2459–2464


Kun-Ming Yu a, Jiayi Zhou b,*, Tzung-Pei Hong c, Jia-Ling Zhou d

a Department of Computer Science and Information Engineering, Chung Hua University, 707, Sec. 2, WuFu Rd., HsinChu 300, Taiwan, ROC
b Institute of Engineering and Science, Chung Hua University, 707, Sec. 2, WuFu Rd., HsinChu 300, Taiwan, ROC
c Department of Computer Science and Information Engineering, National University of Kaohsiung, 700, Kaohsiung University Rd, Kaohsiung 811, Taiwan, ROC
d Department of Information Management, Chung Hua University, 707, Sec. 2, WuFu Rd., HsinChu 300, Taiwan, ROC


Keywords: Parallel and distributed processing; Cluster computing; Frequent patterns; Association rules; Data mining

Abstract

Due to the exponential growth in worldwide information, companies have to deal with an ever-growing amount of digital data. One of the most important challenges for data mining is quickly and correctly finding the relationships among data. The Apriori algorithm has been the most popular technique for finding frequent patterns. However, when applying this method, a database has to be scanned many times to calculate the counts of a huge number of candidate itemsets. Parallel and distributed computing is an effective strategy for accelerating the mining process. In this paper, the Distributed Parallel Apriori (DPA) algorithm is proposed as a solution to this problem. In the proposed method, metadata are stored in the form of Transaction Identifiers (TIDs), such that only a single scan of the database is needed. The approach also takes itemset counts into consideration, thus generating a balanced workload among processors and reducing processor idle time. Experiments on a PC cluster with 16 computing nodes are also reported to show the performance of the proposed approach and to compare it with some other parallel mining algorithms. The experimental results show that the proposed approach outperforms the others, especially when the minimum supports are low.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

With the rapid development of information technology, companies have been digitizing all areas of business to improve efficiency and thus competitiveness. Tremendous amounts of data are generated by this digitization, and it is important to extract meaningful information from such scattered data. Data mining techniques have been developed for this purpose. They can be classified into different models such as classification, regression, time series, clustering, association, and sequence models, among others. In particular, association rules are commonly used in many applications. The most important step in mining association rules is discovering frequent patterns, which requires counting how many times each pattern appears in a database. According to the way candidate patterns are generated, existing approaches can be classified into generate-and-test (Apriori-like) methods (Agrawal, Imielinski, & Swami, 1993) and pattern-growth (FP-growth) methods (Han, Pei, Yin, & Mao, 2004). The former uses a bottom-up approach, which extends frequent itemsets by one item at a time; if an itemset of length k is frequent, then every subset of it with length less than k is also frequent.

* Corresponding author. Tel.: +886 3 5186360; fax: +886 3 5186416. E-mail addresses: [email protected] (K.-M. Yu), [email protected] (J. Zhou), [email protected] (T.-P. Hong), [email protected] (J.-L. Zhou).
0957-4174/$ - see front matter © 2009 Elsevier Ltd. All rights reserved. doi:10.1016/j.eswa.2009.07.074

Although many Apriori-like methods have been proposed, it takes a long time to find the frequent patterns when the database contains a large number of transactions. Some studies thus apply parallel and distributed techniques to effectively speed up the mining process (Agrawal & Shafer, 1996; Cheung, Han, Ng, Fu, & Fu, 1996; Cheung, Lee, & Xiao, 2002; Cheung, Ng, & Fu, 1996; Ye & Chiang, 2006). In a distributed environment, irregular and imbalanced computation loads may greatly degrade the overall performance. Load balance among processors is thus very important to parallel and distributed mining.

In this paper, the Distributed Parallel Apriori (DPA) algorithm is proposed as a solution to this problem. Its goal is to reduce the number of database scans and to balance the computation loads among the participating computing nodes. In the proposed method, a database has to be scanned only once because metadata are stored in the form of Transaction Identifiers (TIDs). The approach also takes itemset counts into consideration to improve load balancing and to reduce processor idle time. The experimental results show that the running time of the proposed approach is significantly less than that of some previous methods. The results also show that DPA can successfully reduce the number of scan iterations and can evenly distribute workloads among processors.

The paper is organized as follows. Association rules and parallel-distributed algorithms are reviewed in Section 2. The DPA algorithm is proposed in Section 3. An example to illustrate


the proposed algorithm is given in Section 4. The experimental results are shown in Section 5. Finally, the conclusion is stated in Section 6.

2. Related work

The frequent-pattern mining problem is defined as follows. Let DB = {T1, T2, ..., Tn} be a transactional database and I = {i1, i2, ..., im} a set of items, where each transaction Ti is a subset of I. Associated with each transaction is a unique identifier, called its TID (Apte & Weiss, 1997). The support of an itemset x in a database DB, denoted sup_DB(x), is the number of transactions in DB that contain x. Formally, sup_DB(x) = |{t | t ∈ DB and x ⊆ t}|. The problem of frequent-pattern mining is to find all itemsets x with sup_DB(x) ≥ s for a given threshold s (1 ≤ s ≤ |DB|).

The Apriori algorithm was proposed by Agrawal and Srikant (1994) and is one of the most representative algorithms for mining frequent patterns. Its main idea is based on the observation that all subsets of a frequent itemset must be frequent as well. The Apriori algorithm extends frequent itemsets by one item at a time and tests the candidates against the data; it terminates when no further successful extension is possible. Even though the Apriori algorithm can efficiently find frequent patterns, the execution time grows as the database gets larger, because each candidate itemset must be tested against the database. Since candidate itemsets of the same length can be tested independently, a well-designed data structure makes the algorithm easy to parallelize. Many distributed parallel methods based on the Apriori algorithm have thus been proposed (Einakian & Ghanbari, 2006; Parthasarathy, Zaki, Ogihara, & Li, 2001; Zaki, Ogihara, Parthasarathy, & Li, 1996; Zaki, Parthasarathy, Ogihara, & Li, 1997).

Agrawal and Shafer (1996) proposed parallel algorithms based on Count Distribution (CD) and Data Distribution (DD) to solve the frequent-pattern mining problem. The former (CD) partitions the database into blocks and sends them to processors to compute frequent itemsets; in this approach, (k + 1)-itemsets are also generated from k-itemsets. The advantage is that each processor only needs to process the data it owns. The latter, DD, further improves the memory usage of CD; the amount of communication, however, increases as the number of processors grows. Cheung et al. then proposed the Fast Distributed Mining (FDM) approach for finding association rules (Cheung, Han et al., 1996; Cheung, Ng et al., 1996). FDM reduces the candidate set by both local pruning and global pruning. Cheung et al. also improved this approach and proposed the Fast Parallel Mining (FPM) algorithm (Cheung et al., 2002) for parallel and distributed mining; FPM needs less communication than FDM, so its mining performance is higher.

Ye and Chiang (2006) also proposed a parallel-distributed algorithm based on the Trie tree (Bodon, 2003). Their algorithm distributes workloads according to the Trie tree to balance and speed up the computation. However, the items are distributed to the nodes based only on the first level of the Trie tree, which may cause the sizes of the candidate itemsets (i.e., the workloads) of different processors to vary significantly. Moreover, this method also requires the database to be scanned many times. Recently, Wu and Li proposed an efficient frequent-pattern mining algorithm, called EDMA, based on the Apriori algorithm (Wu & Li, 2008). EDMA uses the CMatrix data structure to store the transactions for mining, which avoids re-scanning the database. EDMA can also minimize the number of candidate sets and reduce the exchanged messages through local and global pruning. Since pruning may decrease the average size of transactions and datasets, the time for verifying frequency can be reduced; moreover, it decreases the communication time among computing nodes. The execution time, however, still grows with the database size, since EDMA accesses the CMatrix many times when counting candidate itemsets.
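To make the cost that motivates these methods concrete, the following is a minimal, illustrative Python sketch of the plain generate-and-test (Apriori-like) scheme described above; it is not any of the cited implementations, and the function name is ours. Note that every level performs one more pass over the entire database, which is exactly the cost the TID- and matrix-based methods above try to avoid.

from itertools import combinations

def apriori(transactions, min_support):
    """Plain level-wise (generate-and-test) Apriori sketch."""
    # Count candidate 1-itemsets with one scan of the database.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    all_frequent = dict(frequent)
    k = 1
    while frequent:
        # Generate candidate (k+1)-itemsets whose k-subsets are all frequent.
        items = sorted({i for s in frequent for i in s})
        candidates = [frozenset(c) for c in combinations(items, k + 1)
                      if all(frozenset(sub) in frequent
                             for sub in combinations(c, k))]
        # Test step: one more full pass over the database per level.
        counts = {c: 0 for c in candidates}
        for t in transactions:
            t = set(t)
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        all_frequent.update(frequent)
        k += 1
    return all_frequent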

Therefore, in this paper, a Distributed Parallel Apriori (DPA) algorithm is proposed to speed up the process of frequent-pattern mining. By storing the TIDs of itemsets and by precisely estimating and distributing the computation workloads, DPA can effectively accelerate the counting of itemsets and reduce the number of database scans required.

3. The proposed Distributed Parallel Apriori (DPA) algorithm

The execution times of different processors in Ye and Chiang's algorithm may vary over a wide range, because distributing the items according to the upper levels of the Trie tree may lead to an imbalanced workload. In order to observe the execution time of each processor in Ye and Chiang's algorithm, we implemented their algorithm on the dataset T10I4D12KN100K with the minimum support set at 0.2%, using the MPI library on a PC cluster. Table 1 shows the execution time of each processor in Ye and Chiang's algorithm. It can also be observed that the CPU time and the communication time accounted for about 97% and 3% of the total execution time, respectively. Since their algorithm scans the database many times to verify whether the candidate patterns are frequent, the performance can also be improved by reducing the number of database scans.

To avoid the problems of load imbalance and multiple scans, the DPA algorithm is proposed in this paper, so that the database needs to be scanned only once while load balance among processors is maintained. In the proposed algorithm, each transaction has a unique Transaction Identifier, called TID. By using hash functions to store the TIDs in a table structure, the counts of itemsets can be quickly calculated without re-scanning the database. To achieve a good load balance, the proposed approach adopts a heuristic based on the weights of frequent itemsets. The workload for finding frequent (k + 1)-itemsets is estimated from the frequent k-itemsets. The frequent k-itemsets are first sorted according to their counts in descending order. Let len(freq_k) denote the total number of frequent k-itemsets. The weight of the ith frequent k-itemset (I_i) is then set as follows:

weight(I_i) = len(freq_k) - i - 1.                                  (1)

The concept can be represented by Fig. 1. The total weight of all the frequent k-itemsets can then be found as follows:

TotalWeight = sum_{i=0}^{len(freq_k)-1} weight(I_i).                (2)

Assume there are p processors available. Each processor can then process a subset of the frequent k-itemsets whose weights sum to approximately TotalWeight/p. For simplicity, the frequent k-itemsets are put one by one into the first processor according to the sorted order until the sum of their weights reaches TotalWeight/p. The remaining itemsets are then allocated to the second processor in the same way, and the allocation is repeated until all the itemsets have been assigned.
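The following is a small, illustrative Python sketch of this weight-based allocation (Eqs. (1) and (2)); the function name and the greedy cut-off condition are ours and are not taken from the paper's implementation:

def partition_by_weight(freq_k, num_procs):
    """Greedily allocate frequent k-itemsets (already sorted by descending
    count) so that each processor's weight sum is close to TotalWeight / p."""
    n = len(freq_k)
    weights = [n - i - 1 for i in range(n)]     # Eq. (1): weight(I_i) = len(freq_k) - i - 1
    total_weight = sum(weights)                 # Eq. (2): equals n * (n - 1) / 2
    target = total_weight / num_procs
    subsets, current, acc = [], [], 0
    for itemset, w in zip(freq_k, weights):
        current.append(itemset)
        acc += w
        if acc >= target and len(subsets) < num_procs - 1:
            subsets.append(current)
            current, acc = [], 0
    subsets.append(current)                     # whatever remains goes to the last processor
    return subsets

On the eight frequent 1-itemsets of the example in Section 4 (sorted as C, A, B, F, L, M, O, P), partition_by_weight(['C', 'A', 'B', 'F', 'L', 'M', 'O', 'P'], 2) returns [['C', 'A', 'B'], ['F', 'L', 'M', 'O', 'P']], which matches the distribution shown in Fig. 3.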

Table 1. Execution time of each processor in Ye and Chiang's algorithm.

Processor ID   Execution time   CPU time       Communication time (Send)   Communication time (Receive)
0              129.4010358      127.838176     0.367843151                  1.195016623
1              89.78008485      88.00373292    0.33731699                   1.439034939
2              77.47582674      75.11535358    0.364768028                  1.995705128
3              89.60780978      87.40449095    0.320035219                  1.883283615
4              75.14281654      73.05057096    0.312260389                  1.779985189
5              70.93045855      68.58841801    0.307432413                  2.034608126
6              71.56252313      68.99555421    0.301764011                  2.265204906
7              72.59657574      70.05979991    0.319941998                  2.21683383



Fig. 1. The concept of weights of frequent itemsets.

The first processor then generates its possible candidate (k + 1)-itemsets according to its own allocated k-itemsets and the sorted order of all the k-itemsets. For example, assume there are two processors and four items {A, B, C, D}, and that the descending sorted order according to the counts is (A, B, C, D). If the 1-itemset {A} is allocated to the first processor and the set {B, C, D} to the second processor, then the first processor will form the candidate 2-itemsets {AB, AC, AD} for checking, and the second processor will form the candidate 2-itemsets {BC, BD, CD} for checking. Note that the cross-set 2-itemsets (e.g., AB) are checked by the former processor, not by the latter one. The above heuristic is also reasonable if each cell in Fig. 1 is thought of as the checking workload for a (k + 1)-itemset formed from the two corresponding k-itemsets. Note that some cells may not need to be checked because they are not valid (k + 1)-itemsets; for example, a cell formed from the two 2-itemsets {A, B} and {C, D} is not a valid 3-itemset. Nevertheless, the heuristic offers an effective approximate estimation of the workload, as the experimental results will confirm. The proposed DPA algorithm is described in more detail below.

3.1. The DPA algorithm

Input: A transaction database DB = {T0, T1, ..., Tn-1}, in which each transaction Ti is a subset of the item set I = {i0, i1, ..., im-1}; a given minimum support s; a given set of p processors, in which P0 acts as both the master processor (MP) and a slave processor (SP), and P1 to Pp-1 are slave processors.
Output: All frequent patterns in DB.

Step 1. Each processor reads the database DB.
Step 2. Each processor scans DB and creates the set of transaction identifiers (TIDs) for each item.
Step 3. Each processor calculates the counts of the candidate 1-itemsets and marks an itemset as frequent if its count is greater than or equal to the minimum support s.
Step 4. Set k = 1, where k is the length of the itemsets currently being processed.
Step 5. The master processor divides the frequent k-itemsets into p disjoint subsets in the way described above and assigns the subsets to the corresponding slave processors.
Step 6. Each processor receives its own set of k-itemsets and generates its possible candidate (k + 1)-itemsets according to its own k-itemsets and the sorted order of all the frequent k-itemsets, as mentioned above.
Step 7. Each processor calculates the counts of its candidate (k + 1)-itemsets from the TIDs and marks an itemset as frequent if its count is greater than or equal to the minimum support s.
Step 8. Each slave processor sends its own frequent (k + 1)-itemsets to the master processor.
Step 9. If the set of frequent itemsets received by the master processor is empty, the algorithm terminates; otherwise set k = k + 1 and repeat Steps 5-9.
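The key data structure behind Steps 2 and 7 is the table of TID sets, which lets a processor obtain the count of any candidate itemset by intersecting the TID sets of its items instead of re-scanning the database. The following is a minimal, illustrative Python sketch of these two steps (the function names are ours; this is not the authors' MPI implementation):

def build_tid_table(transactions):
    """Step 2: one scan of the database builds the TID set of every item."""
    tid_table = {}
    for tid, items in transactions.items():
        for item in items:
            tid_table.setdefault(item, set()).add(tid)
    return tid_table

def count_candidate(candidate, tid_table):
    """Step 7: the count of a candidate itemset is the size of the
    intersection of its items' TID sets; no database re-scan is needed."""
    items = list(candidate)
    tids = set(tid_table[items[0]])
    for item in items[1:]:
        tids &= tid_table[item]
    return len(tids), tids

On the example database of Section 4, counting the candidate {C, A} in this way yields the TID set {1, 5} and a count of 2, which is exactly the computation illustrated in Fig. 4.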

An example is given below to illustrate the proposed DPA algorithm. Assume there are two processors available. The first processor acts as both the master processor and a slave processor; the second processor acts only as a slave processor. Also assume the database consists of the five transactions shown at the left of Fig. 2, and that the minimum support s is set at 2.

In Steps 1 and 2, both processors first read and scan the database to build the candidate 1-itemsets and their TIDs. The results are shown at the right of Fig. 2. In Step 3, the frequent 1-itemsets are found; the results are shown at the left of Fig. 3. From Steps 1 to 3, each processor executes the same procedure to find all the frequent 1-itemsets, since the execution time of this part is small. Thus, after this phase, each processor holds the whole set of frequent 1-itemsets.

In Step 4, k is initially set at 1, meaning that 1-itemsets are processed first. In Step 5, the master processor calculates the weights of the frequent 1-itemsets and partitions them into two subsets. The results are shown at the right of Fig. 3, in which the two subsets {C, A, B} and {F, L, M, O} are formed. Note that the last item P does not need to be allocated, since the 2-itemsets containing P will be generated from the other items. The set {C, A, B} is then allocated to the first processor, and the set {F, L, M, O, P} is allocated to the second processor.

In Step 6, each processor generates its possible candidate 2-itemsets according to its own 1-itemsets and the sorted order of all the frequent 1-itemsets. Thus, P0 handles the candidate 2-itemsets containing at least one of {C, A, B}, and P1 handles the other candidate 2-itemsets, i.e., those formed from {F, L, M, O, P} without any item in {C, A, B}. In Step 7, each processor calculates the counts of its candidate 2-itemsets from the TIDs. For example, Fig. 4 describes the calculation for the candidate 2-itemset {C, A}: the intersection of the TIDs of items A and C is {1, 5}, so the count of the 2-itemset {A, C} is 2. The results of the count calculation for all the candidate 2-itemsets on the two processors are shown in Fig. 5.
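As a concrete illustration, the following small Python snippet (ours, for illustration only) reproduces Steps 1-3 and the sorting used in Step 5 on this example database:

# The example database of Fig. 2, keyed by TID.
db = {
    1: {'A', 'C', 'D', 'G', 'P'},
    2: {'A', 'F', 'L', 'M', 'O'},
    3: {'B', 'F', 'C', 'M'},
    4: {'B', 'C', 'K', 'S', 'L'},
    5: {'A', 'C', 'O', 'P', 'B'},
}
min_support = 2

# Steps 1-2: one scan builds the TID set of every candidate 1-itemset.
tids = {}
for tid, items in db.items():
    for item in items:
        tids.setdefault(item, set()).add(tid)

# Step 3: 1-itemsets whose counts reach the minimum support are frequent.
frequent_1 = {item: t for item, t in tids.items() if len(t) >= min_support}

# Sorted by descending count, as used for the weight heuristic in Step 5.
order = sorted(frequent_1, key=lambda item: len(frequent_1[item]), reverse=True)
print(order)   # ['C', 'A', 'B', 'F', 'L', 'M', 'O', 'P'] (items with equal counts may swap)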

Items

Item

TID

1

AC D G P

A

125

2

AF L M O

B

345

3

BF C M

C

1345

4

BC K SL

D

1

5

AC O P B

F

23

G

1

K

4

L

24

M

23

O

25

P

15

S

4

Fig. 2. Scanning the database to build the candidate 1-itemsets and their TIDs.


Item   TIDs
C      1, 3, 4, 5
A      1, 2, 5
B      3, 4, 5
F      2, 3
L      2, 4
M      2, 3
O      2, 5
P      1, 5

Total number of lattices = 28; average lattices per processor = 28 / 2 = 14.
p0 is assigned {C, A, B}; p1 is assigned {F, L, M, O, P}.

Fig. 3. Distributing frequent 1-itemsets to two processors.

Item   TIDs         Support
C      1, 3, 4, 5   4
A      1, 2, 5      3
CA     1, 5         2

Fig. 4. An example for calculating the counts of itemsets from TIDs.

The itemsets with support values larger than or equal to the given minimum support, which is 2 in this example, are then frequent. In this example, CA, CB, CP, AO, AP, and FM are frequent. In Step 8, each slave processor sends its own frequent 2-itemsets to the master processor. In Step 9, since the set of frequent 2-itemsets is not empty, Steps 5-9 are repeated. The master processor then sorts the frequent 2-itemsets according to their counts and partitions them into two subsets, {CB, CA} and {CP, AO, AP}. The process is shown in Fig. 6. In this round, only one 3-itemset, {CAP}, is frequent; its related data are shown in Fig. 7. Because only one frequent 3-itemset is derived, no candidate 4-itemsets can be formed, and the mining process ends here.
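The 2-itemset and 3-itemset counts above can be verified directly from the TID sets of Fig. 2. The following small snippet (illustrative only) reproduces the computations of Figs. 4 and 7:

# TID sets of items A, C, and P, taken from Fig. 2.
tids = {'A': {1, 2, 5}, 'C': {1, 3, 4, 5}, 'P': {1, 5}}

tid_CA = tids['C'] & tids['A']        # {1, 5} -> support of {C, A} is 2 (Fig. 4)
tid_CAP = tid_CA & tids['P']          # {1, 5} -> support of {C, A, P} is 2 (Fig. 7)
print(len(tid_CA), len(tid_CAP))      # 2 2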

Candidate 2-itemsets handled by p0:

Itemset   TIDs      Support
CA        1, 5      2
CB        3, 4, 5   3
CF        3         1
CL        4         1
CM        3         1
CO        5         1
CP        1, 5      2
AB        5         1
AF        2         1
AL        2         1
AM        2         1
AO        2, 5      2
AP        1, 5      2
BF        3         1
BL        4         1
BM        3         1
BO        5         1
BP        5         1

Candidate 2-itemsets handled by p1:

Itemset   TIDs   Support
FL        2      1
FM        2, 3   2
FO        2      1
FP        N/A    0
LM        2      1
LO        2      1
LP        N/A    0
MO        2      1
MP        N/A    0
OP        5      1

Fig. 5. The results of the count calculation for candidate 2-itemsets.

Frequent 2-itemsets sorted by descending count, with their TIDs:

Itemset   TIDs
CB        3, 4, 5
CA        1, 5
CP        1, 5
AO        2, 5
AP        1, 5
FM        2, 3

Total weight = 15; average weight per processor = 15 / 2 = 7.5.
p0 is assigned {CB, CA}; p1 is assigned {CP, AO, AP}.

Fig. 6. Distributing the frequent 2-itemsets to the two processors.

5. Experimental results

Experiments were conducted to evaluate the performance of the proposed algorithm. The DPA algorithm was compared with Ye and Chiang's algorithm (Ye & Chiang, 2006) and with EDMA (Wu & Li, 2008). The programs were executed on a PC cluster with 16 computing nodes. The synthesized datasets generated by IBM's Quest Synthetic Data Generator (IBM Almaden, 1994) were used to compare the algorithms.


Itemset   TIDs   Support
CAP       1, 5   2

Fig. 7. The resulting frequent 3-itemset.

Table 2 gives the hardware and software specifications. Fig. 8 shows the execution time of each processor under DPA on the dataset T10I4D6KN100K; it can be observed that DPA achieved a good workload balance. Fig. 9 shows the execution time of each processor for the three algorithms; DPA clearly outperformed the other two. The use of TIDs in DPA effectively reduced the database scanning and saved execution time. In addition, DPA and EDMA achieved a more balanced workload than Ye and Chiang's algorithm.

Table 2. Hardware and software specifications.

Hardware environment
  CPU       AMD Athlon Processor 2200+
  Memory    1 GB DDR RAM
  Network   100 Mbps interconnection network
  Disk      80 GB IDE H.D.

Software environment
  O.S.      Ubuntu Linux 6.06
  Library   MPICH2 1.0.3


Fig. 8. Execution time of each processor by DPA (T10I4D6KN100K, minimum support 0.002).

Fig. 10 shows the execution time under different numbers of processors. As expected, the execution time decreased as the number of processors increased. Fig. 11 shows the execution time with different supports for eight processors. As the support value increased, the execution time decreased significantly, since fewer candidate itemsets were generated at higher minimum supports.

Fig. 9. Comparison of the three algorithms (execution time of each processor, T10I4D6KN100K, minimum support 0.002).

Fig. 10. Execution time with different processors (T10I4D6KN100K, minimum support 0.002).

Fig. 11. Execution time with different supports (T10I4D10KN100K, 8 processors).

Fig. 12. Execution time of DPA for a large dataset (T10I4D200KN100K, minimum support 0.0015).

Fig. 13. Speed-up ratios of DPA for a large dataset (T10I4D200KN100K, minimum support 0.0015).

Fig. 14. Execution time with different minimum supports for a large dataset (T10I4D200KN100K).


Since Ye and Chiang's algorithm and EDMA took much longer than DPA, only DPA was executed on the large dataset (T10I4D200KN100K) to further verify its performance. Figs. 12 and 13 illustrate the execution time and the speed-up for various numbers of processors, respectively. With more processors, DPA needed less execution time, and the speed-up ratios remained acceptable from 1 to 16 processors. The execution time with different minimum supports for the large dataset is shown in Fig. 14, from which it can be observed that DPA effectively reduced the execution time even at low minimum supports, with a nearly linear speed-up.

6. Conclusion

Discovering frequent patterns in a huge database is a worthwhile research topic. The process of generating itemsets and confirming that they are frequent is, however, very time consuming. Parallel and distributed computation strategies provide feasible solutions to this problem. In this paper, the Distributed Parallel Apriori (DPA) algorithm has been proposed. It stores the TIDs of items in a table to compute the occurrences of itemsets quickly; DPA can thus effectively reduce the number of database scans required and accelerate the counting of itemsets. It also adopts a useful heuristic to partition the itemsets among processors. By taking the factor of itemset counts into consideration, the approach can effectively balance the workload among processors and reduce processor idle time. Experimental results show that DPA performs better than some previous works, especially in the case of high data volumes and low minimum supports. The results also show that DPA achieves a better load balance than some other parallel mining algorithms. The proposed algorithm can thus provide a useful distributed strategy for mining problems.

References

Agrawal, R., Imielinski, T., & Swami, A. (1993). Mining association rules between sets of items in large databases. In Proceedings of the 1993 ACM SIGMOD international conference on management of data (Vol. 22(2), pp. 207–216).
Agrawal, R., & Srikant, R. (1994). Fast algorithms for mining association rules. In Proceedings of the 20th international conference on very large databases (pp. 487–499).
Agrawal, R., & Shafer, J. C. (1996). Parallel mining of association rules. IEEE Transactions on Knowledge and Data Engineering, 8(6), 962–969.
Apte, C., & Weiss, S. M. (1997). Data mining with decision trees and decision rules. Future Generation Computer Systems, 13(2–3), 197–210.
Bodon, F. (2003). A fast Apriori implementation. In Proceedings of the IEEE ICDM workshop on frequent itemset mining implementations.
Cheung, D. W., Han, J., Ng, V. T., Fu, A. W., & Fu, Y. (1996). A fast distributed algorithm for mining association rules. In The fourth international conference on parallel and distributed information systems (pp. 31–42).
Cheung, D. W., Lee, S. D., & Xiao, Y. (2002). Effect of data skewness and workload balance in parallel data mining. IEEE Transactions on Knowledge and Data Engineering, 14(3), 498–514.
Cheung, D. W., Ng, V. T., & Fu, A. W. (1996). Efficient mining of association rules in distributed databases. IEEE Transactions on Knowledge and Data Engineering, 8(6), 911–922.
Einakian, S., & Ghanbari, M. (2006). Parallel implementation of association rules in data mining. In Proceedings of the 38th southeastern symposium on system theory (pp. 21–26).
Han, J., Pei, J., Yin, Y., & Mao, R. (2004). Mining frequent patterns without candidate generation: A frequent-pattern tree approach. Data Mining and Knowledge Discovery, 8(1), 53–87.
IBM Almaden (1994). Quest synthetic data generation code.
Parthasarathy, S., Zaki, M. J., Ogihara, M., & Li, W. (2001). Parallel data mining for association rules on shared-memory systems. Knowledge and Information Systems, 3(1), 1–29.
Wu, J., & Li, X. M. (2008). An efficient association rule mining algorithm in distributed database. In International workshop on knowledge discovery and data mining (WKDD) (pp. 108–113).
Ye, Y., & Chiang, C. C. (2006). A parallel Apriori algorithm for frequent itemsets mining. In Proceedings of the fourth international conference on software engineering research, management and applications (pp. 87–94).
Zaki, M. J., Ogihara, M., Parthasarathy, S., & Li, W. (1996). Parallel data mining for association rules on shared-memory multi-processors. In Proceedings of the 1996 ACM/IEEE conference on supercomputing.
Zaki, M. J., Parthasarathy, S., Ogihara, M., & Li, W. (1997). Parallel algorithms for discovery of association rules. Data Mining and Knowledge Discovery, 1(4), 343–373.