Construction of Mixed Covering Arrays Using a Combination of Simulated Annealing and Variable Neighborhood Search

Available online at www.sciencedirect.com

Electronic Notes in Discrete Mathematics 47 (2015) 109–116

www.elsevier.com/locate/endm
Arturo Rodriguez-Cristerna a,1, Jose Torres-Jimenez a,2, W. Gómez a,3, W. C. A. Pereira b,4

a Technology Information Laboratory, CINVESTAV, Ciudad Victoria, Tamaulipas, Mexico

b Biomedical Engineering Program/COPPE, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil

Abstract

The construction of Mixed Covering Arrays (MCAs) with the minimum number of rows has been faced in different ways. This paper aims to construct near-optimal MCAs through a novel approach based on Simulated Annealing (SA) and Variable Neighborhood Search (VNS). The solution quality of the proposed algorithm was measured by solving two benchmarks, and the obtained results show a significant quality improvement over the results of previously reported metaheuristics.

Keywords: Mixed Covering Arrays, Simulated Annealing, Variable Neighborhood Search

1 Email: [email protected]
2 Email: [email protected]
3 Email: [email protected]
4 Email: [email protected]

http://dx.doi.org/10.1016/j.endm.2014.11.015 1571-0653/© 2014 Elsevier B.V. All rights reserved.

1 Introduction

Mixed Covering Arrays (MCAs) are combinatorial structures that have been applied successfully to the design of test suites for software interaction testing [11]. The problem of constructing MCAs with an arbitrary number of features (columns) and the minimum number of required tests (rows) is very important. An MCA, denoted by MCA(N; t, k, v_0 v_1 ... v_{k-1}), is an N × k array in which the values of the i-th column come from an alphabet of size v_i, with 0 ≤ i ≤ k − 1, and every combination of t columns i_0, i_1, ..., i_{t−1} must contain each element of the set $v_{i_0} \times v_{i_1} \times \cdots \times v_{i_{t-1}}$ at least once. A shorter notation for an MCA uses exponents, writing $v_0^{u_0} v_1^{u_1} \cdots$ to indicate that there are u_i columns with cardinality v_i. An example of an MCA(6; 2, 4, 3^1 2^3) in its transposed form is shown in Figure 1.

0 1 1 0 0 1
0 1 0 1 1 0
0 1 0 1 0 1
0 0 1 1 2 2

Fig. 1. Transposed matrix of an MCA(6; 2, 4, 3^1 2^3).
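The covering property can be checked mechanically. The following is a minimal sketch (our own illustration, not the authors' code) that verifies that every projection onto t columns covers all t-tuples, applied to the array of Fig. 1 (the figure is stored row-per-factor, so it is transposed back before checking):

```python
from itertools import combinations, product

def is_mca(rows, t, alphabet_sizes):
    """Check that every t-column projection of `rows` covers all t-tuples."""
    for cols in combinations(range(len(alphabet_sizes)), t):
        required = set(product(*(range(alphabet_sizes[c]) for c in cols)))
        seen = {tuple(row[c] for c in cols) for row in rows}
        if not required <= seen:
            return False
    return True

# The MCA(6; 2, 4, 3^1 2^3) of Fig. 1: one row per factor,
# columns 0-2 are binary, column 3 is ternary after transposing.
transposed = [
    [0, 1, 1, 0, 0, 1],
    [0, 1, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 1, 2, 2],
]
rows = [list(r) for r in zip(*transposed)]  # 6 tests x 4 factors
print(is_mca(rows, t=2, alphabet_sizes=[2, 2, 2, 3]))  # → True
```

Removing any single test row breaks at least one pairwise interaction, which is why N = 6 is tight for this instance.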

It is not known whether the Covering Array Construction (CAC) problem is NP-complete [12], although some related problems are NP-complete [4], and polynomial-time construction algorithms exist only for a few special cases [4]. The proposed algorithm combines the foundations and advantages of Simulated Annealing (SA) and Variable Neighborhood Search (VNS); we call it SAVNS. The motivation for SAVNS comes from the many papers reporting the success of SA and VNS used independently, as well as hybrid implementations of VNS [14,3].

2 Proposed Approach

This section presents the proposed SAVNS algorithm. SAVNS comprises: an initial solution; an evaluation function; a set of neighborhood functions; a cooling schedule; and a combination of SA and VNS. The initial solution is a matrix M obtained by adding randomly generated rows with maximum Hamming distance, as stated in [1].
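The exact row-generation procedure is given in [1]; a plausible greedy sketch (our assumption, not the reference implementation) picks, among a pool of random candidate rows, the one farthest in total Hamming distance from the rows already chosen:

```python
import random

def hamming(r1, r2):
    """Number of positions where two rows differ."""
    return sum(a != b for a, b in zip(r1, r2))

def initial_solution(N, alphabet_sizes, candidates=50, seed=None):
    """Greedy sketch: each new row is the random candidate that maximizes
    the total Hamming distance to the rows already in M."""
    rng = random.Random(seed)
    M = []
    for _ in range(N):
        pool = [[rng.randrange(v) for v in alphabet_sizes]
                for _ in range(candidates)]
        best = max(pool, key=lambda row: sum(hamming(row, m) for m in M))
        M.append(best)
    return M

M = initial_solution(6, [2, 2, 2, 3], seed=1)
```

Starting from well-spread rows tends to cover many t-tuples before the local search even begins.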


The evaluation function (τ) implemented in the SAVNS algorithm is the number of missing t-tuples; the desired solution has zero missing t-tuples. The cost of computing the fitness of a solution is $O(N\binom{k}{t})$, but by using local recalculation it is decreased to $O(\binom{k-1}{t-1})$ for each modified cell. The local recalculation is computed with a matrix P of size $\binom{k}{t} \times \prod_{i=0}^{t-1} vmax_i$, where vmax is the set of the t highest cardinalities of the MCA under construction. The matrix P stores the number of times each t-tuple is covered in each combination of t columns. A probabilistic mixture of two neighborhood functions is implemented in the proposed SAVNS algorithm as a local search. The first neighborhood function, nf1, searches for a missing t-tuple in the matrix P and then tries to place that tuple in each row of the matrix M, modifying the first row that covers the selected tuple without generating a new missing one; otherwise, nf1 places the chosen missing tuple in the row that yields the minimum increase of missing tuples. The function nf1 is applied with a probability nf1 ∈ [0, 1] and its cost is $O(4N \sum_{i=1}^{t} \binom{t}{i}\binom{k-t}{t-i})$.
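A dictionary-based sketch of the coverage table P and the evaluation function τ (a simple illustration under our own naming, not the paper's optimized array layout) can look like this:

```python
from itertools import combinations, product

def build_P(M, t, alphabet_sizes):
    """Coverage-count table: P[cols][tup] is how many rows of M cover
    t-tuple `tup` in the column combination `cols`."""
    k = len(alphabet_sizes)
    P = {}
    for cols in combinations(range(k), t):
        counts = {tup: 0 for tup in
                  product(*(range(alphabet_sizes[c]) for c in cols))}
        for row in M:
            counts[tuple(row[c] for c in cols)] += 1
        P[cols] = counts
    return P

def tau(P):
    """Evaluation function: number of missing t-tuples (0 means valid MCA)."""
    return sum(1 for counts in P.values()
               for c in counts.values() if c == 0)
```

Because P keeps per-tuple counts, changing one cell only requires updating the $\binom{k-1}{t-1}$ column combinations that contain that cell, which is the local recalculation described above.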

A cell is considered a wildcard when every tuple involving it is covered more than once, because any value can then be placed in that cell without decreasing the interaction coverage. The second neighborhood function, nf2, sequentially tries all v − 1 alternative values in at most 2t cells, changing only the first cell that decreases the number of missing tuples; otherwise it changes the value of the cell that triggers the minimum increase of missing tuples. Additionally, nf2 identifies the wildcard cells and randomly changes their values. The function nf2 is applied with probability nf2 = 1 − nf1 and its cost is $O(2t \cdot 2v\binom{k-1}{t-1})$. The cooling schedule of the SA is composed of: a) an initial temperature (temp0); b) a decrement function for lowering the temperature; c) a final temperature (tempf); and d) a finite number of local search trials (L), also known as the length of each homogeneous Markov chain. The parameters of the cooling schedule control the overall behavior of the SA algorithm and thus drastically affect the quality of the final solution. A geometric function is used to decrease the current system temperature, where the parameter controlling the decrease proportion (α) remains static at running time, because this has given good results in previously reported approaches. A factor β = 1/α was added to increase the temperature and reduce the number of cases of premature convergence; it is applied instead of the parameter α as long as an improvement is found in the


best solution in a block of L iterations. The termination criterion is based on the final temperature and on a parameter called the frozen factor (ff). The frozen factor works as an alternative termination criterion: the search stops when the number of consecutive temperature decrements without an improvement in the best solution exceeds an established limit (set to 33). The main algorithm is based on the simulated annealing algorithm presented by Kirkpatrick [10] and is improved by implementing VNS (presented by N. Mladenovic and P. Hansen [13]) inside the cycle of the homogeneous Markov chain of length L, where the maximum number of neighborhood structures is determined by pmax. The pseudocode of the main algorithm, which we call SAVNS, is shown in Algorithm 1.

Algorithm 1 Pseudocode of the proposed SAVNS algorithm to construct MCAs.

Input: N, t, k, v_0 v_1 ... v_{k−1}, nf1, temp0, tempf, L, α, ff
Output: M*
begin
  M* ← M ← M′ ← getInitialSolution(); p ← 0; i ← 1
  repeat
    temp ← temp0 / i; bestImprovementG ← false; fc ← 0
    while temp > tempf and fc < ff do
      bestImprovementL ← false
      for m ← 0 to L − 1 do
        M′ ← shake(M, p)
        if rand(0, 1] < nf1 then M′ ← nf1(M′) else M′ ← nf2(M′)
        if rand(0, 1] ≤ min{1, e^(−(τ(M′) − τ(M)) / temp)} then
          M ← M′; p ← 0
          if τ(M′) < τ(M*) then
            M* ← M′; bestImprovementL ← true; bestImprovementG ← true; fc ← 0
            if τ(M*) = 0 then return M*
          end
        else
          p ← p + 1
          if p = pmax then M′ ← M; p ← 0
        end
      end
      if bestImprovementL then temp ← temp · (1/α)
      else temp ← temp · α; fc ← fc + 1
    end
    i ← i + 1
  until bestImprovementG = false
  return M*
end
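The three control rules at the heart of the algorithm, Metropolis acceptance, geometric cooling with reheating, and the VNS neighborhood change, can be sketched as follows (a minimal illustration with our own function names, not the authors' implementation):

```python
import math
import random

def metropolis_accept(tau_new, tau_old, temp, rng=random):
    """Metropolis criterion: always accept improvements; accept a
    worsening move with probability e^(-(increase)/temp)."""
    delta = tau_new - tau_old
    return delta <= 0 or rng.random() < math.exp(-delta / temp)

def next_temperature(temp, alpha, improved_in_block):
    """Geometric cooling with reheating: multiply by alpha (< 1) normally,
    by beta = 1/alpha when the best solution improved in the last block."""
    return temp / alpha if improved_in_block else temp * alpha

def next_neighborhood(p, accepted, pmax):
    """VNS neighborhood change: reset to the closest neighborhood after an
    accepted move, otherwise step one neighborhood farther (wrap at pmax)."""
    if accepted:
        return 0
    return 0 if p + 1 == pmax else p + 1
```

At high temperatures almost any move passes the Metropolis test; as the temperature falls the search becomes increasingly greedy, while the growing shake distance p compensates by jumping farther when moves are rejected.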

The distance between two solutions (d) is measured as the number of cell changes needed to transform a solution M1 into M2; for this reason, the maximum distance between two solutions in the problem of constructing MCAs is N·k.


Most of the time, neighborhood functions only explore neighboring solutions at a fixed distance d, an approach that can cause the search to stagnate in a local optimum. VNS explores distant neighborhoods dynamically during the search, allowing it to escape from local optima [9,8,13]. The basic idea of VNS is to change the neighborhood systematically within the local search algorithm; its basic steps are shaking, local search, and neighborhood change. In SAVNS, the shake function sets arbitrary symbols in p randomly selected cells, with a cost of $O(p\binom{k-1}{t-1})$; the probabilistic mixture of the neighborhood functions operates as the local search; and the neighborhood change is done with the Metropolis acceptance criterion. The SAVNS algorithm was subjected to a fine-tuning process to select a good parameter configuration. The best configuration found was ff = 33, temp0 = 1, tempf = 1 × 10^{−10}, α = 0.99, pmax = t, L = Nk(max(v_i))^2 and nf1 = 0.5.
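The shake step described above amounts to re-assigning p randomly chosen cells; a minimal sketch (our own, under the assumption that a cell may receive its current symbol again) is:

```python
import random

def shake(M, p, alphabet_sizes, rng=random):
    """Perturb a copy of M by re-assigning p randomly chosen cells with
    random symbols drawn from each column's alphabet."""
    M2 = [row[:] for row in M]          # work on a copy; M is untouched
    N, k = len(M2), len(alphabet_sizes)
    for _ in range(p):
        r, c = rng.randrange(N), rng.randrange(k)
        M2[r][c] = rng.randrange(alphabet_sizes[c])
    return M2
```

With p = 0 the shake is a no-op, so the local search starts from the current solution; larger p values produce progressively stronger perturbations, which is exactly the VNS idea of widening the neighborhood.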

3 Results

A performance comparison was carried out on a benchmark consisting of 9 instances. The comparison between the best results obtained with the proposed SAVNS (in 31 runs) and the reported results of state-of-the-art algorithms (IPOG-F [5], MiTS [6], SA [2], and SA-VNS [14]) is shown in Table 1, where the first column contains the instance number and the second column gives a short description of the constructed MCA. The seventh column presents the best previously known solution (β), the eighth column the best result obtained by SAVNS (Θ), and the ninth column the difference between them (Δ = Θ − β). SAVNS was able to equal seven best-known solutions and to improve on the state-of-the-art algorithms in two instances. A second comparison was made between the TS algorithm [7] and SAVNS, using a benchmark of 9 instances with each algorithm run 31 times per instance. The results, presented in Table 2, show that the proposed approach provides better solution quality: it improves the previously published bounds on three instances and equals them on the remaining instances.


Table 1
Comparison results between the approaches IPOG-F [5], MiTS [6], SA [2], SA-VNS [14] and the proposed SAVNS.

Id  MCA                        IPOG-F [5]  MiTS [6]  SA [2]  SA-VNS [14]  β    SAVNS Θ  Δ
1   t2 k18 v6^7 4^8 2^3         55          47        47      45           45   45       0
2   t2 k19 v6^9 4^3 2^7         60          51        51      50           50   50       0
3   t2 k8  v8^2 7^2 6^2 5^2     64          64        64      64           64   64       0
4   t3 k9  v4^5 3^4             100         85        80      83           80   80       0
5   t3 k6  v5^2 4^2 3^2         103         100       100     100          100  100      0
6   t3 k7  v10^1 6^2 4^3 3^1    361         360       360     360          360  360      0
7   t3 k12 v10^2 4^1 3^2 2^7    402         400       400     400          400  400      0
8   t3 k15 v6^6 5^5 3^4         426         -         -       376          376  375      -1
9   t3 k8  v8^2 7^2 6^2 5^2     554         540       535     500          500  499      -1

Table 2
Comparison results obtained for the third benchmark by the algorithms TS [7] and SAVNS.

Id  MCA                                             N*   TS [7] N (time)    SAVNS N (time)     Δ
1   t2 k10 v6^2 5^2 4^2 3^2 2^2                      36   36  (0.066)        36  (0.922)        0
2   t3 k10 v6^2 5^2 4^2 3^2 2^2                      180  180 (38906.771)    180 (14633.551)    0
3   t2 k12 v7^2 6^2 5^2 4^2 3^2 2^2                  49   49  (0.283)        49  (2.602)        0
4   t3 k12 v7^2 6^2 5^2 4^2 3^2 2^2                  294  329 (847.471)      307 (64134.004)    -22
5   t2 k14 v8^2 7^2 6^2 5^2 4^2 3^2 2^2              64   64  (3.602)        64  (4.438)        0
6   t3 k14 v8^2 7^2 6^2 5^2 4^2 3^2 2^2              448  540 (19341.489)    503 (421599.828)   -37
7   t2 k16 v9^2 8^2 7^2 6^2 5^2 4^2 3^2 2^2          81   81  (35.531)       81  (45.844)       0
8   t2 k18 v10^2 9^2 8^2 7^2 6^2 5^2 4^2 3^2 2^2     100  100 (702.300)      100 (564.849)      0
9   t2 k20 v11^2 10^2 9^2 8^2 7^2 6^2 5^2 4^2 3^2 2^2  121  122 (3927.934)   121 (14218.531)    -1

4 Conclusions

In this paper we presented an algorithm to construct MCAs based on a combination of SA and VNS, which we call SAVNS. The VNS is merged with the Metropolis acceptance stage inside the cycle of the homogeneous Markov chain of the SA. In this way the SA controls the acceptance of movements using the Metropolis criterion, while the VNS works as a search booster by allowing the exploration of neighborhoods at different distances, improving the ability to escape from local optima and, on some occasions, to find the global optimum.


The major features of the SAVNS algorithm are: a mechanism to change the size of the neighborhood, in the range [0, p − 1], according to the accepted movements; a mechanism to increase the temperature and decrease the probability of premature convergence; and the use of a mixture of neighborhood functions that work as local search and diversify the search strategy. A one-sided Wilcoxon signed-rank test shows that SAVNS achieves a significant quality improvement over the best previously reported metaheuristics [5,6,2,7]; furthermore, SAVNS was able to improve on the state-of-the-art algorithms in five instances, confirming that SAVNS is a good approach to construct MCAs.

Acknowledgments

The authors thank the General Coordination of Information and Communications Technologies (CGSTIC) at Cinvestav for providing HPC resources on the hybrid cluster supercomputer "Xiuhcoatl". This research was partially funded by the following projects: CONACyT 58554 - Cálculo de Covering Arrays; 51623 - Fondo Mixto CONACyT y Gobierno del Estado de Tamaulipas.

References

[1] Avila-George, H., "Constructing Covering Arrays using Parallel Computing and Grid Computing," Ph.D. thesis, Universitat Politecnica de Valencia (2012).

[2] Avila-George, H., J. Torres-Jimenez, V. Hernández and L. Gonzalez-Hernandez, Simulated annealing for constructing mixed covering arrays, in: Proceedings of the 9th International Symposium on Distributed Computing and Artificial Intelligence - DCAI, Advances in Intelligent and Soft Computing 151 (2012), pp. 657–664.

[3] Bouffard, V. and J. Ferland, Improving simulated annealing with variable neighborhood search to solve the resource-constrained scheduling problem, Journal of Scheduling 10 (2007), pp. 375–386.

[4] Colbourn, C., Combinatorial aspects of covering arrays, Le Matematiche 59 (2004), pp. 125–172.

[5] Forbes, M., J. Lawrence, Y. Lei, R. Kacker and D. Kuhn, Refining the in-parameter-order strategy for constructing covering arrays, Journal of Research of the National Institute of Standards and Technology 113 (2008), pp. 287–297.


[6] Gonzalez-Hernandez, L., N. Rangel-Valdez and J. Torres-Jimenez, Construction of mixed covering arrays of variable strength using a tabu search approach, in: W. Wu and O. Daescu, editors, Combinatorial Optimization and Applications, Lecture Notes in Computer Science 6508, Springer Berlin / Heidelberg, 2010, pp. 51–64.

[7] Gonzalez-Hernandez, L., N. Rangel-Valdez and J. Torres-Jimenez, Construction of mixed covering arrays of strengths 2 through 6 using a tabu search approach, Discrete Mathematics, Algorithms and Applications 04 (2012), p. 1250033.

[8] Hansen, P. and N. Mladenović, A tutorial on variable neighborhood search, Technical report, Les Cahiers du GERAD, HEC Montréal and GERAD (2003).

[9] Hansen, P. and N. Mladenović, Variable neighborhood search, in: F. Glover, G. Kochenberger, F. S. Hillier and C. C. Price, editors, Handbook of Metaheuristics, International Series in Operations Research & Management Science 57, Springer New York, 2003, pp. 145–184.

[10] Kirkpatrick, S., C. Gelatt Jr and M. Vecchi, Optimization by simulated annealing, Science 220 (1983), pp. 671–680.

[11] Kuhn, D., D. Wallace and A. Gallo Jr, Software fault interactions and implications for software testing, IEEE Transactions on Software Engineering 30 (2004), pp. 418–421.

[12] Lawrence, J., R. Kacker, Y. Lei, D. Kuhn and M. Forbes, A survey of binary covering arrays, The Electronic Journal of Combinatorics 18 (2011), p. 1.

[13] Mladenović, N. and P. Hansen, Variable neighborhood search, Computers & Operations Research 24 (1997), pp. 1097–1100.

[14] Rodriguez-Cristerna, A. and J. Torres-Jimenez, A simulated annealing with variable neighborhood search approach to construct mixed covering arrays, Electronic Notes in Discrete Mathematics 39 (2012), pp. 249–256, EURO Mini Conference.