Information and Software Technology 74 (2016) 69–85
Effective algorithms for constructing minimum cost adaptive distinguishing sequences

Uraz Cengiz Türker∗, Tonguç Ünlüyurt, Hüsnü Yenigün

Sabanci University, Orhanli, Tuzla, Istanbul 34956, Turkey
Article history: Received 4 January 2015; Revised 1 February 2016; Accepted 2 February 2016; Available online 26 February 2016

Keywords: Finite State Machines; Adaptive distinguishing sequences; Checking sequences
Abstract

Context: Given a Finite State Machine (FSM), a checking sequence is a test sequence that determines whether the system under test is correct, as long as certain standard assumptions hold. Many checking sequence generation methods use an adaptive distinguishing sequence (ADS), which is an experiment that distinguishes the states of the specification machine. Furthermore, it has been shown that the use of shorter ADSs yields shorter checking sequences. On the other hand, it is also known that constructing a minimum cost ADS is an NP-hard problem, and that it is NP-hard to approximate. This motivates the study of effective ADS construction methods.

Objective: The main objective of this paper is to suggest new methods that can compute compact ADSs to be used in the construction of checking sequences.

Method: We briefly present the existing ADS construction algorithms. We then propose generalizations of these approaches with a set of heuristics. We also conduct experiments to compare the size of the resultant ADSs and the length of the checking sequences constructed using these ADSs.

Results: The results indicate that when the ADSs are constructed with the proposed methods, the length of the checking sequences may be reduced by up to 54% (40% on average).

Conclusions: In this paper, we present the state-of-the-art ADS construction methods for FSMs and we propose generalizations of these methods. We show that our methods are effective in terms of computation time and ADS quality.

© 2016 Elsevier B.V. All rights reserved.
1. Introduction

Testing is an important part of the software development process, but it is typically manual and, as a result, expensive and error-prone. Therefore, there has been significant interest in automating testing from formal specifications. A widely used formal model for the specification is the Finite State Machine (FSM) model. The FSM model and its extensions, such as the Specification and Description Language (SDL) [1] or State-Charts [2], are also used to model the semantics of the underlying software. Deriving test sequences from FSM models has therefore been an attractive topic for various application domains such as sequential circuits [3], lexical analysis [4], software design [5], communication protocols [6–11], object-oriented systems [12], and web services [13,14]. Such techniques have also been shown to
∗ Corresponding author. Tel.: +90 5073631731.
E-mail addresses: [email protected], [email protected] (U.C. Türker), [email protected] (T. Ünlüyurt), [email protected] (H. Yenigün).
http://dx.doi.org/10.1016/j.infsof.2016.02.001
0950-5849/© 2016 Elsevier B.V. All rights reserved.
be effective in important industrial projects [15]. The purpose of generating these test sequences is to decide whether an implementation conforms to its specification. An implementation is said to conform to its specification when the implementation has the same behavior as defined by the FSM specification. In order to determine whether an implementation N has the same behavior as the specification M, a test sequence (an input/output sequence) is derived from M and the input portion of the sequence is applied to N. The final decision is made by comparing the output sequence produced by N (i.e. the actual output) and the output portion of the test sequence (i.e. the expected output). If there is a difference between the actual and the expected output, then N is a faulty implementation of M. Although, in general, having no difference between the actual and the expected output does not mean that N is a correct implementation of M, it is possible to construct a test sequence with such a guarantee under some conditions on M and N. A test sequence with such full fault coverage is called a checking sequence [5,16]. The literature contains many techniques that automatically generate checking sequences [5,16–21]. In principle, checking
sequences constructed by these approaches consist of three types of components: initialization, state identification, and transition verification. As the transition verification components are also based on identifying the starting and ending states of the transitions, a checking sequence incorporates many applications of input sequences to identify the states of the underlying FSM. For state identification, several alternative approaches exist, such as Distinguishing Sequences (DS), Unique Input Output (UIO) Sequences, or Characterizing Sets (W-Set). Among these alternatives, a checking sequence of polynomial length can be constructed in polynomial time when a DS exists [19,22]. Checking sequences constructed without using a DS, on the other hand, are in general of exponential length [19]. Therefore, many techniques for constructing checking sequences either use a given DS [17,18,23,24], or use both a DS and other alternatives together [25–27] for state identification. There are two types of distinguishing sequences. A Preset Distinguishing Sequence (PDS) is a single input sequence for which different states of the FSM produce different output sequences. An Adaptive Distinguishing Sequence (ADS) (also known as a Distinguishing Set [28]), on the other hand, can be thought of as a rooted decision tree with n leaves, where n is the number of states of M. The internal nodes of the tree are labeled by input symbols and the leaves are labeled by distinct states. The edges emanating from a common node are labeled by different output symbols. On the path from the root node to a leaf node labeled by a state s, the concatenation of the input labels forms an input sequence, and the concatenation of the output labels corresponds to the output sequence that would be obtained when this input sequence is applied to the state s. We present a formal definition of an ADS in Section 2.
The use of an ADS is straightforward: to identify the current state of an FSM, one applies the input symbol at the root and follows the outgoing edge labeled by the output symbol that is produced by the FSM. The procedure is repeated for the root of the subtree reached in this way, as long as the current node is an internal node of the ADS. When a leaf node is reached, the state label of the node gives the initial state at which the experiment started. In this paper, we consider deterministic and completely specified FSMs¹. For constructing a checking sequence for such FSMs, using an ADS rather than a PDS is advantageous. Lee and Yannakakis show that checking the existence of, and computing, a PDS is a PSPACE-complete problem [29]. On the other hand, for a given FSM M with n states and m input symbols, the existence of an ADS can be decided in O(mn log n) time [29].

1.1. Literature review

This section reviews previous work on ADSs. There are many computational complexity results regarding ADSs for deterministic and complete FSMs. Although earlier bounds for the height of ADSs are exponential in the number of states [30], Sokolovskii proved that if an FSM M with n states has an ADS, then it has an ADS of height at most π²n²/12 [31]. Moreover, Kogan claimed that, for a given n-state FSM, the length of an ADS is bounded above by n(n − 1)/2 [32]; Rystsov later proved this claim [33]. Lee and Yannakakis proposed an algorithm (the LY algorithm) that constructs an ADS of height at most n(n − 1)/2 in O(mn²) time [29]. It was proven that minimizing the height of an ADS (in fact, minimizing ADS size with respect to some other metrics as well) is an NP-hard problem [34]. Türker and Yenigün proposed two heuristics as modifications of the LY algorithm for minimizing ADSs [34]. Recently, Türker et al. also presented an enhanced version of the successor tree algorithm, called the lookahead based algorithm (LA), for ADS minimization [35].
¹ Please see Section 2.1 for the definitions of these terms.
Unfortunately, not all FSMs possess an ADS. For such cases, Hierons and Türker introduced the notion of incomplete ADSs [36]. They showed that the optimization problems and the corresponding approximation problems related to incomplete ADSs are PSPACE-complete. A greedy algorithm to construct incomplete ADSs is also given in this work. Besides these results for deterministic and complete FSMs, there are also works on ADSs for non-deterministic and incomplete FSMs. Kushik et al. present an algorithm for constructing ADSs for non-deterministic observable FSMs [37]. Since the class of deterministic FSMs is a subclass of non-deterministic observable FSMs, this algorithm can also be used to construct ADSs for a given deterministic FSM M. It was recently shown that, for partial FSMs, checking the existence of an ADS can be done in polynomial time, whereas checking the existence of a PDS is PSPACE-complete [38]. The height of a minimum ADS for a partial FSM is known to be at most (n − 1)², although it is not known whether this bound is tight [39]. Finally, in [40] the authors propose a brute-force massively parallel algorithm for deriving ADSs/PDSs from partial observable non-deterministic FSMs.

1.2. Motivation and problem statement

As the length of the checking sequence determines the duration, and hence the cost, of testing, there exists a line of work on reducing the length of checking sequences. In these works, the goal is to generate a shorter checking sequence by putting the pieces that need to exist in such a checking sequence together in a better way [17,18,21,23,41–43]. In [34], however, Türker and Yenigün showed the potential benefit of constructing minimum cost ADSs for the length of checking sequences, and examined the computational complexity of constructing minimum cost ADSs.
In their work, they define the "cost" of an ADS as (i) the height of the ADS (the MinHeightADS problem), (ii) the sum of the depths of all leaves in the ADS, i.e. the external path length (the MinADS problem), and (iii) the weighted sum of the depths of the leaves in the ADS (the MinWeightADS problem). They showed that constructing a minimum ADS with respect to each of these cost metrics is NP-complete and NP-hard to approximate. They proposed two different modifications of the LY algorithm, called GLY1 and GLY2, for constructing compact ADSs with respect to minimum height and minimum external path length. As discussed in Section 1.1, except for the exponential time algorithms [30,35,44], no polynomial time algorithm has been proposed for constructing minimum cost ADSs. Besides, no work has been reported on constructing ADSs with minimum weight, and no work shows the effect of using such ADSs for constructing checking sequences. This paper is mainly motivated by these observations. In this paper, we first provide a brief summary of the existing ADS construction algorithms, including the STA, LY, GLY1, GLY2 and LA algorithms, and then we propose generalizations of these approaches: (1) the Low-cost ST construction approach (LCST), (2) the Splitting Forest Algorithm (SFA), and (3) the Splitting Graph Algorithm (SGA), for constructing reduced size ADSs. Furthermore, we present a set of new heuristics to construct ADSs with minimum height, minimum external path length and minimum weight. LCST is a generalization of the GLY1 and GLY2 algorithms. SFA makes use of a splitting forest (SF) to construct an ADS, and SGA makes use of a splitting graph (SG) to construct an ADS. The construction of STs, SFs and SGs is guided by different heuristics based on the objective, such as minimizing the height, the external path length or the weight of the ADS. LCST and SFA are polynomial time methods, but SGA may require time exponential in the number of states of the underlying FSM to construct an ADS.
We compare the existing and the proposed methods by performing experiments and we report on the results of these experiments. In the experiments, we compared the quality of the ADSs computed by the existing and proposed methods with different objective functions, and compared the length of the checking sequences constructed with the ADSs computed by the above mentioned approaches. The experiment subjects included randomly generated FSMs, FSMs drawn from a benchmark, and a special class of FSMs proposed in [31]. The results suggest that the length of the checking sequences is reduced by 40% on average when they are constructed with the ADSs computed by the proposed methods.

1.3. Summary of the paper

Section 2 introduces the terminology and the notation that we use throughout the paper. We summarize the existing ADS construction algorithms in Section 3. The LCST, SFA, and SGA methods are presented in Section 4. We present new heuristics in Section 5. The results of the experiments are given in Section 6. Finally, in Section 7, we conclude with discussions.
Fig. 1. An example deterministic, completely specified and minimal FSM M1.
2. Preliminaries

2.1. Finite State Machines (FSMs)

A Finite State Machine (FSM) M is defined by a tuple M = (S, X, Y, δ, λ), where S is a finite set of states, X = {x1, x2, ..., xm} is a finite set of input symbols (or simply inputs), and Y = {y1, y2, ..., yl} is a finite set of output symbols (or simply outputs). δ: S × X → S is the transition function and λ: S × X → Y is the output function. When δ and λ are total functions, M is said to be deterministic and completely specified. We assume that FSM M resides in a single state s ∈ S and, when it receives an input symbol x ∈ X, it produces the output symbol λ(s, x) and changes its state to δ(s, x).

The transition and output functions are extended to input sequences as follows, where ε denotes the empty sequence. For α ∈ X* and x ∈ X, δ̄(s, ε) = s, δ̄(s, xα) = δ̄(δ(s, x), α), λ̄(s, ε) = ε, and λ̄(s, xα) = λ(s, x)λ̄(δ(s, x), α). We call a subset B ⊆ S of states a block. The transition and output functions are extended to blocks as follows: for a block B and α ∈ X*, δ̄(B, α) = ∪s∈B δ̄(s, α) and λ̄(B, α) = ∪s∈B λ̄(s, α). In the rest of the paper, we use the symbols δ and λ to denote δ̄ and λ̄, respectively.

An input x ∈ X is called a valid input with respect to a block B if for any pair of different states s, s′ of B, we have that δ(s, x) = δ(s′, x) ⇒ λ(s, x) ≠ λ(s′, x). An input sequence α ∈ X* is said to be a splitting sequence for a block B if |λ(B, α)| > 1 and, for any α′, α″ ∈ X* and x ∈ X such that α = α′xα″, x is a valid input for δ(B, α′). Intuitively, α is an input sequence such that at least two states in B produce different output sequences for α, and no two states in B are merged without being distinguished first. In other words, α splits B, hence the name. We call an input symbol x a splitting input for B if x is a splitting sequence of length one for B.

For a block B, an input sequence α and an output sequence β, we use the notation Bα/β to denote the set Bα/β = {s ∈ B | λ(s, α) = β}. In other words, Bα/β is the set of states in B that produce the output sequence β when the input sequence α is applied. We also use the notation Bα to denote the set Bα = {Bα/β | β ∈ λ(B, α)}. Intuitively, Bα is the partition of B in which two states s and s′ of B are in the same block of Bα iff s and s′ produce the same output sequence for α.

If for any pair of different states of an FSM M there exists a splitting sequence, then M is called a minimal FSM. An FSM M can be represented by a directed graph G, where the vertices of G correspond to the states of M and the edges of G correspond to the transitions of M. An FSM M is strongly connected if the corresponding directed graph G is strongly connected. Fig. 1 gives an example FSM M1, where S = {s1, s2, s3, s4}, X = {x1, x2}, and Y = {y1, y2}. Note that FSM M1 is a minimal machine, as the input sequence x1x2x1x1x2x1 is a splitting sequence for every pair of different states.

2.2. Adaptive distinguishing sequences (ADSs)

In the literature, an ADS is typically defined for an FSM M, or in other words, for the entire set S of states of M. However, we prefer to define an ADS for a block, as follows.

Definition 1. Let M be an FSM with the set of states S, and let B ⊆ S be a block with |B| = η. An adaptive distinguishing sequence for the block B is a rooted tree TB with exactly η leaves, where the non-leaf nodes are labeled by input symbols, the edges are labeled by output symbols, and the leaves are labeled by distinct states in B such that: (1) the output labels of the edges emanating from a common node are distinct; (2) for a leaf node of TB labeled by a state s, if α (resp. β) is the input (resp. output) sequence formed by the concatenation of the node (resp. edge) labels on the path from the root node to this leaf node, then λ(s, α) = β.

Note that when B = S, Definition 1 corresponds to the classical notion of an adaptive distinguishing sequence, and we use T to refer to such an ADS. An ADS T defines an adaptive experiment to identify the unknown initial state of an FSM, where the next input to be applied is decided by the input/output sequence observed previously. Assume that we are given an FSM M and we want to identify its current unknown state. One starts by applying the input symbol labeling the root of T. We then follow the outgoing branch of the root that is labeled by y, where y is the output symbol produced by M in response to the input applied. This procedure is applied recursively for each subtree reached, until we reach a leaf node. The label of the leaf node gives the initial unknown state. An ADS of the FSM given in Fig. 1 is shown in Fig. 2.
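The block notation above can be made concrete with a short sketch. The three-state machine below is a hypothetical example (it is not the M1 of Fig. 1), and for brevity the sketch checks only the output condition of a splitting sequence, not the valid-input (no-merge) condition:

```python
# Hypothetical 3-state FSM (not the M1 of Fig. 1): input "a" rotates the
# states, input "b" self-loops; lam gives the corresponding outputs.
delta = {("s1", "a"): "s2", ("s2", "a"): "s3", ("s3", "a"): "s1",
         ("s1", "b"): "s1", ("s2", "b"): "s2", ("s3", "b"): "s3"}
lam   = {("s1", "a"): "0", ("s2", "a"): "0", ("s3", "a"): "1",
         ("s1", "b"): "0", ("s2", "b"): "1", ("s3", "b"): "1"}

def out_seq(s, alpha):
    """Extended output function: the output sequence for alpha from s."""
    out = ""
    for x in alpha:
        out += lam[(s, x)]
        s = delta[(s, x)]
    return out

def partition(block, alpha):
    """B_alpha: group the states of the block by their output for alpha."""
    classes = {}
    for s in block:
        classes.setdefault(out_seq(s, alpha), set()).add(s)
    return list(classes.values())

def is_splitting(block, alpha):
    """Output condition only: at least two states answer differently."""
    return len(partition(block, alpha)) > 1

B = {"s1", "s2", "s3"}
assert is_splitting(B, "b")          # "b" separates s1 from {s2, s3}
assert len(partition(B, "ba")) == 3  # "ba" distinguishes all three states
```

Here "b" is a splitting input (a splitting sequence of length one), while "ba" refines the block down to singletons.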
Fig. 2. An ADS T of FSM M1 in Fig. 1.
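The adaptive experiment described above can be sketched in code: apply the root input, follow the edge labeled with the observed output, and repeat until a leaf is reached. Both the machine and the ADS below are hypothetical stand-ins, not the M1 and T of Figs. 1 and 2:

```python
# Hypothetical 3-state FSM and an ADS for it (not the M1/T of the figures).
delta = {("s1", "a"): "s2", ("s2", "a"): "s3", ("s3", "a"): "s1",
         ("s1", "b"): "s1", ("s2", "b"): "s2", ("s3", "b"): "s3"}
lam   = {("s1", "a"): "0", ("s2", "a"): "0", ("s3", "a"): "1",
         ("s1", "b"): "0", ("s2", "b"): "1", ("s3", "b"): "1"}

# An ADS node is either a leaf (a state label) or a pair
# (input symbol, {output symbol -> subtree}).
ads = ("b", {"0": "s1", "1": ("a", {"0": "s2", "1": "s3"})})

def identify(state, node):
    """Walk the ADS; the leaf label is the (initially unknown) state."""
    while not isinstance(node, str):     # internal node: apply its input
        x, children = node
        y = lam[(state, x)]              # output observed from the FSM
        state = delta[(state, x)]        # the machine moves on
        node = children[y]               # follow the matching edge
    return node

# The experiment recovers every possible initial state.
assert all(identify(s, ads) == s for s in ("s1", "s2", "s3"))
```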
Fig. 3. An example of a successor tree (edge labels are not given to reduce visual complexity).
For a given deterministic FSM M, an ADS may or may not exist. One can check whether M has an ADS in O(mn log n) time [29]. In this work, we consider only deterministic, minimal and completely specified FSMs, which are common assumptions in the FSM based testing literature. Furthermore, we also consider only FSMs for which an ADS exists. Let T be an ADS and p be a node in T. We use the depth of p (or d(p)) to refer to the length of the path from the root of T to p. The height of T (or hT) is defined to be the maximum depth of the leaves in T. The external path length of T (or eT) is the sum of the depths of all the leaves in T.

3. Existing approaches for constructing ADSs

In this section we introduce the existing ADS construction algorithms. All these methods are applicable to deterministic, completely specified, connected, and minimal FSMs.

3.1. Successor tree approach (STA)

In [45], Hennie describes how a minimal height adaptive distinguishing sequence can be constructed. In this approach, for a given FSM with n states and m inputs, a tree called the successor tree is constructed and the ADS is extracted from the successor tree (Fig. 3). Each node r of the tree is associated with a block bl(r), where the root is associated with S. Given a node r with |bl(r)| > 1 and a valid input² x for bl(r), for each output y ∈ λ(bl(r), x) there exists an edge from r to a child node r′ of r, where the edge is labeled by the input/output symbol pair x/y and bl(r′) = δ(bl(r)x/y, x). The construction of the successor tree requires an exponential amount of time and space. Since n(n − 1)/2 is an upper bound for the height of an ADS, the successor tree can be pruned at the nodes at depth n(n − 1)/2. From a given successor tree, an ADS is constructed by processing the tree in a bottom-up manner, starting from the leaves. The cost of a node with a singleton block label is set to 0, whereas the cost of a leaf node r with |bl(r)| > 1 is set to infinity.
For a node r and a valid input x for bl(r), cost(r, x) is the maximum of the costs of the children r′ of r such that the edge from r to r′ is labeled by x/y, for some y ∈ λ(bl(r), x). If x is a valid input symbol for bl(r) such that cost(r, x) is the minimum among all valid inputs for bl(r), then cost(r) is set to 1 + cost(r, x), and x is said to be the selected input for r. Thus, in the successor tree approach, the input that provides the lowest cost is selected to be used in the ADS. When the node r with bl(r) = S is reached, the selected inputs in the subtree of r define an ADS [45].

3.2. The LY algorithm

The LY algorithm constructs an ADS in two steps: (1) a tree called the splitting tree is constructed (the LY-ST algorithm); (2) the ADS is constructed by using the splitting tree (the LY-ADS algorithm). We briefly explain these algorithms in the following sections.

² Since we consider only FSMs for which an ADS exists, there always exists a valid input for bl(r).
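The bottom-up cost computation above can be condensed into a recursion: the minimal ADS height for a block is 0 for a singleton and otherwise 1 plus the best (over valid inputs) worst-case cost of the successor blocks. The sketch below memoizes this recursion on a hypothetical 3-state machine, with a depth bound standing in for the n(n − 1)/2 pruning of the full successor tree:

```python
# Sketch of the successor-tree cost recursion for minimal ADS height.
# The 3-state FSM is a hypothetical example, not taken from the paper.
from functools import lru_cache

delta = {("s1", "a"): "s2", ("s2", "a"): "s3", ("s3", "a"): "s1",
         ("s1", "b"): "s1", ("s2", "b"): "s2", ("s3", "b"): "s3"}
lam   = {("s1", "a"): "0", ("s2", "a"): "0", ("s3", "a"): "1",
         ("s1", "b"): "0", ("s2", "b"): "1", ("s3", "b"): "1"}
INPUTS = ("a", "b")
INF = float("inf")

def successors(block, x):
    """Validity check plus the blocks delta(B_{x/y}, x), one per output y."""
    image = {}
    for s in block:
        y, t = lam[(s, x)], delta[(s, x)]
        image.setdefault(y, set())
        if t in image[y]:
            return None   # two states merge without being split: x invalid
        image[y].add(t)
    return list(image.values())

@lru_cache(maxsize=None)
def cost(block, bound):
    """Minimal ADS height for the block, or INF if none within the bound."""
    if len(block) == 1:
        return 0
    if bound == 0:
        return INF
    best = INF
    for x in INPUTS:
        succ = successors(block, x)
        if succ is None:
            continue
        worst = max(cost(frozenset(b), bound - 1) for b in succ)
        best = min(best, 1 + worst)
    return best

# No single input distinguishes all three states, so the answer is 2.
assert cost(frozenset({"s1", "s2", "s3"}), 3) == 2
```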
3.2.1. Constructing a splitting tree (the LY-ST algorithm)

A splitting tree (ST) is a rooted tree τ. Each node of the tree is associated with an input sequence α and a block B. The block of the root node is set to S. For an internal node q labeled by a non-singleton block B and an input sequence α, α is a splitting sequence for B. There is a child node q′ of q for each β ∈ λ(B, α). The block of q′ is set to be the block Bα/β. Therefore, the blocks of the children of an internal node q associated with a non-singleton block B form a partition of B. The leaves are labeled by singleton blocks and the empty input sequence. The algorithm sets the block of a node q when the node q is created, but it sets the input sequence labeling q when the algorithm processes q.

The construction of an ST is initiated by creating an ST with only the root node. Afterwards, the nodes in the ST are processed iteratively until n leaves are generated, where n is the number of states of the FSM. At each iteration, an unprocessed node q associated with a non-singleton block B is processed, where B has the maximum cardinality among the blocks associated with all unprocessed nodes. The algorithm finds a splitting sequence α for B, and the input sequence label of q is set to α. Finally, the children of q are created as explained in the previous paragraph.

The splitting sequence for a given non-singleton block B of a node q is found as follows. The LY-ST algorithm first attempts to find a splitting input by considering every input symbol in X. The LY algorithm does not specify any particular order on the input symbols to be considered for this check; thus, a naïve implementation of the LY-ST algorithm would use some fixed (possibly lexicographical) ordering of the input symbols. If such an input symbol x is found, then the splitting sequence for B is set to x. Such an input x is referred to as a Type 1 input.
When there is no Type 1 input for B, the algorithm attempts to find an already processed node q (associated with a block B and a splitting sequence α ) and an input symbol x such that δ (B, x) ⊆ B , and none of the children q of q has a block B where δ (B, x) ⊆ B . If such a node q and an input symbol x are found, the splitting sequence for B is set to xα . Such an input symbol x is referred to as a Type 2 input. If the algorithm cannot find a Type 2 input for B, it finds a valid input sequence α such that ∃q associated with a block B where δ (B, α ) ⊆ B , and none of the children q of q has a block B where δ (B, α ) ⊆ B . Such an input sequence is referred to as a Type 3 input. The splitting sequence to be used for q is then ob tained as α α , where α is the splitting sequence for the block associated with node q . When the FSM has an ADS, the existence of such a node q and a valid input sequence α are guaranteed3 . The summary of ST construction algorithm is given in Algorithm 1 . The following results are proven in [29]. Lemma 1. If p is an internal node of an ST labeled by an input sequence α and a block B, where |B| = η, then the length of α is at most n + 1 − η. Lemma 2. For a given FSM M with n states and m inputs, the time complexity of the LY-ST algorithm is O(mn2 ). 3 Since the proof of correctness of the LY algorithm is out of scope of this paper, we refer the reader to [29] to see why such nodes have to exist.
Algorithm 1: The LY-ST algorithm.
Input: An FSM M
Output: A splitting tree for M.
1  begin
2    Construct a node q associated with block B = S
3    Initialize τ to be an ST having the root q only
4    Q ← {q} // Q is the set of nodes yet to be processed
5    while Q is not empty do
6      Pick a node q ∈ Q such that the block B of q has the maximum cardinality among the nodes in Q
7      Remove q from Q
8      if Type 1 input for B exists then α ← a Type 1 input
9      else if Type 2 input for B exists then α ← a Type 2 input
10     else // Type 3 input for B must exist
11       α ← Type 3 input
12     Associate α with q as the splitting sequence for B
13     foreach B′ ∈ Bα do
14       Introduce a new node q′ to τ where q′ is associated with B′
15       Introduce an edge from q to q′
16       if B′ is not singleton then
17         Q ← Q ∪ {q′}
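Restricted to blocks that always admit a Type 1 input, the loop of Algorithm 1 can be sketched as follows (Type 2/3 handling and the validity check are omitted for brevity, and the machine is a hypothetical example):

```python
# Sketch of Algorithm 1 restricted to Type 1 inputs: repeatedly take the
# largest unprocessed block and split it by a single input symbol.
# Hypothetical 3-state FSM; only the output function is needed here.
lam = {("s1", "a"): "0", ("s2", "a"): "0", ("s3", "a"): "1",
       ("s1", "b"): "0", ("s2", "b"): "1", ("s3", "b"): "1"}
INPUTS = ("a", "b")

def one_step_partition(block, x):
    """B_x: partition of the block by the output produced for x."""
    part = {}
    for s in block:
        part.setdefault(lam[(s, x)], set()).add(s)
    return list(part.values())

def build_st(states):
    """Returns the splitting tree as a map: block -> chosen Type 1 input."""
    tree = {}
    work = [frozenset(states)]
    while work:
        work.sort(key=len)
        block = work.pop()                  # largest cardinality first
        for x in INPUTS:
            parts = one_step_partition(block, x)
            if len(parts) > 1:              # x is a Type 1 input for block
                tree[block] = x
                work += [frozenset(p) for p in parts if len(p) > 1]
                break
        else:
            raise NotImplementedError("would need a Type 2/3 input here")
    return tree

st = build_st({"s1", "s2", "s3"})
assert st[frozenset({"s1", "s2", "s3"})] == "a"
assert st[frozenset({"s1", "s2"})] == "b"
```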
3.2.2. Constructing an ADS using the LY-ADS algorithm

The LY-ADS algorithm uses an ST to construct a tree T that defines an ADS. The tree T has a construction similar to that of an ST. The difference is that, during the construction of an ST τ, the LY-ST algorithm considers the initial states, whereas in the construction of T, the LY-ADS algorithm considers the final states reached. Each node p of T is associated with an input sequence X(p) and with a block B(p). For a leaf node p of T, we have |B(p)| = 1 and X(p) = ε. For an internal node p, on the other hand, |B(p)| > 1. The edges of T are labeled by output sequences. Let p be an internal node in T with X(p) = α and B(p) = B. Also let p′ be a child of p, where the edge from p to p′ is labeled by an output sequence β. In such a case we have that B(p′) = δ(Bα/β, α). Note that, unlike an ST, the blocks labeling the children of a node p do not necessarily form a partition of B(p).

Algorithm 2: The LY-ADS algorithm.

Input: A splitting tree ST
Output: An ADS T.
1  begin
2    Construct a node p with B(p) = S
3    Initialize T to be a tree having the root p only
4    Q ← {p} // Q is the set of nodes yet to be processed
5    while Q is not empty do
6      Pick a node p ∈ Q and let B = B(p)
7      α ← ST(B) // get a splitting seq. for B from ST
8      X(p) ← α
9      foreach B′ ∈ Bα do
10       Introduce node p′ to T such that B(p′) ← δ(B′, α)
11       Introduce an edge from p to p′ with label α/β
12       if δ(B′, α) is not singleton then Q ← Q ∪ {p′}

The LY-ADS algorithm is given in Algorithm 2. As in the case of the LY-ST algorithm, the construction of T is performed iteratively. First a tree is created that includes only the root node p of T, with B(p) = S. As long as there is an unprocessed node p in the partial tree, where B(p) = B and |B| > 1, p is processed in the following way. The LY-ADS algorithm searches the ST to find the deepest node q in the ST such that the block B′ of q includes B, i.e. B ⊆ B′. If α is the splitting sequence labeling q in the ST, then X(p) is set to α (lines 6 and 7). For each β ∈ λ(B, α), a child node p′ of p is created in T, by setting B(p′) = δ(Bα/β, α) (lines 8–12). Once T becomes a tree where |B(p)| = 1 for all leaves p, T defines an ADS. We refer the reader to [29] to see why T defines an ADS. The following is also proven in [29].

Lemma 3. For a given FSM M with n states and m inputs, and a given ST, the LY-ADS algorithm can construct an ADS for M in O(n²) time. Furthermore, the height of the ADS is bounded from above by n(n − 1)/2, and the number of nodes in the ADS is bounded from above by O(n²).

3.3. Enhancements on the LY algorithm: GLY1 and GLY2

In [34] two modified versions (GLY1 and GLY2) of the LY-ST algorithm are presented. The modification on Algorithm 1 is at line 8. In GLY1 and GLY2, all possible Type 1 inputs (as opposed to a single Type 1 input) are considered. During the construction of the ST, for a given block B, GLY1 selects a splitting input symbol that partitions B most evenly. Let B be a block and W be the set of all Type 1 splitting inputs for B. GLY1 selects a Type 1 input from W as follows:

GLY1(B) = rand(F1(B, W))    (1)

where the function rand(·) chooses an element of the set given as its input randomly⁴, and the function F1 considers the differences in the cardinalities of the blocks of the partitions of B induced by the splitting inputs in W:

F1(B, W) = argmin_{x ∈ W} Σ_{B′,B″ ∈ Bx} ||B′| − |B″||    (2)

The GLY2 approach, on the other hand, aims to select a splitting input x in W that maximizes the size of the partition Bx, without considering the sizes of the elements of Bx. If two or more input symbols give the same maximum value, it selects the input symbol that produces the most even partition. Formally,

GLY2(B) = x′, if argmax_{x ∈ W} |Bx| = {x′}; otherwise F1(B, argmax_{x ∈ W} |Bx|)    (3)

Although GLY1 and GLY2 have asymptotically the same computational complexity as the LY-ST algorithm [34], in practice GLY1 and GLY2 require more time to construct an ST (hence an ADS). In order to construct an ST using GLY1 or GLY2, line 8 of the LY-ST algorithm (Algorithm 1) should be changed as follows:

8  if Type 1 input for B exists then α ← GLY1(B) (or α ← GLY2(B))
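The GLY1 selection rule of Eqs. (1)–(2) can be sketched directly: score each Type 1 input by the evenness of the one-step partition it induces, then pick randomly among the minimizers. The partition function and the example blocks below are hypothetical:

```python
# Sketch of the GLY1 rule: most-even one-step partition, random tie-break.
import random
from itertools import combinations

def evenness(partition):
    """Sum over pairs of classes of ||B'| - |B''||; 0 is perfectly even."""
    return sum(abs(len(a) - len(b)) for a, b in combinations(partition, 2))

def gly1(B, W, one_step_partition):
    """one_step_partition(B, x) must return the partition B_x of block B."""
    scores = {x: evenness(one_step_partition(B, x)) for x in W}
    best = min(scores.values())
    return random.choice([x for x in W if scores[x] == best])

# Hypothetical partitions: "a" splits 2+2 (evenness 0), "b" splits 1+3.
parts = {"a": [{"s1", "s2"}, {"s3", "s4"}],
         "b": [{"s1"}, {"s2", "s3", "s4"}]}
B = {"s1", "s2", "s3", "s4"}
assert gly1(B, ["a", "b"], lambda B, x: parts[x]) == "a"
```

GLY2 would instead first maximize the number of classes of the partition and fall back on this evenness score only to break ties.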
3.4. The lookahead based ADS construction algorithm

The lookahead based ADS construction algorithm [35] can be regarded as an enhancement of the successor tree approach explained in Section 3.1.

⁴ Note that the operators argmin/argmax return the set of arguments achieving the optimum value.
Fig. 4. The successor tree constructed by the lookahead based method. The tree is pruned at the crossed out nodes (edge labels are not given to reduce visual complexity).
As in the case of the successor tree approach, the lookahead based ADS construction algorithm constructs a successor tree called the Enhanced Successor Tree (EST). The key strategy of the lookahead algorithm is that the ADS is constructed on the fly, as the EST is being formed. The successor tree approach generates the tree exhaustively up to depth n(n − 1)/2 and extracts a minimal cost ADS. The EST approach also constructs the tree exhaustively, but only up to a predefined height k, which is called the lookahead parameter. The algorithm then greedily selects a part of the tree that looks appealing at the moment, while pruning the rest of the tree (Fig. 4). The exhaustive subtree construction is recursively performed on the selected parts of the tree, and the process is repeated until an ADS is formed.

The lookahead algorithm uses heuristics to decide how the tree is constructed. As in the case of the successor tree approach, the score of a node reflects the size of the part of the ADS that will be rooted at this node. However, since the tree is only partially built in the lookahead based approach, the score of a node is only an estimate of the size of the ADS that will eventually be formed if that node is used in the ADS. The score of a node is calculated by processing the subtree rooted at this node in a bottom-up manner, starting from the current leaves of the partially constructed tree. Depending on the minimization objective, the cost of a (yet to be processed) leaf node r is computed in different ways, by using heuristics to estimate the cost of the subtree that would appear under r (if we were to process r and construct a subtree under r). We use two different heuristic functions, HU and HLY, for minimizing the height. Similarly, for minimizing the external path length, we use the heuristic functions LU and LLY. We now give these heuristic functions that we use to assign a cost to a leaf node r with a block bl(r).
Let η = |bl(r)| and please recall that we use d(r) to denote the depth of the node r. The score of a leaf node r with respect to the heuristic function HU is calculated as follows:
HU(r) = d(r) + η(η − 1)/2    (4)
The term η(η − 1)/2 is just an estimate of the upper bound on the height of the ADS for bl(r). As d(r) is the depth of r, HU(r) is an estimate of the maximum depth of a node that would appear under r. Rather than estimating the depth of an ADS for bl(r), one can simply use the LY algorithm to construct an ADS T for bl(r). This is the approach taken by the heuristic function HLY, which is given below. Please recall that we use hT to denote the height of the tree T.
HLY(r) = d(r) + hT    (5)
For minimizing the external path length, the lookahead algorithm uses the LU and LLY functions to assign a cost to a leaf node r. The heuristic function LU is defined as follows:
LU(r) = (d(r) + η(η − 1)/2)η    (6)
As in the case of minimizing the height of the ADS, the heuristic function LLY again uses the LY algorithm to obtain an ADS T for the block bl(r). Please recall again that we use eT to denote the external path length of T.
LLY(r) = d(r)η + eT    (7)
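The four estimates in Eqs. (4)–(7) translate directly into code. In the sketch below the LY-derived quantities hT and eT are taken as given rather than computed, and all names are ours, not the paper's:

```python
# Sketch of the four leaf-cost heuristics (Eqs. 4-7); names are ours.
# d_r: depth of leaf node r; eta: |bl(r)|; h_T / e_T: height and external
# path length of an ADS built for bl(r) by the LY algorithm (assumed given).

def H_U(d_r, eta):
    # Eq. (4): depth of r plus the upper bound eta*(eta-1)/2 on the ADS height
    return d_r + eta * (eta - 1) // 2

def H_LY(d_r, h_T):
    # Eq. (5): depth of r plus the height of the LY-constructed ADS for bl(r)
    return d_r + h_T

def L_U(d_r, eta):
    # Eq. (6): estimated external path length of the subtree under r
    return (d_r + eta * (eta - 1) // 2) * eta

def L_LY(d_r, eta, e_T):
    # Eq. (7): each of the eta leaves contributes depth d_r, plus e_T
    return d_r * eta + e_T
```

Integer division is exact here, since η(η − 1) is always even.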
After assigning the costs of the leaf nodes using one of these heuristic functions, the costs of internal nodes are calculated. Let r be an internal node with block bl(r), and x be a valid input for bl(r). While minimizing for height, cost(r, x) is the maximum of the costs of the children r′ of r such that the edge from r to r′ is labeled by x/y for some y ∈ λ(bl(r), x). While minimizing for external path length, on the other hand, cost(r, x) is the sum of the costs of the children r′ of r such that the edge from r to r′ is labeled by x/y for some y ∈ λ(bl(r), x). In both cases (for height and external path length minimization), if x is a valid input for bl(r) such that cost(r, x) is the minimum among all valid inputs for bl(r), then cost(r) is set to cost(r, x) and x is said to be the selected input for r. The root of the EST constructed so far is said to be live. If a node r is live, x is the selected input for r, and r′ is a child of r where the edge from r to r′ has a label x/y for some y, then r′ is also live. The EST is pruned at the nodes that are not live. In each iteration of the lookahead algorithm, the subtrees rooted at live nodes are constructed exhaustively up to depth k, and the process is repeated until the EST can be used to derive an ADS.

4. New approaches to construct ADSs

In this section, we introduce three different enhancements on the LY algorithm. The first one is called the Low-cost ST algorithm, the second one is called the Splitting Forest Algorithm (SFA), and the third one is called the Splitting Graph Algorithm (SGA).

4.1. The low-cost ST algorithm

The low-cost ST (LCST) approach can be regarded as a generalization of GLY1 and GLY2. Recall that in GLY1 and GLY2, while constructing an ST, for a given block B all Type 1 inputs are considered, and the splitting symbol that seems best is selected.
In LCST, while constructing an ST, all types of splitting inputs/sequences (Type 1, Type 2 and Type 3) are considered, and according to the heuristic used, the one that provides a better splitting is selected. In order to construct an ST by the LCST approach, the only modification needed on Algorithm 1 is to replace lines 7 through 12 by the following lines.
. . .
W ← all Type 1, Type 2 and Type 3 inputs retrieved from the ST
α ← func(W)
. . .
Here, func(W) denotes that one needs to use a heuristic to select an appropriate splitting sequence among a set of splitting sequences (such as GLY1 and GLY2). In Section 5.1 we will present a set of heuristics that can be used to select a splitting sequence among a set of splitting sequences.

4.2. The splitting forest algorithm

The LY algorithm and its variations, such as GLY1, GLY2 and LCST, construct and use a single splitting tree. Therefore for a block B(p),
retrieved from the ADS under construction, there exists only one input sequence in the splitting tree to split B(p). However, if multiple splitting sequences are given to split the block B(p), one can choose the splitting sequence that works best for B(p). The intuition behind the splitting forest algorithm (SFA) is to provide such a set of potential splitting sequences. SFA works with a set of splitting trees called a splitting forest (SF). While constructing the ADS, SFA uses all of these trees and picks the input sequence that fits best for the underlying objective function. SFA itself has no particular restrictions on the underlying ST construction method, and can work with any given set of STs. It is possible to construct a splitting forest by generating splitting trees using different heuristics. For example, one can construct a splitting tree by using GLY1, another one by using GLY2, and yet another one by using LCST. The only modification needed in the LY-ADS algorithm (Algorithm 2) is at line 6, which should be changed as
α ← SF(B(p))

Here, we assume that the function SF first forms a set of splitting sequences W for B(p) by considering all STs in the SF. Then by using a heuristic, it returns the splitting sequence that gives the best splitting with respect to the objective used. In Section 5.1, we will present a set of heuristics that can be used at this step of the algorithm. After an SF is constructed, the time complexity of the ADS construction step of the LY-ADS algorithm becomes O(ℓn²), where ℓ is the number of STs in the forest. Note that the LY-ADS algorithm still iterates at most n − 1 times, as in the case of the original LY algorithm. However, the cost of an iteration is now O(ℓn) (instead of O(n) for the original LY-ADS algorithm), since splitting the block B(p) is performed ℓ times, once for each splitting tree.

4.3. The splitting graph algorithm

The splitting graph algorithm (SGA) is an enhancement on the splitting forest algorithm. Recall that in SFA we use a set of STs which are constructed by different heuristics. In SGA, we again use a set of heuristics. However, instead of constructing STs separately using different heuristics, we construct a single splitting structure where each block is split by using splitting sequences suggested by all the heuristics. The blocks generated by different splitting sequences may turn out to be the same. These identical blocks are represented by the same node in the structure generated. Therefore, what we obtain is not a tree anymore, but a directed acyclic graph called a splitting graph (SG). The main difference between an SF and an SG is that, while the nodes in an ST of an SF constructed by a heuristic have only one splitting sequence suggested by that heuristic, the nodes in the SG have splitting sequences suggested by all the heuristics.
Each node v in an SG G still corresponds to a block B, but now it is associated with a set of splitting sequences W (as opposed to a single splitting sequence in an ST). The pseudocode for constructing an SG G is given in Algorithm 3. The construction of the SG starts by initiating the graph with a root node v0 associated with the block B = S. Afterwards, the algorithm iteratively processes the nodes of the graph (a node is processed only when it is associated with a non-singleton block). For a given node v and its block B, the algorithm uses different heuristics to produce a set W of splitting sequences for B (line 7). Then for each splitting sequence α ∈ W, the algorithm considers the blocks in the partitioning Bα of B. For each block B′ ∈ Bα, the algorithm searches for a node v′ in G such that v′ is associated with B′ (line 11). If it cannot find such a node, then the algorithm creates a node v′ such that v′ is associated with B′ (line 12). If |B′| > 1, then
Algorithm 3: Constructing a splitting graph for M.

Input: An FSM M
Output: A splitting graph G
begin
1   Construct the root node v0 associated with the block B = S
2   Initialize G to be a graph consisting of the node v0 only
3   Q ← {v0}  // Q is the set of nodes yet to be processed
4   while Q ≠ ∅ do
5     Pick a node v ∈ Q such that the block B of v has the maximum cardinality among the nodes in Q
6     Remove v from Q
7     Using different heuristics, construct a set of splitting sequences W ← {α1, α2, . . . , αℓ} for B
8     Associate W with v as the set of splitting sequences for B
9     foreach α ∈ W do
10      foreach B′ ∈ Bα do
11        if G has no node associated with block B′ then
12          Introduce a node v′ and associate v′ with B′
13          if |B′| > 1 then
14            Q ← Q ∪ {v′}
15        Introduce an edge from v to v′ labeled by α
the algorithm adds the generated node v′ to Q (lines 13 and 14). Finally, it adds an edge from v to v′ labeled by α (line 15). Note that the edge from v to v′ is added even if there already exists a node v′ in the SG. After the SG is formed, an ADS can be constructed using the LY-ADS algorithm (Algorithm 2), after a slight modification. The algorithm now works with an SG (instead of an ST) and the only change needed is at line 6 of Algorithm 2, where the ST is consulted. Instead, this line is modified to
α ← SG(B(p))

Clearly, when one searches for a splitting sequence for the block B(p) using an SG G, one may find a set W of splitting sequences. As in the case of using an SF, one needs to apply a heuristic function to select the most appealing splitting sequence from W. In the next section, we introduce a set of heuristics that can be used to select a splitting sequence among a set of splitting sequences. We now comment on some properties of splitting graphs.
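Algorithm 3 can be sketched as follows, representing each node by its block so that identical blocks merge automatically. Here `heuristics` and `split_block` are hypothetical stand-ins for the paper's heuristic functions and for the partitioning Bα induced by a splitting sequence:

```python
# Sketch of Algorithm 3 (splitting-graph construction); helper names are
# assumptions, not the paper's implementation.
from collections import defaultdict

def build_splitting_graph(states, heuristics, split_block):
    """states: frozenset S of all states.
    heuristics: functions mapping a block to a splitting sequence.
    split_block(block, alpha): the partition of `block` induced by alpha.
    Returns (splitting sequences per block, edges labeled by alpha)."""
    sequences = {}                    # processed block -> set W of sequences
    edges = defaultdict(list)         # block -> [(alpha, child block)]
    queue = {states}
    while queue:
        # pick the node whose block has maximum cardinality (line 5)
        block = max(queue, key=len)
        queue.remove(block)
        W = {h(block) for h in heuristics}    # line 7: one sequence per heuristic
        sequences[block] = W
        for alpha in W:
            for sub in split_block(block, alpha):
                # lines 11-14: queue each newly seen non-singleton block
                if sub not in sequences and sub not in queue and len(sub) > 1:
                    queue.add(sub)
                # line 15: the edge is added even if the node already existed
                edges[block].append((alpha, sub))
    return sequences, edges
```

Because blocks themselves serve as node identities, two heuristics producing the same sub-block automatically share one node, which is exactly what makes the structure a DAG rather than a tree.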
Lemma 4. Let (v, v′) be an edge in an SG G, and also let B and B′ be the blocks associated with nodes v and v′, respectively. Then |B| > |B′|.

Proof. Consider the input sequence α that labels the edge (v, v′). From the construction of the splitting graph G, we know that α is a splitting sequence for B and hence |λ(B, α)| > 1. In other words, the block B must have been partitioned into smaller blocks by the application of α. Therefore we must have |B| > |B′|.

Lemma 5. The length of a path from the root node v0 to a leaf node in G is at most n − 1.

Proof. Let v be a node, v′ be a child of the node v, and also let B and B′ be the blocks associated with the nodes v and v′, respectively. Then by Lemma 4 we know that B′ has at least one less element than B. Since the block of the root has n states and the block of a leaf node is a singleton, the result follows.

Note that for a given block B of node v, the algorithm may apply ℓ splitting sequences to B, which implies that for a given node, the branching factor is ℓ. Therefore an SG may contain 2^n − 1 nodes (one per nonempty subset of S) in the worst case. Formally,

Lemma 6. For a given FSM M with n states, if Algorithm 3 returns a graph G then there are at most 2^n − 1 nodes in G.
5. New heuristics for constructing compact ADSs
In this section, we introduce a set of heuristic functions LCH, LCL, F2, WLU and WLLY that can be used to construct ADSs with respect to different objective functions. In the previous section, we introduced the LCST, SFA and SGA methods. While constructing an ST, an SG or an ADS, these methods select a single splitting sequence among a set of splitting sequences by using heuristics. The LCH, LCL and F2 heuristics receive a set of splitting sequences W and return a single splitting sequence according to the underlying objective function. Let W be a set of splitting sequences for a block B. For each splitting sequence α ∈ W, LCH, LCL and F2 consider the partitioning Bα of B. Then they evaluate the partitioning and find the cost of applying the splitting sequence α. On the other hand, recall that while constructing an ADS by using a successor tree (or EST) we need to select a node that minimizes the cost with respect to a given objective function. The heuristic WLU receives a node r that is associated with a block B and then estimates the cost of the ADS TB for B. The heuristic WLLY receives a node r that is associated with a block B, constructs an ADS TB and computes the weighted external path length of the ADS TB. By using these heuristics we are able to calculate the costs of the nodes and select the one that seems the most appealing. The heuristics LCH and LCL aim to construct ADSs with minimum height and minimum external path length, respectively. The heuristics WLU, WLLY, and F2 aim to construct ADSs with minimum weighted external path length, as we define in the next section.

5.1. Heuristics for constructing ADSs with minimum height / minimum external path length

Before introducing the heuristics, we provide an upper bound on the height of the ADS for a block B. Recall that the LY algorithm is used to construct an ADS for the entire set of states S, when it starts with the root node which is associated with the block S.
However, it is also possible to use the LY algorithm to construct an ADS TB for a block B, by simply starting the algorithm with a root node associated with B. Using this observation and Lemma 1, we can state an upper bound on the height of TB as follows.

Lemma 7. Let B be a block of η states. The height of the ADS TB is at most (2n − η)(η − 1)/2.

Proof. If one uses the LY algorithm to construct an ADS for B, the algorithm will retrieve splitting sequences from the ST at most η − 1 times. Each time it retrieves a splitting sequence α from the ST for a block of size i, where 2 ≤ i ≤ η, the length of α will be at most n + 1 − i by Lemma 1. In the worst case, the total length of the splitting sequences, and hence the height of the ADS constructed for B, will be ∑_{i=2}^{η} (n + 1 − i) = (2n − η)(η − 1)/2, as suggested.

The function LCH aims to construct an ADS with minimum height and is defined as follows:
LCH(W) = argmin_{α ∈ W} [ |α| + max_{B′ ∈ Bα} (2n − |B′|)(|B′| − 1)/2 ]    (8)
For each input sequence α ∈ W, the LCH function first estimates the height of the ADS that would be generated after α is applied to block B. Then, the LCH function selects the input sequence α that leads to the shallowest estimated subtree. For minimizing the external path length, LCL is used and it is defined as follows:
LCL(W) = argmin_{α ∈ W} ∑_{B′ ∈ Bα} (|α| + (2n − |B′|)(|B′| − 1)/2) |B′|    (9)
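Using the bound from Lemma 7, the two selection rules in Eqs. (8) and (9) can be sketched as follows. The `partitions` map (candidate sequence → blocks of Bα) is a hypothetical representation, not the paper's data structure:

```python
# Sketch of the LC_H / LC_L selection rules (Eqs. 8 and 9); names are ours.

def bound(n, size):
    # Lemma 7: upper bound (2n - |B'|)(|B'| - 1)/2 on the ADS height
    # for a block with `size` states
    return (2 * n - size) * (size - 1) // 2

def LC_H(partitions, n):
    # Eq. (8): minimize |alpha| plus the worst-case height over the sub-blocks
    return min(partitions,
               key=lambda a: len(a) + max(bound(n, len(b)) for b in partitions[a]))

def LC_L(partitions, n):
    # Eq. (9): minimize the summed external-path-length estimate over sub-blocks
    return min(partitions,
               key=lambda a: sum((len(a) + bound(n, len(b))) * len(b)
                                 for b in partitions[a]))
```

For example, with n = 5, a one-symbol sequence that isolates two singletons scores better under both rules than a two-symbol sequence that merely halves the block.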
Similar to the LCH function, for each input sequence, the LCL function computes the upper bound on the external path length of the ADS that would be constructed after the splitting sequence is applied. Then the LCL function selects the splitting sequence which leads to a subtree with the minimum estimated external path length.

5.2. Heuristics for constructing minimum weighted ADSs

In [34], the authors discuss that constructing compact ADSs in terms of height and external path length may not always reduce the length of a checking sequence. Given an ADS T, the state distinguishing sequence (SDS) of a state s in T is the input sequence formed by concatenating the input symbols from the root node of T to the leaf node that is labeled by the state s. We use d(s) below to denote the length of the SDS for s in T. Note that d(s) is simply the depth of the leaf node labeled by s in T. The reason why height/external path length minimized ADSs may not reduce the length of a checking sequence is that SDSs of different states may be used different numbers of times in a checking sequence. In a checking sequence, in order to verify that each transition (s, s′, x/y) is correctly implemented, the implementation N is brought to the state that corresponds to s and the input symbol x is applied (with the hope that y will be observed). In order to see that N made a transition into the state that corresponds to s′, the application of the SDS for s′ takes place. Therefore, for a completely specified FSM M with n states and m input symbols, there will be nm transition verification sequences, each one having an application of an SDS. For that reason, the authors of [34] suggest using the number of incoming transitions of a state s as the weight φ(s) of that state. We will be referring to the number of incoming transitions of a state when we refer to the weight of that state from now on. In order to address this issue, the MinWeightADS problem is formulated in [34].
In the MinWeightADS problem, each state s of M has a weight φ(s), and the aim of the problem is to construct an ADS such that the total cost Φ(S) of the ADS is minimal, where Φ(S) is given as follows:

Φ(S) = ∑_{s ∈ S} φ(s) d(s)    (10)
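As a small illustration of Eq. (10), with hypothetical dict-based inputs:

```python
# Sketch of the MinWeightADS cost (Eq. 10): each state's SDS length d(s),
# weighted by its in-degree phi(s). Plain dicts stand in for the ADS here.

def weighted_cost(depth, phi):
    """depth[s] = depth of the leaf labeled s in the ADS; phi[s] = weight."""
    return sum(phi[s] * depth[s] for s in depth)

# e.g. a 3-state ADS with depths 1, 2, 2 and in-degrees 4, 1, 1:
# the heavily entered state s1 is best placed at the shallow leaf
assert weighted_cost({"s1": 1, "s2": 2, "s3": 2},
                     {"s1": 4, "s2": 1, "s3": 1}) == 8
```

Swapping s1 with s2 in the same tree shape would raise the cost from 8 to 11, which is why the weighted objective can prefer a different ADS than the plain external path length.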
In the rest of this section we provide three heuristic functions to construct ADSs with minimum weighted external path length. The function F2 is the first heuristic function that aims to construct an ADS with minimum weighted external path length.
F2(W) = argmin_{α ∈ W} ( ∑_{B′ ∈ Bα} |B′| |α| ∏_{s ∈ B′} φ(s) ) / |Bα|    (11)
The strategy employed by the function F2 is to select the splitting sequence for B which distributes the weighted states to different, small blocks by using the shortest input sequence. In order to achieve this, the function F2 considers three measures: First, it computes the weight of a block by multiplying the weights of the states in that block. Second, it takes the cardinality of the block B′ and the length of the splitting input α into consideration by multiplying them with the weight of the block. Finally, it considers the quality of the partitioning caused by the splitting sequence α by dividing the sum of these values by the number of blocks in Bα. Another heuristic function we propose is WLU, which is a combination of a modification of the heuristic function LU and the heuristic function LCL. For a given node r associated with a block B, WLU tries to estimate the weighted external path length cost of
the ADS that would appear under r. It is formally defined as follows:
WLU(r) = (d(r) + (2n − |B|)(|B| − 1)/2) |B| max_{s ∈ B} φ(s)    (12)
Finally, the heuristic function WLLY actually constructs an ADS TB for the block B by using the LY algorithm, and computes the weighted external path length. WLLY is thus a modified version of the heuristic function LLY given in Eq. (7). Let r be a node in the successor tree (or EST) and B be the block associated with r; the modified heuristic function constructs the ADS TB and selects the input that leads to an ADS whose weight is minimum. It is formally defined as follows:
WLLY(r) = ∑_{s ∈ B} (d(r) + d(s)) φ(s)    (13)
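A sketch of Eqs. (12) and (13) with dict-based inputs (names ours); for WLLY, the depths d(s) in the LY-built ADS TB are assumed to be available rather than computed:

```python
# Sketch of the weighted heuristics WL_U (Eq. 12) and WL_LY (Eq. 13).
# phi maps states to weights; depth_in_TB gives each state's depth in an
# ADS T_B built by the LY algorithm (assumed given in this sketch).

def WL_U(d_r, block, phi, n):
    # Eq. (12): external-path-length bound scaled by the largest weight in B
    size = len(block)
    return ((d_r + (2 * n - size) * (size - 1) // 2)
            * size * max(phi[s] for s in block))

def WL_LY(d_r, block, phi, depth_in_TB):
    # Eq. (13): exact weighted external path length of the ADS T_B under r
    return sum((d_r + depth_in_TB[s]) * phi[s] for s in block)
```

WL_U is cheap but pessimistic (it uses the Lemma 7 bound and the maximum weight), whereas WL_LY pays the cost of actually running the LY algorithm in exchange for an exact value.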
6. Empirical study

In this section we present the results of our experiments. We used an Intel Quad-Core CPU with 4GB RAM to carry out these experiments.

6.1. FSMs used in the experiments, experiment settings and evaluation

In order to investigate the ADS construction methods and the proposed heuristics, we generated different classes of FSMs, and we also considered a class of FSMs that are available as benchmark sets. In this section we give brief information about the FSMs we used throughout the experiments and how we evaluate the results of the experiments.

6.1.1. FSMs in Class I

The first class (Class I) is designed to investigate the performance of the methods with respect to the objective functions minimizing height (Section 6.2) and minimizing external path length (Section 6.3). The FSMs in this class were generated as follows. First, for each input x and state s we randomly assigned the values of δ(s, x) and λ(s, x). After an FSM M was generated, we checked its suitability as follows. We checked whether M was strongly connected⁵, minimal, and had an ADS. If the FSM passed all these tests, we included it in Class I; otherwise we omitted this FSM and produced another one. Consequently, all generated FSMs were strongly connected, minimal, and had ADSs. By following this procedure we constructed 1000 FSMs with n states for each n ∈ {50, 60, . . . , 100}. The numbers of input and output symbols were both 3. In total we constructed 6000 FSMs for the first class of FSMs.

6.1.2. FSMs in Class II

The second class of FSMs (Class II) was generated to study the performance of the methods that aim to construct ADSs with minimum weighted external path lengths (Section 6.4). Note that for FSMs in Class I, since the next states of transitions are randomly selected, the in-degrees of the states are close to each other. However, in Class II, we need a nonuniform in-degree distribution.
To create such a distribution, we first randomly select a subset S̄ of states which will have higher in-degree values than the states in S \ S̄. To create higher in-degree values for the states in S̄, we randomly select a subset of the transitions (each selected element is a pair (s, x) denoting the transition of state s for input symbol x). We then force the selected transitions to end in states in S̄.
⁵ M is strongly connected if for any pair (s, s′) of states of M there is some input sequence that takes M from s to s′.
Table 1
Benchmark FSMs and their sizes.

Name            No. of states   No. of transitions
Shift Register  8               16
Ex4             14              896
Log             17              8704
DVRAM           35              8960
Rie             29              14,848
The key point in constructing the weighted FSMs was choosing the cardinality of S̄ and the cardinality of the selected transition set. If the transition set was too large and |S̄| was too small, then one might not be able to construct a connected FSM, or might not be able to construct an FSM with an ADS. On the other hand, if the transition set was too small and |S̄| was too large, then the in-degrees of the states became similar. In these experiments we chose |S̄| to be 10% of the states and we set the transition set to be 30% of the transitions. We observed that if the percentage of selected transitions is increased further, it takes too much time to construct an FSM with an ADS. As in the case of the generation of Class I FSMs, after an FSM M was generated we checked its suitability. We constructed 4000 weighted FSMs, with the number of states n ∈ {50, 60, 70, 80}, where for each n there were 1000 FSMs. The cardinalities of the input and the output alphabets were 3. We could not construct FSMs with a higher number of states in a reasonable amount of time.

6.1.3. FSMs in Class III

Note that the number of input and output symbols for the FSMs in Class I and Class II is always 3. In order to investigate the effect of having different numbers of inputs and outputs on the performance of our approaches, we constructed yet another set of FSMs randomly. We call this set of FSMs Class III. In this class, we fixed the number of states to 50 and the number of inputs to 3, but we considered the number of outputs to be 2, 3, and 4. There are 1000 FSMs for each one of these output alphabet cardinalities, hence there are a total of 3000 FSMs in Class III.

6.1.4. Benchmark FSMs

In addition to the randomly generated FSMs, we also used FSM specifications retrieved from the ACM/SIGDA benchmarks, a set of test suites (FSMs) used in workshops between 1989 and 1993 [46]. The benchmark suite has 59 FSM specifications ranging from simple circuits to advanced circuits obtained from industry. The FSM specifications were available in the kiss2 format.
In order to process the FSMs, we converted the kiss2 file format to our FSM specification format. We only used FSMs from the benchmark that were minimal, deterministic, had an ADS, and had fewer than 10 input bits⁶. 19% of the FSMs had more than 10 input bits, 38% were not minimal, 15% of the FSMs had no ADS, and 48% of the FSMs were nondeterministic. Consequently, 8.5% of the FSM specifications passed all of the tests; these are DVRAM, Ex4⁷, Log, Rie, and Shift Register. In Table 1, we present the number of states and the number of transitions of these FSMs.

⁶ Since the circuits receive inputs in bits, and since b bits correspond to 2^b inputs, we do not consider FSMs with b ≥ 10 bits.
⁷ The FSM specification Ex4 is partially specified. We complete the missing transitions by adding self-looping transitions with a special output symbol, and do not use these inputs for ADS construction.

6.1.5. FSMs with quadratic ADSs

We have mentioned before that the upper bound on the height of the ADS is n(n − 1)/2. However, the randomly generated and the
Table 2
The settings used for the construction of ADSs.

Setting  ADS construction method                Heuristics used to construct underlying splitting structure
GLY1     ADS is constructed by LY.              ST is constructed by GLY1.
GLY2     ADS is constructed by LY.              ST is constructed by GLY2.
LY       ADS is constructed by LY.              ST is constructed by lexicographic ordering.
SG(H)    ADS is constructed by SGA using LCH.   SG is constructed by using LCH, GLY1, and GLY2 at all nodes.
SG(L)    ADS is constructed by SGA using LCL.   SG is constructed by using LCL, GLY1, and GLY2 at all nodes.
F2       ADS is constructed by SGA using LCF2.  SG is constructed by using LCF2, GLY1, and GLY2 at all nodes.
SF(H)    ADS is constructed by SFA using LCH.   τ1 constructed by LCH, τ2 constructed by GLY1 and τ3 constructed by GLY2.
SF(L)    ADS is constructed by SFA using LCL.   τ1 constructed by LCL, τ2 constructed by GLY1 and τ3 constructed by GLY2.
LA(H)    ADS is constructed by LA using HLY.    Not applicable
LA(L)    ADS is constructed by LA using LLY.    Not applicable
WLU      ADS is constructed by LA using WLU.    Not applicable
WLLY     ADS is constructed by LA using WLLY.   Not applicable
BF       Successor tree                         Not applicable
benchmark FSMs have ADSs that are much shallower than this theoretical upper bound. In order to test the performance of the methods suggested in this paper on FSMs whose minimum ADS heights are close to the upper bound, we also consider the class of FSMs suggested by Sokolovskii [31]. Sokolovskii introduced a special class of FSMs, which we call s-FSMs here. The minimum height of an ADS of an s-FSM is bounded below by (n/2)² − 1, where n is the number of states. We also used a set of s-FSMs in our experiments and report on the performance of our ADS construction methods. The transition and the output functions of an s-FSM are defined below. Let n′ = n/2 and n′ > 2;
δ(si, xj) =
  si+1,    if j = 1 ∧ i ≠ n′ ∧ i ≠ n
  s1,      if j = 1 ∧ i = n′
  sn′+1,   if j = 1 ∧ i = n
  si,      if j = 0 ∧ 1 ≤ i ≤ n′ − 1
  sn′+1,   if j = 0 ∧ i = n′
  si−n′,   if j = 0 ∧ n′ + 1 ≤ i ≤ n
(14)

λ(si, xj) =
  y0,   if j = 0 ∧ i = n
  y1,   otherwise
(15)
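Assuming n is even and reading the case conditions of Eqs. (14) and (15) above literally (in particular, that y0 is output exactly when x0 is applied at state sn), the two tables can be generated as follows; the function name and the dict representation are ours:

```python
# Sketch constructing the s-FSM transition/output tables of Eqs. (14)-(15),
# with states s_1..s_n, inputs x_0/x_1, and n' = n/2 (n even, n' > 2 assumed).

def make_s_fsm(n):
    np = n // 2                    # n' in the paper's notation
    delta, lam = {}, {}            # keys are (state index i, input index j)
    for i in range(1, n + 1):
        # input x1: cycle the first half and the second half separately
        if i == np:
            delta[(i, 1)] = 1
        elif i == n:
            delta[(i, 1)] = np + 1
        else:
            delta[(i, 1)] = i + 1
        # input x0: identity on 1..n'-1, n' -> n'+1, second half shifts down
        if 1 <= i <= np - 1:
            delta[(i, 0)] = i
        elif i == np:
            delta[(i, 0)] = np + 1
        else:                      # n' + 1 <= i <= n
            delta[(i, 0)] = i - np
        lam[(i, 0)] = "y0" if i == n else "y1"
        lam[(i, 1)] = "y1"
    return delta, lam
```

Because almost all transitions produce y1, an ADS must steer the machine so that the single distinguishing observation (y0) is reached, which is what forces the quadratic height.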
We generated an s-FSM with n states for each n ∈ {4, 10, 20, . . . , 70}.

6.1.6. Experiment settings and evaluation

In order to carry out these experiments, for each FSM we computed the ADS using the brute force (BF) algorithm (implemented as given in [30]), the LA, SGA, SFA and LY algorithms, and different versions of the LY algorithm such as GLY1 and GLY2. We used different heuristics with different ADS construction methods. We refer to each combination of an ADS construction method and a heuristic as a setting. Table 2 provides the settings that we used during the experiments. The first column provides the abbreviation of the setting. The second column refers to the underlying ADS construction method and the heuristic used to select a splitting sequence among the splitting sequences suggested by the underlying splitting structure. Finally, the third column refers to the heuristic function used during the construction of the underlying splitting structure. The interpretation of the settings can be explained as follows: consider the setting SG(H) in Table 2. In setting SG(H), the ADS was constructed by using the splitting graph algorithm, and while constructing the splitting graph, the heuristic functions GLY1, GLY2 and LCH were used to form a set of splitting sequences for a node. In addition, for setting SG(H), the heuristic function LCH (LCL for SG(L)) was used while the ADS was being constructed. That is, after the function SG(B(p)) retrieves all possible splitting sequences suggested by the
splitting graph for B(p), the heuristic function LCH (LCL for SG(L)) was used to pick one splitting sequence. For the settings that used the SFA algorithm, the splitting trees in the forest were constructed by using the low-cost ST approach given in Section 4.1, where each tree was constructed by a separate heuristic: LCH (or LCL), GLY1, and GLY2. In other words, the settings that were associated with the splitting forest algorithm had three STs, and each ST was constructed by using the LCST approach. Throughout the experiments, for the settings that used the lookahead ADS construction algorithm (LA, WLLY and WLU), the lookahead parameter was set to k = 2. For a given setting, we constructed an ADS for each FSM in our pool. The average of the size (external path length or height) of all the ADSs of the FSMs in the pool is considered as the performance of that setting. We will present the individual and the relative performance of the settings. We present the performance of the settings using boxplot diagrams generated by the ggplot2 library of the tool R [47,48]. For each box, the first quartile corresponds to the lowest 25% of the data, the second quartile gives the median, and the third quartile corresponds to the highest 25%. For each boxplot we added the smoothing line computed with the LOESS [49] method, and the semi-transparent ribbon surrounding the solid line is the 95% confidence interval. In order to represent the relative performances of the settings, we used the B function given below. Let A refer to the setting used and size(A(M)) refer to the height, the external path length, or the weighted external path length of an ADS constructed by the method A for FSM M. The function B(A, M) gives the percentage decrease in the size of the ADS when the BF algorithm is used instead of A. Therefore, lower B values indicate a better performance of the setting A, approaching the optimal values computed by the BF algorithm.
B(A, M) = (size(A(M)) − size(BF(M))) / size(A(M)) × 100    (16)
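Eq. (16) amounts to the following one-liner (the function name is ours):

```python
# Sketch of the relative-performance measure of Eq. (16); `size` values are
# ADS sizes (height, external path length, or weighted external path length).

def delta_B(size_A, size_BF):
    """Percentage by which the optimal (BF) ADS is smaller than the one
    built by setting A; lower means A is closer to optimal."""
    return (size_A - size_BF) / size_A * 100
```

For instance, if a setting yields an ADS of size 200 while BF finds one of size 150, the measure is 25.0.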
Note that B(A, M) indicates how much the size of an ADS could be improved if one had used the BF algorithm instead of A. Therefore, the lower the B(A, M) value, the better the performance of A.

6.2. Comparison of heights

In Fig. 5, we present the averages of the heights of the ADSs of the FSMs in Class I. The results suggest that in terms of ADS heights, LA(H) and SG(H) produce shallower ADSs. The results also suggest that the ADSs constructed by SF(H) are slightly taller than the ADSs constructed by LA(H) and SG(H), but shallower than the ADSs constructed by WLU, WLLY, GLY1, GLY2 and LY. We also observed that the average height of the ADSs increases with the number of states, as expected.
Fig. 5. Comparison of heights for the FSMs in Class I. Each boxplot summarizes the distributions of 10 0 0 FSMs.
Fig. 6. Comparison of external path lengths for the FSMs in Class I. Each boxplot summarizes the distributions of 10 0 0 FSMs.
Table 3
Comparison of heights of ADSs constructed for FSMs in Class I with respect to function B.

States  LA(H)  SG(H)  SF(H)  F2     WLU    WLLY   GLY2   GLY1   LY
50      5.02   7.50   24.99  25.34  26.23  26.49  37.76  40.31  43.75
60      4.22   6.36   24.85  26.17  26.01  26.72  38.08  39.90  43.83
70      3.55   5.38   24.64  27.30  25.54  26.49  38.32  41.36  43.79
80      3.09   4.68   23.52  26.58  26.35  25.90  37.68  40.15  43.25
90      5.23   6.32   20.97  22.20  22.64  22.34  32.28  35.28  38.00
100     4.87   5.81   20.70  22.60  22.51  22.31  32.43  34.76  38.12

Table 5
Comparison of external path lengths of ADSs constructed for FSMs in Class I with respect to function B.

States  LA(L)  SG(L)  SF(L)  F2     WLU    WLLY   GLY2   GLY1   LY
50      9.69   9.21   24.30  38.66  39.19  38.42  43.58  47.00  50.03
60      9.20   9.15   24.44  38.67  38.59  38.02  43.46  46.84  49.87
70      9.66   8.86   24.38  38.47  38.88  38.00  43.73  46.81  50.15
80      8.54   8.95   24.50  38.53  39.22  38.06  43.61  46.66  49.78
90      7.90   7.89   21.35  33.83  33.53  33.61  38.81  41.68  44.93
100     7.28   7.75   21.04  34.49  34.32  33.86  38.79  41.86  45.00
Table 4
ADS height comparison for benchmark FSMs.

Name        LY  GLY1  GLY2  SG(H)  F2  LA(H)  WLLY  WLU  SF(H)
Log         2   2     2     2      2   2      3     3    2
DVRAM       6   6     4     4      4   4      5     5    4
Ex4         4   4     3     3      4   3      4     4    3
Rie         3   3     3     3      4   3      4     4    3
Shift Reg.  3   3     3     3      4   3      4     4    3

Table 6
External path length comparison for benchmark FSMs.

FSM         LY   GLY1  GLY2  SG(L)  F2   LA(L)  WLLY  WLU  SF(L)
Log         21   21    17    17     21   17     21    21   17
DVRAM       104  104   78    77     108  77     108   108  78
Ex4         29   29    22    22     32   22     32    32   22
Rie         46   46    46    46     50   46     50    50   46
Shift Reg.  24   24    24    24     28   24     28    28   24
Table 3 presents the performance of the settings relative to the BF approach. We observe that, in general, the B values decrease as the number of states increases; that is, the performance of the methods improves with the number of states. Moreover, we observe that the ADSs constructed by SG and LA have heights comparable to those constructed by BF. Furthermore, setting aside the settings that use exponential time ADS construction methods (LA and SG), the heights of the ADSs constructed by SF are slightly better than those constructed by the other settings. Recall that the setting SF uses SFA as the ADS construction method, and that it relies on the LCH, GLY1, and GLY2 heuristics. Since the ADSs constructed by the settings GLY1 and GLY2 are of much lower quality than those constructed by SF, we can deduce that the performance of SF may be due to the heuristic function LCH. In order to verify this observation, throughout the experiments we checked how often the heuristic function LCH (or LCL) is used by SF during the construction of ADSs. The results are just as expected: most of the time (more than 96%), SF applied the input sequences suggested by the LCH (or LCL) function during the construction of an ADS. The results for the benchmark FSMs are given in Table 4. We observed that, in terms of heights, the settings LA, GLY2, SG and SF produced better results.
6.3. Comparison of external path lengths

The results of the experiments are given in Fig. 6. As expected, the external path lengths increase with the number of states. We observe that, in terms of the average external path length, the settings LA(L) and SG(L) produced the best results. The performance of SF(L) is less promising than that of LA(L) and SG(L), but much better than that of WLLY, WLU, F2, GLY1, GLY2 and LY. In Table 5 we present the B results of the experiments. The results suggest that the external path lengths of the ADSs computed by SG and LA are closest to the best possible. We also note that, as the number of states increases, the B values decrease. The results for the benchmark FSMs are given in Table 6. As in the case of heights, we observed that, in terms of external path lengths, the settings LA(L), GLY2, SG(L) and SF(L) produced better results.

6.4. Comparison of weighted external path lengths

The comparison of weighted external path lengths is performed using the FSMs in Class II. The weighted external path length of an ADS for an FSM M with n states is obtained using Formula 10: for each state s of M, we multiply its depth d(s) by its weight φ(s); we then sum these values and divide by the number of states. The results are given in Fig. 7.
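As an illustration of the three cost measures compared in Sections 6.2 to 6.4, the following Python sketch computes the height, the external path length, and the weighted external path length (Formula 10) of an ADS summarized by its leaf depths. The representation (the depth and weight maps) is illustrative only, not taken from the paper's implementation:

```python
# Sketch of the three ADS cost measures, assuming an ADS is summarized
# by the depth d(s) of the leaf that identifies each state s, plus an
# optional state weight phi(s). Names (depth, weight) are illustrative.

def height(depth):
    """Height of the ADS: the depth of the deepest leaf."""
    return max(depth.values())

def external_path_length(depth):
    """Sum of the depths of all leaves."""
    return sum(depth.values())

def weighted_external_path_length(depth, weight):
    """Formula 10: sum of d(s) * phi(s) over all states, divided by n."""
    n = len(depth)
    return sum(depth[s] * weight[s] for s in depth) / n

# A toy ADS over states s1..s4:
d = {"s1": 2, "s2": 2, "s3": 3, "s4": 3}
w = {"s1": 1.0, "s2": 1.0, "s3": 2.0, "s4": 2.0}
print(height(d))                            # 3
print(external_path_length(d))              # 10
print(weighted_external_path_length(d, w))  # (2 + 2 + 6 + 6) / 4 = 4.0
```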
Fig. 7. Comparison of weighted external path lengths for the FSMs in Class II. Each boxplot summarizes the distributions of 1000 FSMs.
Fig. 8. Comparison of heights for the FSMs in Class III. Each boxplot summarizes the distributions of 1000 FSMs.
Table 7. Comparison of weighted external path lengths of ADSs constructed for FSMs in Class II with respect to function B.

States  LY     GLY1   GLY2   LA(L)  SF(L)  SG(L)  F2     WLU   WLLY
50      45.37  42.51  39.23  7.84   21.57  8.64   11.16  8.62  3.08
60      45.66  42.43  39.62  8.34   21.46  8.42   11.25  8.69  3.10
70      45.55  42.54  39.45  7.62   21.65  8.33   11.27  8.70  3.08
80      45.65  42.11  39.36  8.11   21.63  8.04   11.21  8.66  3.04
Table 8. Weighted external path length comparison for benchmark FSMs.

FSM         LY      GLY1    GLY2    SG(L)   F2      LA(L)   WLLY    WLU     SF(L)
Log         8662    8662    8004    7936    7712    7945    7531    7638    8004
DVRAM       26,880  26,880  21,034  20,803  20,356  20,867  18,034  18,110  21,034
Ex4         1681    1681    1641    1612    1604    1643    1562    1575    1641
Rie         22,272  22,272  22,272  21,116  20,052  22,102  21,012  21,450  22,272
Shift Reg.  32      32      32      31      28      31      27      27      32
The results are promising. We observed that the average weighted external path lengths of the ADSs computed by WLLY, WLU and F2 are lower than those of the ADSs computed by SG(L), LA(L), SF(L), GLY1, GLY2 and LY. This indicates that considering the weights of the states during the construction of ADSs can reduce the cost of the ADSs. We applied the B function to the results (Table 7). The results reveal that WLLY constructs ADSs whose weighted external path lengths are, on average, closest to those of the ADSs constructed by the BF approach. After WLLY, we observed that SG(L), LA(L) and WLU are effective in reducing the weighted external path length. The results for the benchmark FSMs are given in Table 8. We observed that, in terms of weighted external path lengths, WLLY is the best approach except for the FSM Rie, followed by WLU and then F2.

6.5. The effect of the number of inputs and outputs

In this section we present the effect of different numbers of inputs and outputs. We used the FSMs in Class III for these tests. The results of constructing ADSs with minimum height and minimum external path length are presented in Figs. 8 and 9. The results reveal that there is an inverse relation between the number of output symbols and the height and the external path
Fig. 9. Comparison of external path lengths for the FSMs in Class III. Each boxplot summarizes the distributions of 1000 FSMs.

Table 9. Comparison of heights and weighted external path lengths of ADSs constructed for FSMs in Class III with respect to function B.

i/o  LA(H)  SG(H)  SF(H)  F2     WLU    WLLY   GLY2   GLY1   LY
3/2  8.11   8.43   21.66  33.24  33.86  36.88  38.05  40.32  45.63
3/3  5.02   7.50   24.99  25.34  26.23  26.49  37.76  40.31  43.75
3/4  4.24   4.62   15.67  18.78  24.84  24.63  32.07  36.56  39.36

i/o  LA(L)  SG(L)  SF(L)  F2     WLU    WLLY   GLY2   GLY1   LY
3/2  11.12  12.62  26.88  39.23  40.56  42.94  43.33  48.02  52.66
3/3  9.69   9.21   24.30  38.66  39.19  38.42  43.58  47.00  50.03
3/4  7.85   7.23   18.76  22.77  24.34  26.77  31.56  36.23  39.44
length of ADSs (Figs. 8 and 9). This is expected since, as the number of outputs increases, splitting a block becomes easier, i.e., a splitting sequence splits a given block into more blocks of smaller sizes. We also note that the discussions of the experimental results for the FSMs in Class I also apply to the FSMs in Class III. The results are promising: in general, the performances of the settings LA(H), SG(H) and SF(H) (respectively LA(L), SG(L) and SF(L)) and F2 are better than those of WLU, WLLY, GLY1, GLY2 and LY. These observations are supported by the B values given in Table 9.
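The splitting step referred to above can be illustrated as follows: an input partitions a block of states according to the outputs the states produce, and the successor states form the new sub-blocks, so more output symbols tend to shatter a block into more, smaller pieces. The FSM encoding below (the out and nxt maps) is a hypothetical sketch, not the paper's data structures:

```python
from collections import defaultdict

def split_block(block, x, out, nxt):
    """Partition `block` by the output each state produces on input x.

    out[(s, x)] -> output symbol, nxt[(s, x)] -> next state.
    Returns {output: block of successor states}.
    """
    groups = defaultdict(list)
    for s in block:
        groups[out[(s, x)]].append(nxt[(s, x)])
    return dict(groups)

# Toy FSM: 4 states, one input 'a', outputs 0/1.
out = {("s1", "a"): 0, ("s2", "a"): 0, ("s3", "a"): 1, ("s4", "a"): 1}
nxt = {("s1", "a"): "s2", ("s2", "a"): "s1",
       ("s3", "a"): "s4", ("s4", "a"): "s3"}
print(split_block(["s1", "s2", "s3", "s4"], "a", out, nxt))
# {0: ['s2', 's1'], 1: ['s4', 's3']}
```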
Fig. 10. Average time to construct minimum height ADSs for FSMs in Class I.
Fig. 12. Average time to construct ADSs with minimum weighted external path length for FSMs in Class II.
Fig. 11. Average time to construct ADSs with minimum external path length for FSMs in Class I.
6.6. Comparison of timings

In this section we present the average time required to construct ADSs. We use line charts to present the results, on a log10 scale.

6.6.1. Timings for FSMs in Class I, Class II, Class III and benchmark FSMs

The average times required to compute ADSs with the different methods and heuristics are provided in Fig. 10 (the time required to construct ADSs with minimum height for the FSMs in Class I), Fig. 11 (the time required to construct ADSs with minimum external path length for the FSMs in Class I), Fig. 12 (the time required to construct ADSs with minimum weighted external path length for the FSMs in Class II), Fig. 13 (the time required to construct ADSs with minimum height for the FSMs in Class III), and Fig. 14 (the time required to construct ADSs with minimum external path length for the FSMs in Class III). The results have important implications. Although the settings SG (SG(H) or SG(L)) and F2 use an exponential method (SGA), we observed that the time required to compute ADSs with these settings grew slowly with the number of states. This stems from the fact that, although it is theoretically possible to have exponentially many nodes in an SG, each corresponding
Fig. 13. Average time to construct ADSs with minimum height for FSMs in Class III.
to a different block, such a case does not typically occur for the FSMs in Class I and Class II. Moreover, we also see that there is an inverse relation between the number of output symbols and the time required to construct ADSs (Figs. 13 and 14). That is, as the number of output symbols increases, the time required to derive an ADS decreases. On the other hand, as expected, the average time required to compute an ADS with the settings LA (LA(H) or LA(L)), WLLY, and WLU grew exponentially with the number of states. Furthermore, as expected, LY, GLY1 and GLY2 were the fastest settings for constructing ADSs.

6.6.2. Timings for benchmark FSMs

The results are presented in Table 10 (minimizing the height) and Table 11 (minimizing the weighted external path length). The results are just as expected, except for SG(H) and SG(L). Unexpectedly, the times required to construct an ADS with the setting SG(H) (or SG(L)) are comparable with the settings LA(H) (or LA(L)), WLLY and WLU. This suggests that the benchmark FSMs have a different
Table 12. The time required to construct ADSs for s-FSMs given in [31] (ms). Columns give the number of states.

Setting  4      10     20     30     40     50     60       70
LY       0.005  0.011  0.021  0.031  0.041  0.051  0.061    0.071
GLY1     0.005  0.011  0.021  0.031  0.041  0.051  0.061    0.071
GLY2     0.006  0.012  0.022  0.032  0.042  0.052  0.062    0.071
LA(L)    0.000  0.000  0.000  0.000  0.000  0.111  112.316  188059.162
SF(L)    0.012  0.023  0.044  0.068  0.088  0.109  0.131    0.151
SG(L)    0.029  0.049  0.084  0.124  0.155  0.185  0.216    0.254
F2       0.038  0.053  0.082  0.124  0.150  0.188  0.219    0.253
WLU      0.000  0.000  0.000  0.000  0.000  0.011  93.292   100513.162
WLLY     0.000  0.000  0.000  0.000  0.000  0.112  115.292  170044.127
Fig. 14. Average time to construct ADSs with minimum external path length for FSMs in Class III.

Table 10. Minimum height ADS construction times for case studies (ms).

Name        LY    GLY1  GLY2   SG(H)  F2     LA(H)   WLLY    WLU     SF(H)
Log         0.03  0.07  0.12   6.62   6.15   7.91    7.81    7.78    4.27
DVRAM       0.08  5.70  7.26   70.69  70.89  77.66   74.53   75.06   19.86
Ex4         0.00  0.00  0.01   6.85   6.25   8.36    8.62    8.58    3.66
Rie         0.26  6.26  11.59  97.76  95.73  114.43  116.86  117.70  18.54
Shift Reg.  0.00  0.00  0.00   3.35   3.56   9.18    9.28    9.20    0.94
Table 11. Minimum weighted external path length ADS construction times for case studies (ms).

Name        LY    GLY1  GLY2   SG(L)   F2      LA(L)   WLLY    WLU     SF(L)
Log         0.03  0.07  0.12   5.80    5.44    7.07    7.81    7.08    4.23
DVRAM       0.08  5.70  7.26   44.28   45.59   69.69   72.53   71.06   18.32
Ex4         0.00  0.00  0.01   8.94    8.43    9.26    8.62    8.58    3.01
Rie         0.26  6.26  11.59  112.03  116.09  116.21  116.86  117.70  18.54
Shift Reg.  0.00  0.00  0.00   4.50    4.23    9.02    9.28    8.20    1.04
transition structure than the transition structures of the other FSMs used in this section.

6.6.3. Timings for s-FSMs

For each s-FSM, we ran the settings LY, GLY1, GLY2, F2, SF(L), SG(L), WLLY, WLU and LA(L), and noted the timings and the ADSs. Every setting constructed the shortest ADS (all settings constructed the same ADS for the same s-FSM), thus we do not discuss ADS quality. The timings are given in Table 12. The results are as expected; the times of the settings LA, WLLY, and WLU grow much faster than those of the other methods. As in the case of Class I and Class II, the time required by SG is similar to that of the settings that use polynomial time ADS construction approaches.

6.7. Checking sequence quality

In this section we discuss the effect of constructing checking sequences with the ADSs constructed by WLLY, WLU, LA(L), F2,
Fig. 15. Comparison of the checking sequence lengths constructed by the DY algorithm. Each boxplot summarizes the distributions of 1000 FSMs.
SG(L), SF(L), LY, GLY1 and GLY2. Note that the settings LA(L), SG(L) and SF(L) constructed ADSs using the heuristics for minimizing the external path length of the ADSs, whereas the settings WLLY, WLU and F2 constructed ADSs using the heuristics for minimizing the weight of the ADSs. For each FSM, we constructed ADSs with the above mentioned settings, and we noted the improvement in the checking sequence length as ((CS(LY) − CS(A)) / CS(LY)) × 100, where A is the setting used. In other words, we consider the percentage improvement in the length of the checking sequences relative to those constructed by using an ADS generated by the LY algorithm. We make use of the following checking sequence construction methods: the HEN method given in [22], the UWZ method given in [17], the HIU method given in [18], the SP method given in [23], and the DY method given in [50]. These checking sequence generation methods rely on the existence of distinguishing sequences. The first method in the literature is HEN, and the most recent ones are SP and DY; UWZ and HIU are important improvements over the earlier versions. The methods HEN, UWZ, HIU, SP, and DY differ mainly in how they consider the state verification and transition verification components, and in how they generate the transfer sequences that put these components together. For more details on these methods, we direct the reader to references [17,18,22,23,50]. We present the improvements in the lengths of the constructed checking sequences in Figs. 15–19. The first observation is that, regardless of the checking sequence construction method, the checking sequence lengths are reduced by using the ADSs generated by the approaches given in this paper. Also, the
Fig. 16. Comparison of the checking sequence lengths constructed by the SP algorithm. Each boxplot summarizes the distributions of 1000 FSMs.
Fig. 17. Comparison of the checking sequence lengths constructed by the HIU algorithm. Each boxplot summarizes the distributions of 1000 FSMs.
average percentage improvement in the checking sequence length increases as the number of states increases. Interestingly, for the newer methods SP and DY (which construct shorter checking sequences than their predecessors HIU, UWZ, and HEN), the improvement in the checking sequence length is greater than the improvement obtained with the older methods. The percentage improvement reaches 40% for FSMs with 80 states. Another observation can be made as follows. Recall that the in-degree distributions of the FSMs in Class II are nonuniform. Our motivation for optimizing the weighted external path length is validated by the experiments: the heuristics aiming at the minimization of the weighted external path length of ADSs (i.e., WLLY, WLU, and F2) in fact give better results for the checking sequences as well, regardless of the checking sequence construction method.
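The improvement measure reported in Figs. 15–19 can be sketched as follows; CS(LY) and CS(A) denote checking sequence lengths, and the function name and example lengths are illustrative:

```python
def cs_improvement(cs_ly: int, cs_a: int) -> float:
    """Percentage reduction of a checking sequence built with the ADS of
    setting A relative to one built with the LY ADS:
    ((CS(LY) - CS(A)) / CS(LY)) * 100."""
    return (cs_ly - cs_a) / cs_ly * 100.0

# e.g. an LY-based checking sequence of length 5000 versus 3000 with
# another ADS: a 40% reduction.
print(cs_improvement(5000, 3000))  # 40.0
```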
Fig. 18. Comparison of the checking sequence lengths constructed by the UWZ algorithm. Each boxplot summarizes the distributions of 1000 FSMs.
Fig. 19. Comparison of the checking sequence lengths constructed by the HEN algorithm. Each boxplot summarizes the distributions of 1000 FSMs.
The settings WLLY, WLU, F2, LA(L), and SG(L) are all based on techniques with exponential worst case complexity (although the time performances of the settings F2 and SG are reasonable for random FSMs). Among the polynomial time ADS construction methods, the setting SF gives the best reduction. We also considered the FSMs in the benchmark set; the results are given in Table 13. Our observations are similar to those obtained from the experiments on random FSMs. The settings WLLY and WLU again give better reductions (e.g., reaching 35.64% for DVRAM) compared to the other settings.

6.8. Threats to validity

In this section we identify some threats to the validity of the empirical studies. First, we evaluate the proposed methods with different settings on randomly generated FSMs. It is possible that a different setting may produce better results. Moreover, it is also
Table 13. Checking sequence length comparison for case studies. Setting columns give the percentage checking sequence improvement.

CS gen. alg.  Name        GLY2   SG(L)  F2     LA(L)  WLLY   WLU    SF(L)
DY            Log          7.38   9.36  10.15   9.12  23.16  21.92   7.38
              DVRAM       10.23  19.85  20.67  20.12  30.43  29.21  10.23
              Ex4          2.41   2.67   2.51   2.60   6.45   6.32   2.41
              Rie          0.00   4.67   5.42   4.33   5.70   5.56   0.00
              Shift Reg.   0.00   1.02   2.12   1.02   2.46   2.46   0.00
SP            Log          6.45  11.26  12.11  11.03  24.67  22.92   8.92
              DVRAM       11.22  21.16  22.58  21.69  31.56  30.36  11.97
              Ex4          1.44   3.68   4.16   3.69   7.52   7.82   3.65
              Rie          0.00   6.40   6.85   6.12   6.92   7.24   1.33
              Shift Reg.   0.00   2.24   4.09   3.02   4.39   3.86   1.34
HIU           Log          7.68  12.79  13.50  12.18  26.36  24.36  10.18
              DVRAM        9.58  22.39  23.58  22.81  32.86  31.94  13.02
              Ex4          2.49   5.31   5.21   5.34   9.01   8.91   5.55
              Rie          0.00   7.79   8.13   7.33   8.31   8.47   2.82
              Shift Reg.   0.00   4.01   5.68   4.73   6.35   5.68   3.11
UWZ           Log          8.21  14.11  14.98  13.53  28.21  25.82  11.62
              DVRAM       12.39  24.16  25.18  24.32  33.89  33.70  14.25
              Ex4          3.34   6.57   6.42   6.68  10.27  10.00   6.64
              Rie          0.00   9.76   9.84   8.97   9.35   9.62   4.32
              Shift Reg.   0.00   5.59   7.04   5.84   8.05   6.99   5.02
HEN           Log         10.21  15.11  16.65  14.59  29.81  27.17  13.02
              DVRAM       21.48  25.85  26.89  28.57  35.64  34.77  15.53
              Ex4          5.45   8.29   7.52   8.60  11.34  11.68   8.38
              Rie          0.00  10.81  11.18  10.79  10.68  10.66   5.73
              Shift Reg.   0.00   6.73   8.39   7.84   9.14   8.06   6.89
possible that, for the FSMs used in real-life situations, the performance of the proposed methods may differ. Although using random FSMs is a common approach in this field, in order to test the generalizability of these methods we also tested them on case studies obtained from benchmark FSM specifications, as explained in Section 6.7. We see that LA(H) (or LA(L)) and SG(H) (or SG(L)) perform better than SF(H) (or SF(L)), similar to the results we obtained from random FSMs. However, unlike the results obtained from random FSMs, SG(H) (or SG(L)) required a large amount of time. Another threat could be the lack of variety in the numbers of states, inputs and outputs of the randomly generated FSMs. In order to mitigate this threat, we also used a set of s-FSMs and benchmark FSMs. Note that the s-FSMs used in these experiments had different numbers of states, but the numbers of input and output symbols were both 2. Moreover, the benchmark FSMs comprise FSMs with different numbers of states but mostly large numbers of inputs and outputs. Therefore, while constructing the randomly generated FSMs, we selected the number of inputs/outputs as 3/3 for the FSMs in Class I and Class II, and as 3/2, 3/3 and 3/4 for the FSMs in Class III. Another threat could be an incorrect implementation of the approaches on our part. To mitigate this threat, we used an existing tool that checks whether a given tree is an ADS for an FSM. The ADSs generated by all settings were double checked with this tool to decide whether what is produced is really an ADS. A threat to our motivation for using minimized ADSs could be that a given checking sequence method may or may not support this motivation. In order to see the effect of minimizing ADSs on checking sequence construction, we considered several checking sequence construction methods. The results suggest that, in general, regardless of the checking sequence method, the use of minimum weighted ADSs results in shorter checking sequences.

7. Conclusions and future directions

In this paper, we studied the problem of constructing compact ADSs for FSMs. After providing a brief overview of the existing ADS construction algorithms, we propose generalizations of these
methods, and we also propose a set of heuristics to construct compact ADSs. We then performed an experimental study to explore the effect of constructing checking sequences with compact ADSs, by comparing the sizes of the checking sequences produced using well known checking sequence generation methods. In the experiments, we observed that when the objective is to minimize the weighted external path length, our methods were able to construct much shorter checking sequences compared to the checking sequences generated using ADSs obtained from the well known LY algorithm. We extended these experiments to consider a set of FSMs from a benchmark collection. Similarly, we found that it is possible to reduce the length of the checking sequences by up to 35.64% when compact ADSs constructed by the methods proposed in this paper are used. We compared the results of the brute force approach and the proposed methods, and found that the proposed methods were able to compute almost minimum ADSs. As the experiments demonstrate, although our approaches improve the quality of the ADSs compared to those constructed by the LY algorithm, they obviously require more time. As a future direction of research, we plan to improve the time performance of our approaches. As a final remark, we must note that recent checking sequence construction methods employ additional optimization principles which do not exist in the checking sequence construction methods used in this paper. These principles include omitting redundant transition tests [43,51], exploiting the overlapping of subtests [23,52,53], and using an ADS together with other state recognition approaches [25–27,54,55]. For such methods, the ADS might be employed less extensively, or it may become desirable that the ADS has a specific form. Constructing a compact ADS taking these aspects into account is also a promising research direction.
In this paper, we observed that the proposed methods can obtain short ADSs from given FSMs, and hence reduce the cost of checking sequences. Experimental results suggest that, as the size of the FSM increases, the time required to construct ADSs also increases on average. As future work to address this issue, we will investigate scalable algorithms for constructing ADSs.

Acknowledgments

This work is supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) under Grant #113E292.

References

[1] ITU-T, Recommendation Z.100 Specification and Description Language (SDL), International Telecommunications Union, Geneva, Switzerland, 1999.
[2] D. Harel, M. Politi, Modeling Reactive Systems with Statecharts: The STATEMATE Approach, McGraw-Hill, New York, 1998.
[3] A. Friedman, P. Menon, Fault Detection in Digital Circuits, Computer Applications in Electrical Engineering Series, Prentice-Hall, 1971.
[4] A. Aho, R. Sethi, J. Ullman, Compilers, Principles, Techniques, and Tools, Addison-Wesley Series in Computer Science, Addison-Wesley Pub. Co., 1986.
[5] T. Chow, Testing software design modelled by finite state machines, IEEE Trans. Softw. Eng. 4 (1978) 178–187.
[6] E. Brinksma, A theory for the derivation of tests, in: Proceedings of the IFIP Symposium on Protocol Specification, Testing, and Verification VIII, North-Holland, Atlantic City, 1988, pp. 63–74.
[7] A. Dahbura, K. Sabnani, M. Uyar, Formal methods for generating protocol conformance test sequences, Proc. IEEE 78 (8) (1990) 1317–1326, doi:10.1109/5.58319.
[8] D. Lee, K. Sabnani, D. Kristol, S. Paul, Conformance testing of protocols specified as communicating finite state machines: a guided random walk based approach, IEEE Trans. Commun. 44 (5) (1996) 631–640, doi:10.1109/26.494307.
[9] S. Low, Probabilistic conformance testing of protocols with unobservable transitions, in: Proceedings of the 1993 International Conference on Network Protocols, 1993, pp. 368–375, doi:10.1109/ICNP.1993.340890.
[10] K. Sabnani, A. Dahbura, A protocol test generation procedure, Comput. Netw. 15 (4) (1988) 285–297.
[11] D. Sidhu, T. Leung, Formal methods for protocol testing: A detailed study, IEEE Trans. Softw. Eng. 15 (4) (1989) 413–426.
[12] R. Binder, Testing Object-Oriented Systems: Models, Patterns, and Tools, Addison-Wesley, 1999.
[13] M. Haydar, A. Petrenko, H. Sahraoui, Formal verification of web applications modeled by communicating automata, in: Formal Techniques for Networked and Distributed Systems (FORTE 2004), Lecture Notes in Computer Science, vol. 3235, Springer-Verlag, Madrid, 2004, pp. 115–132.
[14] M. Utting, A. Pretschner, B. Legeard, A taxonomy of model-based testing approaches, Softw. Test. Verif. Reliab. 22 (5) (2012) 297–312.
[15] W. Grieskamp, N. Kicillof, K. Stobie, V. Braberman, Model-based quality assurance of protocol documentation: Tools and methodology, Softw. Test. Verif. Reliab. 21 (1) (2011) 55–71.
[16] E. Moore, Gedanken-experiments, in: C. Shannon, J. McCarthy (Eds.), Automata Studies, Princeton University Press, 1956.
[17] H. Ural, X. Wu, F. Zhang, On minimizing the lengths of checking sequences, IEEE Trans. Comput. 46 (1) (1997) 93–99.
[18] R. Hierons, H. Ural, Optimizing the length of checking sequences, IEEE Trans. Comput. 55 (2006) 618–629.
[19] D. Lee, M. Yannakakis, Principles and methods of testing finite-state machines: a survey, Proc. IEEE 84 (8) (1996) 1089–1123.
[20] A. Simão, A. Petrenko, Checking completeness of tests for finite state machines, IEEE Trans. Comput. 59 (8) (2010) 1023–1032.
[21] G. Gonenc, A method for the design of fault detection experiments, IEEE Trans. Comput. 19 (1970) 551–558.
[22] F. Hennie, Fault-detecting experiments for sequential circuits, in: Proceedings of the Fifth Annual Symposium on Switching Circuit Theory and Logical Design, Princeton, New Jersey, 1964, pp. 95–110.
[23] A. Simão, A. Petrenko, Generating checking sequences for partial reduced finite state machines, in: Proceedings of the Twentieth IFIP International Conference on Testing of Communicating Systems (TESTCOM) and the Eighth International Workshop on Formal Approaches to Testing of Software, TestCom/FATES, 2008, pp. 153–168.
[24] R. Hierons, G. Jourdan, H. Ural, H. Yenigün, Checking sequence construction using adaptive and preset distinguishing sequences, in: Proceedings of the 2009 Seventh IEEE International Conference on Software Engineering and Formal Methods, SEFM '09, IEEE Computer Society, Washington, DC, USA, 2009, pp. 157–166, doi:10.1109/SEFM.2009.12.
[25] M. Yalcin, H. Yenigün, Using distinguishing and UIO sequences together in a checking sequence, in: M. Uyar, A. Duale, M. Fecko (Eds.), Testing of Communicating Systems, Lecture Notes in Computer Science, vol. 3964, Springer Berlin Heidelberg, 2006, pp. 259–273, doi:10.1007/11754008_17.
[26] A. Simão, A. Petrenko, Checking sequence generation using state distinguishing subsequences, in: Proceedings of the International Conference on Software Testing, Verification and Validation Workshops, ICSTW '09, 2009, pp. 48–56, doi:10.1109/ICSTW.2009.25.
[27] M. Kapus-Kolar, On the global optimization of checking sequences for finite state machine implementations, Microprocess. Microsyst. 38 (3) (2014) 208–215, doi:10.1016/j.micpro.2014.01.007.
[28] R. Boute, Distinguishing sets for optimal state identification in checking experiments, IEEE Trans. Comput. 23 (1974) 874–877.
[29] D. Lee, M. Yannakakis, Testing finite-state machines: State identification and verification, IEEE Trans. Comput. 43 (3) (1994) 306–320.
[30] A. Gill, Introduction to the Theory of Finite State Machines, McGraw-Hill, New York, 1962.
[31] M. Sokolovskii, Diagnostic experiments with automata, Cybern. Syst. Anal. 7 (1971) 988–994, doi:10.1007/BF01068822.
[32] I. Kogan, A bound on the length of the minimal simple conditional diagnostic experiment, Avtom. Telemekh 2 (1973) 354–356.
[33] I. Rystsov, Proof of an achievable bound on the length of a conditional diagnostic experiment for a finite automaton, Cybernetics 12 (3) (1976) 354–356.
[34] U. Türker, H. Yenigün, Hardness and inapproximability of minimizing adaptive distinguishing sequences, Form. Methods Syst. Des. 44 (3) (2014) 264–294, doi:10.1007/s10703-014-0205-0.
[35] U. Türker, T. Ünlüyurt, H. Yenigün, Lookahead-based approaches for minimizing adaptive distinguishing sequences, in: M. Merayo, E. de Oca (Eds.), Proceedings of the Twenty Sixth IFIP WG 6.1 International Conference on Testing Software and Systems, ICTSS 2014, Lecture Notes in Computer Science, vol. 8763, Springer, Madrid, Spain, 2014, pp. 32–47, doi:10.1007/978-3-662-44857-1_3.
[36] R. Hierons, U. Türker, Incomplete distinguishing sequences for finite state machines, Comput. J. 58 (2015) 3089–3113, doi:10.1093/comjnl/bxv041.
[37] N. Kushik, K. El-Fakih, N. Yevtushenko, Adaptive homing and distinguishing experiments for nondeterministic finite state machines, in: H. Yenigün, C. Yilmaz, A. Ulrich (Eds.), Testing Software and Systems, Lecture Notes in Computer Science, vol. 8254, Springer Berlin Heidelberg, 2013, pp. 33–48.
[38] R. Hierons, U.C. Türker, Distinguishing sequences for partially specified FSMs, in: J. Badger, K. Rozier (Eds.), NASA Formal Methods, Lecture Notes in Computer Science, vol. 8430, Springer International Publishing, 2014, pp. 62–76, doi:10.1007/978-3-319-06200-6_5.
[39] N. Kushik, N. Yevtushenko, H. Yenigün, Some classes of finite state machines with polynomial length of distinguishing test cases, in: M. Merayo, G. Salaün (Eds.), Software Verification and Testing, ACM Symposium on Applied Computing, 2016, in press.
[40] R. Hierons, U. Türker, Parallel algorithms for generating distinguishing sequences for non-deterministic partial observable FSMs (2015).
[41] H. Ural, K. Zhu, Optimal length test sequence generation using distinguishing sequences, IEEE/ACM Trans. Netw. 1 (3) (1993) 358–371.
[42] R. Hierons, H. Ural, Reduced length checking sequences, IEEE Trans. Comput. 51 (9) (2002) 1111–1117.
[43] J. Chen, R. Hierons, H. Ural, H. Yenigün, Eliminating redundant tests in a checking sequence, in: F. Khendek, R. Dssouli (Eds.), Testing of Communicating Systems, Lecture Notes in Computer Science, vol. 3502, Springer Berlin/Heidelberg, 2005, pp. 146–158, doi:10.1007/11430230_11.
[44] Z. Kohavi, Switching and Finite State Automata Theory, McGraw-Hill, New York, 1978.
[45] F. Hennie, Finite-state Models for Logical Machines, Wiley, 1968.
[46] F. Brglez, ACM/SIGMOD benchmark dataset, 1996 (accessed: 15.04.23).
[47] S. Stowell, Instant R: An Introduction to R for Statistical Analysis, Jotunheim Publishing, 2012.
[48] P. Teetor, R Cookbook, first ed., O'Reilly, 2011.
[49] W. Cleveland, Robust locally weighted regression and smoothing scatterplots, J. Am. Stat. Assoc. 74 (368) (1979) 829–836, doi:10.2307/2286407.
[50] M. Dinçtürk, A Two Phase Approach for Checking Sequence Generation, Master's thesis, Sabanci University, 2009.
[51] K. Tekle, H. Ural, M. Yalcin, H. Yenigün, Generalizing redundancy elimination in checking sequences, in: P. Yolum, T. Güngör, F. Gürgen, C. Özturan (Eds.), Computer and Information Sciences - ISCIS 2005, Lecture Notes in Computer Science, vol. 3733, Springer Berlin Heidelberg, 2005, pp. 915–926, doi:10.1007/11569596_93.
[52] H. Ural, F. Zhang, Reducing the lengths of checking sequences by overlapping, in: M. Uyar, A. Duale, M. Fecko (Eds.), Testing of Communicating Systems, Lecture Notes in Computer Science, vol. 3964, Springer Berlin Heidelberg, 2006, pp. 274–288, doi:10.1007/11754008_18.
[53] G. Jourdan, H. Ural, H. Yenigün, J. Zhang, Lower bounds on lengths of checking sequences, Form. Asp. Comput. 22 (6) (2010) 667–679, doi:10.1007/s00165-009-0135-6.
[54] L. Duan, J. Chen, Exploring alternatives for transition verification, J. Syst. Softw. 82 (9) (2009) 1388–1402, doi:10.1016/j.jss.2009.05.019.
[55] M. Kapus-Kolar, On "exploring alternatives for transition verification", J. Syst. Softw. 85 (8) (2012) 1744–1748, doi:10.1016/j.jss.2012.03.034.