Efficient test vector volume reduction based on equal run length coding technique




Microprocessors and Microsystems 68 (2019) 1–10



V. Suresh Kumar a,∗, R. Manimegalai b

a Department of EIE, SRM Valliammai Engineering College, Chennai, India
b Department of Information Technology, PSG College of Technology, Coimbatore, India

Article info

Article history: Received 17 December 2018; Revised 6 February 2019; Accepted 15 April 2019; Available online 16 April 2019.

Keywords: Test compression; Low power; Equal Run Length Coding (ERLC); X-filling; VLSI testing

Abstract

Excessive test power consumption is one of the major obstacles the chip industry faces at present. In SoC design, test data volume can be reduced substantially by test data compression strategies. In this paper, a variable-to-variable-length compression method based on encoding equal-length runs is presented. First, the don't-care bits in the test vectors are filled using a proposed X-filling algorithm; the filled vectors are then encoded using the proposed Modified Equal Run Length Coding (MERLC) scheme. To match the proposed X-filling and encoding scheme, an efficient decoder is designed and implemented with low area overhead. The effectiveness of the proposed approach is assessed on the ISCAS89 benchmark circuits. The results demonstrate that the proposed algorithm achieves a higher compression ratio than existing schemes: the percentage compression of this scheme is 4.28%, 8.72%, 2.19%, 14.42% and 1.15% higher than that of ERLC, FDR, EFDR, Golomb and 9C coding, respectively. © 2019 Elsevier B.V. All rights reserved.

1. Introduction

Testing of integrated circuits (ICs) becomes more and more difficult as functional complexity and size grow with the increasing integration levels of Very Large Scale Integration (VLSI) chips. Testing accounts for a dominant cost factor in the VLSI industry, and reducing the cost of testing is a major challenge that VLSI designers and IC manufacturers pursue vigorously. Test data volume and test power are the two most important sources of test cost. In general, a circuit consumes much more power in test mode than in normal mode, and this elevated power consumption during testing affects circuit reliability [1]. Test power depends on the number of scan elements present in the circuit. Efficient test volume reduction procedures can reduce the test time, the test power, and the Automatic Test Equipment (ATE) memory requirement.

Built-In Self-Test (BIST) is generally known as an important method for testing specific cores. It embeds the precomputed test vectors in longer pseudorandom sequences that are generated on chip. The drawbacks of such strategies are their long test application time and the difficulty of achieving high fault coverage. The alternative solution, compressing the test set, reduces the test data volume and thereby



∗ Corresponding author.
E-mail addresses: [email protected] (V. Suresh Kumar), [email protected] (R. Manimegalai).
https://doi.org/10.1016/j.micpro.2019.04.001
0141-9331/© 2019 Elsevier B.V. All rights reserved.

the application time, without applying futile test data to the Design Under Test (DUT).

One attractive methodology is Linear Feedback Shift Register (LFSR) reseeding. Several routines based on reseeding have been advanced, and some commercial tools have been created for LFSR reseeding. LFSR reseeding exploits the fact that typical test vectors have very few specified bits; the compression ratio achieved with this method is directly related to the number of don't-care (unspecified, X) bits in the test patterns. On the other hand, the LFSR method is not efficient when the number of specified bits in each test vector is substantial. To tackle this problem, different compression methods have been proposed for encoding the source test data. Linear compression schemes are exceptionally proficient at exploiting unspecified bits in the test cubes to accomplish high compression. Coding schemes are categorized into five types based on the code word size in the compressed test data [2,3]: fixed-to-fixed-length coding such as dictionary-based coding [4] and LFSR-based reseeding coding [5]; fixed-to-variable-length coding such as Huffman-based coding [6]; variable-to-fixed-length coding such as traditional run-length based coding [7]; variable-to-variable-length coding such as VIHC coding [8], Golomb coding [9], Frequency-Directed Run-length (FDR) coding [10], alternating run-length coding [11,12], Extended Frequency-Directed Run-length (EFDR) coding [13], 9C coding [14], and BMUC coding [15]; and finally mixed coding based on fixed and variable lengths, such as VTFPVL [3].


In this paper, we propose a new modified compression technique that not only accomplishes a high compression ratio but also achieves low average scan-in and scan-out test power with little hardware overhead. To achieve this, we present a modified encoding technique based on equal run-length coding [16–20]. Our modified encoding scheme exploits the properties of a recently proposed test vector rearrangement algorithm [19,20] to improve the compression ratio. First, the test set with unspecified bits is X-filled; it is then encoded with our modified equal run-length scheme. We additionally introduce a decompression architecture, with little area overhead, to decode the encoded test data. The hardware portion of our module is coded in Verilog and implemented on a Xilinx Virtex4 XC4VLX200-11FF1513 FPGA. For performance comparison we use MATLAB, and the outcomes are contrasted with comparable existing techniques.

Section 2 discusses some of the recent research aimed at test data compression. Section 3 explains the proposed method along with its advantages. Section 4 describes the architecture of the proposed compression technique and the simple decompression architecture. The experimental results and comparisons with other compression techniques are presented in Section 5. Finally, Section 6 concludes this paper.

2. Related work

Zhan and El-Maleh [19] proposed a run-length based test data compression scheme, namely equal-run-length coding (ERLC). It considers both runs of 0's and runs of 1's and explores the relationship between two consecutive runs in terms of their lengths: a shorter code word is used to encode the second of two consecutive runs of equal length. They also proposed a new scheme for filling the don't-care bits that maximizes the number of consecutive equal-length runs. Yuan et al.
[21] proposed a power-efficient BIST Test Pattern Generator (TPG) to reduce test power dissipation during scan testing. The original test set is preprocessed with a progression of methods, such as don't-care-bit based 2-D adjusting, Hamming-distance based 2-D reordering, and two transposes of the test cube matrix, before the vectors are injected into the scan chain. Experimental results demonstrate an effective reduction in switching activity when the test set is loaded for on-chip scan testing. A novel compatibility-based test data compression method is introduced by Wan et al. in [22]. Building on the high compression efficiency of the extended frequency-directed run-length coding algorithm, this method groups the test vectors that have the fewest incompatible bits and merges them into a single vector by assigning 1 or 0 to unspecified bits and c to inconsistent bits. Runs of 1, 0 and c can be encoded at the same time, and a decoder with little area overhead has been developed for this compression scheme. Tseng and Lee [23] proposed a variable-to-variable-length compression method in which runs of compatible patterns are used for encoding. The test data in the test set are partitioned into sequences, each constituted by a series of compatible patterns, in which the pattern length and the number of pattern runs are encoded. Theoretical investigations on the development of the proposed Multi-Dimensional Pattern Run-Length Compression (MDPRC) are carried out individually from one-dimensional PRC to three-dimensional PRC. The Count Compatible Pattern Run-Length (CCPRL) coding compression method was presented by Yuan et al. [24] to enhance the compression ratio. First, a pattern segment of the test set is retained. Second, don't-care bits are filled so as to make each subsequent pattern compatible with the retained pattern, for as long as compatibility can be maintained.
Third, the compatible patterns are replaced by the symbol "0" (equal) or the symbol "1" (complemented) in the code word. The number of successive compatible patterns is counted and represented in binary to indicate where the

codeword ends. Eggersglub [25] proposed a new methodology to reduce peak capture power during at-speed scan testing. In this scheme, a novel dynamic X-filling procedure, Opt-Justification-fill, uses optimization techniques to identify promising X-bits for low-power filling. These strategies are integrated into a dynamic compaction flow to create quiet test cubes with high compaction capacity; thereby, X-filling is balanced between fault detection and switching activity. The procedure can be used during test set generation, and it can also be applied in a post-ATPG stage to reduce the switching activity of previously generated test sets. Experiments showed a large reduction of peak capture power with only a small increase in pattern count, which leads to reduced test costs. Sivanantham et al. [20] proposed two multistage compression techniques to reduce the volume of test data in scan test applications, with two encoding schemes: Alternating Frequency-Directed Equal-Run-length (AFDER) coding and Run-Length based Huffman Coding (RLHC). To improve the compression ratio, these encoding schemes are combined with the nine-coded compression technique. First, the nine-coded compression scheme encodes the pre-generated test cubes with unspecified bits; the later encoding stage then uses the properties of the compressed data to improve the compression further. This multistage compression is powerful when the rate of don't-care bits in a test set is high. They additionally introduced a simple decoder architecture to recover the original data. Wu et al. [26] presented an efficient test-independent compression technique based on Block Merging and Eight Coding (BM-8C) to reduce the test data volume and test application time for IP cores in SoCs. Consecutive compatible blocks are merged and encoded with exactly eight code words to achieve data compression.
This scheme compresses the pre-processed test data without requiring any auxiliary data for the circuit under test.

3. Proposed method

It is known that 95–98% of the bits in the test cube generated by an ATPG for testing an industrial circuit are unspecified. These bits can be freely assigned the logic values 0 or 1 with the goal of decreasing the volume of test data and/or the test power. The existing schemes discussed so far effectively compress the test data without exploring the relationship between consecutive runs. By considering this relationship, a modified scheme called Modified Equal-Run-Length Coding (MERLC) is proposed to achieve a better compression effect. In the proposed scheme, both runs of 0's and runs of 1's are considered, which reduces the total number of runs and the transitions during the scan-in operation, leading to an improved compression ratio and reduced test power. The relationship between two consecutive runs is exploited to represent the longer symbol with a shorter code word. We propose a new coding scheme in which the code for a run of 0's is one bit shorter than in the existing ERLC scheme [19]; this yields a better compression ratio when the test set contains many runs of 0's. To further improve the compression ratio, the entire second run of two consecutive runs of equal length is encoded with a shorter code word. Experimental results show that the proposed scheme achieves a higher compression ratio than previously proposed schemes. An efficient decoder architecture for decoding the compressed bits is also proposed in this work.

4. Block schematic of the proposed method

This section describes the details of implementing the compression and decompression blocks of the proposed scheme, as shown in Fig. 1.


Fig. 1. Diagram of the Proposed Compression Technique.

The overall proposed structure includes a software part for compressing and a hardware part for decompressing the test vectors. In general, the test vectors for different circuits are generated by the ATPG and then fed to the software-based compression module to generate the compressed test data. The compressed test data are then passed to the hardware-based decompression module incorporated within the target circuit, and thus the original test data are recovered from their compressed form. In the compression part of our proposed technique, the don't-care bits of the test vectors for the circuit under test are first filled with appropriate values (1 or 0) using an enhanced X-filling scheme. This X-filling scheme maximizes the repeated runs of equal run-length, which reduces the codeword length generated by the proposed MERLC technique and also reduces excessive switching activity during testing; this is one of the contributions of this paper. The resulting X-filled test pattern is then compressed using the proposed modified ERLC encoding technique, which focuses on replacing redundant runs of equal run-length with shorter codewords. The compressed test data generated by the proposed compression scheme are decompressed by a simple decoder architecture. The rest of this section provides a detailed description of the functioning of each block in Fig. 1.

4.1. X-filling scheme

In general, only around 1–10% of the bits in the test cube for a circuit are specified, while the others remain unspecified (don't-cares). X-filling logic is therefore required to build a fully specified test vector. Rather than simply filling all don't-care bits with 0's or 1's, as in comparable existing procedures, better compression can be achieved if the don't-care bits are filled by considering the kind of run used in the specific compression scheme. By using a suitable X-filling plan, we can increase the occurrence of repeated equal runs, which contributes directly to the test data compression. The X-filling is done with the goal of exploiting the available repeated equal runs in our proposed modified ERLC encoder, which encodes each repeated run with a three-bit code word, further improving the compression ratio. The goal of our work is to

increase the frequency of occurrence of repeated equal runs, to raise the compression ratio, and to reduce the test power.

4.1.1. X-filling algorithm

Consider a test vector from a test cube for a specific circuit. The test vector comprises sequences of a bit b, where b ∈ {0, 1}; each such sequence is called a run of type b, and the sequence's length is the run-length. Equal run-length is the case in which two or more contiguous runs have an equal number of bits, independent of their run type. For example, 00000000 is a sequence of run type 0 with run-length 8, denoted (0, 8), and 11111111 is a sequence of run type 1 with run-length 8, denoted (1, 8). The proposed X-filling procedure is shown in Fig. 2. The algorithm focuses on maximizing the runs of equal run-length in the test vector, to achieve a better compression ratio and to reduce bit transitions, and hence test power. In this procedure, the given test cube is divided into test slices. Within these slices, three pointers F0, F1 and F2 are fixed according to the definitions given in Table 1. By appropriately shifting these pointers, equal run-length patterns can be obtained, as explained below. Pointers F0 and F2 are fixed to the least significant and most significant bit positions, respectively. The bit length from pointer F0 to F2 is the X-filling window length Wl, given by

Wl = F2 − F0   (1)
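The run notation above, for instance (0, 8) for eight consecutive 0's, can be illustrated with a small Python helper (the helper name is ours, not from the paper):

```python
from itertools import groupby

def runs(bits: str):
    """Split a fully specified bit string into (run type, run length) pairs."""
    return [(b, sum(1 for _ in g)) for b, g in groupby(bits)]

print(runs("00000000"))    # one run of type '0' with run-length 8
print(runs("000111000"))   # three contiguous runs of equal run-length 3
```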

When the segments are of different lengths, the proposed X-filling scheme equalizes the bit length between the two segments by checking the values of P0 and P1 (the lengths of the first and second segments). The three possible conditions, and the task carried out in each case to obtain repeated equal run-lengths, are described below and represented pictorially in the flowchart shown in Fig. 2.

Case 1. P0 = P1. Since the bit lengths of both segments are equal, the X-values are filled with 0 or 1 depending upon the specified bit stream inside the segment (every segment incorporates either 0 or 1 specified bits only).

Case 2. P0 > P1. This condition implies that the length of the first segment is greater than the length of the second segment. In order to make the segments of equal length, the pointer F0 is shifted towards the right


Fig. 2. Flowchart for the X-Filling Scheme.

Table 1. Description of pointers.
F0: fixed to the first bit of the X-filling window, irrespective of its value.
F1: fixed to the first bit transition within the specified bits.
F2: fixed to the second bit transition within the specified bits.


Fig. 3. Steps involved in X-Filling Scheme.

by one bit position. The values of P0 and P1 are then checked, and this process is repeated until the condition P0 = P1 is met. X-filling is then done as in Case 1.

Case 3. P0 < P1. This case is similar to Case 2, except that the pointer F0 must be shifted towards the left instead of the right.

To illustrate the proposed X-filling scheme, consider the following sample test slice, which meets all three criteria. Fig. 3 illustrates the generation of equal run-lengths and the X-filling done for this test slice.

Test Slice: X0000XX01111XXXX0XX00XXX01XX111XX0XX0000111XXX11

4.2. Modified ERLC coding
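The X-filling stage that precedes encoding can be approximated in a few lines of Python. The sketch below uses minimum-transition fill, in which each X copies the nearest specified bit to its left; this is a standard simplification, not the paper's exact pointer-shifting scheme, and the function name is ours. Applied to the sample test slice, it already yields mostly equal-length runs:

```python
def mt_fill(slice_: str) -> str:
    """Minimum-transition fill: each X takes the value of the nearest
    specified bit to its left; leading X's copy the first specified bit.
    A simplification of the paper's pointer-based equal-run scheme."""
    bits = list(slice_)
    last = None
    for i, b in enumerate(bits):        # forward pass: propagate last value
        if b in "01":
            last = b
        elif last is not None:
            bits[i] = last
    first = next((b for b in bits if b in "01"), "0")
    for i, b in enumerate(bits):        # fill any leading X's backwards
        if b == "X":
            bits[i] = first
        else:
            break
    return "".join(bits)

print(mt_fill("X0000XX01111XXXX0XX00XXX01XX111XX0XX0000111XXX11"))
```

The result consists of runs of lengths 8, 8, 9, 8, 7, 8; the paper's pointer-shifting scheme additionally moves segment boundaries so that more of these runs become exactly equal.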

Once X-filling is done, ERLC codes are used to reduce the number of runs independent of their type, which contributes significantly to the reduction of the total test data volume. Since the run-length is variable, the code word is variable as well, so this is a variable-to-variable-length coding technique. Table 2 lists our modified codewords for different run-lengths. In the table, the first column gives the run-length, the second the group index, the third and fourth the prefix and


Fig. 4. Comparison of ERLC and MERLC.

Table 2. Modified ERLC scheme.

Run-length  Group  Prefix  Tail  Code word (runs of 0's)  Code word (runs of 1's)
1           –      –       –     000                      001
2           A1     10      00    1000                     11000
3           A1     10      01    1001                     11001
4           A1     10      10    1010                     11010
5           A1     10      11    1011                     11011
6           A2     110     000   110000                   1110000
7           A2     110     001   110001                   1110001
8           A2     110     010   110010                   1110010
9           A2     110     011   110011                   1110011
10          A2     110     100   110100                   1110100
11          A2     110     101   110101                   1110101
…           …      …       …     …                        …
13          A2     110     111   110111                   1110111

tail of the codeword, and the last two columns contain the code words for runs of 0's and runs of 1's. The principal contribution of our MERLC coding scheme is the reduction of the codeword length for runs of 0's. The codewords for runs of 1's are the same as in the ERLC method presented in [19], whereas the codeword for a run of 0's of any run-length is one bit shorter than in ERLC. In addition, for a run-length of 1 our method assigns dedicated codewords: 000 for a single 0 and 001 for a single 1. In the proposed method, if two or more adjacent runs are of equal run-length, the first run is represented by the appropriate codeword from the table, and each subsequent consecutive run is represented by a 3-bit code: 010 for a run of 0's and 011 for a run of 1's. The one-bit reduction for each first run of 0's thus further compresses the test data volume, giving a better compression ratio when the test set contains many runs of 0's. The proposed MERLC encoding scheme is compared with the ERLC scheme, using sample test data with repeated runs, in Fig. 4.
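The encoding rules above can be turned into a short Python sketch. The table constants and helper names are ours; the codewords follow Table 2, with the run-length-12 entry interpolated from the table's binary-tail pattern, and runs longer than 13 (which would need further groups) are not handled:

```python
from itertools import groupby

# Table 2 codewords for runs of 0's (run-lengths 1-13; 12 interpolated).
CW0 = {1: "000", 2: "1000", 3: "1001", 4: "1010", 5: "1011",
       6: "110000", 7: "110001", 8: "110010", 9: "110011",
       10: "110100", 11: "110101", 12: "110110", 13: "110111"}
# Runs of 1's: one extra leading 1, except the special length-1 code 001.
CW1 = {n: ("1" + c if n > 1 else "001") for n, c in CW0.items()}

def merlc_encode(bits: str) -> str:
    """First run of a group of equal-length runs gets its Table 2 codeword;
    each following run of the same length gets the 3-bit repeat code
    010 (run of 0's) or 011 (run of 1's)."""
    runs = [(b, sum(1 for _ in g)) for b, g in groupby(bits)]
    out, prev_len = [], None
    for b, n in runs:
        if n == prev_len:
            out.append("010" if b == "0" else "011")
        else:
            out.append((CW0 if b == "0" else CW1)[n])
        prev_len = n
    return "".join(out)

data = "0" * 8 + "1" * 8 + "0" * 8 + "1" * 8    # four equal runs of length 8
print(len(data), "->", len(merlc_encode(data)))  # 32 bits -> 15 bits
```

Four equal-length runs cost one full codeword plus three 3-bit repeat codes, which is where the scheme's gain on repetitive test data comes from.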

4.3. Architecture for decompression

Since in a lossless compression scheme the original data must be reproduced with no loss of information, the decoder must be efficient. In test data compression, since the preprocessing (X-filling, ERLC encoding) happens externally, i.e. in software, the compression scheme is judged by its compression ratio and is independent of the preprocessing time. Since the decoder is part of the SoC, its architecture should be low-power and should occupy only a small area; it must not be an overhead to the original operation of the circuit under test. The decoder architecture adopted here is FSM-based, like the decoders used in FDR or EFDR, and it recreates the original test data from the encoded bit streams. The architecture of the decoder is shown in Fig. 5. When the decoder is initialized, the memory is loaded with all the possible code words for the circuit in which the decoder resides.

Fig. 5. Block diagram of the Proposed MERLC Decoder.

The steps involved in decompressing the compressed test data are as follows.

Step 1: FSM-1 is fed with the encoded bit stream, Data_in. Once test mode is enabled for the DUT, both FSM-1 and FSM-2 are fed with a clock pulse, clk. The En_F1 signal enables FSM-1, which in turn enables the complete decoder architecture.
Step 2: FSM-1 enables the bit-matching engine, which matches each bit from Data_in against the code words pre-loaded in the memory. Step 2 repeats until an exact matching string is detected.
Step 3: When an exact match is detected, the corresponding codeword is extracted from the memory and sent to the bit-count checker, which checks for an odd or even number of bits.
Step 4: Once the bit-count checking is completed, FSM-1 enables FSM-2, which reconstructs the original run and length from the codeword.

The reconstruction of the original test data from the codeword in FSM-2 follows Algorithm 1.

Algorithm 1
Input: codeword Ci = c0 c1 c2 … cn; bit-count checker result P ∈ {odd, even}
Output: original test vector Ti = t0 t1 t2 … tm
1. Load Ci
2. if P = odd then
3.    run = c0
4.    length = (c1 … c(n−1)/2) + (c(n+1)/2 c(n+3)/2 … cn)
5. else if P = even then
6.    run = 0
7.    length = (c0 c1 … cn/2) + (c(n/2)+1 c(n/2)+2 … cn)
8. end if
9. return Ti = (run, length)

Fig. 6. Comparison based on Compression Ratio.
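In software, the decoder's bit-matching behavior can be sketched as an incremental prefix match against the Table 2 codewords. This is our own sketch, not the paper's FSM: it relies on the fact that runs in a bit string strictly alternate in type, so the expected codeword table is always known; the `first_type` parameter is an assumption (the hardware FSM resolves the first run type from the codeword itself), and the run-length-12 codeword is interpolated from the table's pattern:

```python
# Table 2 codewords for runs of 0's (run-lengths 1-13; 12 interpolated).
CW0 = {1: "000", 2: "1000", 3: "1001", 4: "1010", 5: "1011",
       6: "110000", 7: "110001", 8: "110010", 9: "110011",
       10: "110100", 11: "110101", 12: "110110", 13: "110111"}
CW1 = {n: ("1" + c if n > 1 else "001") for n, c in CW0.items()}
INV0 = {c: n for n, c in CW0.items()}   # codeword -> run length, runs of 0's
INV1 = {c: n for n, c in CW1.items()}   # codeword -> run length, runs of 1's

def merlc_decode(stream: str, first_type: str = "0") -> str:
    """Consume the stream one bit at a time, emitting a run whenever the
    buffer matches a codeword or a 3-bit repeat code (010 / 011)."""
    out, buf, prev_len, t = [], "", None, first_type
    for bit in stream:
        buf += bit
        inv = INV0 if t == "0" else INV1
        if prev_len is not None and buf == ("010" if t == "0" else "011"):
            out.append(t * prev_len)            # repeated equal-length run
        elif buf in inv:
            prev_len = inv[buf]
            out.append(t * prev_len)
        else:
            continue                            # not a complete codeword yet
        t, buf = ("1" if t == "0" else "0"), "" # next run has opposite type
    return "".join(out)

print(merlc_decode("110010011010011"))  # recovers four runs of length 8
```

Within each expected-type table (plus its repeat code) the codewords are prefix-free, so the greedy bit-by-bit match is unambiguous, mirroring the bit-matching engine in Step 2.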

5. Results and discussion

In this section, the effectiveness of the proposed modified ERLC scheme is verified on the ISCAS89 benchmark circuits. For comparison with other schemes, we use the MinTest test sets, as in [19–26].

5.1. Performance evaluation

The compression ratio of the proposed compression technique is compared with other existing compression techniques in Table 3. The compressed test data size Te and the compression ratio of the proposed scheme are shown under the "Proposed scheme" columns; the remaining columns show the compressed test data size and compression ratio of the ERLC [19], FDR [10], EFDR [13], Golomb [9] and 9C [27] coding methods, respectively. The compression ratio is given by

Table 3. Comparison of compression ratio.

Circuit |    Td   | Proposed scheme | ERLC [19]      | FDR [17,23]    | EFDR [23]      | Golomb [16,22]  | 9C [17]
        |         |   Te      CR%  |   Te     CR%   |   Te     CR%   |   Te     CR%   |    Te     CR%   |   Te     CR%
s5378   |  23,754 | 10,613   55.32 | 12,389   47.84 | 12,306   48.19 | 11,419   51.93 |  14,086   40.70 | 11,487   51.64
s9234   |  39,273 | 21,365   45.60 | 22,210   43.44 | 21,644   44.88 | 21,250   45.89 |  22,252   43.34 | 19,279   50.91
s13207  | 165,200 | 26,118   84.20 | 32,044   80.60 | 35,226   78.67 | 29,992   81.85 |  41,663   74.78 | 29,223   82.31
s15850  |  76,986 | 24,213   68.54 | 25,844   66.43 | 36,276   52.87 | 24,643   67.99 |  40,718   47.11 | 25,882   66.38
s38417  | 164,736 | 63,869   61.23 | 67,990   58.72 | 74,896   54.53 | 64,962   60.57 |  92,054   44.12 | 64,856   60.63
s38584  | 199,104 | 60,935   69.39 | 76,473   61.59 | 93,860   52.85 | 73,853   62.91 | 104,111   47.71 | 68,631   65.53
Avg.    |         |          64.05 |          59.77 |          55.33 |          61.86 |           49.63 |          62.90

Table 4. Comparison of average scan-in and peak power consumption.

Circuit |  Proposed scheme     |        ERLC          |         FDR           |        EFDR
        |   PAvg      PPeak    |   PAvg      PPeak    |   PAvg       PPeak    |   PAvg      PPeak
s9234   |   3468      11,716   |   3500      12,069   |   5692       12,994   |   3469      12,062
s13207  |   7789      89,736   |   8115      97,614   |  12,416     101,127   |   8016      97,613
s15850  |  12,810     61,288   |  13,450     63,511   |  20,742      81,832   |  13,394     63,494
s38417  | 100,490    398,446   | 120,775    404,693   | 172,665     172,834   | 117,834    404,654
s38584  |  86,343    453,220   |  89,356    479,573   | 136,634     531,321   |  89,138    479,547
Average |  42,180  202,881.2   | 47,039.2   211,492   | 69,629.8  180,021.6   | 46,370.2   211,474

Fig. 7. Comparison based on Bit-size Deviation from original Test Data.

Table 5. Device utilization summary for MERLC decoder.

Logic utilization           Used    Available   Utilization
No. of 4-input LUTs           34      178,176     1%
No. of occupied slices       128       89,088     1%
Total no. of 4-input LUTs    251      178,176     1%
No. of bonded IOBs           333          960    34%

Compression ratio: CR% = ((Td − Te) / Td) × 100%   (4)

where Td is the original bit size of the test data and Te is the bit size of the compressed data. Fig. 6 plots the compression ratios of the proposed MERLC, ERLC, FDR, EFDR, Golomb and 9C coding techniques. The comparison reveals that the proposed MERLC scheme exhibits a high compression ratio for most of the ISCAS89 circuits. Also

a slight deviation is observed in the remaining cases. The bit difference between the original test data and the data compressed by the various schemes (including the proposed one) for the ISCAS89 benchmark circuits is plotted in Fig. 7; the bit-size deviation of the proposed technique is much better than that of most of its counterparts. The resulting compression of this scheme is 4.28%, 8.72%, 2.19%, 14.42% and 1.15% higher than that of ERLC, FDR, EFDR, Golomb and 9C coding, respectively. The bit difference between the proposed and existing compression schemes for the ISCAS89 benchmark circuits is plotted in Fig. 8: the ERLC scheme shows only a slight variation in bit size from the proposed MERLC scheme, whereas the Golomb scheme shows a vast variation.

5.2. Power evaluation

The average and peak powers during scan-in are estimated using the weighted transition metric (WTM), as given in [11].
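The WTM estimate can be sketched as follows. This is a generic implementation of the weighted transition metric as commonly defined for scan power estimation (variable names and sample vectors are ours, not the paper's data):

```python
def wtm(vector: str) -> int:
    """Weighted transition metric of one scan vector: each transition
    between adjacent scan cells is weighted by its distance from the end
    of the chain, since earlier-shifted transitions toggle more cells."""
    l = len(vector)
    return sum((l - i) * (vector[i - 1] != vector[i]) for i in range(1, l))

vectors = ["01010101", "00001111"]            # sample scan-in vectors
p_avg = sum(wtm(v) for v in vectors) / len(vectors)
p_peak = max(wtm(v) for v in vectors)
print(p_avg, p_peak)
```

The alternating vector scores far higher than the two-run vector, which is why the X-filling scheme's reduction of transitions lowers both the average and peak scan-in power reported in Table 4.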


Fig. 8. Comparison based on Bit Difference of existing schemes from proposed scheme.

Table 4 compares the scan-in average power and peak power of the proposed compression procedure with other published works. Our scheme gives significant power savings compared to circuits using the ERLC, FDR and EFDR schemes.

5.3. Area evaluation of decoder

We implemented the decompression circuits of the proposed MERLC for the six large ISCAS'89 benchmark circuits in Verilog HDL, targeting the Xilinx Virtex4 XC4VLX200-11FF1513 FPGA. The XST tools in Xilinx synthesize the designs and map them to the target device with area optimization. The RTL schematic of the decoder is shown in Fig. 9. The built-in ISim simulator is used to verify the designed architecture; simulations are performed to verify the functioning of the decoder, and the simulation result is shown in Fig. 10. The proposed decoder module uses just 34 (1%)

Fig. 9. RTL Schematic of MERLC decoder.

Fig. 10. Simulation result for MERLC decoder.


LUTs, 128 (1%) slices, and 251 (1%) flip-flops of the XC4VLX200-11FF1513 FPGA device. The device utilization summary for the MERLC decoder is shown in Table 5. Based on the experimental results, it can be concluded that our proposed scheme achieves better compression with acceptably low hardware overhead.

6. Conclusion

To enhance test data volume reduction, and thereby contribute to overall test power reduction, a modified technique based on the ERLC compression scheme is presented in this paper. The main advantages of the proposed scheme are an advanced X-filling scheme and a modified compression scheme that exploits this X-filling to generate code words of shorter bit length than existing compression techniques. A hardware architecture for the decoder is designed and implemented using Xilinx software. Experiments with the ISCAS89 benchmark circuits exhibit an improvement in compression ratio with low scan power. Moreover, the proposed scheme has a decoder of low complexity and small area overhead. Thus, it is an effective solution for test data compression/decompression in SoC design.

Conflict of interest

There is no conflict of interest.

References

[1] P. Girard, Survey of low-power testing of VLSI circuits, IEEE Des. Test Comput. 19 (2002) 82–92.
[2] N.A. Touba, Survey of test vector compression techniques, IEEE Des. Test Comput. 23 (2006) 294–303.
[3] Wenfa Zhan, Huaguo Liang, Feng Shi, Test data compression scheme based on variable-to-fixed-plus-variable-length coding, J. Syst. Archit. 53 (2007) 877–887.
[4] L. Lei, K. Chakrabarty, Test data compression using dictionaries with fixed-length indices, in: Proceedings of the VLSI Test Symposium, 2003, pp. 219–224.
[5] A. Al-Yamani, E. McCluskey, Seed encoding for LFSRs and cellular automata, in: Proceedings of the Design Automation Conference, 2003, pp. 560–565.
[6] A. Jas, J. Ghosh-Dastidar, N.A.
Touba, An efficient test vector compression scheme using selective Huffman coding, IEEE Trans. Comput. Aided Des. 23 (2003) 797–806.
[7] A. Jas, N.A. Touba, Test vector decompression via cyclical scan chains and its application to testing core-based designs, in: Proceedings of the International Test Conference, 1998, pp. 458–464.
[8] T. Paul, B. Al-Hashimi, N. Nicolici, Variable-length input Huffman coding for system-on-a-chip test, IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 22 (2003) 783–796.
[9] A. Chandra, K. Chakrabarty, System-on-a-chip test data compression and decompression architectures based on Golomb codes, IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 20 (2001) 355–368.
[10] A. Chandra, K. Chakrabarty, Test data compression and test resource partitioning for system-on-a-chip using frequency-directed run-length (FDR) codes, IEEE Trans. Comput. 52 (2003) 1076–1088.
[11] A. Chandra, K. Chakrabarty, Reduction of SOC test data volume, scan power and testing time using alternating run-length codes, in: Proceedings of the IEEE/ACM Design Automation Conference, 2002, pp. 673–678.
[12] A. Wuertenberger, C.S. Tautermann, S. Hellebrand, A hybrid coding strategy for optimized test data compression, in: Proceedings of the International Test Conference, 2003, pp. 451–459.

[13] Aiman El-Maleh, Test data compression for system-on-a-chip using extended frequency-directed run-length code, IET Comput. Digital Tech. 2 (2008) 155–163.
[14] M. Tehranipoor, M. Nourani, K. Chakrabarty, Nine-coded compression technique for testing embedded cores in SOCs, IEEE Trans. VLSI Syst. 13 (2005) 719–731.
[15] Maoxiang Yi, Huaguo Liang, Lei Zhang, Wenfa Zhan, A novel X-ploiting strategy for improving performance of test data compression, IEEE Trans. Very Large Scale Integr. Syst. 18 (2010) 324–329.
[16] U.S. Mehta, K.S. Dasgupta, N.M. Devashrayee, Run-length-based test data compression techniques: how far from entropy and power bounds? A survey, VLSI Design, Hindawi Publishing Corporation, 2010.
[17] Bo Ye, Qian Zhao, Duo Zhou, Xiaohua Wang, Min Luo, Test data compression using alternating variable run-length code, Integr. VLSI J. 44 (2011) 103–110.
[18] Jaeseok Park, Sungho Kang, A twin symbol encoding technique based on run-length for efficient test data compression, ETRI J. 33 (2011) 140–143.
[19] Wenfa Zhan, Aiman El-Maleh, A new scheme of test data compression based on equal-run-length coding (ERLC), Integr. VLSI J. 45 (2012) 91–98.
[20] S. Sivanantham, M. Padmavathy, G. Gopakumar, P.S. Mallick, J.R. Paul Perinbam, Enhancement of test data compression with multistage encoding, Integr. VLSI J. 47 (2014) 499–509.
[21] Haiying Yuan, Kun Guo, Xun Sun, Jiaping Mei, Hongying Song, A power efficient BIST TPG method on don't care bit based 2-D adjusting and Hamming distance based 2-D reordering, J. Electron. Test. 31 (2015) 43–52.
[22] Min-yong Wan, Yong Ding, Yun Pan, Xiao-lang Yan, An efficient compatibility-based test data compression and its decoder architecture, J. Electron. Test. 27 (2011) 787–796.
[23] Wang-Dauh Tseng, Lung-Jen Lee, Test data compression using multi-dimensional pattern run-length codes, J. Electron. Test. 26 (2010) 393–400.
[24] Haiying Yuan, Jiaping Mei, Hongying Song, Kun Guo, Test data compression for system-on-a-chip using count compatible pattern run-length coding, J. Electron. Test. 30 (2014) 237–242.
[25] Stephan Eggersglub, Dynamic X-filling for peak capture power reduction for compact test sets, J. Electron. Test. 30 (2014) 557–567.
[26] Tie-Bin Wu, Heng-Zhu Liu, Peng-Xia Liu, Efficient test compression technique for SOC based on block merging and eight coding, J. Electron. Test. 29 (2013) 849–859.
[27] R. Sankaralingam, R.R. Oruganti, N.A. Touba, Static compaction techniques to control scan vector power dissipation, in: Proceedings of the IEEE VLSI Test Symposium, 2000, pp. 35–40.

V. Suresh Kumar is working as an Assistant Professor at SRM Valliammai Engineering College, Chennai. He is pursuing his research at Anna University, Chennai. He completed his B.E. (EIE) at the University of Madras and his M.Tech. (VLSI Design) at Sathyabhama University, Chennai. He has over 17 years of teaching and industrial experience. He is a life member of ISTE and a member of IEEE, IE(I) and ISC.

R. Manimegalai is working as a Professor in the Department of Information Technology, PSG College of Technology, and has done her Ph.D. in the Department of Computer Science and Engineering at IIT Madras. She received her B.E. (CSE) from PSG Tech., Coimbatore, and her M.E. (CSE) from the College of Engineering, Guindy, Anna University. She has worked as a software engineer with DCM Technologies, New Delhi, and as a team lead at Xilinx Technologies, Hyderabad. She has served in various educational institutions in different capacities, as Dean, Director and Principal. She holds membership as a Fellow of the Institution of Engineers India (IEI) and as a Senior Member of the Computer Society of India (CSI), ISTE, IEEE, ACM and the VLSI Society of India. Her areas of interest include security in distributed, embedded and IoT systems, and VLSI/FPGA algorithms. She has published widely in journals and conferences and is guiding several Ph.D. research scholars through Anna University, Chennai.