Hardware implementation of a sub-pixel algorithm for real-time saw blade deflection monitoring


INTEGRATION, the VLSI journal 39 (2006) 291–309 www.elsevier.com/locate/vlsi

Joanna C.K. Lai a,*, Waleed H. Abdulla a,*, Stephan Hussmann b

a Department of Electrical and Computer Engineering, School of Engineering, University of Auckland, Private Bag 92019, Auckland, New Zealand
b Department of Electrical Engineering and Information Technology, University of Applied Sciences Westküste, Fritz-Thiedemann-Ring 20, 25746 Heide, Germany

Received 5 November 2004; received in revised form 20 July 2005; accepted 27 July 2005

Abstract: Deflections of saw blades during the timber sawing process, caused by tension loss, lead to downgrading and value loss of the sawn timber. In this paper a CCD-type laser triangulation sensor is used to monitor saw blade deflections. Deflection monitoring has to be done in real time, which compels the use of an efficient algorithm with low computational cost. A high-speed algorithm that combines an approximated centre of gravity (COG) method with an overall peak-to-peak amplitude method has been designed and implemented in a Field Programmable Gate Array (FPGA). The approximated COG method tracks the position of the saw blade in real time. It generates results at 7000 frames/s at the sensor's maximum clock rate of 2 MHz, and provides a sub-pixel resolution of 1/8 pixel. The overall peak-to-peak amplitude method determines the deflection level of the saw blade from its recorded positions. Both methods are simple to implement and require low resource usage, while providing reliable real-time results. © 2005 Elsevier B.V. All rights reserved.

Keywords: FPGA; CCD; Laser triangulation sensor

Corresponding authors. Tel.: +649 373 7599; fax: +649 373 7461.

E-mail addresses: [email protected] (J.C.K. Lai), [email protected] (W.H. Abdulla). 0167-9260/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.vlsi.2005.07.004


1. Introduction

Bandsaws are commonly used in the sawmill industry to saw timber logs. The mechanism involves rolling a steel saw blade around two wheels, which apply tension to the blade. If the tension is lost, the running saw blade vibrates and deflects sideways, forming a standing wave, as shown in Fig. 1. In that case the saw cannot cut in a straight line, and the value of the sawn timber is downgraded. The tension of the saw blade usually decreases gradually over time, but can also be lost quickly if the blade overheats. Therefore, a means of monitoring saw blade deflections is needed. Analogue inductive proximity sensors driving lights that indicate the deflection level have been employed in some sawmills in New Zealand [1]. However, these sensors have to be mounted very close to the saw blade, so damage to the sensors by the blade is very likely. The analogue approach is also sensitive to thermal drift and component ageing. In contrast, a CCD-type laser triangulation sensor [2] provides a digital approach to monitoring saw blade deflections. Moreover, the long measuring range of laser triangulation sensors is superior to that of inductive proximity sensors [3], so the risk of damaging the sensor is greatly reduced. Furthermore, triangulation sensors are widely used in industries such as the sheet metal industry [4,5]; hence they can cope with the harsh environment of timber sawmills. The measurement rate of a CCD-type laser triangulation sensor is normally low because of the extensive computational load in processing the data [6]. However, saw blade deflection monitoring has to be done at high speed and in real time owing to the rapid movement of the blade. To achieve this, a fast and simple algorithm deployed on a single FPGA has been developed.
This has been done by first acquiring saw blade movement data with a CCD-type laser triangulation sensor, so as to investigate the deflection behaviour of the blade. The data have been

Fig. 1. Mechanism of a bandsaw and the problem of saw blade deflection.


analysed and tested against some common processing methods for CCD-type laser triangulation sensor data. These methods, however, cannot fulfil the accuracy and real-time processing requirements at the same time. Therefore two new methods have been designed to achieve the goal. The approximated centre of gravity (COG) method, which tracks the position of the saw blade in real time, is based on the idea of a cumulative distribution function, while the overall peak-to-peak amplitude method determines the deflection level of the saw blade by finding the maximum and minimum saw blade positions over a certain period of time. Both methods require low computational cost, and the parallel processing capability of an FPGA speeds up the computations further [7]. Together they allow the saw blade deflections to be monitored in real time and at low cost, without sacrificing accuracy. In this paper, the working principle of a CCD-type laser triangulation sensor for saw blade deflection monitoring is briefly explained in Section 2. Section 3 outlines the experiments for acquiring data that represent the saw blade deflections. Section 4 describes the data analysis and algorithm development, and the hardware implementation of the algorithm is explained in Section 5. The implementation results are presented in Section 6. Finally, Section 7 concludes the paper.

2. Working principle of the laser triangulation sensor A laser triangulation sensor is a non-contact sensor which measures object distances by projecting a laser beam onto an object surface [8], which is a saw blade in this case. An optical lens then focuses the reflected light onto an optical sensor (CCD linear sensor). This forms a laser image spot on the CCD. As the saw blade position varies, the spot position on the CCD varies as well. The working principle for the saw blade deflection monitoring system is illustrated in Fig. 2. The CCD contains an array of pixel elements. The image spot is distributed across several pixels instead of forming a sharp spot due to the presence of speckle noise [9]. Each of the pixel elements delivers a voltage level, depending on the light intensity distributed on it, as shown in Fig. 3. The voltage can be converted into digital values, which can be used for digital signal processing to determine the saw blade position.

3. Experimental set-up for real sensor data acquisition

Since the algorithm is developed for a practical application, it is based on real data representing the situations that the sensor actually measures. For this reason, experiments have been conducted to collect the required data. The data are then sent to a personal computer (PC) for data analysis and algorithm development.

3.1. Experimental set-up

The experimental set-up is illustrated in Fig. 4. A laser triangulation sensor is mounted on a small-scale bandsaw, with the laser beam projected onto the saw blade where maximum deflections would take place. The distance between the sensor and the saw blade is 195 mm. The laser


Fig. 2. Working principle of a laser triangulation sensor for saw blade deflection monitoring.

Fig. 3. Relationship between laser spot image position and voltage distribution on a CCD linear sensor.


Fig. 4. Experimental set-up.

Fig. 5. Block diagram of the test bed system.

triangulation sensor is made up of a laser diode and a Sony ILX521A device, a CCD linear image sensor. The CCD consists of 256 effective pixels, with a resolution of 0.8 mm/pixel. The information delivered by the 256 pixels at a particular moment makes up one frame, which can be processed to give an object distance based on the working principle described in Section 2. The sensor's maximum clock rate is 2 MHz. Clocking out the 256 pixels at that speed, plus some overhead clock timing requirements, yields a measurement rate of approximately 7000 frames/s.
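As a quick sanity check (a sketch, not part of the original design), the quoted measurement rate follows directly from the pixel count and clock rate; the per-frame overhead of roughly 30 clock cycles below is an assumption chosen only to illustrate how the ~7000 frames/s figure arises:

```python
# Sanity check of the measurement rate quoted in the text.
PIXELS_PER_FRAME = 256        # effective pixels of the CCD line sensor
CLOCK_HZ = 2_000_000          # sensor's maximum clock rate

# With no overhead, one frame takes 256 clock cycles:
ideal_rate = CLOCK_HZ / PIXELS_PER_FRAME              # 7812.5 frames/s

# Assuming roughly 30 extra cycles of per-frame overhead timing
# (illustrative value only) brings the rate down to about 7000 frames/s:
OVERHEAD_CYCLES = 30
actual_rate = CLOCK_HZ / (PIXELS_PER_FRAME + OVERHEAD_CYCLES)
print(ideal_rate, round(actual_rate))
```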


The FPGA device is a Xilinx Spartan-IIE XC2S300E chip. The block diagram of the test bed is shown in Fig. 5. The FPGA generates signals to trigger the sensor, and the measurement data are then sent from the sensor to the FPGA, all via an interfacing circuit. The interfacing circuit supplies power to the sensor, drives the FPGA signals to the sensor, converts the analogue voltages from the sensor into 8-bit digital values using an analogue-to-digital converter (ADC), and sends them back to the FPGA. When the FPGA receives the data, it buffers them and sends them to a PC via the RS232 serial interface. The data are then stored as data files used for data analysis and algorithm development in later stages.

3.2. Data acquisition

During the experiments, the saw blade has been subjected to stationary, high tension, medium tension and low tension modes. The observed deflection ranges under the different situations are presented in Table 1. Under each situation, 40 data sets have been collected, each consisting of 25 frames. The data sets have been collected at different sampling rates, i.e. different time intervals between the 25 frames, in order to investigate the vibration frequency of the saw blade, which is useful in designing the algorithm. Of the 40 data sets under each situation, 10 have been collected at the maximum rate of 7000 frames/s. The remaining sets have been down-sampled by factors of 10, 20 and 30 to obtain frame rates of 700, 350 and 233.33 frames/s, respectively, with 10 sets at each rate. Once the experiments are started, the sensor keeps sending measurement frames to the FPGA at the maximum rate. The FPGA determines whether or not to buffer a frame, depending on the current sampling rate setting. If a frame, which is a set of 256 × 8-bit data, is to be buffered, it is stored in a circular buffer. The circular buffer can hold 26 frames at a time, with the latest frame overwriting the oldest record.
Every time a button on the FPGA board is pressed, the past 25 frames are sent to the PC in the correct order. It is assumed that, when the button is pressed, one of the 26 slots in the buffer is in the middle of being written; the data in that slot are therefore not sent.
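The buffering scheme above can be modelled in software. This Python sketch (an illustration, not the VHDL implementation) mimics the 26-slot circular buffer and a readout that skips the slot assumed to be mid-write; in the model the skipped slot simply holds the oldest frame, the one about to be overwritten:

```python
class FrameBuffer:
    """26-slot circular buffer of frames; readout returns the past 25
    complete frames in order, skipping the slot being written."""

    def __init__(self, slots=26):
        self.slots = slots
        self.buf = [None] * slots
        self.write_idx = 0            # slot that will be written next

    def store(self, frame):
        self.buf[self.write_idx] = frame
        self.write_idx = (self.write_idx + 1) % self.slots

    def readout(self):
        # Oldest-to-newest order of the 25 completed frames; the slot at
        # write_idx is treated as mid-write and is not sent.
        return [self.buf[(self.write_idx + k) % self.slots]
                for k in range(1, self.slots)]
```

Storing frames 0–29 and then reading out yields frames 5–29, oldest first, with the slot about to be overwritten skipped.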

4. Real-time saw blade deflection measuring algorithms

Table 1
Observed deflection ranges under different situations

Situation          Deflection range (mm)
Stationary         0
High tension       0.5
Medium tension     1–1.5
Low tension        2–2.5

After the data are collected, they are analysed in MATLAB™. To determine the saw blade deflection, two sub-algorithms have been developed. The first sub-algorithm tracks the saw blade


movements by finding its positions with sub-pixel accuracy, while the second sub-algorithm computes the deflection level of the saw blade according to its movements.

4.1. Sub-pixel position tracking algorithm

Several common methods of finding the saw blade position have been tested: peak search [10,11], peak of moving average, COG [6], and COG in a fixed window [12]. In the peak search, merely the maximum point of the frame is used to determine the saw blade position. This is unreliable, as the laser image spot on the CCD sometimes lies across more than one pixel; hence the peak does not represent the actual position of the spot. Moreover, this method lacks sub-pixel resolution. The peak of moving average method attempts to filter out the speckle noise through four-point filtering: each point in the frame obtains a new value which is the average of the past three points and itself. The peak of the smoothed frame is then used to determine the blade position. However, this method gives similar results to the peak search method. The COG method uses all the pixels to determine the blade position. It can be formulated as

COG = Σ_{i=1}^{256} (i · P_i) / Σ_{i=1}^{256} P_i,  (1)

where i denotes the pixel position and P_i denotes the grey value of the ith pixel. The results are badly influenced by the presence of secondary peaks, which shift the COG away from the true spot position. The secondary peaks may be caused by light reflected from floating dust particles around the saw. The results can be improved by applying the COG inside a fixed window around the peak, so as to exclude the secondary images: the COG in fixed window method. However, the position of a secondary peak is uncertain, which makes it difficult to choose the position and width of the window. A method which combines a threshold and the COG has therefore been designed and investigated in this research. First, the peak position is located.
The threshold is set to 1/2 of the peak value. Starting from the peak position, pixels adjacent to the peak are scanned in both the left and right directions. The pixels, including the peak, are entered into a window until a pixel with a value below the threshold is reached. We found that fewer than 0.6% of the frames require a window wider than eight pixels; the width of the window is therefore restricted to eight pixels. The COG is then applied inside the window. The results of this method display the saw blade vibration patterns clearly, and it provides sub-pixel resolution. Its drawback is that it requires many multiplications and divisions, which are computationally expensive. The method has therefore been modified to eliminate the multiplications and divisions, leading to the approximated COG method used in the final algorithm realisation. The approximated COG method rests on the following concept. The purpose of the COG is to find a centre point where the values on both sides of the point balance; Eq. (1) embodies this concept. However, there is another way of achieving it: the pixels inside the window can be used to construct a cumulative distribution function (CDF). Using the sample frame shown in Fig. 6, the pixel values around the peak are displayed in Table 2. The pixels with grey values above the threshold, 255/2 = 127.5, are those at positions 232–235.
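The threshold-plus-COG method can be sketched as follows. This Python model is an illustration, not the authors' code; the order in which the window is grown is an assumption. It locates the peak, grows a window of at most eight pixels bounded by the half-peak threshold, and applies Eq. (1) inside that window:

```python
def windowed_cog(frame, max_width=8):
    """Exact COG inside a threshold-limited window around the peak.

    `frame` is a list of grey values; list indices serve as pixel
    positions. Returns the centre of gravity as a float.
    """
    peak = max(range(len(frame)), key=lambda i: frame[i])
    threshold = frame[peak] / 2                # half of the peak value

    # Grow the window outwards from the peak until a pixel falls below
    # the threshold, limited to max_width pixels in total.
    lo = hi = peak
    while lo > 0 and frame[lo - 1] >= threshold and hi - lo + 2 <= max_width:
        lo -= 1
    while (hi < len(frame) - 1 and frame[hi + 1] >= threshold
           and hi - lo + 2 <= max_width):
        hi += 1

    # Centre of gravity, Eq. (1), restricted to the window.
    weighted = sum(i * frame[i] for i in range(lo, hi + 1))
    total = sum(frame[i] for i in range(lo, hi + 1))
    return weighted / total
```

For the sample frame of Table 2 (values 101, 109, 159, 249, 255, 228, 54, 27, 27 at positions 230–238) the window becomes pixels 232–235 and the exact COG evaluates to about 233.62.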


Fig. 6. One sample frame.

Table 2
Pixel grey values around the peak in the sample frame

Pixel position   230   231   232   233   234   235   236   237   238
Pixel value      101   109   159   249   255   228    54    27    27

Table 3
CDF values of the window in the example frame

Pixel position x   232   233   234   235
Pixel value P_x    159   249   255   228
CDF g(x)           159   408   663   891

Therefore, pixels 232–235 are entered into the window, with pixel 234 being the peak. The CDF g(x) is calculated using Eq. (2):

g(x) = Σ_{i=232}^{x} P_i,  x ∈ {232, 233, 234, 235},  (2)

where x and i denote pixel positions and P_i denotes the grey value of the ith pixel. The values of the CDF are shown in Table 3. The sum of all the pixel grey values within the window is 891. To find a point where the values on both sides balance, this final sum is divided by 2,


which is:

G_mid = g(235)/2 = 891/2 = 445.5.  (3)

According to Table 3, G_mid lies between pixels 233 and 234; hence the COG also lies somewhere between pixels 233 and 234, and pixel 233 may be regarded as the whole-pixel position of the COG. To improve the resolution, a sub-pixel algorithm has been introduced. The CDF interval (d) between pixels 233 and 234 is divided into eight sub-intervals, each separated by 1/8 (= 0.125) of d. The eight sub-interval boundaries are d_0 = 0, d_1 = 31.875, d_2 = 63.750, d_3 = 95.625, d_4 = 127.500, d_5 = 159.375, d_6 = 191.250 and d_7 = 223.125. A linear relationship is imposed between the positions of G_mid and the COG in the interval. The concept is illustrated in Fig. 7. The difference (d_mid) between G_mid and g(233) is:

d_mid = G_mid − g(233) = 445.5 − 408 = 37.5.  (4)

Since d_mid (37.5) lies above d_1 (31.875) but below d_2 (63.750), the COG is defined as:

COG = 233 + 0.125 = 233.125.  (5)

However, since this method is a discrete approach, the COG is only an approximation. There is an offset between the original COG calculated using Eq. (1) and the approximated COG. The offset between the two COGs of each frame has been calculated, and an average offset has been taken over the frames under the four situations. The average offset is found to be 0.49. After adding the average

Fig. 7. Concept of the approximated COG with sub-pixel resolution.


offset to the approximated results, there is a close match between the results of the two approaches, as illustrated in Fig. 8. The mean square error (mse) between the results within a 25-frame set can be calculated as

mse = (1/25) Σ_{n=1}^{25} (x_n − y_n)²,  (6)

where x_n denotes the original COG of the nth frame and y_n denotes the approximated COG of the nth frame. The average mse over the different frame sets under the different situations is presented in Table 4. The errors are so small that they can be deemed negligible. Moreover, the approximated COG method can be combined with calibrations and look-up tables to map the approximated COG to actual distances.
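The whole approximated COG computation of Eqs. (2)–(5) can be reproduced in a few lines. This Python sketch is illustrative only — the hardware works with shifts and comparisons rather than the divisions used here for clarity — and it assumes a window of at least two pixels with the balance point past the first pixel:

```python
def approx_cog(window_positions, window_values, subdiv=8):
    """Multiplication-free approximated COG via a cumulative
    distribution function, with 1/`subdiv` sub-pixel resolution."""
    # Build the CDF g(x) over the window, Eq. (2).
    cdf, total = [], 0
    for v in window_values:
        total += v
        cdf.append(total)

    g_mid = total / 2                        # balance point, Eq. (3)
    # Whole-pixel part: last position whose CDF is still below g_mid
    # (assumes the balance point lies past the first window pixel).
    k = max(i for i, g in enumerate(cdf) if g < g_mid)
    d_mid = g_mid - cdf[k]                   # Eq. (4)

    # Sub-pixel part: which of the `subdiv` equal sub-intervals of the
    # CDF step between positions k and k+1 does d_mid fall into?
    step = cdf[k + 1] - cdf[k]               # equals the pixel value P_{k+1}
    sub = max(j for j in range(subdiv) if d_mid >= j * step / subdiv)
    return window_positions[k] + sub / subdiv

# Sample frame of Tables 2 and 3:
pos, vals = [232, 233, 234, 235], [159, 249, 255, 228]
print(approx_cog(pos, vals))                 # → 233.125, as in Eq. (5)
```

Adding the reported average offset of 0.49 gives 233.615, close to the exact windowed COG of about 233.62 for this frame.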

Fig. 8. Comparison between original COG and approximated COG with offset.

Table 4
Mean square errors between the results of the original and approximated COG methods

Situation          Mean square error
Stationary         0.0001
High tension       0.0027
Medium tension     0.0044
Low tension        0.0044


4.2. Deflection level detection algorithm

The movement patterns of the saw blade obtained by the approximated COG method can be used to compute the deflection level. They are first plotted as graphs so that potential methods can be investigated. Results obtained with a down-sampling factor of 30 are used, since they give the clearest movement patterns. Examples from each of the situations are shown in Fig. 9(a)–(d). Since no similar algorithm for monitoring saw blade deflection exists, several new methods have been tested for determining the deflection level from the movement patterns. The overall peak-to-peak amplitude method was selected for its reliability, its real-time performance and its low implementation cost. The current deflection level is determined by finding the maximum and minimum values of the COG over an analysis period and calculating the difference between them: the larger the difference, the larger the deflection. The analysis period can be neither too short nor too long. It has to be long enough to include more than one cycle of the vibrations, otherwise it will not indicate the true blade

Fig. 9. Saw blade movement patterns collected with a down-sampling factor of 30 under (a) stationary, (b) high tension, (c) medium tension and (d) low tension situations.


deflection. On the other hand, if the period is too long, the current result will be affected by earlier samples that are uncorrelated with the present ones. From data analysis, it has been found that a period corresponding to two cycles of low tension vibration is suitable for determining the deflection level reliably. As can be seen from Fig. 9(d), an analysis period covering two cycles of low tension vibration is equivalent to the time required to take approximately 16 samples with a down-sampling factor of 30. In practical situations, however, the maximum measurement rate of 7000 frames/s is preferred over the lower rates obtained by down-sampling; with a higher frame rate, more samples are captured in the same period of time. The analysis period (p) can be calculated using Eq. (7):

p = (1/7000) × dsf × (s − 1),  (7)

where 7000 is the maximum measurement rate of the sensor in frames/s, dsf denotes the down-sampling factor and s denotes the number of samples. In the case dsf = 30, s = 16, this gives

p = (1/7000) × 30 × (16 − 1).  (8)

Keeping the analysis period constant, substituting dsf = 1 and rearranging the equation, the number of samples required at a measurement rate of 7 kHz is

(1/7000) × 1 × (s − 1) = (1/7000) × 30 × (16 − 1),
s = 30 × (16 − 1) + 1 = 451.  (9)

This implies that the current deflection level can be determined from the overall peak-to-peak amplitude of the past 451 samples. This method has been applied to the results from the approximated COG method. The overall peak-to-peak amplitudes, in pixels, are summarised in Table 5. To distinguish among the deflection levels, a threshold should be set between the maximum of one level and the minimum of the next level above. For example, to distinguish between medium tension and low tension, the threshold should be set to 1.750, so that an amplitude below 1.750 is classified as medium tension, while an amplitude above it is classified as low tension. The thresholds for the other tension levels are set in a similar manner and are also listed in Table 5.

Table 5
Overall peak-to-peak amplitudes under different situations

Situation         Deflection level   Maximum   Average   Minimum   Resulting threshold (pixels)
Stationary        Stationary         0         0         0         0
High tension      Small              0.750     0.625     0.375     0.375
Medium tension    Medium             1.750     1.269     0.750     0.750
Low tension       Large              3.500     2.510     1.750     1.750
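The thresholding of Table 5 amounts to a three-comparison classifier over the peak-to-peak amplitude of the last 451 COG samples. A Python sketch (a hypothetical helper for illustration, not the VHDL module):

```python
def deflection_level(cogs, thresholds=(0.375, 0.750, 1.750)):
    """Overall peak-to-peak classification (thresholds in pixels, Table 5).

    `cogs` is a sequence of COG positions; the analysis window is the
    last 451 samples, i.e. two low-tension vibration cycles at
    7000 frames/s.
    """
    window = cogs[-451:]
    amplitude = max(window) - min(window)
    levels = ("Stationary", "Small", "Medium", "Large")
    # Count how many thresholds the amplitude exceeds.
    level = sum(amplitude > t for t in thresholds)
    return levels[level], amplitude
```

An amplitude of 1.0 pixel, for example, exceeds 0.375 and 0.750 but not 1.750, so it is classified as a medium deflection.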


5. Hardware implementation of the algorithms

5.1. System overview

The algorithm has been implemented in an FPGA as the platform for the saw blade monitoring system. The system set-up is the same as described in Section 3, except that the connection to the PC is no longer required. The deflection levels are presented as a set of signals, which can be used to drive alarm LEDs or be fed directly to a feedback control system that adjusts the wood feeding speed or the saw blade tension. The entire algorithm has been implemented in a single Xilinx Spartan-IIE XC2S300E FPGA device, using the Very high speed integrated circuit Hardware Description Language (VHDL). Implementation, simulation and synthesis of the design are done in the Xilinx Foundation ISE software package. The algorithm is made up of three modules. The clock generation module generates triggering and clock signals for the sensor and the ADC. It also handles the data input from the ADC, storing incoming pixels in a 256 × 8-bit Block RAM. When it detects a new frame, it triggers the second module, the position tracking module described in Section 4. After the COG of the current frame has been computed, the result is sent to the deflection level module for deflection level determination. Although the three modules are interdependent, they run in parallel and the results are computed in real time. The resource usage of all three modules is very low, below 20% of the available resources. The implementation consists mainly of low-complexity operations such as additions, subtractions, comparisons and shifts, which accounts for the low resource usage. The overall structure of the FPGA implementation is illustrated in Fig. 10. The following sections describe the implementation of the sub-pixel position tracking algorithm and the deflection level detection algorithm in more detail.

5.2. Position tracking module

The position tracking module has been implemented as an eight-state finite state machine (FSM). The algorithm flow of the FSM is depicted in Fig. 11. During state 0, the module is idle and waits for the arrival of the next pixel. When a pixel arrives, the module determines whether it

Fig. 10. Overall structure of FPGA implementation.


Fig. 11. Algorithm flow chart of the position tracking module FSM.

is the first pixel in a frame. If so, the pixel value and its position are stored as a temporary peak. If it is not the first pixel, it is compared with the temporary peak, and if its value is larger, it becomes the new temporary peak. The arrival of a pixel does not necessarily take place in state 0, but can occur in any of the subsequent processing states. Whenever the temporary peak is updated, the module breaks into state 1; otherwise, the module stays in its current state. State 1 is an initialisation state during which the counters and window registers are initialised for the subsequent states. After that, the module moves into state 2. Pixels preceding the temporary peak are read from the RAM and compared with the threshold, one pixel per system clock cycle, until a pixel below the threshold is reached. The threshold is half of the current temporary peak and could be obtained by right-shifting the peak by 1 bit. However, to avoid truncating the least significant bit (LSB) by right-shifting and thereby losing precision, instead of dividing the peak by 2, the pixel to be compared is multiplied by 2, by concatenating a '0' bit at the LSB, and then compared with the peak. Once a pixel that is


below the threshold or the start of the RAM is reached, the starting pixel position of the window is recorded and state 3 begins. State 3 begins before the pixel after the temporary peak arrives; its function is therefore to wait for the arrival of the subsequent pixels. The same comparison method as in state 2 is applied to the incoming pixels until the end of the COG window is detected. If the window contains only one pixel, that pixel is directly taken as the new COG and the next state is state 0. Otherwise, state 4 is executed. In state 4, the pixels in the window are read from the RAM and fed to an accumulator at a rate of one pixel per cycle. The cumulative value output from the accumulator on each clock cycle is written into the window registers, forming the CDF. The window registers are a set of eight registers, each 11 bits wide, which is enough to hold the cumulative values of eight 8-bit pixels. When the CDF has been calculated, the pixel with the cumulative value just below half of the total is searched for in state 5. Again, this is done by multiplying the cumulative values by two instead of halving the total value. The position of this pixel is taken as the whole-pixel part of the COG, while the sub-pixel part is yet to be determined. This is done by first finding the difference between this pixel's cumulative value and half of the total, as described in Section 4. At the same time, the eight sub-pixel intervals are computed; these operations are done in parallel in one clock cycle in state 6. Finally, in state 7, the COG with sub-pixel resolution is determined by matching the difference to one of the sub-intervals computed in state 6. This requires only one cycle, and the next state is state 0. When a new COG is computed, it overwrites the old one. The computation of the COG occurs in parallel with the collection of data from the sensor.
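The precision-preserving comparison used in states 2 and 3 is worth spelling out: rather than right-shifting the peak, which drops the LSB and changes the threshold from peak/2 to ⌊peak/2⌋, the candidate pixel is doubled. A minimal sketch (illustrative only):

```python
def below_half_peak(pixel, peak):
    """True if pixel < peak/2, without a lossy right shift.

    Doubling the pixel (left shift by 1, i.e. a '0' concatenated at the
    LSB in hardware) keeps the comparison exact in integer arithmetic.
    """
    return (pixel << 1) < peak

# With peak = 255 the true threshold is 127.5. A truncated peak >> 1 = 127
# would misclassify pixel 127 (127 < 127 is False although 127 < 127.5).
print(below_half_peak(127, 255), below_half_peak(128, 255))  # → True False
```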
After the arrival of the last pixel and before the arrival of the first pixel of the next frame, the final result is generated and passed on to the deflection level module for deflection level determination. Each COG result is made up of an 8-bit whole-pixel value and a 3-bit sub-pixel value, which combine to form an 11-bit datum.

5.3. Deflection level module

The deflection level module applies the overall peak-to-peak method described in Section 4. A block diagram is illustrated in Fig. 12. The method requires the COG samples to cover at least two vibration cycles at low tension, which corresponds to 451 samples in real time. Therefore a Block RAM capable of buffering 451 × 11-bit data is used. Each time a new COG is passed from the position tracking module, it is stored in the 451 × 11-bit RAM; if the RAM is full, it overwrites the oldest record. The new value is also compared with the maximum and minimum registers, which store the maximum and minimum COGs currently in the buffer; they are updated whenever a new maximum or minimum is detected. There is a chance that the current maximum or minimum will be overwritten by the next COG, meaning that it expires within the specified period. In that case it cannot be used for comparison with the incoming COG, and a new value must be found. For this reason, during data collection from the sensor, the maximum and minimum of all the data in the RAM, except the entry about to be overwritten by the next incoming COG, are searched for. The new COG is therefore always compared with a valid maximum and minimum. In parallel with these processes, the difference between the maximum and minimum is computed and compared with the thresholds to decide the current deflection level. The thresholds are


Fig. 12. Block diagram of deflection level module.

determined as described in Section 4 and listed in Table 5; note that the values are in pixels. The deflection level is computed in one clock cycle upon the arrival of a new COG, and a 4-bit signal indicating one of the four deflection levels is then output.
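The buffer bookkeeping of the deflection level module can be modelled as a sliding window over the last 451 COGs. In this Python sketch (illustrative only: the hardware keeps running maximum/minimum registers and rescans the RAM during data collection so that expired extremes are replaced, whereas the sketch simply recomputes the extremes over the buffer):

```python
class DeflectionTracker:
    """Sliding-window peak-to-peak amplitude over the last `size` COGs."""

    def __init__(self, size=451):
        self.size = size
        self.buf = []
        self.idx = 0                   # next slot to overwrite when full

    def update(self, cog):
        """Store a new COG and return the current peak-to-peak amplitude."""
        if len(self.buf) < self.size:
            self.buf.append(cog)
        else:
            self.buf[self.idx] = cog   # overwrite the oldest record
            self.idx = (self.idx + 1) % self.size
        # Valid maximum and minimum of everything currently buffered.
        return max(self.buf) - min(self.buf)
```

The returned amplitude would then be compared against the thresholds of Table 5 to select one of the four deflection levels.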

6. Implementation results

6.1. System accuracy and speed

To verify the accuracy of the implementation, it has been tested with the data collected from the experiments. It produces the same results as those computed in software, in both the position tracking module and the deflection level module. As the data were collected from real-life situations, this shows that the FPGA implementation is capable of producing equally accurate results. Complicated and time-consuming operations such as multiplications and divisions have been excluded from the implemented system, and the operations of the modules have been broken down into states. These measures greatly reduce the datapath delay, so the system can achieve a clock speed as high as 64 MHz, far faster than the 2 MHz clock speed of the sensor. On top of this, the computations of the COG and the deflection level are executed in parallel with the collection of data, so the results can be generated almost immediately after the arrival of the whole frame. The amount of time required to compute the COG depends on the length of the window and the position of the real peak. A larger number of entries in a window implies that more clock cycles are required to search for the starting and ending points of that window and to compute the CDF. If the real peak occurs well before the arrival of the last pixel, the COG can be computed in

ARTICLE IN PRESS J.C.K. Lai et al. / INTEGRATION, the VLSI journal 39 (2006) 291–309

307

parallel with the collection of the rest of the pixels. The worst case scenario will be that the last pixel being the real peak while the preceding seven pixels are all included in the window. In that case, parallelism cannot take place and the result generation will be delayed. Nonetheless, with the clock speed of 64 MHz, the delay time is still negligible. Moreover, this situation can be easily avoided by controlling the measuring range so that the real peak always occurs well before the arrival of the last pixel. Taking the extra cycle for deflection level determination into account, the results can definitely be generated before the arrival of the first pixel of the next frame. Therefore, a real-time performance at 7000 frames/s can be achieved. 6.2. Discussion The current measurement rate of the system is 7000 frames/s. This is entirely limited by the clock frequency of the CCD sensor, rather than the data processing load. The CCD line sensor could be replaced by a sensor with a much higher clock speed. Hence the measurement rate could be as fast as an analogue sensor. An analogue sensor has a typical frame rate of up to 100 kHz [13]. Furthermore the low resource usage implies that a cheaper FPGA device with less resource can be used, so that the cost of the system can be reduced. As an alternative, the remaining resources can be used for either a two-dimensional CCD so that multiple rows of data can be processed at the same time, or implementing the feedback control algorithm for the saw blade tension. The algorithm described in this paper is not confined to the application of saw blade deflection monitoring. It can be applied to other applications that involve machine vibrations and the high speed algorithm of the position tracking module can be applied on other applications that involve measuring distance with a CCD-type laser triangulation sensor at high speed.
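To make the data flow of the position tracking computation concrete, the following Python model sketches a division-free sub-pixel peak locator in the same spirit as the approximated COG method. It is an illustration under our own assumptions, not the authors' Section 4 algorithm: it locates the half-area point of the intensity profile, i.e. the position where the CDF reaches half the total, using only additions and comparisons.

```python
def approx_cog(pixels, subpix=8):
    """Division-free sub-pixel peak locator (illustrative model only).

    Positions are measured at pixel boundaries, so a symmetric peak
    centred on pixel k returns k + 0.5 pixels. The result is an integer
    in 1/8-pixel units. Assumes sum(pixels) > 0.
    """
    # Build the CDF of the intensity profile using additions only.
    cdf, s = [], 0
    for p in pixels:
        s += p
        cdf.append(s)
    total = s
    # Whole-pixel part: first index whose CDF reaches half the total.
    k = next(i for i, c in enumerate(cdf) if 2 * c >= total)
    below = cdf[k - 1] if k > 0 else 0
    # Sub-pixel part in 1/8-pixel steps, found purely by comparison:
    # smallest f such that below + f * pixels[k] / 8 >= total / 2.
    frac = next(f for f in range(subpix + 1)
                if 2 * (below * subpix + f * pixels[k]) >= total * subpix)
    return k * subpix + frac
```

Because both loops consume one pixel (or one 1/8-pixel step) per iteration, a hardware version can overlap the CDF accumulation with pixel readout, which mirrors the parallelism argument made above.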

7. Conclusions

An algorithm has been developed for saw blade deflection monitoring with a CCD-type laser triangulation sensor. The algorithm is made up of three modules. The first module generates the signals required by the sensor and an ADC, and handles the measurement data input from the sensor. The second module tracks the position and movement patterns of the saw blade. The results from this position tracking module are passed on to the deflection level module to determine the deflection level.

The algorithm has been successfully implemented in a single FPGA chip (Xilinx Spartan-IIE XC2S300E) using VHDL. The position tracking module is implemented using the approximated COG method, which provides a sub-pixel resolution of 1/8 pixel. The computed COG is then used to determine the deflection level using the overall peak-to-peak method in the deflection level module; the result is output as a 4-bit signal indicating one of four predefined deflection levels. Thanks to the use of low-complexity operations, the system clock frequency of the implementation can be as high as 64 MHz and the overall FPGA resource usage is below 20%. The measurement rate is 7000 frames/s at the sensor's maximum clock frequency of 2 MHz. We conclude from the experiments that the proposed algorithm is effective for real-time, low-cost, sub-pixel measurement of saw blade deflection.
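The quoted rates can be sanity-checked with simple arithmetic. The constants below are the figures stated in the paper; the interpretation of the per-frame budgets is our own back-of-the-envelope reasoning.

```python
SENSOR_CLOCK_HZ = 2_000_000    # sensor pixel clock (quoted in the paper)
FRAME_RATE_FPS = 7000          # measurement rate (quoted in the paper)
SYSTEM_CLOCK_HZ = 64_000_000   # achieved FPGA clock (quoted in the paper)

# Sensor pixel clocks available per frame period: about 285, enough to
# read out one line of pixels plus transfer overhead per frame.
pixel_clocks_per_frame = SENSOR_CLOCK_HZ // FRAME_RATE_FPS

# FPGA clocks available per frame period: about 9142, so the single
# extra cycle for deflection level determination is indeed negligible.
system_clocks_per_frame = SYSTEM_CLOCK_HZ // FRAME_RATE_FPS

print(pixel_clocks_per_frame, system_clocks_per_frame)
```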


References

[1] Sawblade displacement measuring system (Homepage of Forest Research) (2002, 04 April, last update). Available online: http://www.forestresearch.co.nz (accessed 20 July 2004).
[2] M. Dumberger, Taking the pain out of laser triangulation, Sensors 19 (7) (2002).
[3] B. Bury, Proximity sensing for robots, in: IEE Colloquium on Robot Sensor, vol. 3 (1), 1991, p. 18.
[4] S. Hussmann, W. Kleuver, High precision triangulation sensor for residual shorts measurement of coiling materials, in: Selected Papers from the International Conference on Optics and Optoelectronics, Proc. SPIE 3729 (1998) 264-267.
[5] S. Hussmann, W. Kleuver, Coil identification algorithm inside a high precision triangulation sensor for residual shorts measurement of coiling materials, in: Sensors and Controls for Intelligent Machining II: Sensors and Applications, Proc. SPIE 3832 (1999) 146-153.
[6] W.D. Kennedy, The basics of triangulation sensors, Sensors 15 (5) (1998).
[7] Z. Salcic, A. Smailagic, Digital Systems Design and Prototyping: Using Field Programmable Logic and Hardware Description Languages, second ed., Kluwer Academic Publishers, Massachusetts, 2000.
[8] M. Businski, A. Levine, W.H. Stevenson, Performance characteristics of range sensors utilising optical triangulation, in: IEEE 1992 National Aerospace and Electronics Conference, 1992, pp. 1230-1236.
[9] M.J. Beesley, Lasers and Their Applications, Taylor & Francis Ltd, London, 1971.
[10] J. Wu, J.S. Smith, J. Lucas, Weld bead placement system for multipass welding, in: IEE Proceedings - Science, Measurement and Technology, vol. 143 (2), 1996.
[11] S. Hussmann, Schnelle 3D-Objektvermessung mittels PMD/CMOS-Kombizeilensensor und Signalkompressions-Hardware, Ph.D. Thesis, Center for Sensor Systems (ZESS), University of Siegen, Germany, online publication (http://www.ub.uni-siegen.de/epub/diss/hussmann.htm), 2000.
[12] K. Haug, G. Pritschow, Robust laser-stripe sensor for automated weld-seam-tracking in the shipbuilding industry, in: Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society, vol. 2, 1998, pp. 1236-1241.
[13] C. Georgiadis, W. Kleuver, High speed and precise inspection on heavy accessible movable components, The International Society for Optical Engineering, SPIE 3832 (1999) 122-130.

Joanna Cheuk Kuen Lai graduated with an honours degree from the Department of Electrical and Computer Engineering at the University of Auckland in 2003. She received her M.E. degree with First Class Honours from the same university in 2005. She is currently working as an R&D Engineer at Fisher & Paykel Healthcare, a medical device development company in New Zealand. Her research interests are in signal processing and embedded systems.

Waleed H. Abdulla (B.Sc. (EENG), M.Sc. (EENG), Ph.D.) holds a Ph.D. degree from the University of Otago, Dunedin, New Zealand, where he was awarded a University of Otago Scholarship for three years as well as a bridging grant. Since 2002 he has been working as a Senior Lecturer in the Department of Electrical and Computer Engineering at the University of Auckland. He was a visiting scholar at Siena University, Italy, in 2004, and has collaborated with Essex University (UK), the IDIAP Research Centre (Switzerland) and Guilin University (China). He has more than 40 publications, including a patent and a book, has supervised more than 20 postgraduate students, and has received many awards and funded projects. His main research areas are: generic algorithm development, speech signal processing, speech recognition, speaker recognition, speaker localisation, microphone array modelling, speech enhancement and noise cancellation, statistical modelling, hidden Markov modelling, pattern recognition, human biometrics, EEG signal analysis and modelling, time-frequency analysis, and neural network applications. He is a member of ISCA and IEEE.


Stephan Hussmann received his M.E. and Ph.D. from the School of Electrical Engineering and Information Technology at the University of Siegen, Germany, in 1995 and 2000, respectively. From 2001 to 2003 he worked as a lecturer in computer systems engineering (CSE) in the Department of Electrical and Computer Engineering at the University of Auckland, New Zealand. Since the end of 2004 he has been a full-time Professor at the Westcoast University of Applied Sciences (FHW) in Germany, in the area of microprocessor technology and electronic systems. His research interests include wireless optical sensors for the industrial environment, low-cost multi-sensor system design, real-time image processing, embedded systems design, and engineering education. He has consulted widely for industry and has published over 37 refereed journal and conference papers in these research areas.