Computers in Industry 30 (1996) 193-209
Measurement of machine performance degradation using a neural network model

Jay Lee *
National Science Foundation, 4201 Wilson Blvd., Room 585, Arlington, VA 22230, USA
Abstract

Machines degrade as a result of aging and wear, which decreases performance reliability and increases the potential for faults and failures. The impact of machine faults and failures on factory productivity is an important concern for manufacturing industries. Economic impacts relating to machine availability and reliability, as well as corrective (reactive) maintenance costs, have prompted facilities and factories to improve their maintenance techniques and operations to monitor machine degradation and detect faults. This paper presents an innovative methodology that can change maintenance practice from that of reacting to breakdowns, to one of preventing breakdowns, thereby reducing maintenance costs and improving productivity. To analyze the machine behavior quantitatively, a pattern discrimination model (PDM) based on a cerebellar model articulation controller (CMAC) neural network was developed. A stepping motor and a PUMA 560 robot were used to study the feasibility of the developed technique. Experimental results have shown that the developed technique can analyze machine degradation quantitatively. This methodology could help operators set up machines for a given criterion, determine whether the machine is running correctly, and predict problems before they occur. As a result, maintenance hours could be used more effectively and productively.

Keywords: Intelligent manufacturing; Neural network
1. Introduction

Equipment reliability and maintenance drastically affect the three key elements of competitiveness: quality, cost, and product lead time. Well-maintained machines hold tolerances better, help reduce scrap and rework, and raise the consistency and quality of the part. They increase uptime and yields of good parts, thereby cutting total production costs, and can also shorten lead times by reducing downtime and the need for retooling [1]. The recent rush to embrace computer-integrated manufacturing (CIM) has fur-
* Email: [email protected]
ther increased the use of relatively unknown and untested technology. Today, many factories are still performing maintenance on equipment in a reactive, or breakdown, mode. This is due to the fact that traditional process monitoring systems can only detect machine or process faults when they occur. Reactive maintenance is expensive because of extensive unplanned downtime and damage to machinery [2-7]. In high-performance systems one often cannot tolerate significant degradation in performance during normal system operation. This has led many U.S. manufacturers to look to suppliers for smarter equipment that will reduce the need for strong technical support for equipment. To avoid machine downtime, some companies have used preventive and predictive
© 1996 Elsevier Science. All rights reserved.
Fig. 1. Proactive maintenance concept diagram (degree/probability of failure vs. time, showing the break-in period, the degradation behavior monitoring range, and the failure line; the proactive maintenance technique monitors the degradation of the machine and schedules maintenance work during the daily maintenance hours).
maintenance approaches either by using historical maintenance data or by sensing machine condition. However, there are no maintenance data available for many newly developed machines or advanced manufacturing systems. Adding sensors to monitor machine condition will also increase machine complexity and will require more highly trained personnel. Therefore, some manufacturers design a fault detection system that takes system redundancy into account, for example, a system containing several back-up subsystems [8-15]. Currently, no proven technique has been employed to monitor the degradation of a machine and its process in order to adaptively prevent faults or failure [1-15]. The authors believe that the best way of monitoring machine degradation is a proactive approach for machine maintenance. The goal of this research was to develop an innovative, proactive, maintenance implementation technique - an integrated learning, monitoring, and recognition methodology that can be used to analyze and monitor machine behavior quantitatively and provide an early warning of possible faults. By doing this, maintenance personnel can perform early diagnostics and
Fig. 2. System concept of machine degradation monitoring and fault detection using a neural network (table look-up) approach.
part replacement during the regular daily maintenance hours. Therefore, the mean-time-between-failure can, in principle, be extended indefinitely. Fig. 1 shows the concept of the proposed proactive maintenance implementation technique. Fig. 2 shows the system concept of the proposed methodology, an integrated learning, monitoring, and recognition technique, for a proactive maintenance system. The cerebellar model articulation controller (CMAC) is used to adaptively learn machine behavior, and a pattern discrimination model (PDM) based on the CMAC is used to monitor, recognize, and quantify behavioral changes. The PDM serves as a watchdog to monitor the behavior of the machine by using a confidence value that represents the conditional probability of degradation. A fault can then be detected by comparing the confidence value of the PDM output with a threshold confidence value.
2. Previous work
Automated fault detection and a diagnostic system based on neural networks have been implemented for jet and rocket engines by Dietz, Kiech, and Ali [16]. In this approach, fault diagnosis is performed by mapping or associating patterns of input data to patterns representing fault conditions. Greenwood and Stevenson from Netrologic, Inc. [17] have estimated tool wear using neural networks. Uhrig and Guo [18] have used the back-propagation neural network to identify transient operating conditions in nuclear power plants. Franklin, Sutton, and Anderson [11] from GTE have used the ADALINE and the NADALINE, a one-layer neural network, to monitor fluorescent bulb manufacturing. Ford Motors has developed automotive control system diagnostics using neural networks for rapid pattern classification of large data sets [10]. The data for training the systems were obtained by introducing twenty-six different faults into the power plants (shorted plug, open plug, plugged injector, broken manifold pressure sensors, etc.) and observing the engine operation at a fast idle. A backpropagation classification algorithm was used for fault recognition. However, real-time fault detection and diagnostics have not been implemented due to the limitation of real-time data acquisition and learning.
Netrologic Inc. has also developed a neural network-based fault detection and diagnostics tool for space systems at NASA. A feedforward neural network was used to recognize the signature of a valve in the ATLAS rocket (which contains some 150 valves of various types) [9]. The data were collected in the form of electric current transient signatures from ATLAS rocket valves installed on a pneumatic test bench. The data were then used to train and test two neural networks: one was trained to distinguish among signatures for three separate types of valves during a valve-open state change (rising current), and the other was trained to distinguish among signatures for the same types of valves during a valve-closed state change (falling current). The use of neural networks in monitoring machine degradation was first proposed by Lee and Kramer [19]. A CMAC neural network based reasoning tool was developed to analyze the consistency of the machine behavior based on the locations of weights in a lookup table. The degree of degradation was represented by a confidence value.
3. Methodology

The authors propose to use the CMAC as a real-time data acquisition and learning tool for the behavior of the actuators and sensors when a system is in operation. The unique part of this approach is the use of the CMAC to learn the good or normal patterns under different working conditions rather than to learn the bad or wrong patterns. In many cases, it is very difficult to create the bad and wrong conditions for a machine during training. The Marr-Albus cerebellum model [20-23] was originally a model of the cerebellum, the part of the brain that coordinates fine motor movements. Analytical processing within the cerebellum suggests that, while there are layers of neurons and large numbers of interconnects, there is extensive filtering through the use of negative feedback to confine incoming activity to only a small subset of the most active fibers. This suggests that the most active fibers might be suppressing all less active fibers, rather than distributing signals across all fibers. Further work on this model led to a mathematical formalism, namely, the cerebellar model articulation
controller, developed by Albus in 1972. The CMAC is essentially a clever adaptive table lookup technique for representing complex, nonlinear functions over multi-dimensional, discrete input spaces. It reduces the size of the lookup table through hash coding (many-into-few mapping); provides for response generalization and interpolation through a distributed, topographic representation of the inputs; and learns the appropriate nonlinear function through a supervised learning process that adjusts the content or weight of each address in the lookup table. The authors have developed a PC-based CMAC learning algorithm. Fig. 3 is a schematic representation of the CMAC. In the CMAC architecture, each discrete input S maps to many locations in the memory (one "location" contains a vector of the same dimension as P, and the function F(S) is computed by summing the values at all of the locations mapped to by S). The set S of input vectors is mapped onto the random table locations, which are then mapped by hashing onto a much smaller weight table. This mapping scheme has the advantage of providing automatic interpolation (generalization) between input states in the space S (similar inputs produce similar outputs).
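As an illustration of this hash-coded table lookup and its built-in generalization, a minimal Python sketch (not the author's implementation; the function name, tiling scheme, and parameter values here are illustrative assumptions) might map a discretized input vector onto a fixed number of active memory addresses as follows:

```python
import hashlib

def active_cells(state, n_active=8, table_size=1024):
    """Map a discrete input vector to n_active hashed memory addresses.

    Each of the n_active overlapping tilings shifts the quantization grid by
    one unit before hashing, so neighbouring states share most of their
    addresses -- the source of CMAC's automatic interpolation/generalization.
    """
    addresses = []
    for tiling in range(n_active):
        coords = tuple((s + tiling) // n_active for s in state)   # quantize on this tiling
        key = repr((tiling, coords)).encode()
        h = int(hashlib.md5(key).hexdigest(), 16)                 # hash coding (many-into-few)
        addresses.append(h % table_size)
    return addresses

# Similar inputs activate mostly the same cells; distant inputs do not.
near = set(active_cells((40, 12))) & set(active_cells((41, 12)))
far = set(active_cells((40, 12))) & set(active_cells((90, 55)))
print(len(near), len(far))   # typically something like 7 and 0
```

The overlap between the address sets of neighbouring inputs is what makes the output of an untrained but nearby state inherit most of the trained response.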
Briefly speaking, the CMAC can be described as a computing device which accepts an input vector S = (s1, s2, ..., sn) and produces an output vector P = F(S). To compute the output vector P for a given input state S, a pair of mappings is performed, namely f: S → A, g: A → P, where A is called the association cell vector, which is actually a large table of memory addresses. Given an input vector S = (s1, s2, ..., sn), the mapping function f points to some memory addresses (locations in the A table); these locations are among the association cells A and are referred to as the active association cells. The number of active association cells for any given input is a fixed parameter of the CMAC module selected by the user. Weight(s), or tabulated values that contribute to the output response, are attached to each association cell. One weight, if the output vector P is one-dimensional, and n weights if P is n-dimensional, are attached to each cell. Mapping function g uses the weights associated with the active association cells and generates an output response. Function g may be a simple summa-
Fig. 3. The CMAC architecture (input vector, association vector providing generalization, and hashed weight table providing memory reduction).
tion if the weights are attached to active association cells, or it may vary during the training and storage process. Any input vector is therefore a set of address pointers, and the output vector is, in its simplest form, the sum of the weights attached to these address pointers. Thus each element of the output P = (p1, p2, ..., pn) is computed by a separate CMAC from the formula

p(k) = a1(k)·w1(k) + a2(k)·w2(k) + ... + an(k)·wn(k),

where A(k) = (a1(k), a2(k), ..., an(k)) is the association cell vector of the kth CMAC and W(k) = (w1(k), w2(k), ..., wn(k)) is the weight vector of the kth CMAC. The value of the output for a given input can be changed by changing the weights attached to the active association cells. A procedure for entering a function in CMAC is as follows:
1. Assume F is the function we want CMAC to compute. Then P = F(S) is the desired value of the output vector for each point in the input space.
2. Select a point S in input space where P is to be stored. Compute the current value of the function at that point: P* = F(S).
3. For every element in P = (p1, p2, ..., pn) and in P* = (p1*, p2*, ..., pn*): if |pi − pi*| < ei, where ei is an acceptable error, then do nothing; the desired value is already stored. However, if |pi − pi*| > ei, then add to every weight which contributed to pi* the quantity (pi − pi*)/N, where N is the number of weights which contributed to pi*.
In CMAC, the values of a set of inputs act as pointers to a table of random numbers. These numbers are hashed to form a set of addresses in a weight table. The values in the weight table pointed to by these addresses are summed to yield an output. Training consists of adjusting the values in the weight table based on the error between their present summation value and the desired output for the particular values of the input that resulted in these addresses:
wi(new) = wi(old) + m · (d − Σj wj) / n,

where
wi(old) = old value of weight at address location i,
d = desired output value for the present input vector,
n = number of addressed locations in the weight table,
wj = value of weight at address location j,
m = percentage of correction,
wi(new) = new value of weight at address location i.

Fig. 4. CMAC-pattern discrimination model (input vector S = (s1, s2, s3), random table, association vector, weight table, and parallel confidence table).
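A minimal sketch of how this output computation and weight-update rule could be coded (an illustration only, assuming the tile-coded hash mapping sketched above; the class name and parameter values are choices made here, not taken from the paper):

```python
import hashlib

class CMAC:
    """Minimal single-output CMAC: hashed table lookup plus delta-rule training."""

    def __init__(self, n_active=8, table_size=1024, correction=1.0):
        self.n_active = n_active        # number of active association cells
        self.table_size = table_size    # weight table size
        self.m = correction             # percentage of correction, 0 < m <= 1
        self.weights = [0.0] * table_size

    def _addresses(self, state):
        """Hash-code the input onto n_active overlapping tilings."""
        addrs = []
        for t in range(self.n_active):
            coords = tuple((s + t) // self.n_active for s in state)
            h = int(hashlib.md5(repr((t, coords)).encode()).hexdigest(), 16)
            addrs.append(h % self.table_size)
        return addrs

    def output(self, state):
        """p = sum of the weights attached to the active association cells."""
        return sum(self.weights[a] for a in self._addresses(state))

    def train(self, state, desired, tol=1e-3):
        """Apply w_i(new) = w_i(old) + m * (d - sum_j w_j) / n to each active cell."""
        addrs = self._addresses(state)
        error = desired - sum(self.weights[a] for a in addrs)
        if abs(error) > tol:            # only correct if outside the acceptable error
            delta = self.m * error / len(addrs)
            for a in addrs:
                self.weights[a] += delta
        return error

# Store one point of a function and read it back.
net = CMAC()
for _ in range(3):
    net.train((12, 7, 3), desired=100.0)
print(round(net.output((12, 7, 3)), 3))   # approaches 100.0
```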
4. Pattern discrimination model (PDM)

To perform pattern recognition and discrimination tasks, a pattern discrimination model, based on the confidence checking of the CMAC output, is developed.
Fig. 5. Flow chart of the training and testing procedure of CMAC-PDM (initialize and load the input file; scale the input, hash, and sum the weights; apply the learning rule and repeat training while the error exceeds the acceptable error; then test from a file or in real time and compute the performance/confidence value as n/N × 100, where n is the sum of locations in the weight table and N is the address table size).
In Fig. 4, a confidence table is used in parallel with the weight table. During CMAC training, the data stored in the weight table activate a "1" value in the corresponding locations of the confidence table. These stored memory locations represent the desired or normal behavior of the machine. During actual operation, identical behavior will map to the same memory locations in the weight table. A confidence value can be obtained by summing the total "1"s and dividing by the number N, which is the size of the address table. This yields a 100 percent confidence value, since the same number of "1"s were used for both training and operating conditions. If there is a behavioral change, the change will cause different mapping locations in the weight table. A "0" value will then be placed in the confidence table wherever there is a memory location change in the weight table. A confidence value can be obtained by summing the total "1"s and "0"s and dividing by the size of the address table. Typically, if the parameters are selected properly, the confidence value is 100% during the normal training and learning process. In case of input pattern changes, the data will be mapped in locations which are not exactly the same as the ones previously trained or learned. Therefore, a confidence
value is calculated to represent the level of change (or level of degradation) of the behavior. A high confidence value (%) indicates a high probability of a good pattern; a low confidence value represents a degraded or faulty pattern. The confidence value is calculated based on the following formula:

Confidence Value = 100 × (Repeatable Weight Count) / (Address Array Size).
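A minimal sketch of this confidence calculation (illustrative only; the simple tile-coded address mapping below stands in for the CMAC mapping described in Section 3, and all names and parameter values are assumptions):

```python
import hashlib

N_ACTIVE = 20      # address array size N (the paper examines sizes in this range)
TABLE_SIZE = 1024  # weight/confidence table size

def addresses(pattern):
    """Map a pattern onto N_ACTIVE weight-table addresses via tile coding and hashing."""
    addrs = []
    for t in range(N_ACTIVE):
        coords = tuple((int(10 * p) + t) // N_ACTIVE for p in pattern)
        h = int(hashlib.md5(repr((t, coords)).encode()).hexdigest(), 16)
        addrs.append(h % TABLE_SIZE)
    return addrs

def confidence(test_pattern, trained_addresses):
    """Confidence Value = 100 * repeatable weight count / address array size."""
    repeatable = sum(1 for a in addresses(test_pattern) if a in trained_addresses)
    return 100.0 * repeatable / N_ACTIVE

# "Train" on a good pattern: mark its addresses (the "1"s in the confidence table).
good = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
trained = set(addresses(good))

print(confidence(good, trained))                              # 100.0
print(confidence([1.0, 0.0, 0.9, 0.0, 1.0, 0.0], trained))    # slightly below 100
print(confidence([1.0, 0.0, 0.0, 0.0, 1.0, 0.0], trained))    # roughly 50
```

A mildly degraded value shifts only a few of the hashed addresses, while a hard fault shifts many of them, which is what gives the graded confidence behavior described above.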
For example, in Fig. 4 a good pattern is learned during the training. The data are mapped onto four locations in the weight table, and the confidence value is 100% (since 100 × (1 + 1 + 1 + 1)/4 = 100%). During operation, a fault pattern is observed by the CMAC, and data are mapped onto different locations in the weight table. If two locations are different, two "0"s will be placed in two of the "1" positions, and a 50% (100 × (1 + 1 + 0 + 0)/4 = 50%) confidence value is obtained. The confidence value presented here represents a conditional probability of behavioral degradation. If these values are monitored at incremental time intervals, then the rate of degradation can be calculated. The rate of change will suggest the urgency
of the maintenance. Fig. 5 is a flow chart of the computational sequence of the CMAC-PDM.

5. Experimentation

The new machine behavioral learning, degradation monitoring, and fault detection technique was tested and implemented in monitoring the position of a stepping motor encoder and degradation in the accuracy of a robot. To examine the developed CMAC-PDM and to investigate its capability for monitoring degradation and detecting faults, several test patterns were generated by setting the faults and degradations through different positions of the input encoder value. Seven different degradation test patterns were generated by adjusting the backlash screws of the robot arms and wrist. The experiments were conducted in the Robotics Laboratory at the University of Maryland-College Park. The following sections examine the capability and sensitivity of the developed technique in monitoring degradation and detecting faults.

Fig. 6. Experimental setup for encoder position monitoring and fault detection (stepping motor with optical encoder, HP power amplifier, opto-isolator, and PC).

Table 1
Data used for training: 14 inputs for CMAC training (the motor speed and a 13-value encoder position signal pattern) and the assigned CMAC desired output. The encoder position signal pattern is identical (alternating 1.0 and 0.0) for every motor speed.

Motor speed    CMAC desired output (assigned value)
3.5165         70000.000
4.0110         80000.000
4.5055         90000.000
5.0000         100000.000
5.4945         110000.000
6.0440         120000.000
6.4835         130000.000
6.9780         140000.000
7.5275         150000.000
8.0220         160000.000
8.5165         170000.000
9.0110         180000.000
9.5055         190000.000

Fig. 7. Convergence of CMAC training: error (RMS) vs. number of trainings for different address array sizes.
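Purely as an illustration of how training data such as Table 1 could drive the learning step, the following sketch assumes the minimal CMAC class sketched in Section 3 has been saved as a module named cmac_sketch (a hypothetical file name chosen here); the speeds, encoder pattern, and assigned outputs follow Table 1, while the scaling and parameter values are illustrative:

```python
# Hypothetical module: assumes the CMAC class sketched earlier was saved as cmac_sketch.py.
from cmac_sketch import CMAC

speeds = [3.5165, 4.0110, 4.5055, 5.0000, 5.4945, 6.0440, 6.4835,
          6.9780, 7.5275, 8.0220, 8.5165, 9.0110, 9.5055]
pattern = [1.0, 0.0] * 6 + [1.0]                       # the 13-value encoder signal pattern
targets = [70000.0 + 10000.0 * i for i in range(13)]   # assigned CMAC outputs from Table 1

net = CMAC(n_active=30, table_size=1024)               # illustrative parameters
for epoch in range(50):                                # repeat until the training error converges
    for speed, target in zip(speeds, targets):
        state = [int(100 * speed)] + [int(10 * p) for p in pattern]
        net.train(state, target)

# A normal pattern approximately reproduces its assigned output; a faulted or degraded
# pattern maps to different weight-table locations and is caught by the PDM confidence check.
state = [int(100 * 5.0000)] + [int(10 * p) for p in pattern]
print(round(net.output(state), 1))                     # close to 100000.0
```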
5.1. Experiment 1: Encoder position monitoring and fault detection
The encoder is a’ne of the key elements in a motion control system to provide position feedback for motor or positioning tables. For high-precision motion control systems, such as chip inspection and electronics assembly, the assurance of encoder feedback is critical. Tests, were conducted to investigate the feasibility of monitoring degradation and faults by using the developed technique. 5.1.1. Experiment setup In Fig. 6, a stepping motor with an encoder was powered through an HP power amplifier and controlled by a PC. The purpose of this experiment was to examine the developed technique and investigate the relationship among several CMAC parameters, such as address array size, confidence value (performance indicator), and. convergence. There are 13 samples and 14 inputs (one motor speed and 13 position patterns) for CMAC training. Table 1 shows that the encoder has the same input
Fig. 9. Robot position accuracy degradation measurement using laser interferometer.
patterns for 13 different rotating speeds. The encoder output pattern should be the same at different rotating speeds. This set of data was used to train the CMAC. Fig. 7 shows the convergence curve for different address array sizes. The result shows that a smaller address array size has a faster convergence rate (a smaller number of trainings). Fig. 8 shows the detailed convergence curve. In this graph,
Fig. 8. Detailed training convergence: error vs. number of trainings for different address array sizes.
errors represent the deviation from the trained CMAC output values. Once the behavior of the encoder is trained, the CMAC learns the behavioral patterns and maps the patterns in the weight table. During training, the PDM always has the correct pattern and thus the confidence value is 100%. During the testing, however, if there is a fault or degradation, the pattern may be changed. The confidence value would also be changed based on the degree of the degradation or fault conditions.

Table 2
Testing data for fault detection and degradation recognition (data sets A-D). Forty test patterns were used, all at a motor speed of 5.0000: patterns 1-10 are fault patterns in which selected encoder signal values that should be 1.0 are set to 0.0, and patterns 11-40 are degradation patterns in which the signal values are reduced in steps (1.0, 0.9, 0.8, ..., 0.1) from their nominal values.

5.1.2. Test results
To examine the developed CMAC-PDM for degradation and fault detection, several fault conditions were created by setting the fault through differ-
ent positions of the input encoder value. Table 2 shows the test fault patterns and degradation patterns. In this table, a "0" represents a fault. Values between "1" and "0" represent degraded conditions. Table 3 shows the confidence values for each fault and degradation condition when different address array sizes were used in the CMAC. It shows that address array sizes 20 and 30 can better detect fault and degradation by providing lower confidence values.

5.2. Experiment 2: Measurement of robot degradation
Table 3
Testing results: confidence values (%) for test patterns 1-40 of Table 2 at different address table sizes.

Address table size 20: 0, 5, 5, 5, 0, 5, 0, 10, 0, 0, 100, 85, 80, 65, 60, 50, 40, 30, 20, 10, 85, 80, 65, 60, 50, 40, 30, 20, 10, 0, 90, 75, 70, 55, 50, 40, 30, 20, 10, 0
Address table size 30: 36.67, 33.33, 33.33, 36.67, 36.67, 36.67, 40, 36.67, 33.33, 36.67, 100, 90, 86.67, 76.67, 73.33, 66.67, 60, 53.33, 46.67, 40, 90, 86.67, 76.67, 73.33, 70, 66.67, 60, 53.33, 46.67, 40, 93.33, 83.33, 80, 70, 66.67, 60, 53.33, 46.67, 40, 33.33
Address table size 40: 55, 52.5, 50, 50, 55, 52.5, 50, 57.5, 50, 50, 100, 92.5, 90, 82.5, 80, 75, 70, 65, 60, 55, 92.5, 90, 82.5, 80, 77.5, 72.5, 67.5, 62.5, 57.5, 52.5, 95, 87.5, 85, 77.5, 75, 70, 65, 60, 55, 50
Address table size 50: 70, 62, 62, 64, 60, 60, 62, 60, 62, 62, 100, 94, 92, 86, 84, 80, 76, 72, 70, 66, 94, 92, 86, 84, 80, 78, 76, 72, 70, 66, 96, 90, 88, 82, 80, 76, 72, 68, 66, 62
Address table size 60: 68.33, 68.33, 73.33, 68.33, 68.33, 71.67, 70, 66.67, 73.33, 68.33, 100, 95, 93.33, 88.33, 88.33, 86.67, 83.33, 80, 76.67, 73.33, 96.67, 95, 90, 88.33, 85, 81.67, 78.33, 75, 73.33, 71.67, 98.33, 93.33, 91.67, 86.67, 86.67, 85, 81.67, 78.33, 75, 70
Address table size 70: 78.57, 72.86, 71.43, 74.29, 74.29, 75.71, 75.71, 71.43, 72.86, 75.71, 100, 95.71, 94.29, 90, 88.57, 85.71, 82.86, 80, 77.14, 74.29, 97.14, 95.71, 91.43, 90, 87.14, 84.29, 81.43, 78.57, 75.71, 72.86, 97.14, 92.86, 91.43, 87.14, 85.71, 82.86, 80, 77.14, 74.29, 71.43
Address table size 80: 77.5, 75, 78.75, 78.75, 78.75, 75, 76.25, 77.5, 76.25, 77.5, 100, 96.25, 95, 91.25, 90, 87.5, 85, 82.5, 80, 77.5, 97.5, 96.25, 92.5, 91.25, 88.75, 86.25, 85, 82.5, 80, 77.5, 97.5, 93.75, 92.5, 88.75, 87.5, 85, 82.5, 80, 77.5, 75
Address table size 90: 82.22, 78.89, 77.78, 82.22, 78.89, 78.89, 80, 80, 81.11, 77.78, 100, 96.67, 95.56, 92.22, 92.22, 90, 87.78, 85.56, 83.33, 81.11, 96.67, 95.56, 92.22, 91.11, 88.89, 87.78, 85.56, 83.33, 81.11, 78.89, 98.89, 95.56, 94.44, 91.11, 91.11, 88.89, 86.67, 84.44, 82.22, 78.89
Address table size 100: 84, 83, 83, 82, 81, 83, 80, 83, 83, 81, 100, 97, 96, 93, 92, 90, 88, 86, 85, 83, 97, 96, 93, 92, 90, 88, 86, 84, 83, 81, 98, 95, 94, 91, 90, 88, 86, 84, 83, 81
Fig. 10. Training data sets for robot normal behavior: position accuracy and straightness over three travel distances, measured at each of five taught points, with assigned CMAC output values of 1000, 2000, 3000, 4000, and 5000.

Fig. 11. Testing data sets for different backlash adjustment conditions.
The developed model was tested and implemented in monitoring degradation in the accuracy of a robot (PUMA 560). Position accuracy, position repeatability, and path straightness are often evaluated and stated in statistical terms. This allows robot manufacturers and machine tool manufacturers to predict the positioning behavior of various moving elements to a stated confidence level, and provides the customer with an assurance of the machine's performance. Typically, poor position accuracy and path straightness are symptoms of degraded performance, which may be due to mechanical wear, backlash of the gear train, vibration, and thermal effects. In this experiment, the position accuracy and path straightness were used as evaluation criteria for measuring robot performance [24-26]. To examine the developed CMAC-PDM and to investigate its capability for analyzing and monitoring the performance degradation of a robot, seven different degradation test patterns were generated by adjusting the backlash screws of the robot arms and wrist. Fig. 9 shows the experimental setup. The HP 5518A (two-frequency He-Ne laser) interferometer was used to measure the robot performance (position accuracy and path straightness). To cause degrada-
tion conditions, the backlash screws were adjusted for the arms and the wrist of the robot. Once backlash was introduced, the robot path straightness and position accuracy were measured by the laser interferometer. Before adjusting the backlash of the robot arm, a set of path straightness and position accuracy data was generated to train the CMAC by programming the robot to follow the taught path pattern. Fig. 10 shows the five positions which the robot was taught. Three sets of data were taken for approaches from both the "+" and "-" directions at each position, and the average values are reported. There are 13 inputs (one for position number and 12 for position/straightness data) for CMAC training for each position. The CMAC output values (1000, 2000, 3000, 4000, and 5000) were assigned for each training set. Fig. 11 shows the convergence curve of training results for different address array sizes when a weight table size of 1024 was used. The result shows that a minimum in learning time occurs at an address array size of approximately 30. A degraded test pattern was generated by introducing different levels of backlash adjustment. Seven
Fig. 12. Number of trainings vs. error: convergence of CMAC training for different address array sizes (weight table held constant at 1024).
Fig. 13. Test set vs. confidence: sensitivity of CMAC on measuring machine degradation for different address array sizes (weight table held constant at 1024).
Fig. 14. Test number vs. confidence for different weight table sizes (address array size held constant at 20).
Fig. 15. Test number vs. confidence for different input resolutions (address array size = 30, weight table = 1024).
adjustments were made in this experiment, with each level of adjustment representing a different degree of degradation (see Fig. 12). These sets of patterns were used to test the CMAC-PDM and to investigate its capability in monitoring degradation. Fig. 13 shows the confidence plot for each level of adjustment. The graph indicates that an array size of 20 clearly shows the measured variation in the accuracy. For address array size 20, the test was repeated for different weight table sizes. Fig. 14 shows the confidence plot for these seven levels of degradation. The graph shows that very similar results were achieved for weight tables of size equal to or greater than 128. Fig. 15 shows that better discrimination can be achieved when a larger resolution table is used for the input pattern.
6. Conclusion and suggested future work This paper presents a concept for using neural networks to analyze and monitor machine degradation. Since degradation generally occurs before failures, monitoring the trend of machine degradation
allows the degraded behavior or faults to be corrected before they cause failures and machine breakdowns. To demonstrate the proposed concept, an integrated learning, monitoring, and pattern recognition methodology has been developed. A novel proactive implementation technique, namely, a pattern discrimination model (PDM) based on the cerebellar model articulation controller (CMAC), has been developed and refined. The feasibility of this technique was demonstrated through experimentation. In conclusion, the developed methodology has demonstrated a learning capability in providing an active and quantitative performance indicator that enables maintenance personnel to perform early fault diagnosis and effective maintenance. The new approach developed in this research has demonstrated a useful technique for achieving proactive maintenance. Future research is needed to broaden the applications. The use of both analog and digital data will provide a better understanding of correlated behavioral patterns of machines. Future work can expand the research focus to the system level by integrating multiple sensors and actuators with the CMAC-PDM. A comparative study of
CMAC-PDM, backpropagation neural networks, and probabilistic neural networks (PNN) may yield important information on the functional differences among these different neural network models.
Acknowledgements This work is supported by the National Science Foundation and the United States Postal Service.
References

[1] National Research Council, The Competitiveness Edge: Research Priorities for U.S. Manufacturing, National Academy Press, Washington, DC, 1990.
[2] H. Park, "Assessing machine performance", American Machinist, June 1992, pp. 39-42.
[3] D. Shainin and P. Shainin, "OK isn't enough", Quality Magazine, February 1990, pp. 19-22.
[4] E.P. Papadakis, "A computer-automated statistical process control method with timely response", Journal of Engineering Costs and Production Economics, Vol. 18, 1990, pp. 301-310.
[5] M. Craig, "Predicting and optimizing assembly variation", Quality Magazine, June 1991, pp. 16-18.
[6] "Equipment should improve through use", Editorial comments, American Machinist Report, September 1991, pp. 81-100.
[7] G. Mendelbaum and R. Mizuno, "Directions in maintenance", Maintenance Technology Magazine, June 1992, pp. 45-48.
[8] R.J. Meltzer, "Sensor reliability of MTBF", Sensor Magazine, January 1992.
[9] Netrologic, Inc., "Report on space transportation analysis and intelligent space system", NASA SBIR NAS9-17995, July 1990.
[10] K. Marko, "Automotive control system diagnostics using neural network for rapid pattern classification of large data sets", Proceedings of International Neural Net Society Meeting, Washington, DC, June 1989, pp. 13-16.
[11] J.A. Franklin, R.S. Sutton and C.W. Anderson, "Application of connectionist learning methods to manufacturing process monitoring", Proceedings of IEEE 3rd International Symposium on Intelligent Control, August 1988, pp. 709-712.
[12] "Maintenance in the 90's - A plan to support the USPS goal of automation and mechanization", USPS Report, Washington, DC-HQ, December 1990.
[13] E.F. Pardue, K.R. Piety and R. Moore, "Element of reliability-based machinery maintenance", Sound and Vibration, May 1992, pp. 14-20.
[14] J.C. Fitch, "Contaminant monitoring: The overlooked predictive maintenance", Maintenance Technology Magazine, June 1992, pp. 41-46.
[15] J. Lee, "Review of computer-aided predictive maintenance technology for machinery", USPS Technology Resource Department Technical Report, January 1991.
[16] W.E. Dietz, E.I. Kiech and M. Ali, "Jet and rocket engine fault diagnosis in real time", Journal of Neural Network Computing, Summer 1989, pp. 5-19.
[17] F. Stevenson and D. Greenwood, "Tool wear estimation using neural networks", Netrologic, Inc. Report, San Diego, CA, 1992.
[18] R.E. Uhrig and Z. Guo, "Use of neural networks to identify transient operating conditions in nuclear plants", Proceedings of SPIE, Vol. 1095, Application of Artificial Intelligence VII, 1989, pp. 851-856.
[19] J. Lee and B.M. Kramer, "In-process machine degradation monitoring and fault detection using a neural network approach", Doctoral Dissertation, Department of Mechanical Engineering, The George Washington University, September 1992.
[20] J.S. Albus, "A new approach to manipulator control: The cerebellar model articulation controller (CMAC)", Journal of Dynamic Systems, Measurement, and Control, September 1975, pp. 220-227.
[21] J.S. Albus, "Data storage in the cerebellar model articulation controller (CMAC)", Journal of Dynamic Systems, Measurement, and Control, September 1975, pp. 228-233.
[22] J.S. Albus, "A theory of cerebellar functions", Mathematical Biosciences, Vol. 10, 1971, pp. 25-61.
[23] J.S. Albus, "Theoretical and experimental aspects of a cerebellar model", Ph.D. Dissertation, University of Maryland, 1972.
[24] "MTTA standard for the position accuracy and repeatability of machining centers and associated numerically controlled machine tools", The Machine Tool Trade Association Report, London, U.K., 1980.
[25] "Definition and evaluation of accuracy and repeatability for numerically controlled machine tools", NMTBA Report, McLean, VA, August 1972.
[26] P. Albertson, "Verifying robot performance", Robotics Today, October 1983, pp. 33-36.
Jay Lee is program director of the Engineering Education and Centers Division and the Design, Manufacture, and Industrial Innovation Division at the National Science Foundation. Prior to joining NSF, he held several engineering and management positions at Fenn Manufacturing Co.-AMCA International, Robotics Vision Systems, Inc. (a company invested in by GM), Anorad Corp., and the Office of Advanced Technology of the U.S. Postal Service-HQ. Working in the broad field of manufacturing engineering, Jay Lee has been involved in research and engineering activities in the areas of
robotics, machine vision, machine tools, motion control, factory automation integration, neural networks, and manufacturing policy. His current research work is concentrated on maintenance science and manufacturing globalization strategy. In 1995, he conducted a 6-month research assignment, hosted by the Mechanical Engineering Laboratory of the Ministry of International Trade and Industry (MITI), to study the emerging manufacturing technologies and production practices in Japan under an STA Fellowship. During this period he visited 33 organizations and 12 universities. Jay Lee received his B.S. degree from Taiwan, an M.S. in ME from the University of Wisconsin-Madison, an M.S. in Industrial Management from the State University of New York at Stony Brook, and a D.Sc. in ME from the George Washington University. He is an adjunct professor at the Johns Hopkins Univer-
sity-Applied Physics Lab. He holds three U.S. patents and has over 50 publications (40 single-authored papers). He has authored and edited two books. He was the chairman of the Material Handling Engineering Division during 1993-1994, and currently is an executive committee member of the Manufacturing Engineering Division of the ASME. He received the SME Outstanding Young Manufacturing Engineer Award in 1992. Jay Lee is a frequently invited speaker. He has delivered over 100 technical speeches (including six keynote speeches) at seminars, workshops, short courses, and conferences in the U.S., Germany, Japan, Singapore, Korea, Taiwan, and China. In addition, he has appeared on the NTU satellite broadcasting networks in the Modern Manufacturing Technology Series.