Acquiring the constitutive relationship for a thermal viscoplastic material using an artificial neural network




Journal of Materials Processing Technology 62 (1996) 206-210

Liu Qingbin, Ji Zhong, Liu Mabao, Wu Shichun*

Department of Materials Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, People's Republic of China

Received 9 May 1995

* Corresponding author.

Industrial summary

In conventional constitutive theories, a mathematical model is formulated to represent the plastic behavior of a material. Once the model is set, the behavior of the material can only be expressed approximately by adjusting the parameters in the model. Under conditions of high strain rate and high temperature, to secure more accurate results the model has to be more complicated in its mathematical formulation, but then problems related to material-parameter identification, numerical stability, etc., arise. An artificial neural network may simulate a biological nervous system and is referred to as parallel distributed processing. It can not only make decisions based on incomplete and disorderly information, but can also generalize rules from the cases on which it was trained and apply these rules to new stimuli. In back-propagation neural networks, the information contained in the input is recoded into an internal representation by hidden units that perform the mapping from input to output; it has been proven mathematically that a three-layer network can map any function to any required accuracy. A neural network can therefore map the behavior of a thermal viscoplastic material directly. Using a neural network, it is not necessary to postulate a mathematical model and identify its parameters. In this paper, a four-layer back-propagation neural network is built to acquire the constitutive relationship of 12Cr2Ni4A. Temperature, effective strain and effective strain rate are used as the input vector of the neural network, the output of the neural network being the flow stress. After the network has been trained with experimental data, it can correctly reproduce the flow stress in the sampled data. Furthermore, when the network is presented with non-sampled data, it also predicts well. The results acquired from the neural network are very encouraging.

Keywords: Artificial neural networks; Thermal viscoplastic material

1. Introduction

The constitutive relationship of a material is a foundation of metal-forming theory and technology, and it is also a basic model for the computer control of metal forming. However, during the plastic deformation of a metal, the variation of the structure in the metal is very complex. In hot working, especially under conditions of high strain rate and high temperature, the constitutive model is a highly non-linear and complex mapping. It is quite difficult to find the constitutive model using a theoretical method, so that most researchers have to seek the constitutive relationship with the help of a large quantity of experimental data. From the time of Hooke to the present, these material models


for various behaviors have been developed in the same general way [1]: (i) a material is tested and its behavior observed; (ii) a mathematical model is postulated to explain the observed behavior and the material parameters are determined; (iii) this mathematical model is used to predict as-yet untested stress states and is checked against results from existing or new experiments; and (iv) the mathematical model is then modified to account for behavior observed but unexplained by the model. These models consist of mathematical rules and expressions. However, a model that is appropriate for one kind of material may not be appropriate for other kinds. In the process of bulk forming, the most popular constitutive model adopted is the following [2]:

$$\bar{\sigma} = Y_{T_0}\exp(Q/kT)\left[1 + \left(\dot{\bar{\varepsilon}}/\gamma\right)^{n}\right] \qquad (1)$$


where $Q$, $n$ and $\gamma$ are material parameters, $\dot{\bar{\varepsilon}}$ is the strain rate, $Y_{T_0}$ is the static yield stress at the reference temperature $T_0$, $T$ is the absolute temperature, and $k$ is the Boltzmann constant. For a bulk-forming process carried out at high strain rates and high temperatures, a more sophisticated model should be used to secure accurate results, which will certainly lead to a more complicated mathematical formulation, with problems related to the identification of the material parameters, numerical stability, etc., then arising. The mathematical formulation and the identification of its parameters therefore have an important effect on the constitutive relationship of a thermal viscoplastic material.

Artificial neural networks can simulate biological neural systems and are referred to as parallel distributed processing. They are made up of a large number of nodes which are called processing elements (PEs). A network made up of a large number of PEs has very complex behavior and is capable of representing various kinds of knowledge. The knowledge is stored in the connection weights of the network, which can achieve any non-linear mapping to any required degree of accuracy. Neural networks have been used widely in pattern recognition, self-adaptive control, natural language processing, etc., but applying neural networks to the metal-forming field is a fairly recent development [3-6]. The constitutive relationship of a thermal viscoplastic material can also be 'learned' by a neural network through adequate training from experimental data. With neural networks, it is not necessary to postulate a mathematical model and identify its parameters. In this paper, the authors use a four-layer network to map successfully the constitutive relationship of 12Cr2Ni4A, and check its performance. The network can correctly reproduce the expected values and predict well the flow stress for data on which it has not been trained. The results are very encouraging. The constitutive relationship acquired by the neural network can provide accurate data for bulk-forming processes.
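To make the role of these parameters concrete, the following minimal Python sketch evaluates Eq. (1) for an assumed parameter set; the function name and the numerical values of $Y_{T_0}$, $Q$, $n$ and $\gamma$ are illustrative placeholders, not fitted constants for 12Cr2Ni4A or any other material.

```python
import math

def flow_stress_eq1(strain_rate, T, Y_T0, Q, n, gamma, k=1.380649e-23):
    """Flow stress from the viscoplastic model of Eq. (1):
    sigma = Y_T0 * exp(Q / (k*T)) * [1 + (strain_rate / gamma) ** n].

    strain_rate : effective strain rate (1/s)
    T           : absolute temperature (K)
    Y_T0        : static yield stress at the reference temperature (MPa)
    Q, n, gamma : material parameters (placeholder values below)
    k           : Boltzmann constant (J/K)
    """
    return Y_T0 * math.exp(Q / (k * T)) * (1.0 + (strain_rate / gamma) ** n)

# Illustrative call only -- these parameter values are not fitted to any material:
sigma = flow_stress_eq1(strain_rate=10.0, T=1373.0, Y_T0=50.0, Q=1.0e-20, n=0.2, gamma=1.0)
```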

2. Artificial neural network

An artificial neural network [1,7] simulates biological nervous systems and is referred to as parallel distributed processing. The network consists of a number of computational units known as neurons, connected by a large number of communication links. These neurons have a pattern of connectivity amongst them, the knowledge of which can be represented by the strength of the connections. The pattern of the interconnections, known as the weights, is not fixed: instead, the weights can be modified based on experience. Hence, the system can learn from this experience and is therefore


intelligent. Neural networks are also computational methods, the system being able to calculate its output very quickly, just as for those using a mathematical formulation. A key characteristic of neural networks is their capability of self-organization or 'learning'. Unlike traditional sequential programming techniques, neural networks are trained with examples of the concepts they are to capture, and they then internally organize themselves to be able to reconstruct the presented examples. Other interesting and valuable characteristics are: (i) their ability to produce correct, or nearly correct, responses when presented with partially incorrect or incomplete stimuli; and (ii) their ability to generalize rules from the cases on which they are trained and apply these rules to new stimuli. Both of these latter characteristics stem from the fact that a neural network, through self-organization, develops an internal set of features that it uses to classify the stimuli presented to it and return the expected response.

2.1. Feed-forward neural networks

Feed-forward neural networks consist of an input layer, an output layer and hidden layers between them. The information contained in the input is recoded into an internal representation by the hidden units that perform the mapping from input to output. Each unit in this architecture can send its output only to the units on the next higher layer and receive its input only from the lower layer, no communication being permitted between the processing units within a layer. It has been proven mathematically that a three-layer network can map any function to any required accuracy. Fig. 1 shows a typical feed-forward neural network. The processing-element units in successive layers of the network are connected by weighted arcs. The output of each processing unit is a non-linear function of the sum of its inputs. The output function has a sigmoid shape.

Fig. 1. A typical feed-forward neural network (input units, hidden units and output units).


The behavior of the network does not depend critically on the details of the sigmoid function, but the explicit function used here is given by:

$$S_i = P[E_i] = \frac{1}{1 + \exp(-E_i)} \qquad (2)$$

where $S_i$ is the output of the $i$th unit and $E_i$ is its total input:

$$E_i = \sum_j W_{ij} S_j \qquad (3)$$

where $W_{ij}$ is the weight from the $j$th to the $i$th element. The weights can have positive or negative real values, representing an excitatory or an inhibitory influence. In addition to the weights connecting them, each unit also has a threshold, which can also vary.

2.2. Learning algorithm

Neural networks need to be trained in a learning process before they are applied. A back-propagation learning algorithm has been applied to the present problem. The back-propagation technique uses the difference between the desired output and the estimated value to adjust the weights and thresholds. The back-propagation learning algorithm is as follows:

1. Initialization: well-distributed random numbers between 0 and 1 are given to all of the weights and thresholds.

2. Enter a group of inputs $(S_1^{(0)}, S_2^{(0)}, \ldots, S_i^{(0)}, \ldots, S_{n_0}^{(0)})$ and the correct patterns $(S_1^{*}, S_2^{*}, \ldots, S_i^{*}, \ldots, S_{n_N}^{*})$, where $S_i^{(n)}$ is the $i$th unit on the $n$th layer, the input layer is designated the 0th layer, and $n_N$ is the number of units on the $N$th (output) layer.

3. Compute the output of the network $(S_1^{(N)}, S_2^{(N)}, \ldots, S_{n_N}^{(N)})$ for a given input. All the units on successive layers are updated:

$$S_i^{(n)} = P(E_i^{(n)}) = \frac{1}{1 + \exp(-E_i^{(n)})} \qquad (4)$$

$$E_i^{(n)} = \sum_{j=1}^{n_{n-1}} W_{ij}^{(n)} S_j^{(n-1)} - \theta_i^{(n)} \qquad (5)$$

where $\theta_i^{(n)}$ is the threshold of the $i$th unit on the $n$th layer.

4. Compute the average squared error between the values of the output units and the correct pattern $S_i^{*}$ provided by a teacher:

$$\mathrm{Error} = \sum_{i=1}^{n_N} \left(S_i^{*} - S_i^{(N)}\right)^2 \qquad (6)$$

5. Update the weights and thresholds. This is accomplished by first computing the corrections on the output layer and then propagating them backwards through the network, layer by layer:

$$W_{ij}^{(n)}(t+1) = W_{ij}^{(n)}(t) + \eta\,\delta_i^{(n)} S_j^{(n-1)} + \alpha\left(W_{ij}^{(n)}(t) - W_{ij}^{(n)}(t-1)\right) \qquad (7)$$

$$\theta_i^{(n)}(t+1) = \theta_i^{(n)}(t) + \eta\,\delta_i^{(n)} \qquad (8)$$

where $t$ is the number of weight updates, $\eta$ is the learning rate, $\alpha$ is a smoothing (momentum) parameter, and $\delta_i^{(n)}$ is the error gradient of the $i$th unit on the $n$th layer:

(i) for the output layer:

$$\delta_i^{(N)} = \left(S_i^{*} - S_i^{(N)}\right)P'(E_i^{(N)}) \qquad (9)$$

(ii) for the hidden layers:

$$\delta_i^{(n)} = P'(E_i^{(n)}) \sum_{j=1}^{n_{n+1}} \delta_j^{(n+1)} W_{ji}^{(n+1)} \qquad (10)$$

where $P'(E_i^{(n)})$ is the first derivative of the function $P(E_i^{(n)})$:

$$P'(E_i^{(n)}) = S_i^{(n)}\left(1 - S_i^{(n)}\right) \qquad (11)$$

6. Return to 2. After the weights and thresholds have been adjusted for one set of training data, additional training sets can be used to further adjust all the weights and thresholds.
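As an illustration of steps 1-6, the following minimal Python sketch implements the update rules of Eqs. (4)-(11) for a fully connected sigmoid network with thresholds, a learning rate η and a momentum (smoothing) term α. It is a sketch written against the notation above, not the authors' original program; the class name and the default values of η and α are assumptions for illustration only.

```python
import numpy as np

class BPNetwork:
    """Feed-forward sigmoid network trained with the back-propagation rules (4)-(11)."""

    def __init__(self, layer_sizes, eta=0.5, alpha=0.9, seed=0):
        rng = np.random.default_rng(seed)
        # Step 1: random numbers in (0, 1) for all weights and thresholds.
        self.W = [rng.random((m, n)) for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
        self.theta = [rng.random(m) for m in layer_sizes[1:]]
        self.dW_prev = [np.zeros_like(w) for w in self.W]  # momentum memory for Eq. (7)
        self.eta, self.alpha = eta, alpha

    def forward(self, s0):
        # Step 3: Eqs. (4) and (5); returns the activations of every layer.
        S = [np.asarray(s0, dtype=float)]
        for W, theta in zip(self.W, self.theta):
            E = W @ S[-1] - theta                      # Eq. (5)
            S.append(1.0 / (1.0 + np.exp(-E)))         # Eq. (4)
        return S

    def train_pattern(self, s0, target):
        target = np.asarray(target, dtype=float)
        S = self.forward(s0)
        error = float(np.sum((target - S[-1]) ** 2))   # step 4, Eq. (6)
        # Step 5: output-layer gradient, Eq. (9), with P'(E) = S(1 - S) from Eq. (11).
        delta = (target - S[-1]) * S[-1] * (1.0 - S[-1])
        for n in reversed(range(len(self.W))):
            # Gradient for the layer below, Eq. (10), computed before the weights change.
            delta_below = (self.W[n].T @ delta) * S[n] * (1.0 - S[n]) if n > 0 else None
            dW = self.eta * np.outer(delta, S[n]) + self.alpha * self.dW_prev[n]
            self.W[n] += dW                            # Eq. (7)
            # Eq. (8); the minus sign follows from E = W.S - theta in Eq. (5).
            self.theta[n] -= self.eta * delta
            self.dW_prev[n] = dW
            delta = delta_below
        return error
```

Repeated calls to train_pattern over all of the training sets correspond to step 6; with layer_sizes=[3, 15, 5, 1] the class reproduces the four-layer topology adopted later in Section 3.2.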

3. The neural-network constitutive relationship

3.1. Sampled data

Under the conditions of a high strain rate and high temperature bulk-forming process, the constitutive equation of a thermal viscoplastic metal material can be written as:

$$\bar{\sigma} = \bar{\sigma}(\bar{\varepsilon}, \dot{\bar{\varepsilon}}, T) \qquad (12)$$

where $\bar{\sigma}$ is the flow stress, $\bar{\varepsilon}$ is the effective strain, $\dot{\bar{\varepsilon}}$ is the effective strain rate, and $T$ is the deformation temperature. Therefore, the deformation temperature $T$, the effective strain $\bar{\varepsilon}$ and the effective strain rate $\dot{\bar{\varepsilon}}$ can be used as the input vector of the neural network, the output of the neural network being the flow stress. The material chosen in this study is 12Cr2Ni4A in the uni-axial state of stress. In Ref. [8], using homogeneous compression tests, data covering the bulk-forming process of 12Cr2Ni4A were measured, including $T$, $\bar{\varepsilon}$ and $\dot{\bar{\varepsilon}}$, the geometry of the samples being shown in Fig. 2. The lubricant used in the experiments was glass powder. The data reported in Ref. [8] have been judged to be sufficiently comprehensive for the purpose of training a back-propagation neural network, thus all of the data used in the present study have been selected from that paper. The number of sampled data is 160 sets, the range of the data being listed in Table 1. In order to spend less time on learning, the experimental data are classified into several kinds of training examples.


Fig. 2. The geometry of samples, dimensions in mm.


As each element represents a non-linear sigmoid function, obviously all of the element outputs range between 0 and 1. Hence, all of the data should be normalized before being applied to the neural network, so that they are confined between 0.1 and 0.9. All of the data $d$ are normalized to $d_n$ according to:

$$d_n = \frac{0.9 - 0.1}{d_{\max} - d_{\min}}\,(d - d_{\min}) + 0.1 \qquad (13)$$


where $d_{\max}$ and $d_{\min}$ are the maximum and minimum values of the data $d$, respectively.
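The scaling of Eq. (13), and the inverse mapping needed to read network outputs back as physical quantities, can be written as the short Python sketch below; the example values simply reuse the ranges of Table 1, and the inverse function is an assumption implied by, but not written out in, the text.

```python
import numpy as np

def normalize(d, d_min, d_max):
    """Map raw data d into [0.1, 0.9] as in Eq. (13)."""
    return (0.9 - 0.1) / (d_max - d_min) * (d - d_min) + 0.1

def denormalize(d_n, d_min, d_max):
    """Invert Eq. (13) to recover physical values (e.g. flow stress in MPa)."""
    return (d_n - 0.1) * (d_max - d_min) / (0.9 - 0.1) + d_min

# Example: temperatures and strains scaled with the ranges listed in Table 1.
T_n   = normalize(np.array([950.0, 1000.0, 1100.0]), 950.0, 1100.0)   # -> [0.1, 0.367, 0.9]
eps_n = normalize(np.array([0.2, 0.45, 0.7]), 0.2, 0.7)               # -> [0.1, 0.5, 0.9]
```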


3.2. Training the network

3.2.1. Network architecture

The architecture of the neural network must be developed so that it is capable of capturing the information embodied within the data sets. The number of hidden layers, and the number of elements in each of these layers, needed to capture this information is not generally known a priori, and must be determined through trial and error. For the material-model problem, the inputs and outputs are obvious. However, the number and size of the hidden layers are varied until the network can successfully 'learn' the 160 sets of data selected from Ref. [8]. The final network configuration consists of two hidden layers, with 15 and 5 elements in the first and second hidden layer, respectively.

3.2.2. Training the network

Training means presenting the network with the experimental data and having it self-organize, or modify its weights, so that it can produce correct, or nearly correct, flow stresses when presented with the temperature, effective strain and effective strain rate. It takes half an hour to train the neural network on a 486-compatible PC.
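As a usage illustration, the following sketch assembles the configuration just described (3 input units, hidden layers of 15 and 5 units, one output unit) and trains it by cycling over the patterns, assuming the BPNetwork class and the normalize helper from the earlier sketches are in scope. The two training rows, the stress bounds and the stopping tolerance are placeholders, not data or settings from the paper.

```python
import numpy as np

# Placeholder training rows: columns are T (degC), effective strain, effective strain
# rate (1/s); targets are measured flow stresses (MPa). Real rows come from Ref. [8].
X_raw = np.array([[1100.0, 0.7, 10.0],
                  [1000.0, 0.3, 30.0]])
y_raw = np.array([[120.0], [170.0]])

# Scale every input column into [0.1, 0.9] with Eq. (13), using the ranges of Table 1,
# and the flow stress with assumed bounds of 100-200 MPa.
lo, hi = np.array([950.0, 0.2, 4.0]), np.array([1100.0, 0.7, 50.0])
X = normalize(X_raw, lo, hi)
y = normalize(y_raw, 100.0, 200.0)

net = BPNetwork([3, 15, 5, 1], eta=0.5, alpha=0.9)   # four-layer topology of Section 3.2.1
for epoch in range(20000):                           # step 6: keep cycling over the patterns
    total_error = sum(net.train_pattern(x, t) for x, t in zip(X, y))
    if total_error < 1e-4:
        break
```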

Table 1
Range of sampled data

Temperature (°C)    Effective strain    Effective strain rate (s⁻¹)
950-1100            0.2-0.7             4-50


Fig. 3. Comparison of the values acquired from the neural network with the experimental values: (a) 1100 °C, effective strain 0.7; (b) 1000 °C, effective strain 0.3.

After training, the relative errors between the expected values and the values acquired from the neural network are within ±5%. Fig. 3(a) shows the flow stresses acquired from the neural network against those expected and used as training cases at a temperature of 1100 °C and an effective strain of 0.7, whilst Fig. 3(b) shows the corresponding comparison at a temperature of 1000 °C and an effective strain of 0.3.

3.3. Test and verification

After the neural network has been trained in the learning process, it can be used for other sets of processing data. When new inputs are presented to the trained neural network, its outputs are predictions. If the process conditions are governed by the same underlying mechanism, the performance of the neural network should be satisfactory. The tests performed involve presenting to the network values that are not part of the training sets and inspecting what flow stresses are predicted. Table 2 shows comparisons between the predicted values acquired from the neural network and the non-sampled experimental results; the relative errors are within ±9%.


Table 2
Comparison of the predicted values acquired from the network with non-sampled experimental results

No.  Effective strain  Temperature (°C)  Effective strain rate (s⁻¹)  Output values from ANN (MPa)ᵃ  Experimental values (MPa)  Relative errors (%)
1    0.3               1100              4                            103.1                          105.0                      -1.4
2    0.3               1100              30                           130.4                          135.5                      -3.7
3    0.2               950               8                            160.9                          156                         3.2
4    0.2               1100              50                           130.7                          128.0                       2.1
5    0.4               950               10                           178.9                          191.0                      -6.3
6    0.4               1050              30                           156.5                          163.5                      -4.3
7    0.5               1100              6                            114.8                          123                        -6.7
8    0.5               1100              10                           120.5                          132                        -8.7
9    0.6               1000              10                           167.9                          175.5                      -4.3
10   0.6               1100              30                           148.4                          151                        -1.7
11   0.7               950               8                            195.1                          197.5                      -1.2
12   0.7               1050              10                           150.2                          151                        -0.5

ᵃ ANN = artificial neural network.
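A verification of this kind can be scripted along the following lines, assuming the trained net, the Table 1 ranges lo and hi, and the normalize/denormalize helpers from the earlier sketches are in scope; the stress bounds and the predicted value are illustrative (the test point and measured stress mirror the first row of Table 2), and this is not a reproduction of the authors' test procedure.

```python
import numpy as np

def predict_flow_stress(net, T, strain, strain_rate, lo, hi, sigma_min, sigma_max):
    """Scale one (T, strain, strain rate) triple with Eq. (13), run the trained
    network, and map its output back to a flow stress in MPa."""
    x = normalize(np.array([T, strain, strain_rate]), lo, hi)
    return float(denormalize(net.forward(x)[-1][0], sigma_min, sigma_max))

# Compare a prediction with a non-sampled measurement, as in Table 2.
sigma_ann = predict_flow_stress(net, 1100.0, 0.3, 4.0, lo, hi, 100.0, 200.0)
sigma_exp = 105.0                                                # measured value (MPa)
relative_error = 100.0 * (sigma_ann - sigma_exp) / sigma_exp     # relative error (%)
```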

4. Discussion and conclusions

The use of an artificial neural network to acquire the constitutive relationship looks very encouraging. Once such a robust neural network for acquiring the constitutive relationship of a thermal viscoplastic material has been developed, it can produce a large amount of the accurate data that engineers require. In the present study, a simple model was chosen to test the ability of the neural network. To take more factors into account, such as the structure of the material, the geometry of the sample, the lubrication conditions, anisotropy, etc., it is considered that more input elements can be added, acting as material parameters or 'internal' variables. Further research is needed on the effect of the topology of the network on its accuracy.

Acknowledgements

The authors are grateful to the National Natural Science Foundation of China, the Aeronautic Science Foundation of the Head Company of the Aviation Industry of P.R. China, and the Youth Science Research Foundation of NPU, for enabling this investigation to be carried out.

References

[1] J. Ghaboussi, J.H. Garrett, Jr. and X. Wu, Material modelling with neural networks, Numerical Methods in Engineering: Theory and Applications, Swansea, 1990, pp. 701-717.
[2] N. Rebelo and S. Kobayashi, Int. J. Mech. Sci., 22 (1980) 619-718.
[3] K. Osakada and G.B. Yang, Ann. CIRP, 40 (1991) 699-718.
[4] L. Guangxin, X.E. Hui and R. Xueyu, Advanced Technology of Plasticity, Proc. 4th Int. Conf. Technology of Plasticity, Beijing, P.R. China, 1993, pp. 1439-1444.
[5] K.-i. Manabe et al., Advanced Technology of Plasticity, Proc. 4th Int. Conf. Technology of Plasticity, Beijing, P.R. China, 1993, pp. 1905-1910.
[6] S. ZhongHua, L. Weimin, C. Shauchun, G. Weibin and Q. Xinmiao, J. Mater. Proc. Technol., 32 (1992) 365-370.
[7] J. LiCheng, Theory of Neural Networks, Xi'an Electronics University Press, Xi'an, 1992 (in Chinese).
[8] Z. Jihua and G. Kezhi, Resistance of Metal Deformation, Machine-Building Press, Beijing, 1989 (in Chinese).