Simulation and evaluation of fuzzy differential equations by fuzzy neural network

Maryam Mosleh, Mahmood Otadi
Department of Mathematics, Firoozkooh Branch, Islamic Azad University, Firoozkooh, Iran

Article history: Received 27 May 2010; received in revised form 14 February 2011; accepted 18 March 2012; available online 5 April 2012.

Keywords: Fuzzy neural networks; Fuzzy differential equations; Feedforward neural network; Learning algorithm

Abstract. In this paper, a novel hybrid method based on a learning algorithm of fuzzy neural networks is presented for the solution of differential equations with fuzzy initial values. Here the neural network is considered as part of a larger field called neural computing or soft computing. The model finds the approximate solution of the fuzzy differential equation inside its domain for a close enough neighborhood of the fuzzy initial point. We propose a learning algorithm based on the cost function for adjusting the fuzzy weights. Finally, we illustrate our approach by numerical examples and an application example in engineering.

1. Introduction

Proper design for engineering applications requires detailed information about system-property distributions such as temperature, velocity, and density in the space and time domain [8-10]. This information can be obtained either by experimental measurement or by computational simulation. Although experimental measurement is reliable, it demands a great deal of labor and time; computational simulation has therefore become an increasingly popular design tool, since it needs only a fast computer with a large memory. Frequently, such engineering design problems, for example in heat transfer and in solid and fluid mechanics, involve a set of differential equations (DEs) that have to be solved numerically. Numerical methods such as predictor-corrector, Runge-Kutta, finite difference, finite element, finite volume, boundary element, spectral and collocation methods provide a strategy by which we can attack many problems in applied mathematics, where we simulate a real-world problem with a differential equation subject to some initial or boundary conditions. In the finite difference and finite element methods we approximate the solution by using numerical operators for the function's derivatives and finding the solution at specific preassigned grids [49]. Linearity is assumed for the purposes of evaluating the derivatives. Although such an approximation method is conceptually easy to understand, it has a number of shortcomings; most obviously, it is difficult to apply to systems with irregular geometry or unusual boundary conditions.


Predictor-corrector and Runge-Kutta methods are widely applied over preassigned grid points to solve ordinary differential equations [31]. In the spectral and collocation approaches, a truncated series of specific orthogonal functions (basis functions) is used to find the approximate solution of the DE, so the role of the trial functions as basis functions is important. The trial functions used in spectral methods are chosen from various classes of Jacobian polynomials [18], but the discretization meshes are still preassigned. A neural network model can instead be used to approximate the solution of a DE over the entire domain. In 1990, the authors of [33] used parallel computers to solve a first-order differential equation with Hopfield neural network models. Meade and Fernandez [38,39] solved linear and nonlinear ordinary differential equations using a feedforward neural network architecture and B1-splines. Leephakpreeda [34] applied a neural network model and a linguistic model as universal approximators for arbitrary nonlinear continuous functions; with this outstanding capability, the solution of a DE can be approximated by an appropriate neural network model and linguistic model to within arbitrary accuracy. When a physical problem is transformed into a deterministic initial value problem



$$\frac{dy(x)}{dx} = f(x, y), \qquad y(a) = A, \tag{1}$$

we usually cannot be sure that the modelling is perfect. The initial value may not be known exactly, and the function f may contain unknown parameters. If the nature of the errors is random, then instead of the deterministic problem (1) we get a random differential equation with a random initial value and/or random coefficients. But if the underlying structure is not probabilistic, e.g., because of subjective


choice, then it may be appropriate to use fuzzy numbers instead of real random variables. The topic of fuzzy differential equations (FDEs) has been growing rapidly in recent years. The fuzzy initial value problem has been studied by several authors [1,2,6,7,44,40,11,14,51]. The concept of the fuzzy derivative was first introduced by Chang and Zadeh [13]; it was followed up by Dubois and Prade [15], who used the extension principle in their approach. Other methods have been discussed by Puri and Ralescu [43] and by Goetschel and Voxman [17]. Fuzzy differential equations were first formulated by Kaleva [28] and Seikkala [46] in time-dependent form; Kaleva formulated fuzzy differential equations in terms of the Hukuhara derivative [28]. Buckley and Feuring [12] gave a very general formulation of a fuzzy first-order initial value problem: they first find the crisp solution, fuzzify it, and then check whether it satisfies the FDE. The authors of [41,16] investigated the existence and uniqueness of solutions for fuzzy random differential equations with non-Lipschitz coefficients and for fuzzy differential equations with piecewise constant argument.

In this work we propose a new method for the approximate solution of fuzzy differential equations using innovative mathematical tools and neural-like systems of computation. This hybrid method can result in improved numerical methods for solving fuzzy initial value problems. In the proposed method, a fuzzy neural network model (FNNM) is applied as a universal approximator. We use a fuzzy trial function that is a combination of two terms: the first term is responsible for the fuzzy initial condition, while the second term contains the fuzzy neural network with adjustable parameters to be calculated. The main aim of this paper is to illustrate how fuzzy connection weights are adjusted in the learning of fuzzy neural networks by back-propagation-type learning algorithms [24,27] for the approximate solution of fuzzy differential equations. Our fuzzy neural network in this paper is a three-layer feedforward neural network where the connection weights and biases are fuzzy numbers.

The remaining part of the paper is organized as follows. In Section 2, we discuss some basic definitions and briefly review the architecture of fuzzy neural networks. Section 3 gives details of the problem formulation, the construction of the fuzzy trial function, and the training of the fuzzy neural network for finding the unknown adjustable coefficients; the training of a partially fuzzy neural network and numerical examples are also discussed in that section. Conclusions are given in the final section.

2. Preliminaries

In this section the most basic notations used in fuzzy calculus are introduced. We start by defining the fuzzy number.

Definition 1. A fuzzy number is a fuzzy set $u : \mathbb{R}^1 \to I = [0, 1]$ which satisfies:

i. u is upper semi-continuous;
ii. u(x) = 0 outside some interval [a, d];
iii. there are real numbers b, c with a ≤ b ≤ c ≤ d for which:
  1. u(x) is monotonically increasing on [a, b],
  2. u(x) is monotonically decreasing on [c, d],
  3. u(x) = 1 for b ≤ x ≤ c.

The set of all fuzzy numbers (as given by Definition 1) is denoted by $E^1$. An alternative definition which yields the same $E^1$ is given by Kaleva [28].
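As a concrete illustration (a sketch with a hypothetical helper name, not code from the paper), the triangular fuzzy number (0.25, 1.25, 2) used later in Example 3.1 satisfies Definition 1 with b = c = 1.25:

```python
def triangular_membership(x, a=0.25, b=1.25, d=2.0):
    """Membership function u(x) of a triangular fuzzy number per
    Definition 1: zero outside [a, d], increasing on [a, b],
    equal to 1 at b (= c), decreasing on [b, d]."""
    if x <= a or x >= d:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (d - x) / (d - b)
```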

Fig. 1. Multiple layer feed-forward FNNM.

Definition 2. A fuzzy number u is a pair $(\underline{u}, \overline{u})$ of functions $\underline{u}(r), \overline{u}(r)$, $0 \le r \le 1$, which satisfy the following requirements:

i. $\underline{u}(r)$ is a bounded monotonically increasing left-continuous function on (0, 1] and right-continuous at 0;
ii. $\overline{u}(r)$ is a bounded monotonically decreasing left-continuous function on (0, 1] and right-continuous at 0;
iii. $\underline{u}(r) \le \overline{u}(r)$, $0 \le r \le 1$.

As shown in [50], this fuzzy number space can be embedded into the Banach space $B = C[0,1] \times C[0,1]$, where the norm is usually defined as

$$\|(u, v)\| = \max\left\{ \sup_{0 \le r \le 1} |u(r)|,\; \sup_{0 \le r \le 1} |v(r)| \right\}$$

for arbitrary $(u, v) \in C[0,1] \times C[0,1]$.

Artificial neural networks are an exciting form of artificial intelligence which mimic the learning process of the human brain in order to extract patterns from historical data [4,47]. For many years this technology has been successfully applied to a wide variety of real-world applications [42]. Simple perceptrons need a teacher to tell the network what the desired output should be; these are supervised networks. In an unsupervised net, the network adapts purely in response to its inputs, and such networks can learn to pick out structure in their input. Fig. 1 shows a typical three-layered perceptron. Multi-layered perceptrons with more than three layers use more hidden layers [21,29]. Multi-layered perceptrons map the input units to the output units by a specific nonlinear mapping [48]. From the Kolmogorov existence theorem we know that a three-layered perceptron with n(2n + 1) nodes can compute any continuous function of n variables [22,35]; the accuracy of the approximation depends on the number of neurons in the hidden layer and not on the number of hidden layers [32].

Before describing the fuzzy neural network architecture, we denote real numbers and fuzzy numbers by lowercase letters (e.g., a, b, c, ...) and uppercase letters (e.g., A, B, C, ...), respectively. Our fuzzy neural network is a three-layer feedforward neural network where the connection weights, biases and targets are given as fuzzy numbers and the inputs are given as real numbers. For convenience in this discussion, an FNNM with an input layer, a single hidden layer, and an output layer, as in Fig. 1, is taken as the basic structural architecture. The dimension of the FNNM is denoted by the number of neurons in each layer, that is $n \times m \times s$, where n, m and s are the numbers of neurons in the input layer, the hidden layer and the output layer, respectively. The architecture of the model shows how the FNNM transforms the n inputs $(x_1, \ldots, x_i, \ldots, x_n)$ into the s outputs $(Y_1, \ldots, Y_k, \ldots, Y_s)$ through the m hidden neurons $(Z_1, \ldots, Z_j, \ldots, Z_m)$, where the circles represent the neurons in each layer. Let $B_j$ be the bias for neuron $Z_j$, $C_k$ be the bias for neuron $Y_k$,


Wji be the weight connecting neuron xi to neuron Zj , and Wkj be the weight connecting neuron Zj to neuron Yk .


2.1. Operations of fuzzy numbers

We briefly mention fuzzy number operations defined by the extension principle [52,53]. Since the connection weights, biases and targets of the feedforward neural network in this paper are fuzzy, the following addition, multiplication and nonlinear mapping of fuzzy numbers are necessary for defining our fuzzy neural network:

$$\mu_{A+B}(z) = \max\{\mu_A(x) \wedge \mu_B(y) \mid z = x + y\}, \tag{2}$$

$$\mu_{AB}(z) = \max\{\mu_A(x) \wedge \mu_B(y) \mid z = xy\}, \tag{3}$$

$$\mu_{f(Net)}(z) = \max\{\mu_{Net}(x) \mid z = f(x)\}, \tag{4}$$

where A, B, Net are fuzzy numbers, $\mu_{*}(\cdot)$ denotes the membership function of each fuzzy number, $\wedge$ is the minimum operator and $f(\cdot)$ is a continuous activation function (such as the sigmoidal activation function) inside the hidden neurons.

The above operations of fuzzy numbers are numerically performed on level sets (i.e., $\alpha$-cuts). The h-level set of a fuzzy number A is defined as

$$[A]_h = \{x \in \mathbb{R} \mid \mu_A(x) \ge h\} \quad \text{for } 0 < h \le 1, \tag{5}$$

and $[A]_0 = \bigcup_{h \in (0,1]} [A]_h$. Since the level sets of fuzzy numbers are closed intervals, we denote $[A]_h$ as

$$[A]_h = [[A]_h^L, [A]_h^U], \tag{6}$$

where $[A]_h^L$ and $[A]_h^U$ are the lower limit and the upper limit of the h-level set $[A]_h$, respectively. From interval arithmetic [5], the above operations of fuzzy numbers are written for h-level sets as follows:

$$[A]_h + [B]_h = [[A]_h^L + [B]_h^L,\; [A]_h^U + [B]_h^U], \tag{7}$$

$$[A]_h \cdot [B]_h = [\min\{[A]_h^L [B]_h^L, [A]_h^L [B]_h^U, [A]_h^U [B]_h^L, [A]_h^U [B]_h^U\},\; \max\{[A]_h^L [B]_h^L, [A]_h^L [B]_h^U, [A]_h^U [B]_h^L, [A]_h^U [B]_h^U\}], \tag{8}$$

$$f([Net]_h) = f([[Net]_h^L, [Net]_h^U]) = [f([Net]_h^L), f([Net]_h^U)], \tag{9}$$

where f is an increasing function. In the case of $0 \le [B]_h^L$, Eq. (8) can be simplified as

$$[A]_h \cdot [B]_h = [\min\{[A]_h^L [B]_h^L, [A]_h^L [B]_h^U\},\; \max\{[A]_h^U [B]_h^L, [A]_h^U [B]_h^U\}]. \tag{10}$$

2.2. Input-output relation of each unit

Let us consider a fuzzy three-layer feedforward neural network with n input units, m hidden units and s output units. The target vector, connection weights and biases are fuzzy, and the input vector is real. In order to derive a crisp learning rule, we restrict the connection weights and biases to four types (real numbers, symmetric triangular fuzzy numbers, asymmetric triangular fuzzy numbers and asymmetric trapezoidal fuzzy numbers), while any type of fuzzy number can be used for the fuzzy targets. For example, an asymmetric triangular fuzzy connection weight is specified by its three parameters as $W_{kj} = (w_{kj}^L, w_{kj}^C, w_{kj}^U)$.

When an n-dimensional input vector $(x_1, \ldots, x_i, \ldots, x_n)$ is presented to our fuzzy neural network, its input-output relation can be written as follows, where $f : \mathbb{R}^n \to E^s$:

Input units:
$$o_i = x_i, \quad i = 1, 2, \ldots, n. \tag{11}$$

Hidden units:
$$Z_j = f(Net_j), \quad j = 1, 2, \ldots, m, \tag{12}$$
$$Net_j = \sum_{i=1}^{n} o_i \cdot W_{ji} + B_j. \tag{13}$$

Output units:
$$Y_k = f(Net_k), \quad k = 1, 2, \ldots, s, \tag{14}$$
$$Net_k = \sum_{j=1}^{m} W_{kj} \cdot Z_j + C_k. \tag{15}$$

The architecture of our fuzzy neural network is shown in Fig. 1, where the connection weights, biases and targets are fuzzy and the inputs are real numbers. The input-output relation in Eqs. (11)-(15) is defined by the extension principle [52] as in Hayashi et al. [20] and Ishibuchi et al. [26].

2.3. Calculation of fuzzy output

The fuzzy output from each unit in Eqs. (11)-(15) is numerically calculated for real inputs and level sets of fuzzy weights and fuzzy biases. The input-output relations of our fuzzy neural network can be written for the h-level sets as follows:

Input units:
$$o_i = x_i, \quad i = 1, 2, \ldots, n. \tag{16}$$

Hidden units:
$$[Z_j]_h = f([Net_j]_h), \quad j = 1, 2, \ldots, m, \tag{17}$$
$$[Net_j]_h = \sum_{i=1}^{n} o_i \cdot [W_{ji}]_h + [B_j]_h. \tag{18}$$

Output units:
$$[Y_k]_h = f([Net_k]_h), \quad k = 1, 2, \ldots, s, \tag{19}$$
$$[Net_k]_h = \sum_{j=1}^{m} [W_{kj}]_h \cdot [Z_j]_h + [C_k]_h. \tag{20}$$

From Eqs. (16)-(20), we can see that the h-level sets of the fuzzy outputs $Y_k$ are calculated from those of the fuzzy weights, fuzzy biases and crisp inputs. From Eqs. (7)-(10), the above relations are rewritten as follows when the inputs $x_i$ are nonnegative, i.e., $0 \le x_i$:

Input units:
$$o_i = x_i. \tag{21}$$

Hidden units:
$$[Z_j]_h = [[Z_j]_h^L, [Z_j]_h^U] = [f([Net_j]_h^L), f([Net_j]_h^U)], \tag{22}$$

where f is an increasing function and

$$[Net_j]_h^L = \sum_{i=1}^{n} o_i \cdot [W_{ji}]_h^L + [B_j]_h^L, \tag{23}$$
$$[Net_j]_h^U = \sum_{i=1}^{n} o_i \cdot [W_{ji}]_h^U + [B_j]_h^U. \tag{24}$$

Output units:
$$[Y_k]_h = [[Y_k]_h^L, [Y_k]_h^U] = [f([Net_k]_h^L), f([Net_k]_h^U)], \tag{25}$$

where f is an increasing function and

$$[Net_k]_h^L = \sum_{j \in a} [W_{kj}]_h^L \cdot [Z_j]_h^L + \sum_{j \in b} [W_{kj}]_h^L \cdot [Z_j]_h^U + [C_k]_h^L,$$
$$[Net_k]_h^U = \sum_{j \in c} [W_{kj}]_h^U \cdot [Z_j]_h^U + \sum_{j \in d} [W_{kj}]_h^U \cdot [Z_j]_h^L + [C_k]_h^U, \tag{26}$$

for $[Z_j]_h^U \ge [Z_j]_h^L \ge 0$, where $a = \{j \mid [W_{kj}]_h^L \ge 0\}$, $b = \{j \mid [W_{kj}]_h^L < 0\}$, $c = \{j \mid [W_{kj}]_h^U \ge 0\}$, $d = \{j \mid [W_{kj}]_h^U < 0\}$, $a \cup b = \{1, \ldots, m\}$ and $c \cup d = \{1, \ldots, m\}$.
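For concreteness, the level-set computations of Eqs. (5)-(10) can be sketched in a few lines of Python (the helper names are ours, not the paper's); a triangular fuzzy number (a, b, c) is used for the h-level extraction:

```python
def tri_level(a, b, c, h):
    """h-level set of a triangular fuzzy number (a, b, c), Eqs. (5)-(6)."""
    return (a + h * (b - a), c - h * (c - b))

def iv_add(p, q):
    """Eq. (7): addition of h-level intervals."""
    return (p[0] + q[0], p[1] + q[1])

def iv_mul(p, q):
    """Eq. (8): general interval product via the four endpoint products."""
    s = (p[0] * q[0], p[0] * q[1], p[1] * q[0], p[1] * q[1])
    return (min(s), max(s))

def iv_mul_simple(p, q):
    """Eq. (10): simplified product, valid when 0 <= q[0]."""
    return (min(p[0] * q[0], p[0] * q[1]), max(p[1] * q[0], p[1] * q[1]))

def iv_map(f, p):
    """Eq. (9): image of an h-level interval under an increasing f."""
    return (f(p[0]), f(p[1]))
```

For example, `iv_add(tri_level(0, 1, 2, 0.5), tri_level(1, 2, 3, 0.5))` returns the 0.5-level set (2.0, 4.0) of the sum of the two triangular numbers.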

3. Fuzzy differential equations

3.1. First-order equations

We are interested in finding the solution of the first-order ordinary differential equation [28]

$$\frac{dy(x)}{dx} = f(x, y), \tag{27}$$

where y is a fuzzy function of x, f(x, y) is a fuzzy function of the crisp variable x and the fuzzy variable y, and $y'$ is the fuzzy derivative of y [43]. If an initial value $y(a) = A$ is given, we obtain a fuzzy Cauchy problem of first order:

$$\frac{dy(x)}{dx} = f(x, y), \quad x \in [a, b], \qquad y(a) = A, \tag{28}$$

where A is a fuzzy number in $E^1$ with h-level sets $[A]_h = [[A]_h^L, [A]_h^U]$, $0 < h \le 1$.

Theorem 1. Let f satisfy

$$|f(x, y) - f(x, \tilde{y})| \le g(x, |y - \tilde{y}|), \quad x \ge 0,\; y, \tilde{y} \in \mathbb{R},$$

where $g : \mathbb{R}^+ \times \mathbb{R}^+ \to \mathbb{R}^+$ is a continuous mapping such that $r \mapsto g(x, r)$ is nondecreasing, and let the initial value problem

$$u'(x) = g(x, u(x)), \quad u(0) = u_0, \tag{29}$$

have a solution on $\mathbb{R}^+$ for $u_0 > 0$, with $u(x) \equiv 0$ the only solution of (29) for $u_0 = 0$. Then the initial value problem (28) has a unique fuzzy solution.

Proof. See [46].

Let us assume that a general approximate solution of Eq. (28) is of the form $y_T(x, P)$, where $y_T$ depends on x and on P, the set of adjustable parameters (the weights and biases of the three-layered feedforward fuzzy neural network of Fig. 2). The fuzzy trial solution $y_T$ approximates y for the optimized values of the unknown weights and biases. Thus the problem of finding an approximate fuzzy solution of Eq. (28) over some collocation points in [a, b], given by a set of discrete equally spaced grid points

$$a = x_1 < x_2 < \ldots < x_g = b,$$

is equivalent to calculating the functional $y_T(x, P)$. In order to obtain the fuzzy approximate solution $y_T(x, P)$, we solve an unconstrained optimization problem, which is simpler to deal with; we define the fuzzy trial function to be of the form

$$y_T(x, P) = \alpha(x) + \beta[x, N(x, P)], \tag{30}$$

where the first term on the right-hand side does not involve adjustable parameters and satisfies the fuzzy initial condition, while the second term is a feedforward three-layered fuzzy neural network with input x and output N(x, P). A crisp trial function of this form was used in [37]. In what follows, this FNNM with some weights and biases is considered and trained in order to compute the approximate solution of problem (28).

Fig. 2. Three layer fuzzy neural network with one input and one output.

Let us consider a three-layered FNNM (see Fig. 2) with one entry unit x, one hidden layer consisting of m activation functions and one output unit N(x, P). In this paper, we use the sigmoidal activation function $f(\cdot)$ for the hidden units of our fuzzy neural network:

$$f(x) = \frac{1}{1 + e^{-x}}. \tag{31}$$

Here, the dimension of the FNNM is $1 \times m \times 1$. For every entry x the input neuron makes no change to its input, so the input to the hidden neurons is

$$Net_j = x \cdot W_j + B_j, \quad j = 1, \ldots, m, \tag{32}$$

where $W_j$ is a weight parameter from the input layer to the jth unit in the hidden layer and $B_j$ is the bias of the jth unit in the hidden layer. The output of the hidden neurons is

$$Z_j = s(Net_j), \quad j = 1, \ldots, m, \tag{33}$$

where s is the activation function, normally a nonlinear function whose usual choice [19] is the sigmoid transfer function. The output neuron makes no change to its input, so the input to the output neuron is equal to the output:

$$N = V_1 Z_1 + \ldots + V_j Z_j + \ldots + V_m Z_m, \tag{34}$$

where $V_j$ is a weight parameter from the jth unit in the hidden layer to the output layer. From Eqs. (21)-(26), we can see that the h-level sets of Eqs. (32)-(34) are calculated from those of the fuzzy weights, fuzzy biases and crisp inputs. For our fuzzy neural network, the learning algorithm could be derived without assuming that the input x is nonnegative; to reduce the complexity of the learning algorithm, however, the input x is usually assumed to be nonnegative in fully fuzzy neural networks, i.e., $0 \le x$ [24]:

Input unit:
$$o = x. \tag{35}$$

Hidden units:
$$[Z_j]_h = [[Z_j]_h^L, [Z_j]_h^U] = [s([Net_j]_h^L), s([Net_j]_h^U)], \tag{36}$$
$$[Net_j]_h^L = o \cdot [W_j]_h^L + [B_j]_h^L, \tag{37}$$
$$[Net_j]_h^U = o \cdot [W_j]_h^U + [B_j]_h^U. \tag{38}$$
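As a small sketch of Eqs. (31)-(38) (the function names are ours, not from the paper), the h-level pass through the hidden layer of the 1 × m × 1 fully fuzzy network, for a nonnegative crisp input x and interval weights and biases taken at a fixed level h, might look as follows:

```python
import math

def sigmoid(t):
    """Eq. (31): sigmoidal activation."""
    return 1.0 / (1.0 + math.exp(-t))

def hidden_levels(x, W, B):
    """Eqs. (35)-(38): o = x with x >= 0 assumed; W and B are lists of
    (lower, upper) h-level intervals of the fuzzy weights W_j and
    biases B_j. Returns the intervals [Z_j]_h of Eq. (36)."""
    Z = []
    for (wl, wu), (bl, bu) in zip(W, B):
        net_lo = x * wl + bl   # Eq. (37); x >= 0 keeps the bounds ordered
        net_hi = x * wu + bu   # Eq. (38)
        Z.append((sigmoid(net_lo), sigmoid(net_hi)))  # s is increasing
    return Z
```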

Output unit:
$$[N]_h = [[N]_h^L, [N]_h^U], \tag{39}$$
$$[N]_h^L = \sum_{j \in a} [V_j]_h^L \cdot [Z_j]_h^L + \sum_{j \in b} [V_j]_h^L \cdot [Z_j]_h^U,$$
$$[N]_h^U = \sum_{j \in c} [V_j]_h^U \cdot [Z_j]_h^U + \sum_{j \in d} [V_j]_h^U \cdot [Z_j]_h^L, \tag{40}$$

for $[Z_j]_h^U \ge [Z_j]_h^L \ge 0$, where $a = \{j \mid [V_j]_h^L \ge 0\}$, $b = \{j \mid [V_j]_h^L < 0\}$, $c = \{j \mid [V_j]_h^U \ge 0\}$, $d = \{j \mid [V_j]_h^U < 0\}$, $a \cup b = \{1, \ldots, m\}$ and $c \cup d = \{1, \ldots, m\}$.

An FNN4 (fuzzy neural network with crisp input signals, fuzzy number weights and fuzzy number output) [23] solution of Eq. (28) is given in Fig. 2. How is the FNN4 going to solve the fuzzy differential equation? The training data are $a = x_1 < x_2 < \ldots < x_g = b$ for the input. We propose a learning algorithm based on the cost function for adjusting the weights. For the fuzzy initial value problem (28), the related trial function will be of the form

$$y_T(x, P) = A + (x - a) N(x, P), \tag{41}$$

which by construction satisfies the initial condition in (28).

In [24], the learning of the fuzzy neural network minimizes the difference between the fuzzy target vector $B = (B_1, \ldots, B_s)$ and the actual fuzzy output vector $O = (O_1, \ldots, O_s)$. The following cost function was used in [24,3] for measuring the difference between B and O:

$$e = \sum_h e_h = \sum_h h \left\{ \sum_{k=1}^{s} \frac{([B_k]_h^L - [O_k]_h^L)^2}{2} + \sum_{k=1}^{s} \frac{([B_k]_h^U - [O_k]_h^U)^2}{2} \right\}, \tag{42}$$

where $e_h$ is the cost function for the h-level sets of B and O. The squared errors between the h-level sets of B and O are weighted by the value of h in (42). In [25], it was shown by computer simulations that the fitting of fuzzy outputs to fuzzy targets is not good for the h-level sets with small values of h when the cost function in (42) is used, precisely because the squared errors for the h-level sets are weighted by h. Krishnamraju et al. [30] used the cost function without the weighting scheme:

$$e = \sum_h e_h = \sum_h \left\{ \sum_{k=1}^{s} \frac{([B_k]_h^L - [O_k]_h^L)^2}{2} + \sum_{k=1}^{s} \frac{([B_k]_h^U - [O_k]_h^U)^2}{2} \right\}. \tag{43}$$

In the computer simulations included in this paper, we mainly use the cost function in (43) without the weighting scheme. The error function that must be minimized for problem (28) is of the form

$$e = \sum_{i=1}^{g} e_i = \sum_{i=1}^{g} \sum_h e_{ih} = \sum_{i=1}^{g} \sum_h \{e_{ih}^L + e_{ih}^U\}, \tag{44}$$

where

$$e_{ih}^L = \frac{([dy_T(x_i, P)/dx]_h^L - [f(x_i, y_T(x_i, P))]_h^L)^2}{2}, \tag{45}$$
$$e_{ih}^U = \frac{([dy_T(x_i, P)/dx]_h^U - [f(x_i, y_T(x_i, P))]_h^U)^2}{2}, \tag{46}$$

and $\{x_i\}_{i=1}^{g}$ are discrete points belonging to the interval [a, b]; in the cost function (44), $e_{ih}^L$ and $e_{ih}^U$ can be viewed as the squared errors for the lower limits and the upper limits of the h-level sets, respectively. It is easy to express the first derivative of N(x, P) in terms of the derivative of the sigmoid function, i.e.,

$$\frac{\partial [N]_h^L}{\partial x} = \sum_{j \in a} [V_j]_h^L \cdot \frac{\partial [Z_j]_h^L}{\partial [Net_j]_h^L} \cdot \frac{\partial [Net_j]_h^L}{\partial x} + \sum_{j \in b} [V_j]_h^L \cdot \frac{\partial [Z_j]_h^U}{\partial [Net_j]_h^U} \cdot \frac{\partial [Net_j]_h^U}{\partial x}, \tag{47}$$

$$\frac{\partial [N]_h^U}{\partial x} = \sum_{j \in c} [V_j]_h^U \cdot \frac{\partial [Z_j]_h^U}{\partial [Net_j]_h^U} \cdot \frac{\partial [Net_j]_h^U}{\partial x} + \sum_{j \in d} [V_j]_h^U \cdot \frac{\partial [Z_j]_h^L}{\partial [Net_j]_h^L} \cdot \frac{\partial [Net_j]_h^L}{\partial x}, \tag{48}$$

where $a = \{j \mid [V_j]_h^L \ge 0\}$, $b = \{j \mid [V_j]_h^L < 0\}$, $c = \{j \mid [V_j]_h^U \ge 0\}$, $d = \{j \mid [V_j]_h^U < 0\}$, $a \cup b = \{1, \ldots, m\}$, $c \cup d = \{1, \ldots, m\}$, and

$$\frac{\partial [Net_j]_h^L}{\partial x} = [W_j]_h^L, \qquad \frac{\partial [Z_j]_h^L}{\partial [Net_j]_h^L} = [Z_j]_h^L (1 - [Z_j]_h^L),$$
$$\frac{\partial [Net_j]_h^U}{\partial x} = [W_j]_h^U, \qquad \frac{\partial [Z_j]_h^U}{\partial [Net_j]_h^U} = [Z_j]_h^U (1 - [Z_j]_h^U).$$

Now, differentiating the trial function $y_T(x, P)$ in (45) and (46), we obtain

$$\frac{\partial [y_T(x, P)]_h^L}{\partial x} = [N(x, P)]_h^L + (x - a) \cdot \frac{\partial [N(x, P)]_h^L}{\partial x},$$
$$\frac{\partial [y_T(x, P)]_h^U}{\partial x} = [N(x, P)]_h^U + (x - a) \cdot \frac{\partial [N(x, P)]_h^U}{\partial x},$$

thus the expressions in (47) and (48) are applicable here. A learning algorithm is derived in Appendix A.

3.2. Partially fuzzy neural networks

One drawback of fully fuzzy neural networks with fuzzy connection weights is the long computation time; another is that the learning algorithm is complicated. To reduce the complexity of the learning algorithm, we propose a partially fuzzy neural network (PFNN) architecture in which the connection weights to the output unit are fuzzy numbers, while the connection weights and biases to the hidden units are real numbers [25]. Since we had good simulation results even from partially fuzzy three-layer neural networks, we do not think that the extension of our learning algorithm to neural networks with more than three layers is an attractive research direction. The input-output relation of each unit of our partially fuzzy neural network in (35)-(40) can be rewritten for h-level sets as follows:

Input unit:
$$o = x. \tag{49}$$

Hidden units:
$$z_j = s(net_j), \quad j = 1, \ldots, m, \tag{50}$$
$$net_j = o \cdot w_j + b_j. \tag{51}$$

Output unit:
$$[N]_h = [[N]_h^L, [N]_h^U] = \left[ \sum_{j=1}^{m} [V_j]_h^L \cdot z_j,\; \sum_{j=1}^{m} [V_j]_h^U \cdot z_j \right]. \tag{52}$$

The error function that must be minimized for problem (28) is of the form

$$e = \sum_{i=1}^{g} e_i = \sum_{i=1}^{g} \sum_h e_{ih} = \sum_{i=1}^{g} \sum_h \{e_{ih}^L + e_{ih}^U\}, \tag{53}$$

where

$$e_{ih}^L = \frac{([dy_T(x_i, P)/dx]_h^L - [f(x_i, y_T(x_i, P))]_h^L)^2}{2}, \tag{54}$$
$$e_{ih}^U = \frac{([dy_T(x_i, P)/dx]_h^U - [f(x_i, y_T(x_i, P))]_h^U)^2}{2}, \tag{55}$$

and $\{x_i\}_{i=1}^{g}$ are discrete points belonging to the interval [a, b]; in the cost function (53), $e_{ih}^L$ and $e_{ih}^U$ can be viewed as the squared errors for the lower limits and the upper limits of the h-level sets, respectively. It is easy to express the first derivative of N(x, P) in terms of the derivative of the sigmoid function, i.e.,

$$\frac{\partial [N]_h^L}{\partial x} = \sum_{j=1}^{m} [V_j]_h^L \cdot \frac{\partial z_j}{\partial net_j} \cdot \frac{\partial net_j}{\partial x} = \sum_{j=1}^{m} [V_j]_h^L \cdot z_j (1 - z_j) \cdot w_j, \tag{56}$$

$$\frac{\partial [N]_h^U}{\partial x} = \sum_{j=1}^{m} [V_j]_h^U \cdot \frac{\partial z_j}{\partial net_j} \cdot \frac{\partial net_j}{\partial x} = \sum_{j=1}^{m} [V_j]_h^U \cdot z_j (1 - z_j) \cdot w_j. \tag{57}$$

Now, differentiating the trial function $y_T(x, P)$ in (54) and (55), we obtain

$$\frac{\partial [y_T(x, P)]_h^L}{\partial x} = [N(x, P)]_h^L + (x - a) \cdot \frac{\partial [N(x, P)]_h^L}{\partial x},$$
$$\frac{\partial [y_T(x, P)]_h^U}{\partial x} = [N(x, P)]_h^U + (x - a) \cdot \frac{\partial [N(x, P)]_h^U}{\partial x},$$

thus the expressions in (56) and (57) are applicable here. A learning algorithm is derived in Appendix B.
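The whole computation needed by this cost function can be sketched for one grid point and one level h as follows (a minimal sketch with hypothetical names; `fL` and `fU` are assumed callbacks returning the level bounds of f(x, y_T)):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def pfnn_level(x, w, b, VL, VU):
    """Eqs. (49)-(52) and (56)-(57) at one level h: crisp hidden layer,
    interval output weights [V_j]_h = (VL[j], VU[j]).
    Returns [N]_h^L, [N]_h^U and their derivatives with respect to x."""
    z = [sigmoid(x * wj + bj) for wj, bj in zip(w, b)]        # Eqs. (50)-(51)
    NL = sum(vl * zj for vl, zj in zip(VL, z))                # Eq. (52), z_j >= 0
    NU = sum(vu * zj for vu, zj in zip(VU, z))
    dNL = sum(vl * zj * (1 - zj) * wj for vl, zj, wj in zip(VL, z, w))  # Eq. (56)
    dNU = sum(vu * zj * (1 - zj) * wj for vu, zj, wj in zip(VU, z, w))  # Eq. (57)
    return NL, NU, dNL, dNU

def error_terms(x, a, AL, AU, w, b, VL, VU, fL, fU):
    """Eqs. (54)-(55) at one point and one level, for the trial function
    y_T = A + (x - a) N(x, P) of Eq. (41); AL, AU are the level bounds
    of the fuzzy initial value A."""
    NL, NU, dNL, dNU = pfnn_level(x, w, b, VL, VU)
    yL = AL + (x - a) * NL          # trial solution, lower bound
    yU = AU + (x - a) * NU          # trial solution, upper bound
    dyL = NL + (x - a) * dNL        # d y_T / dx, lower bound
    dyU = NU + (x - a) * dNU        # d y_T / dx, upper bound
    return (0.5 * (dyL - fL(x, yL, yU)) ** 2,
            0.5 * (dyU - fU(x, yL, yU)) ** 2)
```

For Example 3.1 below, where f(x, y) = y, one would take `fL = lambda x, yL, yU: yL` and `fU = lambda x, yL, yU: yU`, and sum the two returned terms over the grid points and the chosen levels to obtain e of Eq. (53).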



3.3. Numerical results

To illustrate the technique proposed in this paper, the following examples are considered.

Example 3.1 ([36]; growth/decay model). Consider the initial value problem (28) over the interval [0, 0.5]:

$$\frac{dy(x)}{dx} = y, \quad x \in [0, 0.5], \qquad y(0) = (0.25, 1.25, 2).$$

We know that $y(x) = (0.25, 1.25, 2)\exp(x)$. The fuzzy trial function for this problem is

$$y_T(x, P) = (0.25, 1.25, 2) + x \cdot N(x, P).$$

Here, the dimension of the PFNNM is $1 \times 5 \times 1$. The error function for the m = 5 sigmoid units in the hidden layer and for g = 6 equally spaced points inside the interval [0, 0.5] is trained. In the computer simulation of this section, we use the following specifications of the learning algorithm:

(1) Number of hidden units: five units.
(2) Stopping condition: 100 iterations of the learning algorithm.
(3) Learning constant: $\eta = 0.3$.
(4) Momentum constant: $\alpha = 0.2$.
(5) Initial values of the weights and biases of the PFNNM are shown in Table 1, where we suppose $V_i = (v_i^{(1)}, v_i^{(2)}, v_i^{(3)})$ for $i = 1, \ldots, 5$.

Table 1. The initial values of weights.

i          1       2       3       4       5
v(1)_i    -0.5    -0.5    -0.5    -0.5    -0.5
v(2)_i     0       0       0       0       0
v(3)_i     0.5     0.5     0.5     0.5     0.5
w_i        0       0       0       0       0
b_i        0       0       0       0       0

We apply the proposed method to the approximate realization of the solution of problem (28). Fuzzy weights from the trained fuzzy neural network are shown in Table 2. The analytical solution and the fuzzy trial function are shown in Fig. 3 for x = 0.5.

Table 2. The values of weights for example 3.1.

i          1         2         3         4         5
v(1)_i    -0.0033    1.3589    0.1712   -1.2045    0.1709
v(2)_i     0.1956    3.3579    3.2430    3.7247    0.8372
v(3)_i     0.3534    4.1001    4.3200    9.5713    1.2107
w_i        3.1268    0.8666   -0.4003    0.5611   -2.0061
b_i       -1.2037   -1.1763   -4.9043   -2.4606   -1.9617

Fig. 3. Analytical solution and approximate solution by PFNNM for example 3.1.
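As a quick reference check (a sketch, not the paper's code), the exact level sets $[y(x)]_h = [(0.25 + h)e^x, (2 - 0.75h)e^x]$ can be evaluated directly at the comparison point x = 0.5:

```python
import math

# Level sets of the triangular initial value (0.25, 1.25, 2) are
# [A]_h = [0.25 + h, 2 - 0.75 h], so [y(x)]_h = [A]_h * exp(x).
for h in (0.0, 0.5, 1.0):
    lo = (0.25 + h) * math.exp(0.5)
    hi = (2.0 - 0.75 * h) * math.exp(0.5)
    print(f"h = {h:.1f}: [{lo:.4f}, {hi:.4f}]")
# h = 0.0 gives the support [0.4122, 3.2974]; at h = 1.0 both bounds
# meet at the core value 1.25 * exp(0.5), about 2.0609.
```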



Example 3.2. Consider the nonlinear differential equation (28) over the interval [0, 1]:

$$\frac{dy(x)}{dx} = \exp(-y^2(x)), \quad x \in [0, 1], \qquad y(0) = (0.75, 1, 1.5).$$

Since the exact solution cannot be calculated analytically, we obtain a fuzzy neural network approximation for y(1). The fuzzy trial function for this problem is $y_T(x, P) = (0.75, 1, 1.5) + x \cdot N(x, P)$.

Here, the dimension of the PFNNM is $1 \times 5 \times 1$. The error function for the m = 5 sigmoid units in the hidden layer and for g = 6 equally spaced points inside the interval [0, 1] is trained. In the computer simulation of this section, we use the following specifications of the learning algorithm:

(1) Number of hidden units: five units.
(2) Stopping condition: 100 iterations of the learning algorithm.
(3) Learning constant: $\eta = 0.3$.
(4) Momentum constant: $\alpha = 0.2$.
(5) Initial values of the weights and biases of the PFNNM are shown in Table 1, where we suppose $V_i = (v_i^{(1)}, v_i^{(2)}, v_i^{(3)})$ for $i = 1, \ldots, 5$.

We apply the proposed method to the approximate realization of the solution of problem (28). Fuzzy weights from the trained fuzzy neural network are shown in Table 3. By using Table 3, the fuzzy approximate solution is (1.1449, 1.2903, 1.6014).

Table 3. The values of weights for example 3.2.

i          1          2          3          4          5
v(1)_i    -1.006     10.0789    10.4149    11.3400     3.9762
v(2)_i    -0.9504    10.4519    10.6509    12.4322     4.0697
v(3)_i    -0.1002    11.1001    10.8400    12.9713     5.2107
w_i        7.7483    -0.1238     0.0826     2.5070    -7.7486
b_i        0.3710    -1.8912   -15.0217    -9.5468    -8.6859

Example 3.3 ([12]; an engineering application). Consider the following mixing problem. A tank initially contains 300 gal of brine in which c lb of salt is dissolved. Brine with a concentration of k lb of salt per gallon enters the tank at 3 gal/min, and the well-stirred mixture leaves at the same rate of 3 gal/min (see Fig. 4). Let y(x) denote the pounds of salt in the tank at any time $x \ge 0$. Then

$$\frac{dy(x)}{dx} + \frac{1}{100} y = 3k, \quad x \in [0, 0.5], \qquad y(0) = c.$$

The uncertain initial condition and inflow concentration are modeled as the fuzzy numbers c = (1, 2, 3) and k = (1, 2, 4). The solution is

$$[y(x)]_h^L = 300 [k]_h^L + ([c]_h^L - 300 [k]_h^L) \exp(-0.01 x),$$
$$[y(x)]_h^U = 300 [k]_h^U + ([c]_h^U - 300 [k]_h^U) \exp(-0.01 x).$$

Fig. 4. Mixing problem for example 3.3.
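A short sketch (our helper, not the paper's code) evaluating this closed-form solution at x = 0.5 gives reference values for the PFNNM output reported below:

```python
import math

def y_level(x, h):
    """h-level set of the exact solution, with the triangular levels
    [c]_h = [1 + h, 3 - h] and [k]_h = [1 + h, 4 - 2h]."""
    cL, cU = 1.0 + h, 3.0 - h
    kL, kU = 1.0 + h, 4.0 - 2.0 * h
    yL = 300.0 * kL + (cL - 300.0 * kL) * math.exp(-0.01 * x)
    yU = 300.0 * kU + (cU - 300.0 * kU) * math.exp(-0.01 * x)
    return yL, yU

print(y_level(0.5, 0.0))   # support: approximately (2.4913, 8.9701)
print(y_level(0.5, 1.0))   # core:    approximately (4.9825, 4.9825)
```

These exact values are close to the fuzzy approximate solution (2.4918, 4.9829, 8.9705) obtained from the trained network below.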

Table 4. The values of weights for example 3.3.

i          1          2          3          4          5
v(1)_i     0.8767     5.2131    11.2142     3.2852    12.9864
v(2)_i     0.9485     5.6913    11.6816     3.6599    13.1629
v(3)_i     1.1235     6.1242    11.9859     3.9857    13.4563
w_i       -0.4305     1.3566    -1.5876    -2.0633    -5.1036
b_i        9.4410    -0.1003   -17.7304     0.5164    -5.8914

The fuzzy trial function for this problem is $y_T(x, P) = (1, 2, 3) + x \cdot N(x, P)$. Here, the dimension of the PFNNM is $1 \times 5 \times 1$. The error function for the m = 5 sigmoid units in the hidden layer and for g = 6 equally spaced points inside the interval [0, 0.5] is trained. In the computer simulation of this section, we use the following specifications of the learning algorithm:

(1) Number of hidden units: five units.
(2) Stopping condition: 100 iterations of the learning algorithm.
(3) Learning constant: $\eta = 0.3$.
(4) Momentum constant: $\alpha = 0.2$.
(5) Initial values of the weights and biases of the PFNNM are shown in Table 1, where we suppose $V_i = (v_i^{(1)}, v_i^{(2)}, v_i^{(3)})$ for $i = 1, \ldots, 5$.

We apply the proposed method to the approximate realization of the solution of example 3.3. Fuzzy weights from the trained fuzzy neural network are shown in Table 4. By using Table 4, the fuzzy approximate solution is (2.4918, 4.9829, 8.9705) for x = 0.5.

4. Summary and conclusions

Solving fuzzy differential equations (FDEs) by universal approximators, that is, by the FNNM, is presented in this paper. The problem formulation of the proposed method is quite straightforward: to obtain the best approximate solution of an FDE, the adjustable parameters of the FNNM are systematically adjusted by the learning algorithm. We derived a learning algorithm for the fuzzy weights of three-layer feedforward fuzzy neural networks whose input-output relations are defined by the extension principle. The effectiveness of the derived learning algorithm was demonstrated by computer simulations on numerical examples, and we presented an application example in engineering.

Our computer simulations in this paper were performed for three-layer feedforward neural networks using the back-propagation-type learning algorithm. If we use other learning algorithms, we may obtain different simulation results. For example, some global learning algorithms such as genetic algorithms may train non-fuzzy connection weights much better than the back-propagation-type learning algorithm for fuzzy mappings of triangular-shape fuzzy numbers with increasing fuzziness. The use of more general network architectures, however, makes the back-propagation-type learning algorithm much more complicated. Since we had good simulation results even from partially fuzzy three-layer neural networks, we do not think that the extension of our learning algorithm to neural networks with more than three layers is an attractive research direction; good simulation results were obtained by this network in shorter computation times than fully fuzzy neural networks required in our computer simulations.

This paper is one of the first attempts to derive learning algorithms for fuzzy neural networks with real inputs, fuzzy outputs, and weights that may be symmetric triangular, asymmetric triangular or asymmetric trapezoidal fuzzy numbers, or real numbers. Extensions to the case of more


general fuzzy weights are left for future studies. This paper can serve as a good starting point for such extensions.

Appendix A. Derivation of a learning algorithm in fuzzy neural networks

Let us denote the fuzzy connection weight $V_j$ by its parameter values as $V_j = (v_j^{(1)}, \ldots, v_j^{(q)}, \ldots, v_j^{(r)})$, where $V_j$ is a weight parameter from the jth unit in the hidden layer to the output layer. The amount of modification of each parameter value is written as [23,25,45]

$$v_j^{(q)}(t + 1) = v_j^{(q)}(t) + \Delta v_j^{(q)}(t),$$
$$\Delta v_j^{(q)}(t) = -\eta \sum_{i=1}^{g} \frac{\partial e_{ih}}{\partial v_j^{(q)}} + \alpha \cdot \Delta v_j^{(q)}(t - 1),$$

where t indexes the number of adjustments, $\eta$ is a learning rate (a positive real number) and $\alpha$ is a momentum constant (a positive real number).

Thus our problem is to calculate the derivatives $\partial e_{ih}/\partial v_j^{(q)}$, which we rewrite as follows:

$$\frac{\partial e_{ih}}{\partial v_j^{(q)}} = \frac{\partial e_{ih}}{\partial [V_j]_h^L} \cdot \frac{\partial [V_j]_h^L}{\partial v_j^{(q)}} + \frac{\partial e_{ih}}{\partial [V_j]_h^U} \cdot \frac{\partial [V_j]_h^U}{\partial v_j^{(q)}}.$$

In this formulation, $\partial [V_j]_h^L/\partial v_j^{(q)}$ and $\partial [V_j]_h^U/\partial v_j^{(q)}$ are easily calculated from the membership function of the fuzzy connection weight $V_j$. For example, when the fuzzy connection weight $V_j$ is trapezoidal (i.e., $V_j = (v_j^{(1)}, v_j^{(2)}, v_j^{(3)}, v_j^{(4)})$), they are calculated as follows:

$$\frac{\partial [V_j]_h^L}{\partial v_j^{(1)}} = 1 - h, \quad \frac{\partial [V_j]_h^L}{\partial v_j^{(2)}} = h, \quad \frac{\partial [V_j]_h^L}{\partial v_j^{(3)}} = 0, \quad \frac{\partial [V_j]_h^L}{\partial v_j^{(4)}} = 0,$$
$$\frac{\partial [V_j]_h^U}{\partial v_j^{(1)}} = 0, \quad \frac{\partial [V_j]_h^U}{\partial v_j^{(2)}} = 0, \quad \frac{\partial [V_j]_h^U}{\partial v_j^{(3)}} = h, \quad \frac{\partial [V_j]_h^U}{\partial v_j^{(4)}} = 1 - h.$$

These derivatives are calculated from the following relation between the h-level set of the fuzzy connection weight $V_j$ and its parameter values:

$$[V_j]_h^L = (1 - h) \cdot v_j^{(1)} + h \cdot v_j^{(2)}, \qquad [V_j]_h^U = h \cdot v_j^{(3)} + (1 - h) \cdot v_j^{(4)}.$$

When the fuzzy connection weight $V_j$ is a symmetric triangular fuzzy number, the following relations hold for its h-level set $[V_j]_h = [[V_j]_h^L, [V_j]_h^U]$:

$$[V_j]_h^L = \left(1 - \frac{h}{2}\right) v_j^{(1)} + \frac{h}{2} v_j^{(3)}, \qquad [V_j]_h^U = \frac{h}{2} v_j^{(1)} + \left(1 - \frac{h}{2}\right) v_j^{(3)}.$$

Therefore,

$$\frac{\partial [V_j]_h^L}{\partial v_j^{(1)}} = 1 - \frac{h}{2}, \quad \frac{\partial [V_j]_h^U}{\partial v_j^{(1)}} = \frac{h}{2}, \quad \frac{\partial [V_j]_h^L}{\partial v_j^{(3)}} = \frac{h}{2}, \quad \frac{\partial [V_j]_h^U}{\partial v_j^{(3)}} = 1 - \frac{h}{2},$$

and $v_j^{(2)}(t + 1)$ is updated by the following rule:

$$v_j^{(2)}(t + 1) = \frac{v_j^{(1)}(t + 1) + v_j^{(3)}(t + 1)}{2}.$$

On the other hand, the derivatives $\partial e_{ih}/\partial [V_j]_h^L$ and $\partial e_{ih}/\partial [V_j]_h^U$ are independent of the shape of the fuzzy connection weight. They can be calculated from the cost function $e_{ih}$ using the input-output relation of our fuzzy neural network for the h-level sets. When we use the cost function with the weighting scheme in (44), $\partial e_{ih}/\partial [V_j]_h^L$ and $\partial e_{ih}/\partial [V_j]_h^U$ are calculated as follows.

[Calculation of $\partial e_{ih}/\partial [V_j]_h^L$]

(i) If $[V_j]_h^L \ge 0$, then

$$\frac{\partial e_{ih}}{\partial [V_j]_h^L} = \delta^L \cdot \left( [Z_j]_h^L + (x_i - a) \cdot \frac{\partial [Z_j]_h^L}{\partial x} - \frac{\partial [f(x_i, y_T(x_i, P))]_h^L}{\partial [y_T(x_i, P)]_h^L} \cdot (x_i - a) \cdot [Z_j]_h^L \right),$$

where

$$\delta^L = \left( \left[ \frac{dy_T(x_i, P)}{dx} \right]_h^L - [f(x_i, y_T(x_i, P))]_h^L \right).$$

(ii) If $[V_j]_h^L < 0$, then

$$\frac{\partial e_{ih}}{\partial [V_j]_h^L} = \delta^L \cdot \left( [Z_j]_h^U + (x_i - a) \cdot \frac{\partial [Z_j]_h^U}{\partial x} - \frac{\partial [f(x_i, y_T(x_i, P))]_h^L}{\partial [y_T(x_i, P)]_h^L} \cdot (x_i - a) \cdot [Z_j]_h^U \right).$$

[Calculation of $\partial e_{ih}/\partial [V_j]_h^U$]

(i) If $[V_j]_h^U \ge 0$, then

$$\frac{\partial e_{ih}}{\partial [V_j]_h^U} = \delta^U \cdot \left( [Z_j]_h^U + (x_i - a) \cdot \frac{\partial [Z_j]_h^U}{\partial x} - \frac{\partial [f(x_i, y_T(x_i, P))]_h^U}{\partial [y_T(x_i, P)]_h^U} \cdot (x_i - a) \cdot [Z_j]_h^U \right),$$

where

$$\delta^U = \left( \left[ \frac{dy_T(x_i, P)}{dx} \right]_h^U - [f(x_i, y_T(x_i, P))]_h^U \right).$$

(ii) If $[V_j]_h^U < 0$, then

$$\frac{\partial e_{ih}}{\partial [V_j]_h^U} = \delta^U \cdot \left( [Z_j]_h^L + (x_i - a) \cdot \frac{\partial [Z_j]_h^L}{\partial x} - \frac{\partial [f(x_i, y_T(x_i, P))]_h^U}{\partial [y_T(x_i, P)]_h^U} \cdot (x_i - a) \cdot [Z_j]_h^L \right).$$

In our fuzzy neural network, the connection weights and biases to the hidden units are updated in the same manner as the parameter values of the fuzzy connection weights $V_j$:

$$w_j^{(q)}(t + 1) = w_j^{(q)}(t) + \Delta w_j^{(q)}(t),$$
$$\Delta w_j^{(q)}(t) = -\eta \sum_{i=1}^{g} \frac{\partial e_{ih}}{\partial w_j^{(q)}} + \alpha \cdot \Delta w_j^{(q)}(t - 1).$$

Thus our problem is to calculate the derivatives $\partial e_{ih}/\partial w_j^{(q)}$, which we rewrite as

$$\frac{\partial e_{ih}}{\partial w_j^{(q)}} = \frac{\partial e_{ih}}{\partial [W_j]_h^L} \cdot \frac{\partial [W_j]_h^L}{\partial w_j^{(q)}} + \frac{\partial e_{ih}}{\partial [W_j]_h^U} \cdot \frac{\partial [W_j]_h^U}{\partial w_j^{(q)}}.$$

In this formulation, $\partial [W_j]_h^L/\partial w_j^{(q)}$ and $\partial [W_j]_h^U/\partial w_j^{(q)}$ are easily calculated from the membership function of the fuzzy connection weight $W_j$. The derivatives $\partial e_{ih}/\partial [W_j]_h^L$ and $\partial e_{ih}/\partial [W_j]_h^U$ can be calculated from the cost function $e_{ih}$ using the input-output relation of our fuzzy neural network for the h-level sets. When we use the cost function with the weighting scheme in (44), $\partial e_{ih}/\partial [W_j]_h^L$ is calculated as follows.

[Calculation of $\partial e_{ih}/\partial [W_j]_h^L$]

(i) If $[V_j]_h^L \ge 0$, then

$$\frac{\partial e_{ih}}{\partial [W_j]_h^L} = \delta^L \cdot \Big( [V_j]_h^L [Z_j]_h^L (1 - [Z_j]_h^L) x_i + (x_i - a) [V_j]_h^L [Z_j]_h^L + (x_i - a) x_i [V_j]_h^L [Z_j]_h^L (1 - [Z_j]_h^L) [W_j]_h^L$$
$$- (x_i - a) [V_j]_h^L ([Z_j]_h^L)^2 - 2 (x_i - a) x_i [V_j]_h^L ([Z_j]_h^L)^2 (1 - [Z_j]_h^L) [W_j]_h^L - \frac{\partial [f(x_i, y_T(x_i, P))]_h^L}{\partial [y_T(x_i, P)]_h^L} \cdot (x_i - a) [V_j]_h^L [Z_j]_h^L (1 - [Z_j]_h^L) x_i \Big).$$

(ii) If $[V_j]_h^U < 0$, then

$$\frac{\partial e_{ih}}{\partial [W_j]_h^L} = \delta^U \cdot \Big( [V_j]_h^U [Z_j]_h^L (1 - [Z_j]_h^L) x_i + (x_i - a) [V_j]_h^U [Z_j]_h^L + (x_i - a) x_i [V_j]_h^U [Z_j]_h^L (1 - [Z_j]_h^L) [W_j]_h^L$$
$$- (x_i - a) [V_j]_h^U ([Z_j]_h^L)^2 - 2 (x_i - a) x_i [V_j]_h^U ([Z_j]_h^L)^2 (1 - [Z_j]_h^L) [W_j]_h^L - \frac{\partial [f(x_i, y_T(x_i, P))]_h^U}{\partial [y_T(x_i, P)]_h^U} \cdot (x_i - a) [V_j]_h^U [Z_j]_h^L (1 - [Z_j]_h^L) x_i \Big).$$

In the other cases, $\partial e_{ih}/\partial [W_j]_h^U$ and the derivatives for the fuzzy biases to the hidden units are obtained in the same manner as those for the fuzzy connection weights to the hidden units and the fuzzy connection weights to the output unit.

Appendix B. Derivation of a learning algorithm in partially fuzzy neural networks

Let us denote the fuzzy connection weight $V_j$ to the output unit by its parameter values as $V_j = (v_j^{(1)}, \ldots, v_j^{(q)}, \ldots, v_j^{(r)})$. The amount of modification of each parameter value is written as [23,45]

$$v_j^{(q)}(t + 1) = v_j^{(q)}(t) + \Delta v_j^{(q)}(t),$$
$$\Delta v_j^{(q)}(t) = -\eta \sum_{i=1}^{g} \frac{\partial e_{ih}}{\partial v_j^{(q)}} + \alpha \cdot \Delta v_j^{(q)}(t - 1),$$

where t indexes the number of adjustments, $\eta$ is a learning rate and $\alpha$ is a momentum constant.

Thus our problem is to calculate the derivatives $\partial e_{ih}/\partial v_j^{(q)}$, which we rewrite as

$$\frac{\partial e_{ih}}{\partial v_j^{(q)}} = \frac{\partial e_{ih}}{\partial [V_j]_h^L} \cdot \frac{\partial [V_j]_h^L}{\partial v_j^{(q)}} + \frac{\partial e_{ih}}{\partial [V_j]_h^U} \cdot \frac{\partial [V_j]_h^U}{\partial v_j^{(q)}}.$$

In this formulation, $\partial [V_j]_h^L/\partial v_j^{(q)}$ and $\partial [V_j]_h^U/\partial v_j^{(q)}$ are easily calculated from the membership function of the fuzzy connection weight $V_j$. On the other hand, the derivatives $\partial e_{ih}/\partial [V_j]_h^L$ and $\partial e_{ih}/\partial [V_j]_h^U$ are independent of the shape of the fuzzy connection weight. They can be calculated from the cost function $e_{ih}$ using the input-output relation of our partially fuzzy neural network for the h-level sets. When we use the cost function with the weighting scheme in (53), they are calculated as follows.

[Calculation of $\partial e_{ih}/\partial [V_j]_h^L$]

$$\frac{\partial e_{ih}}{\partial [V_j]_h^L} = \delta^L \cdot \left( \frac{\partial [N(x_i, P)]_h^L}{\partial [V_j]_h^L} + (x_i - a) \cdot \frac{\partial z_j}{\partial x} - \frac{\partial [f(x_i, y_T(x_i, P))]_h^L}{\partial [y_T(x_i, P)]_h^L} \cdot \frac{\partial [y_T(x_i, P)]_h^L}{\partial [V_j]_h^L} \right),$$

where

$$\delta^L = \left( \left[ \frac{dy_T(x_i, P)}{dx} \right]_h^L - [f(x_i, y_T(x_i, P))]_h^L \right), \qquad \frac{\partial [y_T(x_i, P)]_h^L}{\partial [V_j]_h^L} = (x_i - a) \cdot z_j, \qquad \frac{\partial [N(x_i, P)]_h^L}{\partial [V_j]_h^L} = z_j.$$

[Calculation of $\partial e_{ih}/\partial [V_j]_h^U$]

$$\frac{\partial e_{ih}}{\partial [V_j]_h^U} = \delta^U \cdot \left( \frac{\partial [N(x_i, P)]_h^U}{\partial [V_j]_h^U} + (x_i - a) \cdot \frac{\partial z_j}{\partial x} - \frac{\partial [f(x_i, y_T(x_i, P))]_h^U}{\partial [y_T(x_i, P)]_h^U} \cdot \frac{\partial [y_T(x_i, P)]_h^U}{\partial [V_j]_h^U} \right),$$

where

$$\delta^U = \left( \left[ \frac{dy_T(x_i, P)}{dx} \right]_h^U - [f(x_i, y_T(x_i, P))]_h^U \right), \qquad \frac{\partial [y_T(x_i, P)]_h^U}{\partial [V_j]_h^U} = (x_i - a) \cdot z_j, \qquad \frac{\partial [N(x_i, P)]_h^U}{\partial [V_j]_h^U} = z_j.$$
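The update rule above is plain gradient descent with momentum applied to each parameter of a fuzzy weight. A minimal sketch (the names are ours) for a symmetric triangular weight $V_j = (v^{(1)}, v^{(2)}, v^{(3)})$:

```python
def update_parameter(v, grad_sum, prev_delta, eta=0.3, alpha=0.2):
    """v(t+1) = v(t) + dv(t) with dv(t) = -eta * sum_i de_ih/dv
    + alpha * dv(t-1); eta and alpha match Section 3.3's settings."""
    delta = -eta * grad_sum + alpha * prev_delta
    return v + delta, delta

# For a symmetric triangular weight the gradients w.r.t. v(1) and v(3)
# combine the level derivatives with the factors derived in Appendix A:
#   de_ih/dv1 = de_ih/d[V]_h^L * (1 - h/2) + de_ih/d[V]_h^U * (h/2),
#   de_ih/dv3 = de_ih/d[V]_h^L * (h/2)     + de_ih/d[V]_h^U * (1 - h/2),
# while the center is re-set to the midpoint after each adjustment:
def recenter(v1, v3):
    return 0.5 * (v1 + v3)   # v(2)(t+1) of Appendix A
```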


In our partially fuzzy neural network, the connection weights and biases to the hidden units are real numbers. The non-fuzzy connection weight $w_j$ to the jth hidden unit is updated in the same manner as the parameter values of the fuzzy connection weight $V_j$:

$$w_j(t + 1) = w_j(t) + \Delta w_j(t),$$
$$\Delta w_j(t) = -\eta \sum_{i=1}^{g} \frac{\partial e_{ih}}{\partial w_j} + \alpha \cdot \Delta w_j(t - 1).$$

The derivative $\partial e_{ih}/\partial w_j$ can be calculated from the cost function $e_{ih}$ using the input-output relation of our partially fuzzy neural network for the h-level sets. When we use the cost function with the weighting scheme in (53), $\partial e_{ih}/\partial w_j$ is calculated as follows:

$$\frac{\partial e_{ih}}{\partial w_j} = \delta^L \cdot \Big( \frac{\partial [N(x_i, P)]_h^L}{\partial w_j} + (x_i - a) [V_j]_h^L z_j + (x_i - a) x_i [V_j]_h^L z_j (1 - z_j) w_j - (x_i - a) [V_j]_h^L z_j^2 - 2 (x_i - a) x_i [V_j]_h^L z_j^2 (1 - z_j) w_j$$
$$- \frac{\partial [f(x_i, y_T(x_i, P))]_h^L}{\partial [y_T(x_i, P)]_h^L} \cdot \frac{\partial [y_T(x_i, P)]_h^L}{\partial w_j} - \frac{\partial [f(x_i, y_T(x_i, P))]_h^L}{\partial [y_T(x_i, P)]_h^U} \cdot \frac{\partial [y_T(x_i, P)]_h^U}{\partial w_j} \Big)$$
$$+ \delta^U \cdot \Big( \frac{\partial [N(x_i, P)]_h^U}{\partial w_j} + (x_i - a) [V_j]_h^U z_j + (x_i - a) x_i [V_j]_h^U z_j (1 - z_j) w_j - (x_i - a) [V_j]_h^U z_j^2 - 2 (x_i - a) x_i [V_j]_h^U z_j^2 (1 - z_j) w_j$$
$$- \frac{\partial [f(x_i, y_T(x_i, P))]_h^U}{\partial [y_T(x_i, P)]_h^L} \cdot \frac{\partial [y_T(x_i, P)]_h^L}{\partial w_j} - \frac{\partial [f(x_i, y_T(x_i, P))]_h^U}{\partial [y_T(x_i, P)]_h^U} \cdot \frac{\partial [y_T(x_i, P)]_h^U}{\partial w_j} \Big),$$

where

$$\frac{\partial [N(x_i, P)]_h^L}{\partial w_j} = \frac{\partial [N(x_i, P)]_h^L}{\partial z_j} \cdot \frac{\partial z_j}{\partial net_j} \cdot \frac{\partial net_j}{\partial w_j} = [V_j]_h^L \cdot z_j (1 - z_j) \cdot x_i,$$
$$\frac{\partial [N(x_i, P)]_h^U}{\partial w_j} = \frac{\partial [N(x_i, P)]_h^U}{\partial z_j} \cdot \frac{\partial z_j}{\partial net_j} \cdot \frac{\partial net_j}{\partial w_j} = [V_j]_h^U \cdot z_j (1 - z_j) \cdot x_i,$$
$$\frac{\partial [y_T(x_i, P)]_h^L}{\partial w_j} = (x_i - a) \cdot \frac{\partial [N(x_i, P)]_h^L}{\partial w_j}, \qquad \frac{\partial [y_T(x_i, P)]_h^U}{\partial w_j} = (x_i - a) \cdot \frac{\partial [N(x_i, P)]_h^U}{\partial w_j}.$$

The non-fuzzy biases to the hidden units are updated in the same manner as the non-fuzzy connection weights to the hidden units.

References

[1] S. Abbasbandy, J.J. Nieto, M. Alavi, Tuning of reachable set in one dimensional fuzzy differential inclusions, Chaos, Solitons & Fractals 26 (2005) 1337-1341.
[2] S. Abbasbandy, T. Allahviranloo, O. Lopez-Pouso, J.J. Nieto, Numerical methods for fuzzy differential inclusions, Computers & Mathematics with Applications 48 (2004) 1633-1641.

[3] S. Abbasbandy, M. Otadi, Numerical solution of fuzzy polynomials by fuzzy neural network, Applied Mathematics and Computation 181 (2006) 1084-1089.
[4] S. Abbasbandy, M. Otadi, M. Mosleh, Numerical solution of a system of fuzzy polynomials by fuzzy neural network, Information Sciences 178 (2008) 1948-1960.
[5] G. Alefeld, J. Herzberger, Introduction to Interval Computations, Academic Press, New York, 1983.
[6] T. Allahviranloo, E. Ahmady, N. Ahmady, Nth-order fuzzy linear differential equations, Information Sciences 178 (2008) 1309-1324.
[7] T. Allahviranloo, N. Ahmady, E. Ahmady, Numerical solution of fuzzy differential equations by predictor-corrector method, Information Sciences 177 (2007) 1633-1647.
[8] H.Md. Azamathulla, A.Ab. Ghani, An ANFIS-based approach for predicting the scour depth at culvert outlet, ASCE Journal of Pipeline Systems Engineering Practice 35 (2011).
[9] H.Md. Azamathulla, A.Ab. Ghani, N.A. Zakaria, ANFIS based approach for predicting maximum scour location of spillway, Water Management, ICE London 162 (6) (2012) 399-407.
[10] H.Md. Azamathulla, C.C. Kiat, A.Ab. Ghani, Z.A. Hasan, N.A. Zakaria, An ANFIS-based approach for predicting the bed load for moderately-sized rivers, Journal of Hydro-Environment Research 3 (2009) 35-44.
[11] B. Bede, I.J. Rudas, A.L. Bencsik, First order linear fuzzy differential equations under generalized differentiability, Information Sciences 177 (2007) 1648-1662.
[12] J.J. Buckley, T. Feuring, Fuzzy differential equations, Fuzzy Sets and Systems 110 (2000) 69-77.
[13] S.L. Chang, L.A. Zadeh, On fuzzy mapping and control, IEEE Transactions on Systems, Man and Cybernetics 2 (1972) 30-34.
[14] M. Chen, C. Wu, X. Xue, G. Liu, On fuzzy boundary value problems, Information Sciences 178 (2008) 1877-1892.
[15] D. Dubois, H. Prade, Towards fuzzy differential calculus. Part 3: Differentiation, Fuzzy Sets and Systems 8 (1982) 225-233.
[16] W. Fei, Existence and uniqueness of solution for fuzzy random differential equations with non-Lipschitz coefficients, Information Sciences 177 (2007) 4329-4337.
[17] R. Goetschel, W. Voxman, Elementary fuzzy calculus, Fuzzy Sets and Systems 18 (1986) 31-43.
[18] D. Gottlieb, S.A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications, CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 26, SIAM, Philadelphia, 1977.
[19] M.T. Hagan, H.B. Demuth, M. Beale, Neural Network Design, PWS Publishing Company, Massachusetts, 1996.
[20] Y. Hayashi, J.J. Buckley, E. Czogala, Fuzzy neural network with fuzzy signals and weights, International Journal of Intelligent Systems 8 (1993) 527-537.
[21] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, New Jersey, 1999.
[22] K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators, Neural Networks 2 (1989) 359-366.
[23] H. Ishibuchi, K. Kwon, H. Tanaka, A learning algorithm of fuzzy neural networks with triangular fuzzy weights, Fuzzy Sets and Systems 71 (1995) 277-293.
[24] H. Ishibuchi, K. Morioka, I.B. Turksen, Learning by fuzzified neural networks, International Journal of Approximate Reasoning 13 (1995) 327-358.
[25] H. Ishibuchi, M. Nii, Numerical analysis of the learning of fuzzified neural networks from fuzzy if-then rules, Fuzzy Sets and Systems 120 (2001) 281-307.
[26] H. Ishibuchi, H. Okada, H. Tanaka, Fuzzy neural networks with fuzzy weights and fuzzy biases, in: Proceedings of ICNN'93, San Francisco, 1993, pp. 1650-1655.
[27] H. Ishibuchi, H. Tanaka, H. Okada, Fuzzy neural networks with fuzzy weights and fuzzy biases, in: Proceedings of the 1993 IEEE International Conference on Neural Networks, 1993, pp. 1650-1655.
[28] O. Kaleva, Fuzzy differential equations, Fuzzy Sets and Systems 24 (1987) 301-317.
[29] T. Khanna, Foundations of Neural Networks, Addison-Wesley, Reading, MA, 1990.
[30] P.V. Krishnamraju, J.J. Buckley, K.D. Reilly, Y. Hayashi, Genetic learning algorithms for fuzzy neural nets, in: Proceedings of the 1994 IEEE International Conference on Fuzzy Systems, 1994, pp. 1969-1974.
[31] J.D. Lambert, Computational Methods in Ordinary Differential Equations, John Wiley & Sons, New York, 1983.
[32] A. Lapedes, R. Farber, How neural nets work, in: D.Z. Anderson (Ed.), Neural Information Processing Systems, AIP, 1988, pp. 442-456.
[33] H. Lee, I.S. Kang, Neural algorithms for solving differential equations, Journal of Computational Physics 91 (1990) 110-131.
[34] T. Leephakpreeda, Novel determination of differential-equation solutions: universal approximation method, Journal of Computational and Applied Mathematics 146 (2002) 443-457.
[35] R.P. Lippmann, An introduction to computing with neural nets, IEEE ASSP Magazine (1987) 4-22.
[36] M. Ma, M. Friedman, A. Kandel, Numerical solutions of fuzzy differential equations, Fuzzy Sets and Systems 105 (1999) 133-138.
[37] A. Malek, R. Shekari Beidokhti, Numerical solution for high order differential equations using a hybrid neural network-optimization method, Applied Mathematics and Computation 183 (2006) 260-271.

[38] A.J. Meade Jr., A.A. Fernandez, The numerical solution of linear ordinary differential equations by feedforward neural networks, Mathematical and Computer Modelling 19 (12) (1994) 1-25.
[39] A.J. Meade Jr., A.A. Fernandez, Solution of nonlinear ordinary differential equations by feedforward neural networks, Mathematical and Computer Modelling 20 (9) (1994) 19-44.
[40] M.T. Mizukoshi, L.C. Barros, Y. Chalco-Cano, H. Román-Flores, R.C. Bassanezi, Fuzzy differential equations and the extension principle, Information Sciences 177 (2007) 3627-3635.
[41] G. Papaschinopoulos, G. Stefanidou, P. Efraimidis, Existence, uniqueness and asymptotic behavior of the solutions of a fuzzy differential equation with piecewise constant argument, Information Sciences 177 (2007) 3855-3870.
[42] P. Picton, Neural Networks, second ed., Palgrave, Great Britain, 2000.
[43] M.L. Puri, D.A. Ralescu, Differentials of fuzzy functions, Journal of Mathematical Analysis and Applications 91 (1983) 552-558.
[44] R. Rodríguez-López, Comparison results for fuzzy differential equations, Information Sciences 178 (2008) 1756-1779.
[45] D.E. Rumelhart, J.L. McClelland, the PDP Research Group, Parallel Distributed Processing, vol. 1, MIT Press, Cambridge, MA, 1986.
[46] S. Seikkala, On the fuzzy initial value problem, Fuzzy Sets and Systems 24 (1987) 319-330.
[47] R.J. Schalkoff, Artificial Neural Networks, McGraw-Hill, New York, 1997.
[48] J. Stanley, Introduction to Neural Networks, third ed., Sierra Madre, 1990.
[49] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, second ed., Springer-Verlag, New York, 1993.
[50] C. Wu, M. Ma, On embedding problem of fuzzy number space. Part 1, Fuzzy Sets and Systems 44 (1991) 33-38.
[51] C. Wu, S. Song, Approximate solutions, existence and uniqueness of the Cauchy problem of fuzzy differential equations, Journal of Mathematical Analysis and Applications 202 (1996) 629-644.
[52] L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning. Parts 1-3, Information Sciences 8 (1975) 199-249, 301-357; 9 (1975) 43-80.
[53] L.A. Zadeh, Is there a need for fuzzy logic? Information Sciences 178 (2008) 2751-2779.