Levenberg–Marquardt method for solving systems of absolute value equations

Accepted manuscript
PII: S0377-0427(14)00543-3
DOI: http://dx.doi.org/10.1016/j.cam.2014.11.062
Reference: CAM 9907

To appear in: Journal of Computational and Applied Mathematics

Received date: 13 March 2013; revised date: 15 November 2013.

Please cite this article as: J. Iqbal, A. Iqbal, M. Arif, Levenberg–Marquardt method for solving systems of absolute value equations, Journal of Computational and Applied Mathematics (2014), http://dx.doi.org/10.1016/j.cam.2014.11.062.

Levenberg–Marquardt method for solving systems of absolute value equations

Javed Iqbal (a), Asif Iqbal (b) and Muhammad Arif (c)

(a) Department of Mathematics, COMSATS Institute of Information Technology, Kamra Road, Attock, Pakistan
(b) Department of Computer Science, Virtual University, Raiwind Road, Lahore, Punjab, Pakistan
(c) Department of Mathematics, Abdul Wali Khan University Mardan, KPK, Pakistan

E-mail: [email protected] (J. Iqbal), [email protected] (A. Iqbal), [email protected] (M. Arif)

Abstract. In this paper, we suggest and analyze the Levenberg–Marquardt method for solving the system of absolute value equations Ax − |x| = b, where A ∈ R^{n×n}, b ∈ R^n and x ∈ R^n is unknown. We present different line search methods to convey the main idea and the significant modifications. We discuss the convergence of the proposed method and consider numerical examples to illustrate the implementation and efficiency of the method. The results are encouraging and may stimulate further research in this direction.

MSC: 65K10, 65H10

Keywords: Absolute value equations; Levenberg–Marquardt method; Goldstein line search.

1 Introduction

We consider the system of absolute value equations of the type

F(x) = Ax − |x| − b = 0,    (1.1)

where A ∈ R^{n×n}, b ∈ R^n and |x| denotes the vector in R^n whose components are the absolute values of the components of x. The system (1.1) is a special case of the generalized system of absolute value equations of the form

Ax + B|x| = b,    (1.2)

where B ∈ R^{n×n}. The system of absolute value equations (1.2) was introduced by Rohn [15] and further studied in [4-8, 11-14]. The importance of the system of absolute value equations (1.1) arises from the fact that several mathematical problems, including linear programming and bimatrix games, can be reduced to a system of absolute value equations.

The Levenberg–Marquardt method [3, 9] was proposed for solving nonlinear least-squares problems. This method can be seen as a combination of the steepest descent and Gauss–Newton methods. The classical Levenberg–Marquardt method computes the search direction d_k by

d_k = −(J(x_k)^T J(x_k) + μ_k I)^{−1} J(x_k)^T F(x_k),    (1.3)

where J(x_k) = F′(x_k) denotes the Jacobian, μ_k > 0 is a parameter and I is the identity matrix of order n. In our case, F(x) denotes the system of absolute value equations defined by (1.1). In this paper, we suggest the Levenberg–Marquardt method for solving the system of absolute value equations (1.1). We consider the line search method with the Levenberg–Marquardt direction. We discuss the convergence of the proposed method in Section 3; the comparison with other methods is given in the last section.

For x ∈ R^n, sign(x) will denote a vector with components equal to 1, 0, −1 depending on whether the corresponding component of x is positive, zero or negative. The diagonal matrix D(x) corresponding to sign(x) is defined as

D(x) = ∂|x| = diag(sign(x)),

where ∂|x| represents the generalized Jacobian of |x| based on a subgradient. The line search direction method is defined by

x_{k+1} = x_k + α_k d_k,    k = 0, 1, 2, …,    (1.4)

where α_k ∈ R is a step size and d_k is the search direction. The line search may be chosen in different ways. Let R^n be an n-dimensional Euclidean space and f : R^n → R be a continuously differentiable function. The commonly used line searches are

(i) Minimization. The step size α_k is chosen by minimizing f along d_k, so that f(x_k + α_k d_k) ≤ f(x_k). This line search is discussed in detail by Noor et al. [12] for solving the absolute value equations (1.1).

(ii) Armijo [1]. Consider the scalars c_k = −g(x_k)^T d_k / (L‖d_k‖²), with L, β > 0 and ρ ∈ (0, 1). The step size α_k is the largest member of {c_k, c_k ρ, c_k ρ², …} such that

f(x_k) − f(x_{k+1}) ≥ −σ α_k g(x_k)^T d_k,

where σ ∈ (0, 1/2) and g(x_k) = f′(x_k). This line search is used in [4] for solving the system of absolute value equations (1.1).

(iii) Wolfe [20]. The step size α_k is chosen to satisfy simultaneously

f(x_k) − f(x_{k+1}) ≥ −σ α_k g(x_k)^T d_k,    (1.5)

∇f(x_k + α_k d_k)^T d_k ≥ β g(x_k)^T d_k,    (1.6)

where σ ∈ (0, 1/2), β ∈ (σ, 1) and g(x_k) = f′(x_k).

(iv) Goldstein [2]. The Goldstein line search is defined by

(1 − σ) α_k g(x_k)^T d_k + f(x_k) ≤ f(x_k + α_k d_k) ≤ σ α_k g(x_k)^T d_k + f(x_k),    (1.7)

where σ ∈ (0, 1/2) and g(x_k) = f′(x_k).
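The Goldstein conditions (1.7) bound the step from both sides: the right inequality rules out steps that give too little decrease, the left rules out steps that are too short. The paper's experiments are written in Matlab; purely as an illustration, here is a NumPy sketch of a bracketing search for such a step (the function name, the bisection strategy and the default parameters are our own choices):

```python
import numpy as np

def goldstein_step(f, g, x, d, sigma=0.25, alpha0=1.0, max_iter=50):
    """Find a step size alpha satisfying the Goldstein conditions (1.7):
    (1-sigma)*alpha*g(x)^T d + f(x) <= f(x + alpha*d) <= sigma*alpha*g(x)^T d + f(x),
    for a descent direction d (g(x)^T d < 0) and sigma in (0, 1/2).
    Bracketing/bisection sketch; assumes such a step exists."""
    fx, slope = f(x), g(x) @ d
    lo, hi = 0.0, np.inf
    alpha = alpha0
    for _ in range(max_iter):
        fa = f(x + alpha * d)
        if fa > fx + sigma * alpha * slope:           # too little decrease: shrink
            hi = alpha
        elif fa < fx + (1 - sigma) * alpha * slope:   # step too short: grow
            lo = alpha
        else:                                         # both inequalities hold
            return alpha
        alpha = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * alpha
    return alpha
```

For the quadratic f(x) = ½‖x‖² with d = −∇f(x), any α in [2σ, 2(1 − σ)] satisfies (1.7), which the sketch finds immediately.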

2 Proposed Method

We suggest the Levenberg–Marquardt method with the Goldstein line search for solving the system of absolute value equations (1.1). Let φ(x) be given by

φ(x) = (1/2)‖F(x)‖².    (2.1)

Here the Jacobian J(x) = F′(x) is Lipschitz continuous on some neighborhood N(x̄) of a solution x̄ ∈ X, that is, there exists a constant L > 0 such that

‖J(x) − J(y)‖ = ‖A − D(x) − A + D(y)‖ = ‖D(x) − D(y)‖ ≤ L‖x − y‖,    x, y ∈ N(x̄),    (2.2)

where X is the solution set of (1.1). Using the above facts, we suggest the following algorithm.
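Since F(x) = Ax − |x| − b and J(x) = A − D(x), the gradient of the merit function (2.1) is ∇φ(x) = J(x)^T F(x). As an illustration (not the paper's Matlab code; the function name is our own), a NumPy sketch that evaluates φ and ∇φ:

```python
import numpy as np

def phi_and_grad(A, b, x):
    """Merit function phi(x) = 0.5*||F(x)||^2 from (2.1) and its gradient
    grad phi(x) = J(x)^T F(x), with F(x) = Ax - |x| - b and the generalized
    Jacobian J(x) = A - D(x), D(x) = diag(sign(x))."""
    F = A @ x - np.abs(x) - b
    J = A - np.diag(np.sign(x))
    return 0.5 * F @ F, J.T @ F
```

At any x with no zero components, F is differentiable and this gradient agrees with a finite-difference quotient of φ.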

Algorithm 2.1. Choose an initial guess x_0 ∈ R^n to (1.1), β ∈ [1, 2], σ ∈ (0, 1/2) and ω ∈ (0, 1).
For k = 0, 1, …
    If ∇φ(x_k) = 0, then stop; else set
        μ_k = ‖Ax_k − |x_k| − b‖^β,
        d_k = −(J(x_k)^T J(x_k) + μ_k I)^{−1} J(x_k)^T F(x_k).
    If d_k satisfies ‖F(x_k + d_k)‖ ≤ ω‖F(x_k)‖, then set x_{k+1} = x_k + d_k;
    else set x_{k+1} = x_k + α_k d_k, where α_k is the Goldstein line search (1.7).
    Check the stopping criteria.
End for k.

In the next section, we prove the convergence of Algorithm 2.1.
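A NumPy sketch of Algorithm 2.1 follows (illustrative, not the authors' Matlab code; the tolerance `tol`, the iteration cap, and the simple Armijo-style backtracking used as a stand-in for the full Goldstein search are our own choices):

```python
import numpy as np

def lm_ave(A, b, x0, beta=2.0, omega=0.9, sigma=0.25, tol=1e-10, max_iter=100):
    """Levenberg-Marquardt sketch for Ax - |x| = b in the spirit of Algorithm 2.1.
    F(x) = Ax - |x| - b, J(x) = A - D(x) with D(x) = diag(sign(x)),
    and mu_k = ||F(x_k)||^beta.  If the full LM step d_k reduces ||F|| by the
    factor omega, it is accepted; otherwise a damped step x + alpha*d is taken
    (backtracking on phi = 0.5*||F||^2, a stand-in for the Goldstein search)."""
    x = x0.astype(float)
    F = A @ x - np.abs(x) - b
    for _ in range(max_iter):
        if np.linalg.norm(F) < tol:              # stopping criterion on ||F||
            break
        J = A - np.diag(np.sign(x))              # generalized Jacobian
        mu = np.linalg.norm(F) ** beta           # LM parameter mu_k
        d = np.linalg.solve(J.T @ J + mu * np.eye(len(x)), -J.T @ F)
        F_trial = A @ (x + d) - np.abs(x + d) - b
        if np.linalg.norm(F_trial) <= omega * np.linalg.norm(F):
            x = x + d                            # full LM step accepted
        else:
            phi = 0.5 * F @ F
            grad_d = (J.T @ F) @ d               # directional derivative of phi
            alpha = 1.0                          # backtracking fallback
            while alpha > 1e-12:
                xt = x + alpha * d
                Ft = A @ xt - np.abs(xt) - b
                if 0.5 * Ft @ Ft <= phi + sigma * alpha * grad_d:
                    break
                alpha *= 0.5
            x = x + alpha * d
        F = A @ x - np.abs(x) - b
    return x
```

Since ∇φ(x_k)^T d_k = −d_k^T(J^T J + μ_k I)d_k < 0, every d_k is a descent direction for φ, so the backtracking loop terminates.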

3 Convergence Analysis

Theorem 3.1. If d_k^T ∇φ(x_k) < 0, then the Goldstein line search satisfies

∑_{k=1}^{∞} (∇φ(x_k)^T d_k)² / ‖d_k‖² < +∞.    (3.1)

Proof. From (1.7) with g(x_k) = ∇φ(x_k) and f(x_k) = φ(x_k), we have

(1 − σ) α_k ∇φ(x_k)^T d_k ≤ φ(x_k + α_k d_k) − φ(x_k).    (3.2)

From the mean value theorem, we have

φ(x_k + α_k d_k) − φ(x_k) = α_k ∇φ(x_k + θ_k d_k)^T d_k,    (3.3)

where

θ_k ∈ (0, α_k).    (3.4)

From (2.2), (3.2) and (3.3), we have

−σ ∇φ(x_k)^T d_k ≤ L θ_k ‖d_k‖².    (3.5)

From (3.4) and (3.5), we have

α_k ≥ −σ ∇φ(x_k)^T d_k / (L‖d_k‖²).    (3.6)

From (1.7) and (3.6), we obtain

φ(x_k + α_k d_k) ≤ φ(x_k) − (σ²/L) (∇φ(x_k)^T d_k)² / ‖d_k‖².

Summing this inequality over k and using the fact that {φ(x_k)} is bounded below by zero yields (3.1). This completes the proof. □

In the next result, we prove the convergence of the proposed method.

Theorem 3.2. Suppose that {x_k} is a sequence defined by (1.4) with the Levenberg–Marquardt search direction and satisfying (1.5). Then any accumulation point x̄ of {x_k} is a stationary point of φ(x).

Proof. If ∇φ(x_k) ≠ 0, then d_k ≠ 0. From (1.3) we have

∇φ(x_k)^T d_k = −d_k^T (J(x_k)^T J(x_k) + μ_k I) d_k < 0.

Then from (1.7) and ‖F(x_k + d_k)‖ ≤ ω‖F(x_k)‖, we conclude that {φ(x_k)} is monotonically decreasing. If ‖F(x_k)‖ → 0, then any accumulation point of {x_k} is a solution of (1.1). If ‖F(x_k)‖ → η > 0, then (3.1) is satisfied for all large k. Using (3.1) and Algorithm 2.1, we have

|F(x_k)^T J(x_k) d_k| = |∇φ(x_k)^T d_k| ≥ μ_k ‖d_k‖² ≥ η^β ‖d_k‖².    (3.7)

From (3.1) and (3.7) we have

lim_{k→∞} ‖d_k‖ = 0,    (3.8)

which implies that J(x̄)^T F(x̄) = 0. Hence the result. □

4 Numerical Results

In this section, we consider several examples to show the implementation of the proposed method. All the experiments are performed on an Intel(R) Core(TM)2 2.1 GHz machine with 1 GB RAM, and the codes are written in Matlab 7.

We compare the proposed method with the Minimization Method [12] and the Interval Algorithm [18].

Example 4.1 [18]. We chose a random matrix A according to the following structure:

A = round(100*(eye(n, n) − 0.02*(2*rand(n, n) − 1))).

Then we chose a random x ∈ R^n and computed b = Ax − |x|. The computational results are given in Table 4.1.

Table 4.1.

n       Minimization Method    Interval Algorithm    Algorithm 2.1
50      7                      3                     3
500     11                     5                     4
1000    15                     6                     4
2000    20                     6                     4

In Table 4.1, n denotes the problem size, and for each method the number of iterations is given. Algorithm 2.1 converges to the solution of (1.1) in a few iterations. From Table 4.1, we conclude that the Levenberg–Marquardt method is very effective for solving large problems.
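The test data of Example 4.1 can be generated as follows (a NumPy sketch mirroring the Matlab expression above; the size n and the random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (arbitrary)
n = 50

# A = round(100*(eye(n,n) - 0.02*(2*rand(n,n) - 1))), as in Example 4.1;
# the strong diagonal keeps all singular values of A well above 1, so the
# AVE Ax - |x| = b has a unique solution.
A = np.round(100 * (np.eye(n) - 0.02 * (2 * rng.random((n, n)) - 1)))

x_true = rng.standard_normal(n)   # random solution x
b = A @ x_true - np.abs(x_true)   # right-hand side b = Ax - |x|
```

By construction, x_true solves the generated system exactly.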

In the next example, we compare Algorithm 2.1 with the verification method (VM) of Wang et al. [19].

Example 4.2 [19]. Let

A = [ 3  2  2  …  2
      0  3  2  …  2
      0  0  3  …  2
      ⋮            ⋮
      2  2  2  …  3 ],    b = (−2, 2, −2, …, −2, 2)^T.

The comparison is given in Table 4.2.

Table 4.2.

        VM                          Algorithm 2.1
Dim     NI     Error                NI     Error
6       18     7.7125502e-006      10     1.5059797e-015
10      27     7.5990390e-007      12     1.8310267e-015
20      50     1.5107868e-009      15     1.9484335e-015
30      73     2.9403146e-012      17     2.5989654e-016
40      97     4.1214587e-011      19     6.4584069e-016

In Table 4.2, NI and Error denote the number of iterations and min(‖Ax_k − |x_k| − b‖), respectively. Algorithm 2.1 requires fewer iterations to converge than VM [19]. Table 4.2 also shows the accuracy of Algorithm 2.1.

Example 4.3 [13]. Consider the ordinary differential equation

d²x/dt² − |x| = (1 − t²),    0 ≤ t ≤ 1,    x(0) = 0, x(1) = 1.    (4.1)

The exact solution is

x(t) = { 0.7378827425 sin(t) − 3 cos(t) + 3 − t²,              x < 0,
       { −0.7310585786 e^{−t} − 0.2689414214 e^{t} + 1 + t²,   x > 0.

We take n = 10; the matrix A is given by

a_{i,j} = { −242,  for j = i,
          { 121,   for j = i + 1 (i = 1, 2, …, n − 1) and j = i − 1 (i = 2, 3, …, n),
          { 0,     otherwise.

The constant vector b is given by

b = (120/121, 117/121, 112/121, 105/121, 96/121, 85/121, 72/121, 57/121, 40/121, −14620/121)^T.

Here A is not an L-matrix. The comparison is given in Figure 4.1.
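The matrix and right-hand side of Example 4.3 can be assembled as follows (a NumPy sketch; reading 121 as 1/h² for the finite-difference mesh width h = 1/11 is our own interpretation of the discretization):

```python
import numpy as np

n = 10
# Tridiagonal finite-difference matrix of Example 4.3:
# a_ii = -242 and a_{i,i+1} = a_{i,i-1} = 121 (assumed to be 1/h^2, h = 1/11).
A = (-242 * np.eye(n)
     + 121 * np.eye(n, k=1)
     + 121 * np.eye(n, k=-1))

# Constant vector b of Example 4.3 (entries over the common denominator 121).
b = np.array([120, 117, 112, 105, 96, 85, 72, 57, 40, -14620]) / 121
```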

[Figure: 2-norm of the residual versus number of iterations (0 to 700) for the GAOR, SSOR and Algorithm 2.1 methods.]

Figure 4.1. Comparison of Algorithm 2.1 with the GAOR and SSOR methods.

5 Conclusion

In this paper, we have studied line search methods for solving systems of absolute value equations. The convergence of the proposed method was proved using a Levenberg–Marquardt-type search direction. Comparison with other methods showed the efficiency of our method. Further work is required to extend the idea of line search methods to generalized systems of absolute value equations.

References

[1] L. Armijo, Minimization of functions having Lipschitz continuous first partial derivatives, Pacific J. Math. 16(1) (1966) 1-3.
[2] A. A. Goldstein, On steepest descent, SIAM J. Control 3 (1965) 147-151.
[3] K. Levenberg, A method for the solution of certain nonlinear problems in least squares, Quart. Appl. Math. 2 (1944) 164-166.
[4] S. Ketabchi and H. Moosaei, An efficient method for optimal correcting of absolute value equations by minimal changes in the right hand side, Comput. Math. Appl. (article in press).
[5] S. Ketabchi and H. Moosaei, Minimum norm solution to the absolute value equation in the convex case, J. Optim. Theory Appl. 154(3) (2012) 1080-1087.
[6] S. Ketabchi, H. Moosaei and S. Fallhi, Optimal error correction of the absolute value equations using a genetic algorithm, Comput. Math. Model. (article in press).
[7] O. L. Mangasarian, Absolute value equation solution via concave minimization, Optim. Lett. 1 (2007) 3-8.
[8] O. L. Mangasarian and R. R. Meyer, Absolute value equations, Linear Algebra Appl. 419 (2006) 359-367.
[9] D. W. Marquardt, An algorithm for least-squares estimation of nonlinear parameters, SIAM J. Appl. Math. 11 (1963) 431-441.
[10] J. Iqbal and M. Arif, Symmetric SOR method for absolute complementarity problems, J. Appl. Math., DOI:10.1155/2013/172060.
[11] M. A. Noor, J. Iqbal, S. Khattri and E. Al-Said, A new iterative method for solving absolute value equations, Int. J. Phys. Sci. 6 (2011) 1793-1797.
[12] M. A. Noor, J. Iqbal, K. I. Noor and E. Al-Said, On an iterative method for solving absolute value equations, Optim. Lett. 6(5) (2012) 1027-1033.
[13] M. A. Noor, J. Iqbal and E. Al-Said, Residual iterative method for solving absolute value equations, Abstr. Appl. Anal., DOI:10.1155/2012/406232.
[14] M. A. Noor, J. Iqbal, K. I. Noor and E. Al-Said, Generalized AOR method for solving absolute complementarity problems, J. Appl. Math., DOI:10.1155/2012/743861.
[15] J. Rohn, A theorem of the alternatives for the equation Ax + B|x| = b, Linear Multilinear Algebra 52 (2004) 421-426.
[16] J. Rohn, An algorithm for computing all solutions of an absolute value equation, Optim. Lett. 6 (2012) 851-856.
[17] A. V. Schmidt, Analysis of reaction-diffusion systems by the method of linear determining equations, Comput. Math. Math. Phys. 47 (2007) 249-261.
[18] A. Wang, H. Wang and Y. Deng, Interval algorithm for absolute value equations, Cent. Eur. J. Math. 9(5) (2011) 1171-1184.
[19] H. Wang, H. Liu and S. Cao, A verification method for enclosing solutions of absolute value equations, Collect. Math., DOI:10.1007/s13348-011-0057-5.
[20] P. Wolfe, Convergence conditions for ascent methods, SIAM Rev. 11 (1969) 226-