Modified Conjugate Gradient Algorithm with Proposed Conjugacy Coefficient

In this paper, we present a modified conjugacy coefficient for the conjugate gradient method, based on the Liu and Storey (LS) method, to solve nonlinear programming problems. We prove the sufficient descent and global convergence properties of the proposed algorithm for three cases, and we obtain very good numerical results, especially for large scale optimization problems.

Keywords: conjugate gradient, conjugacy coefficient, nonlinear programming, unconstrained optimization.


Introduction
In unconstrained optimization, we minimize an objective function that depends on real variables, with no restrictions on the values of these variables. The unconstrained optimization problem is:

$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$

where $f : \mathbb{R}^n \rightarrow \mathbb{R}$ is a continuously differentiable function, bounded from below. A nonlinear conjugate gradient method generates a sequence $\{x_k\}$, $k$ an integer, $k \geq 0$. Starting from an initial point $x_0$, the value of $x_{k+1}$ is calculated by the following equation:

$$x_{k+1} = x_k + \lambda_k d_k, \qquad (2)$$

where the positive step size $\lambda_k$ is obtained by a line search subject to the Wolfe conditions (4) and (5), and the directions $d_k$ are generated by the equation:

$$d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad d_0 = -g_0. \qquad (3)$$

Here $g_k = \nabla f(x_k)$, the value of $\beta_k$ is determined according to the particular algorithm of Conjugate Gradient (CG) and is known as the conjugate gradient (conjugacy) parameter, $\|\cdot\|$ is the Euclidean norm, and we consider $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$. The termination conditions for the conjugate gradient line search are often based on some version of the Wolfe conditions. The standard Wolfe conditions are:

$$f(x_k + \lambda_k d_k) \leq f(x_k) + \delta \lambda_k g_k^T d_k, \qquad (4)$$

$$g(x_k + \lambda_k d_k)^T d_k \geq \sigma g_k^T d_k, \qquad (5)$$

where $0 < \delta < \sigma < 1$ and $d_k$ is a descent search direction, and where $\beta_k$ is defined by one of the classical formulas, for example the Fletcher–Reeves (FR) choice

$$\beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2} \qquad (7)$$

or the Liu and Storey (LS) choice

$$\beta_k^{LS} = -\frac{g_{k+1}^T y_k}{d_k^T g_k}.$$

No single one of these formulas has the best computational performance on all problems. In order to exploit the attractive features of each, so-called new (hybrid) conjugate gradient methods have been proposed. The Liu and Storey conjugate gradient algorithm is one of the best methods for solving large scale nonlinear optimization problems, and we propose the new algorithm by modifying the Liu and Storey formula, as described in the next section.
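As a point of reference for Eqs. (2)-(5), the following minimal Python sketch implements the generic CG iteration with the classical LS coefficient. It is an illustration of the standard scheme, not the proposed method; SciPy's Wolfe line search (`scipy.optimize.line_search`) stands in for the cubic line search used later in the paper, and the names `cg_ls`, `f`, and `grad` are ours.

```python
import numpy as np
from scipy.optimize import line_search

def cg_ls(f, grad, x0, tol=1e-5, max_iter=10000):
    """Generic nonlinear CG (Eqs. (2)-(3)) with the classical LS coefficient."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # d_0 = -g_0
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:         # stopping test on ||g_k||
            break
        # step size lambda_k satisfying the Wolfe conditions (4)-(5)
        lam = line_search(f, grad, x, d, gfk=g)[0]
        if lam is None:                      # line search failed: restart
            d = -g
            continue
        x = x + lam * d                      # Eq. (2)
        g_new = grad(x)
        y = g_new - g                        # y_k = g_{k+1} - g_k
        beta = -(g_new @ y) / (d @ g)        # beta_k^LS
        d = -g_new + beta * d                # Eq. (3)
        g = g_new
    return x, k
```

For instance, `cg_ls(rosen, rosen_der, np.full(10, -1.2))` with SciPy's Rosenbrock helpers should drive the gradient norm below the tolerance at the known minimizer.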

The Proposed Conjugate Gradient Algorithm:
The proposed algorithm generates the iterates by Eqs. (2) and (3), with the conjugacy coefficient derived from the Liu and Storey formula. By using the relation $u^T v = \|u\|\,\|v\| \cos\theta$ (where $\theta$ is the angle between the vectors $u$ and $v$) we obtain the proposed coefficient in Eq. (14), where $\theta_1, \theta_2$ are the angles between the corresponding pairs of vectors. Then we have three cases for the coefficient. The outline of the algorithm is:

Step(1): Initialization: choose an initial point $x_0$, set $d_0 = -g_0$ and $k = 0$.

Step(2): Stopping test: if $\|g_k\| \leq \epsilon$, stop; the optimal solution is $x_k$. Else, go to step(3).
Step(3): The line search: we compute the value of $\lambda_k$ by the cubic interpolation method such that it satisfies the Wolfe conditions in Eqs. (4), (5), and go to step(4).
Step(4): Compute the new point $x_{k+1} = x_k + \lambda_k d_k$ and the new direction $d_{k+1}$ from Eq. (3) with the proposed coefficient; set $k = k + 1$ and go to step(2).
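Step(3) relies on a line search that combines cubic interpolation with the Wolfe tests (4)-(5). The sketch below shows one simplified way such a search can be organized; it omits the full bracketing/zoom safeguards of a production implementation. Here `phi` and `dphi` are the one-dimensional restrictions $\varphi(\lambda) = f(x_k + \lambda d_k)$ and $\varphi'(\lambda) = g(x_k + \lambda d_k)^T d_k$, and the interpolation step is the standard two-point cubic formula (Nocedal and Wright, Eq. 3.59); the function names are ours.

```python
import numpy as np

def cubic_step(a, fa, ga, b, fb, gb):
    """Minimizer of the cubic interpolating phi and phi' at a and b;
    falls back to bisection when the formula is ill-posed."""
    d1 = ga + gb - 3.0 * (fa - fb) / (a - b)
    rad = d1 * d1 - ga * gb
    denom = gb - ga + 2.0 * np.sign(b - a) * np.sqrt(max(rad, 0.0))
    if rad < 0.0 or abs(denom) < 1e-16:
        return 0.5 * (a + b)
    d2 = np.sign(b - a) * np.sqrt(rad)
    lam = b - (b - a) * (gb + d2 - d1) / denom
    return lam if min(a, b) < lam < max(a, b) else 0.5 * (a + b)

def wolfe_cubic(phi, dphi, delta=1e-4, sigma=0.9, lam=1.0, max_tries=50):
    """Step size intended to satisfy the Wolfe conditions (4)-(5),
    refined by cubic interpolation (simplified: no full zoom phase)."""
    f0, g0 = phi(0.0), dphi(0.0)          # g0 = g_k^T d_k < 0 for descent d_k
    lo, f_lo, g_lo = 0.0, f0, g0
    for _ in range(max_tries):
        f_lam, g_lam = phi(lam), dphi(lam)
        if f_lam <= f0 + delta * lam * g0:    # sufficient decrease, Eq. (4)
            if g_lam >= sigma * g0:           # curvature, Eq. (5)
                return lam
            lo, f_lo, g_lo = lam, f_lam, g_lam
            lam *= 2.0                        # curvature fails: expand the step
        else:
            lam = cubic_step(lo, f_lo, g_lo, lam, f_lam, g_lam)
    return lam
```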

The Convergence Analysis:
4.1 Sufficient Descent Property:
We show in this section that the proposed algorithm, defined by Eqs. (14) and (3), satisfies the sufficient descent property, i.e. $g_k^T d_k \leq -c\,\|g_k\|^2$ for some constant $c > 0$, which is needed for the convergence analysis.

Theorem (4.1.1):
The search direction $d_k$ generated by the proposed modified CG algorithm satisfies the sufficient descent property for all $k$, when the step size $\lambda_k$ satisfies the Wolfe conditions (4), (5).

Proof: We use induction to prove the descent property. For $k = 0$ we have $d_0 = -g_0$, so $g_0^T d_0 = -\|g_0\|^2 \leq -c\,\|g_0\|^2$ for $c \in (0, 1]$; thus the theorem is true for $k = 0$. Now assume that the theorem is true for any $k$, i.e. $g_k^T d_k \leq -c\,\|g_k\|^2$.

Now we prove that the theorem is true for $k + 1$.

Case 1: Multiply both sides of the direction equation $d_{k+1} = -g_{k+1} + \beta_k d_k$ by $g_{k+1}^T$:

$$g_{k+1}^T d_{k+1} = -\|g_{k+1}\|^2 + \beta_k\, g_{k+1}^T d_k.$$

Then, by using the Wolfe condition, we get $g_{k+1}^T d_{k+1} \leq -c\,\|g_{k+1}\|^2$, so the sufficient descent condition is satisfied.

Case 2: Multiply both sides of the direction equation by $g_{k+1}^T$ in the same way. Then, by using the Wolfe condition, we again get $g_{k+1}^T d_{k+1} \leq -c\,\|g_{k+1}\|^2$, and the sufficient descent condition is satisfied.

Case 3: Multiply both sides of the direction equation by $g_{k+1}^T$. Then, by using the Wolfe condition together with the relation $u^T v = \|u\|\,\|v\| \cos\theta$ (where $\theta$ is the angle between the vectors $u$ and $v$), by a similar argument and by use of the absolute value, the sufficient descent condition follows in this case as well. ∎
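The descent property can also be checked numerically. The snippet below monitors $g_k^T d_k \leq -c\,\|g_k\|^2$ along a run of the classical LS iteration on SciPy's Rosenbrock function; the constant $c = 10^{-3}$ and the test function are illustrative choices of ours, and the classical LS coefficient is used here rather than the proposed coefficient of Eq. (14).

```python
import numpy as np
from scipy.optimize import rosen, rosen_der, line_search

# Monitor the sufficient descent condition g_k^T d_k <= -c ||g_k||^2
# along a classical LS-type CG run (c = 1e-3 is an illustrative constant).
x = np.full(10, -1.2)
g = rosen_der(x)
d = -g
c = 1e-3
for k in range(50):
    if g @ d > -c * (g @ g):
        print(f"sufficient descent violated at k = {k}")
    lam = line_search(rosen, rosen_der, x, d, gfk=g)[0]
    if lam is None:
        break                                # line search failed
    x = x + lam * d
    g_new = rosen_der(x)
    beta = -(g_new @ (g_new - g)) / (d @ g)  # classical LS coefficient
    d = -g_new + beta * d
    g = g_new
```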

Global Convergence Property:
Assumptions:
We assume that:
(i) The level set $S = \{x \in \mathbb{R}^n : f(x) \leq f(x_0)\}$ is bounded.
(ii) In a neighborhood $N$ of $S$, $f$ is continuously differentiable and its gradient is Lipschitz continuous, i.e. there exists a constant $L > 0$ such that $\|g(x) - g(y)\| \leq L\,\|x - y\|$ for all $x, y \in N$.

Under these assumptions on $f$, there exists a constant $\Gamma \geq 0$ such that $\|g(x)\| \leq \Gamma$ for all $x \in S$. In [1, 3] it is proved that, for any conjugate gradient method with a strong Wolfe line search, the following general result holds.

Lemma 4.3.1:
Let assumptions (i) and (ii) hold and consider any conjugate gradient method of the form (2) and (3), where $d_k$ is a descent direction and $\lambda_k$ is obtained by the strong Wolfe line search. If

$$\sum_{k \geq 1} \frac{1}{\|d_k\|^2} = \infty,$$

then $\liminf_{k \to \infty} \|g_k\| = 0$.

For uniformly convex functions which satisfy the above assumptions, we can prove that the norm of $d_{k+1}$ given by (3) is bounded above. Assume therefore that the function $f$ is uniformly convex, i.e. there exists a constant $\mu > 0$ such that $(g(x) - g(y))^T (x - y) \geq \mu\,\|x - y\|^2$ for all $x, y \in S$.
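To indicate how such a bound is typically obtained, here is a sketch of the standard argument, under the additional assumptions (ours, for the sketch) that the sufficient descent constant $c > 0$ of Theorem 4.1.1 holds and that $\|g_k\| \geq \gamma > 0$ for all $k$, as in a proof by contradiction. Assumptions (i), (ii) and uniform convexity give

$$\|y_k\| \leq L\,\|s_k\|, \qquad y_k^T s_k \geq \mu\,\|s_k\|^2,$$

so that for an LS-type coefficient

$$|\beta_k| \leq \frac{\|g_{k+1}\|\,\|y_k\|}{|d_k^T g_k|} \leq \frac{\Gamma L\,\|s_k\|}{c\,\gamma^2},$$

and hence, from Eq. (3),

$$\|d_{k+1}\| \leq \|g_{k+1}\| + |\beta_k|\,\|d_k\| \leq \Gamma + \frac{\Gamma L}{c\,\gamma^2}\,\|s_k\|\,\|d_k\|,$$

which is bounded above whenever $\|s_k\|\,\|d_k\|$ remains bounded.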
Using Lemma 4.3.1, the following result can be proved.

Theorem 4.3.2:
Suppose that assumptions (i) and (ii) hold. Consider the algorithm given by (2) and (15). If $\|s_k\|$ tends to zero and there exist nonnegative constants $\eta_1$ and $\eta_2$ such that $\|g_k\|^2 \geq \eta_1\,\|s_k\|^2$ and $\|g_{k+1}\|^2 \leq \eta_2\,\|s_k\|$, and $f$ is a uniformly convex function, then $\lim_{k \to \infty} \|g_k\| = 0$.

Proof:
Case 1: In this case, substituting the first form of the proposed coefficient and using Eq. (25), we get the stated limit.
Case 2: In this case, substituting the second form of the proposed coefficient and using Eq. (25), we again get the stated limit. ∎

Computational Results:
In this section, we report some numerical results obtained with the implementation of the new algorithm on a set of unconstrained optimization test problems. We have selected ten large scale unconstrained optimization problems in extended or generalized form; for each test function we have considered numerical experiments with the number of variables n = 1000, 5000, 10000.
Using the standard Wolfe line search conditions (4), (5), the stopping criterion is based on the gradient norm $\|g_{k+1}\|$, as in step(2) of the algorithm. We compare our method, namely (MLS), with the FR method (7). The preliminary numerical results of our tests are reported in Tables (1), (2) and (3). The first column, "test fun.", lists the names of the test functions; the second column, "NOI", denotes the number of iterations; the third column, "NOF", denotes the number of function evaluations; and the fourth column, "MIN", denotes the minimum value obtained.
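For concreteness, an illustrative version of such an experiment can be scripted as below. The extended Rosenbrock function stands in for the paper's ten test problems, the FR coefficient (7) is used as the baseline rule, and NOI/NOF are counted as in the tables; the proposed MLS coefficient would be plugged in as an alternative `beta_rule`. All names here are ours.

```python
import numpy as np
from scipy.optimize import line_search

def ext_rosenbrock(x):
    """Extended Rosenbrock function (a common large scale test problem)."""
    return np.sum(100.0 * (x[1::2] - x[::2] ** 2) ** 2 + (1.0 - x[::2]) ** 2)

def ext_rosenbrock_grad(x):
    g = np.zeros_like(x)
    t = x[1::2] - x[::2] ** 2
    g[::2] = -400.0 * x[::2] * t - 2.0 * (1.0 - x[::2])
    g[1::2] = 200.0 * t
    return g

def run_cg(f, grad, n, beta_rule, tol=1e-5, max_iter=2000):
    """Run a CG variant and report NOI, NOF and the minimum value found."""
    nof = [0]
    def fc(x):                    # wrap f to count function evaluations (NOF)
        nof[0] += 1
        return f(x)
    x = np.full(n, -1.2)
    g = grad(x)
    d = -g
    for noi in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        lam = line_search(fc, grad, x, d, gfk=g)[0]
        if lam is None:           # restart on line search failure
            d = -g
            continue
        x = x + lam * d
        g_new = grad(x)
        d = -g_new + beta_rule(g, g_new, d) * d
        g = g_new
    return noi, nof[0], fc(x)

beta_fr = lambda g, g_new, d: (g_new @ g_new) / (g @ g)   # FR coefficient (7)
for n in (1000, 5000, 10000):
    noi, nof, fmin = run_cg(ext_rosenbrock, ext_rosenbrock_grad, n, beta_fr)
    print(f"n={n}: NOI={noi}, NOF={nof}, MIN={fmin:.3e}")
```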
