A New Sufficient Descent Conjugate Gradient Method for Nonlinear Optimization

In this paper, a new conjugate gradient method based on an exact step size, which produces a sufficient descent search direction at every iteration, is introduced. We prove its global convergence and give numerical results to illustrate its efficiency in comparison with the Polak–Ribière method.

Consider the unconstrained optimization problem $\min_{x \in \mathbb{R}^n} f(x)$, where $f : \mathbb{R}^n \rightarrow \mathbb{R}$ is a smooth function with a continuous gradient $g(x) = \nabla f(x)$, which is assumed to be available.
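For reference, the standard nonlinear conjugate gradient iteration, which the paper's equations (2) and (3) presumably denote, takes the classical form (stated here from the general literature, not reproduced from the original text):
\[
x_{k+1} = x_k + \alpha_k d_k, \qquad
d_k =
\begin{cases}
-g_k, & k = 0,\\
-g_k + \beta_k d_{k-1}, & k \ge 1,
\end{cases}
\]
where $\alpha_k > 0$ is the steplength and $\beta_k$ is the conjugacy parameter whose choice distinguishes the FR, PRP and HS variants.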
In connection with this problem, other formulas for the parameter $\beta_k$ can be considered (see [4], [5] and [13]). The global convergence properties of the FR, PRP and HS methods without regular restarts have been studied by many researchers, including Zoutendijk [7], Al-Baali [8] and Gilbert and Nocedal [9]. The conjugate gradient method with regular restarts is described in [10]. To establish the convergence of these methods, it is usually required that the steplength $\alpha_k$ satisfy the strong Wolfe conditions. On the other hand, many other numerical methods for unconstrained optimization are proved to be convergent under the standard Wolfe conditions; see, for example, Nocedal and Wright [10].
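As a reminder (standard definitions from the literature, since the displayed conditions are missing from this extract), the strong Wolfe conditions on the steplength $\alpha_k$ along a descent direction $d_k$ are
\[
f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \qquad
|g(x_k + \alpha_k d_k)^T d_k| \le -\sigma g_k^T d_k,
\]
with $0 < \delta < \sigma < 1$; the standard Wolfe conditions replace the second inequality by $g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k$.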
The paper is organized as follows. Section (1) is the introduction. In Section (2) we present the new formula $\beta_k^{New}$ and the descent algorithm. Section (3) shows that the search direction generated by the proposed algorithm satisfies the sufficient descent condition at each iteration. Section (4) establishes the global convergence analysis of the new formula $\beta_k^{New}$ for uniformly convex functions. Section (5) presents numerical results to show the effectiveness of the proposed CG method, and Section (6) gives brief conclusions and discussions.

A New Conjugate Gradient Method:
In this section, we derive a new conjugate gradient method based on the steplength defined in (5). From (5) and (2) we get:

Now assume that we want a matrix … which satisfies … . This choice defines the well-known Barzilai–Borwein method [14]; namely, the method for unconstrained minimization is of the form (2), where the steplength is chosen accordingly at each iteration. For the new algorithm, we carried out the derivation for $s_k$ with different choices of the vector; for example, it can be written as (18): … Since the Newton direction and the conjugate gradient direction coincide under exact line searches, we get:
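For context (a standard formula, stated here as a reminder rather than quoted from the original), the Barzilai–Borwein method chooses the steplength from the secant condition, e.g.
\[
\alpha_k^{BB} = \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T y_{k-1}}, \qquad
s_{k-1} = x_k - x_{k-1}, \quad y_{k-1} = g_k - g_{k-1},
\]
which amounts to approximating the Hessian by the scalar matrix $(1/\alpha_k^{BB}) I$ that satisfies the secant equation in a least-squares sense.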

Now we can obtain the new descent conjugate gradient algorithm as follows:
The Descent Algorithm
Step 1. Initialization: set the initial guess $x_0$ and the tolerance $\epsilon > 0$.
Step 2. Test for continuation of iterations: if $\|g_k\| \le \epsilon$, stop. …
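Since the remaining steps of the algorithm are not preserved in this extract, the following is only a minimal sketch of a generic descent CG loop of the form described above; the names cg_descent and beta_new, the use of SciPy's Wolfe line search, and the fallback steplength are illustrative assumptions, not the paper's actual steplength (20) or formula $\beta_k^{New}$.

```python
import numpy as np
from scipy.optimize import line_search  # strong-Wolfe line search

def cg_descent(f, grad, x0, beta_new, eps=1e-6, max_iter=2000):
    """Generic descent conjugate gradient loop (illustrative sketch only).

    beta_new(g_new, g_old, d_old, s_old) stands in for the paper's
    beta_k^{New}; it is NOT the formula from the original text.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # Step 1: first direction is steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:         # Step 2: stopping test
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:                    # line search failed: take a small safeguarded step
            alpha = 1e-4
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        beta = beta_new(g_new, g, d, s)      # conjugacy parameter (placeholder for beta_k^New)
        d = -g_new + beta * d                # new search direction
        x, g = x_new, g_new
    return x
```

With beta_new set to, say, the Polak–Ribière choice `lambda gn, go, d, s: gn @ (gn - go) / (go @ go)`, the loop reproduces the classical PR method used as the comparison baseline in Section (5).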

The Sufficient Descent Property:
Below we show that the search direction generated by our proposed conjugate gradient method, with parameter $\beta_k^{New}$, satisfies the sufficient descent property. For the sufficient descent property to hold:
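The sufficient descent condition referred to here is usually stated as follows (standard formulation, supplied for readability since the displayed inequality is missing from this extract):
\[
g_k^T d_k \le -c\,\|g_k\|^2 \quad \text{for all } k \ge 0,
\]
for some constant $c > 0$ independent of $k$.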

Assumption (1):
In some neighborhood $N$ of the level set $S$, $f$ is continuously differentiable and its gradient is Lipschitz continuous; that is, there exists a constant $L > 0$ such that $\|g(x) - g(y)\| \le L\,\|x - y\|$ for all $x, y \in N$.

Convergence Analysis for Uniformly Convex Functions:
Next we show that the CG method with $\beta_k^{New}$ converges globally. We study the convergence of the suggested method for uniformly convex functions; that is, there exists a constant $\mu > 0$ such that $(g(x) - g(y))^T (x - y) \ge \mu \|x - y\|^2$ for all $x, y \in S$.

Proposition:
Under Assumption (1) and equation …

Numerical Results:
In this section, we report numerical results obtained with an implementation of the new method on a set of unconstrained optimization test problems taken from Andrei (2008) [1].
We selected 15 large-scale unconstrained optimization test problems.
For each test function we considered 10 numerical experiments with the number of variables n = 100, 1000. We use …
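As an illustration of the kind of large-scale test problem used (the specific 15 problems from Andrei [1] are not listed in this extract), the following shows the extended Rosenbrock function for n = 100 as a hypothetical example; it can be fed to a CG implementation such as the cg_descent sketch in Section (2).

```python
import numpy as np

def rosenbrock(x):
    """Extended Rosenbrock function, a classical unconstrained test problem."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rosenbrock_grad(x):
    """Analytic gradient of the extended Rosenbrock function."""
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

x0 = np.tile([-1.2, 1.0], 50)   # classical starting-point pattern, n = 100
```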

Conclusions and Discussions:
In this paper, we have proposed a new nonlinear CG algorithm based on the steplength defined by (20). Under some assumptions, the new algorithm has been shown to be globally convergent for uniformly convex functions and to satisfy the sufficient descent property. The computational experiments show that the new method given in this paper is successful.

…the well-known conjugate gradient method of Hestenes and Stiefel (HS) [2], which determines the minimizer of a strictly convex quadratic function in at most n iterations. More details can be found in [6]. Various extensions to the general (non-quadratic) case have been proposed, by replacing (5) with a one-dimensional line search and by deriving formulas for the computation of $\beta_k$ that do not explicitly contain the Hessian. … In the case of the conjugate gradient (CG) method, the steplength satisfies the line search conditions (11) and (12) and the variables are updated accordingly. … On the other hand, under Assumption (1), it is clear that there exist positive constants B such that …
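For completeness (a standard formula quoted from the literature rather than from the original text), the Hestenes–Stiefel parameter mentioned above is
\[
\beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}, \qquad y_k = g_{k+1} - g_k.
\]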

 3 2
… is obtained by the Wolfe line search. Suppose that Assumption (1), equation (36) and the descent condition hold. Consider a conjugate gradient method of the form …, where the steplength is computed from the Wolfe line search conditions (11) and (12). If the objective function is uniformly convex on S, we need to simplify our new $\beta_k^{New}$ so that the convergence proof becomes easier. Substituting (20) into (21), we obtain: … The codes are written in double-precision FORTRAN with F90 default compiler settings. We record the number of iteration calls (NOI) and the number of restart calls (IRS) for the purpose of our comparisons. If NOI exceeded 2000, this is denoted by F*.

Table (5.1) gives a comparison between the new algorithm and the Polak–Ribière (PR) algorithm for convex optimization. This table indicates, see …

Table (6.1): Relative efficiency of the new algorithm