Nonlinear Conjugate Gradient Methods Based on a Modified Secant Condition

In this paper, new nonlinear conjugate gradient methods are derived based on the modified secant condition given by Li and Fukushima (Li and Fukushima, 2001). These methods are shown to be globally convergent under some assumptions. Numerical results indicate the efficiency of these methods in solving the given test problems.


Introduction
There are now many conjugate gradient schemes for solving the unconstrained optimization problem
$$\min_{x \in \mathbb{R}^n} f(x),$$
where $f:\mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function whose gradient is denoted by $g(x) = \nabla f(x)$. The iterates are generated by
$$x_{k+1} = x_k + \alpha_k d_k,$$
where $x_k$ is the current iterate, $\alpha_k$ is a positive scalar called the steplength, which is determined by some line search, and $d_k$ is the search direction generated by the rule
$$d_0 = -g_0, \qquad d_{k+1} = -g_{k+1} + \beta_k d_k,$$
where $\beta_k$ is a parameter chosen so that the method reduces to the linear conjugate gradient method in the case when $f$ is a strictly convex quadratic function and the line search is exact. Well-known conjugate gradient methods include the Fletcher–Reeves (FR) method (Fletcher and Reeves, 1964), the Polak–Ribière–Polyak (PRP) method (Polak and Ribiere, 1969), the Hestenes–Stiefel (HS) method (Hestenes and Stiefel, 1952), the conjugate descent (CD) method (Fletcher, 1989), the Dai–Yuan (DY) method (Dai and Yuan, 1999) and the Liu–Storey (LS) method (Liu and Storey, 1991). In the convergence analysis and implementation of conjugate gradient methods, one often requires an exact or inexact line search such as the Wolfe conditions or the strong Wolfe conditions. The Wolfe line search is to find $\alpha_k$ such that
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \qquad g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \qquad (8)$$
with $0 < \delta < \sigma < 1$. The strong Wolfe line search is to find $\alpha_k$ such that
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \qquad |g(x_k + \alpha_k d_k)^T d_k| \le -\sigma g_k^T d_k. \qquad (9)$$
In order to introduce our method, let us briefly recall the conjugacy condition proposed by Dai, which is usually assumed in the analyses and implementations; see Zhi-Feng Dai (Zhi, 2011).
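The iteration and direction update above can be sketched as follows. This is a minimal illustration, not the paper's method: the test function, tolerances, and starting point are assumptions, the PRP parameter stands in for $\beta_k$, and a backtracking Armijo rule is substituted for the full Wolfe search (8)–(9) for brevity.

```python
import numpy as np

def f(x):                       # illustrative strongly convex quadratic
    return 0.5 * (x[0] ** 2 + 10.0 * x[1] ** 2)

def grad(x):
    return np.array([x[0], 10.0 * x[1]])

def armijo(x, d, g, c=1e-4, shrink=0.5):
    """Backtrack from alpha = 1 until the Armijo decrease condition holds."""
    alpha = 1.0
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= shrink
    return alpha

x = np.array([10.0, 1.0])
g = grad(x)
d = -g                          # first direction: steepest descent
for k in range(1000):
    if np.linalg.norm(g) < 1e-8:
        break
    alpha = armijo(x, d, g)     # steplength from the line search
    x_new = x + alpha * d
    g_new = grad(x_new)
    beta = g_new @ (g_new - g) / (g @ g)   # PRP parameter (illustrative)
    d = -g_new + beta * d                  # the CG direction update rule
    if g_new @ d >= 0:          # safeguard: restart if not a descent direction
        d = -g_new
    x, g = x_new, g_new

gnorm = float(np.linalg.norm(g))
```

The restart safeguard guarantees every direction is a descent direction, so the Armijo backtracking always terminates.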
The structure of the paper is as follows. In Section 2 we present the new formulas and the descent algorithm. Section 3 establishes the descent property of the new methods, and Section 4 establishes the global convergence property of the new CG methods. Section 5 reports numerical results showing the effectiveness of the proposed CG methods, and Section 6 gives brief conclusions and discussion.

The New Formulas and Algorithms:
In this section, we derive a new conjugacy condition based on the modified secant condition. From the modified update formula, the secant condition below holds for every symmetric positive definite matrix; in practice it is imposed with a diagonal matrix, so we can make the corresponding assumption. On the other hand, from this equation the new parameter follows, and the algorithm then continues with Step 2.
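For reference, the Li and Fukushima (2001) modified secant condition is commonly stated as follows; this is a sketch of the standard form from the literature, and the exact constants used in the paper's formulas (38) and (43) may differ.

```latex
% Standard secant condition:
B_{k+1} s_k = y_k, \qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k.
% Li-Fukushima modification: y_k is replaced by
y_k^{*} = y_k + t_k \,\|g_k\|\, s_k, \qquad
t_k = 1 + \max\!\left\{ -\frac{y_k^{\top} s_k}{\|g_k\|\,\|s_k\|^{2}},\; 0 \right\},
% which guarantees
y_k^{*\top} s_k \;\ge\; \|g_k\|\,\|s_k\|^{2} \;>\; 0
% even when f is nonconvex.
```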

The Descent Property of the New Methods:
Below we show the sufficient descent property of our proposed new conjugate gradient methods, denoted by

Assumption (1):
Assume that f is bounded below in the level set

Proof:
The inequality
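As a numerical sanity check of the sufficient descent condition $g_k^T d_k \le -c\,\|g_k\|^2$: since the paper's formulas (38) and (43) are not reproduced here, the classical FR update with an exact line search on an assumed quadratic test problem stands in for the new methods. With an exact line search $g_k^T d_{k-1} = 0$, so $g_k^T d_k = -\|g_k\|^2$ and the condition holds with $c = 1$.

```python
import numpy as np

# Illustrative quadratic test problem: f(x) = 0.5 x^T A x - b^T x
A = np.diag([1.0, 4.0, 9.0, 16.0])
b = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda x: A @ x - b

x = np.zeros(4)
g = grad(x)
d = -g
ratios = []                                 # records g_k^T d_k / ||g_k||^2
for k in range(4):
    ratios.append((g @ d) / (g @ g))        # should be -1 each iteration
    alpha = -(g @ d) / (d @ A @ d)          # exact line search for a quadratic
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-12:
        break
    beta = (g_new @ g_new) / (g @ g)        # Fletcher-Reeves parameter
    d = -g_new + beta * d
    g = g_new
```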

Global Convergence:
Next we show that the CG method with

Numerical Results:
In this section, we compare the performance of the new formulas. The two new versions are compared with a well-known CG algorithm, the Polak–Ribière (PR) algorithm. All of these algorithms are implemented with the standard Wolfe line search conditions (8) and (9).
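A minimal benchmarking harness in the spirit of this comparison can be sketched as follows. Since the new formulas (38)/(43) are not reproduced here, the PRP method stands in for them and is compared against steepest descent on an assumed ill-conditioned quadratic with an exact line search; the test problem, tolerance, and iteration caps are all illustrative.

```python
import numpy as np

# Illustrative test problem: quadratic with condition number 100
A = np.diag([1.0, 10.0, 25.0, 60.0, 100.0])
b = np.ones(5)
grad = lambda x: A @ x - b

def solve(use_cg, tol=1e-6, maxit=5000):
    """Return the number of iterations to reach ||g|| < tol."""
    x = np.zeros(5)
    g = grad(x)
    d = -g
    for k in range(maxit):
        if np.linalg.norm(g) < tol:
            return k
        alpha = -(g @ d) / (d @ A @ d)             # exact step for a quadratic
        x = x + alpha * d
        g_new = grad(x)
        if use_cg:
            beta = g_new @ (g_new - g) / (g @ g)   # PRP parameter
            d = -g_new + beta * d
        else:
            d = -g_new                             # steepest descent
        g = g_new
    return maxit

it_cg = solve(True)
it_sd = solve(False)
```

On a quadratic with an exact line search, the CG iterates terminate in at most $n$ steps, while steepest descent zigzags for hundreds of iterations, which is the kind of gap an iteration-count comparison exposes.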

Conclusions and Discussions:
In this paper, we have proposed new nonlinear CG algorithms based on the modified secant conditions defined by (38) and (43), respectively. Under some assumptions, the two new algorithms have been shown to be globally convergent for uniformly convex functions and to satisfy the sufficient descent property. The computational experiments show that the two new methods given in this paper are successful.