A New Type of Conjugate Gradient Method with a Sufficient Descent Property

This paper presents the development and implementation of a new unconstrained optimization method based on inexact line searches. Our new proposed Conjugate Gradient (CG) method always produces descent search directions and is shown to be globally convergent. Our numerical results are promising in general, obtained by implementing ten different nonlinear test functions with different dimensions.

Introduction

It is well known that the CG-method is a line search method of the form

$$ x_{k+1} = x_k + \alpha_k d_k , \qquad\qquad (1) $$

where $d_k$ is a descent direction of $f$ at $x_k$ and $\alpha_k$ is a step size chosen by some kind of line search and satisfying the Strong Wolfe (SW) conditions

$$ f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k , \qquad | g(x_k + \alpha_k d_k)^T d_k | \le \sigma | g_k^T d_k | , \qquad 0 < \delta < \sigma < 1 , $$

where $x_k$ is the current iterate and $g_k$ denotes the gradient $\nabla f(x_k)$ [4,8]. In order to guarantee global convergence, we sometimes require $d_k$ to satisfy a sufficient descent condition

$$ g_k^T d_k \le -c \, \| g_k \|^2 , $$

where $c > 0$ is a constant [7]. In line search methods, the well-known CG-method has the form

$$ d_{k+1} = -g_{k+1} + \beta_k d_k , \qquad d_1 = -g_1 , \qquad\qquad (2) $$

in which

$$ \beta_k = \beta_k^{FR} = \frac{ \| g_{k+1} \|^2 }{ \| g_k \|^2 } . \qquad\qquad (3) $$

This method is called the FRCG-method [5]. CG-methods of this type are equivalent to one another for minimizing strongly convex quadratic functions under exact line searches, but they perform differently when used to minimize non-quadratic functions or when inexact line searches are used. For non-quadratic objective functions, the FRCG-method has a global convergence property when exact line searches or the Strong Wolfe line search [2,3] is used.

The structure of the paper is as follows. In Section (2) we modify the standard FRCG-method and show that the search direction generated by the proposed FRCG-method at each iteration satisfies the sufficient descent condition. Section (3) establishes the global convergence property for the new class of CG-methods. Section (4) presents some numerical results to show the effectiveness of the proposed CG-method, and Section (5) gives brief conclusions and discussions.
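As a concrete illustration of iteration (1)-(3), the following Python sketch implements the classical FRCG-method with a Strong Wolfe line search. It is only a minimal sketch of the standard method, not the authors' Fortran 90 implementation; the use of `scipy.optimize.line_search`, the tolerance values, and the Rosenbrock example are assumptions made for the illustration.

```python
import numpy as np
from scipy.optimize import line_search  # returns a step satisfying the Strong Wolfe conditions

def fr_cg(f, grad, x0, eps=1e-6, max_iter=1000):
    """Classical FRCG-method: x_{k+1} = x_k + alpha_k d_k with beta_k = ||g_{k+1}||^2 / ||g_k||^2."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                              # d_1 = -g_1
    noi = 0                                             # number of iterations (NOI)
    while noi < max_iter and np.linalg.norm(g) > eps:   # stop when ||g_k|| <= eps
        # alpha_k from a Strong Wolfe line search (sigma = 0.1 < 1/2)
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:                               # line search failure: restart along -g
            d, alpha = -g, 1e-4
        x = x + alpha * d                               # iteration (1)
        g_new = grad(x)
        beta_fr = (g_new @ g_new) / (g @ g)             # formula (3)
        d = -g_new + beta_fr * d                        # iteration (2)
        g = g_new
        noi += 1
    return x, noi

# Example: 2-D Rosenbrock function from the standard starting point (-1.2, 1).
f = lambda x: (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
fg = lambda x: np.array([-2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
                         200.0 * (x[1] - x[0]**2)])
x_star, noi = fr_cg(f, fg, [-1.2, 1.0])
print(x_star, noi)
```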

Modified Conjugate Gradient Method.
In this section, we propose a modified FRCG-method (Algorithm (2.1)) in which the parameter $\beta_k$ is defined on the basis of $\beta_k^{FR}$ as follows:

Step 1: Compute $g_k$; if $\| g_k \| \le \varepsilon$ then stop; else continue.
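Structurally, the modification only changes the $\beta_k$ update inside iteration (2), so an implementation can reuse the driver from the earlier sketch and take the $\beta$-rule as a parameter. The `cg` driver and `beta_rule` argument below are hypothetical names introduced for this illustration, and the modified formula itself is not reproduced here; only the classical FR rule is shown as a placeholder.

```python
def cg(f, grad, x0, beta_rule, eps=1e-6, max_iter=1000):
    """Generic CG driver; only the beta update distinguishes the FR and modified methods."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    noi = 0
    while noi < max_iter and np.linalg.norm(g) > eps:   # Step 1: stop if ||g_k|| <= eps
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:
            d, alpha = -g, 1e-4                          # safeguarded restart along -g
        x = x + alpha * d
        g_new = grad(x)
        d = -g_new + beta_rule(g_new, g, d) * d          # d_{k+1} = -g_{k+1} + beta_k d_k
        g = g_new
        noi += 1
    return x, noi

# Classical FR rule; the paper's modified beta_k would be dropped in here instead.
beta_fr_rule = lambda g_new, g_old, d: (g_new @ g_new) / (g_old @ g_old)
```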

Theorem (2.2)
Consider any iterative CG-method of the form (1) and (2).
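In the notation of Section 1, the sufficient descent property that the introduction attributes to the proposed directions can be written as follows; the constant $c$ and the usual requirement on the Wolfe parameter $\sigma$ are the standard assumptions, not quoted from the theorem itself.

```latex
% Sufficient descent property (standard form; assumed statement, with c > 0
% independent of k and a Strong Wolfe step size, typically with sigma < 1/2):
g_k^{T} d_k \;\le\; -c \,\| g_k \|^{2} \qquad \text{for all } k \ge 1 .
```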

Global convergence
In this section, we study the global convergence property of the newly proposed Algorithm (2.1). For this, we first verify that Algorithm (2.1) is well defined. For the proof of the global convergence property, the following assumption is needed.
Assumption (3.1). (i) The level set $L = \{ x : f(x) \le f(x_1) \}$ is bounded. (ii) In some neighborhood of $L$, $f$ is continuously differentiable and its gradient is Lipschitz continuous; namely, there exists a constant $\mu > 0$ such that $\| g(x) - g(y) \| \le \mu \| x - y \|$ for all $x, y$ in this neighborhood.

We will see that it is possible to obtain the global convergence property if the parameter $\beta_k$ is appropriately bounded in magnitude. We consider a method of the form (2).
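The paper's exact bound on $\beta_k$ is not reproduced in this excerpt. A standard condition of this type, due to Gilbert and Nocedal, bounds the parameter by the FR value; under Assumption (3.1) and a Strong Wolfe line search with $\sigma < 1/2$, any method of the form (2) whose parameter satisfies it is globally convergent. This is offered only as the typical shape of such a condition, not as the paper's own:

```latex
% Bounded-beta condition relative to the FR value (assumed illustrative form):
|\beta_k| \;\le\; \beta_k^{FR} \;=\; \frac{\| g_{k+1} \|^{2}}{\| g_k \|^{2}}
\qquad\Longrightarrow\qquad
\liminf_{k \to \infty} \| g_k \| \;=\; 0 .
```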

Proof:
From Theorem (2.2) we have

Further, from Assumption (3.1, i) we have that the sequence $\{ f(x_k) \}$ is decreasing and bounded below on $L$, which shows that $\{ f(x_k) \}$ converges.
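One standard step that this monotonicity argument supports, sketched here under the assumption that the omitted details follow the usual pattern, is that the total decrease of $f$ along the iterates is finite:

```latex
% Telescoping consequence of monotone decrease and boundedness below (sketch):
\sum_{k=1}^{\infty} \bigl( f(x_k) - f(x_{k+1}) \bigr)
\;=\; f(x_1) - \lim_{k \to \infty} f(x_k) \;<\; \infty .
```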

Numerical Results
In this section, we report some numerical results obtained with the implementation of the new Algorithm (2.1) on a set of unconstrained optimization test problems. We have selected ten large-scale unconstrained optimization problems in extended or generalized form; for each test function we have considered numerical experiments with the number of variables n = 100 to 1000, using the strong Wolfe line search conditions. The programs are written in Fortran 90. The test functions are commonly used unconstrained test problems with standard starting points, and a summary of the results for these test functions is given in Table (4.1). For comparison of these algorithms, we tabulate the Number Of Function evaluations (NOF) and the Number Of Iterations (NOI).
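The experiments reported in Table (4.1) were run in Fortran 90; purely to illustrate how NOI and NOF can be tallied, the sketch below wraps the objective and gradient with call counters and runs the hypothetical `cg` driver from Section 2's sketch on the extended Rosenbrock function for several problem sizes. The test function, starting point, and counter scheme are assumptions for the illustration, not the paper's test set.

```python
class Counter:
    """Wrap a callable and count its evaluations (used for NOF)."""
    def __init__(self, fn):
        self.fn, self.calls = fn, 0
    def __call__(self, x):
        self.calls += 1
        return self.fn(x)

def ext_rosenbrock(x):
    # extended Rosenbrock: sum over consecutive pairs (x_{2i-1}, x_{2i})
    return np.sum(100.0 * (x[1::2] - x[0::2]**2)**2 + (1.0 - x[0::2])**2)

def ext_rosenbrock_grad(x):
    g = np.zeros_like(x)
    g[0::2] = -400.0 * x[0::2] * (x[1::2] - x[0::2]**2) - 2.0 * (1.0 - x[0::2])
    g[1::2] = 200.0 * (x[1::2] - x[0::2]**2)
    return g

for n in (100, 500, 1000):
    f_c, g_c = Counter(ext_rosenbrock), Counter(ext_rosenbrock_grad)
    x0 = np.tile([-1.2, 1.0], n // 2)          # standard starting point, repeated
    _, noi = cg(f_c, g_c, x0, beta_fr_rule)
    print(f"n={n:5d}   NOI={noi:5d}   NOF={f_c.calls:6d}")
```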

Note that Zoutendijk's condition holds in this case. We show that any method of the form (2) satisfies it; for the details of this theorem see [9]. Theorem (3.). Suppose that Assumption (3.1) holds, and let $\{ x_k \}$ be the sequence generated by Algorithm (2.1).
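For reference, Zoutendijk's condition is the classical result (see [9]) that, under Assumption (3.1) and a line search satisfying the Wolfe conditions, the iterates of any descent method obey

```latex
% Zoutendijk's condition for descent methods with Wolfe line searches [9]
\sum_{k = 1}^{\infty} \frac{ ( g_k^{T} d_k )^{2} }{ \| d_k \|^{2} } \;<\; \infty .
```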