A Modified Curve Search Algorithm for Solving Unconstrained Optimization Problems

In this paper, we present a modified algorithm with a curve search rule for unconstrained minimization problems. At each iteration, the next iterate is determined by means of a curve search rule; in particular, the search direction and the step-size are determined simultaneously at each iteration of the new algorithm.

Suzan Yousif Rassam, Baraa Mahmoud Suleiman, Abbas Younis Al-Bayati
Department of Mathematics, College of Computer Sciences and Mathematics, University of Mosul


Consider the unconstrained minimization problem

(UP)   min f(x),  x ∈ R^n,

where f: R^n → R is a continuously differentiable function. Most of the well-known iterative algorithms for solving (UP) take the form

x_{k+1} = x_k + α_k d_k,

where d_k is a search direction and α_k is a positive step-size parameter. If x_k is the current iterate, we denote g_k = ∇f(x_k). If d_k = -g_k, the corresponding method is called the steepest descent method. This method has a low convergence rate in many situations and often exhibits a zigzag phenomenon. However, it does not require computing or storing any matrices associated with the Hessian of the objective function.
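The steepest descent iteration above can be sketched in a few lines. The fixed step-size and the ill-conditioned test quadratic below are illustrative choices, not taken from the paper:

```python
import numpy as np

def steepest_descent(f, grad, x0, alpha=0.05, tol=1e-6, max_iter=10_000):
    """Minimize f by the update x_{k+1} = x_k + alpha * d_k with d_k = -g_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:    # stop when the gradient is (nearly) zero
            break
        x = x - alpha * g               # move along the negative gradient
    return x

# Illustrative ill-conditioned quadratic: the method converges slowly
# and tends to zigzag between the narrow valley's walls.
f = lambda x: x[0]**2 + 10.0 * x[1]**2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
x_star = steepest_descent(f, grad, [1.0, 1.0])
```

Note that only the current gradient vector is stored; no Hessian-related matrix appears, which is exactly the property the text highlights.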
If we take d_k = -H_k g_k, where H_k is a matrix that approximates the inverse of the Hessian of f at x_k, the related methods are called Newton-like methods. They need to store and compute a matrix associated with the Hessian of f, but they have a faster convergence rate than the steepest descent and conjugate gradient methods in many situations.
Generally, the conjugate gradient method is a useful technique for solving large-scale minimization problems because, like the steepest descent method, it avoids the computation and storage of matrices. The conjugate gradient method has the form

x_{k+1} = x_k + α_k d_k,   d_k = -g_k + β_k d_{k-1}  (with d_0 = -g_0),

where β_k is a parameter that distinguishes the different conjugate gradient methods. However, many conjugate gradient methods have no global convergence; see, e.g., (Bertsekas, 1982), (Evtushenko, 1985), (Grippo and Lucidi, 1997), (Hestenes, 1980), (Nocedal, 1999), (Powell, 1977) and (Powell, 1976).
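A minimal sketch of the recurrence d_k = -g_k + β_k d_{k-1}, using the Fletcher–Reeves choice β_k = ||g_k||² / ||g_{k-1}||². The Armijo backtracking line search and the steepest-descent restart safeguard are illustrative additions, not prescribed by the text:

```python
import numpy as np

def fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=5000):
    """Fletcher-Reeves CG: only vectors are stored, no matrices."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        if g @ d >= 0:                  # safeguard: restart if d is not descent
            d = -g
        # Backtracking (Armijo) line search along d.
        alpha, c, fx = 1.0, 1e-4, f(x)
        while f(x + alpha * d) > fx + c * alpha * (g @ d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves parameter
        d = -g_new + beta * d              # d_k = -g_k + beta_k d_{k-1}
        x, g = x_new, g_new
    return x

# Illustrative quadratic test problem.
f = lambda x: x[0]**2 + 10.0 * x[1]**2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
x_min = fletcher_reeves(f, grad, [3.0, -2.0])
```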
Miele and Cantrell (Miele and Cantrell, 1969) studied a memory gradient method for (UP). Cantrell (Cantrell, 1969) showed that the memory gradient method and the Fletcher-Reeves algorithm (Fletcher and Reeves, 1964) are identical in the particular case of a quadratic function.
Cragg and Levy (Cragg and Levy, 1969) proposed a super-memory gradient method, which is a generalization of Miele and Cantrell's method. Wolfe and Viazminsky (Wolfe and Viazminsky, 1976) investigated a super-memory descent method for (UP). Further literature on super-memory gradient methods can be found in, e.g., (Qui and Shi, 2000), (Shi, 2003) and (Shi, 2000).

The Second Scientific Conference of Mathematics-Statistics & Information
Both the memory gradient method and the super-memory gradient method are more efficient than conjugate gradient methods, e.g., (Grippo and Lucidi, 1997), (Gilbert and Nocedal, 1992), (Powell, 1977) and (Powell, 1976), because they use more information from previous iterations and add freedom in choosing parameters.
However, convergence results for memory gradient methods and super-memory gradient methods with non-quadratic objective functions are rarely seen in the recent literature. It is therefore of significance to investigate an efficient, convergent super-memory gradient algorithm for solving large-scale minimization problems, especially problems in which the objective function is non-quadratic or even non-convex. As we know, the ODE methods (or dynamic methods) for unconstrained minimization are curve search methods (Botsaris, 1978), (Schropp, 1997), (Snyman, 1982), (Van Wyk, 1984) and (Wu, Xia and Ouyang, 2002); they require solving ordinary differential equations in order to approximate the minimizer of (UP). Ford et al. (Ford and Tharmlikit, 2003), (Ford and Moghrabi, 1996a) and (Ford and Moghrabi, 1996b) studied a new class of multi-step quasi-Newton methods for unconstrained minimization. However, these methods require storing some matrices at each iteration, and are therefore suitable for small and intermediate-scale problems.
To accelerate the convergence rate while avoiding the evaluation and storage of matrices, we present a new descent algorithm and prove its global convergence and linear convergence rate. At each iteration, the next iterate is determined by means of a curve search rule that resembles Wolfe's line search rule. Like conjugate gradient methods, the algorithm avoids the computation and storage of matrices associated with the Hessian of the objective function. Although the algorithm does not converge as fast as Newton-like methods, it is suitable for large-scale minimization problems. The new algorithm is similar to Cragg and Levy's algorithm (Cragg and Levy, 1969), but is superior to it in its convergence properties. Since the algorithm is not a line search method, we call it a curve search method.

New Algorithm
We assume that:

(H1): The function f is bounded below on the level set L_0 = {x ∈ R^n : f(x) ≤ f(x_0)}, where the starting point x_0 is available.

(H2): The gradient g is Lipschitz continuous in an open convex set B that contains L_0, i.e., there exists L > 0 such that

||g(x) - g(y)|| ≤ L ||x - y||   for all x, y ∈ B.

There are many rules for choosing the step-size α_k, e.g., (Cohen, 1981) and (Vrahatis, 2000). We use a curve search rule, which is similar to Wolfe's line search rule.

Curve search rule: At each iteration, with fixed s_k > 0, the step-size α_k ∈ (0, s_k] satisfies

f(x_k + α_k d_k(α_k)) ≤ f(x_k) + δ_1 α_k g_k^T d_k(α_k),
g(x_k + α_k d_k(α_k))^T d_k(α_k) ≥ δ_2 g_k^T d_k(α_k),

where 0 < δ_1 < δ_2 < 1. For simplicity, we sometimes denote d_k = d_k(α_k).

It is obvious that the above search rule is not a line search rule, though it is similar to Wolfe's line search rule. We may call it a modified curve search rule, in which the search direction and the step-size are determined at the same time. It differs from traditional line search methods, in which one first defines a descent direction and then finds a step-size along that direction: in the curve search method, the search direction is a vector function of the step-size. In fact, at the k-th iteration we find a new iterate x_{k+1} = x_k + α_k d_k(α_k).

The new algorithm proceeds as follows.

Step 1: If ||g_k|| = 0, then stop; else go to Step 2.
Step 2: Determine the curve d_k(α), where α_k is chosen by the modified curve search rule.
Step 3: Set x_{k+1} = x_k + α_k d_k(α_k).
Step 4: Set k := k + 1 and go to Step 1.

Traditional line search methods consist of two stages: the first finds a descent direction, and the second defines a step-size along the search direction. In the above algorithm, the search direction and the step-size are determined at the same time at each iteration, and the search is along a curve.
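The steps above can be sketched schematically. The paper's exact curve d_k(α) is not reproduced in this extract, so the sketch below uses a stand-in memory curve d_k(α) = -g_k + α p_k (with p_k the previous step) and a simple Armijo-type acceptance test; both are assumptions made only to show how the direction and step-size are chosen together:

```python
import numpy as np

def curve_search_method(f, grad, x0, delta=1e-4, tol=1e-6, max_iter=5000):
    """Schematic curve search: the direction depends on the trial step-size."""
    x = np.asarray(x0, dtype=float)
    p = np.zeros_like(x)                    # previous step (memory term)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:        # Step 1: stop test
            break
        alpha, fx = 1.0, f(x)
        while True:                         # Step 2: find alpha and d together
            d = -g + alpha * p              # direction is a function of alpha
            if g @ d < 0 and f(x + alpha * d) <= fx + delta * alpha * (g @ d):
                break                       # sufficient-decrease test passed
            alpha *= 0.5
            if alpha < 1e-16:               # safeguard: fall back to -g
                d, alpha = -g, 1e-3
                break
        x_new = x + alpha * d               # Step 3: new iterate
        p = x_new - x                       # remember the accepted step
        x = x_new                           # Step 4: next iteration
    return x

# Illustrative quadratic test problem.
f = lambda x: x[0]**2 + 10.0 * x[1]**2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
x_min = curve_search_method(f, grad, [3.0, -2.0])
```

The key structural point is visible in the inner loop: halving α changes not only how far we move but also the direction d itself, which is what distinguishes a curve search from a line search.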
With respect to the above algorithm, the first question is whether α_k exists at each iteration. To answer this question we have the following result: by (H2), the Cauchy-Schwarz inequality and (6), a step-size α_k satisfying the modified curve search rule exists at each iteration.

Numerical Results
We tested the FR method, the PR method and the new method. The test problems are drawn from (Andrei, 2004). The numerical results of our tests are reported in Table 2.1. Each problem was tested with three different values of n, ranging from n = 100 to n = 1000. The numerical results are given in the form NOI/NOF, where NOI and NOF denote the number of iterations and the number of function evaluations, respectively. The stopping condition is ||g_k|| ≤ ε for a prescribed tolerance ε. From Table 2.1 we see that, for these problems, the new method performs much better than the FR method and the PR method.