A new hybrid scaling parameter for a VM-algorithm

In this paper we propose a modified hybrid conjugate direction algorithm which combines a well-known CG-method based on a non-quadratic model with a well-known VM-method based on the quadratic model. The new algorithm is treated both theoretically and numerically: it is shown to be stable, its convergence is super-linear, and it uses an exact line search. Our numerical results indicate that the modified hybrid method performs well compared with the two well-known methods from which it is built.


1-Introduction
We will consider the problem of computing a point which is a good approximation to a local minimum of a nonlinear, twice differentiable function f(x). To solve a particular problem of this type, one commonly uses either a Conjugate Gradient (CG) algorithm or a Variable Metric (VM) algorithm. Each has its advantages. In general, a CG-algorithm requires more iterations than a VM one to obtain an equally good local minimum, but on the other hand a CG-algorithm requires little storage for its implementation.
We try to solve the unconstrained minimization problem (1), where f is a twice continuously differentiable function. This problem is usually solved iteratively: starting with an initial estimate x_1 of the minimum point, each subsequent point is obtained by a line search along a search direction, where the line-search parameters satisfy 0 < c_1 < c_2 < 1. The Conjugate Gradient (CG) method is one of the few practical methods for solving problems of large dimensionality, because it does not require matrix storage and its iteration cost is very low. Normally the initial direction d_1 is the steepest-descent direction, and β is a scalar parameter obtained by (Fletcher & Reeves, 1964), (Polak & Ribière, 1969), (Hestenes & Stiefel, 1952), or (Dixon, 1975).
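The classical Fletcher–Reeves iteration referred to above can be sketched as follows. This is an illustrative stand-in only: the function names are ours, and the backtracking (Armijo) line search merely approximates the exact line search the paper assumes.

```python
import numpy as np

def cg_fletcher_reeves(f, grad, x1, tol=1e-8, max_iter=500):
    """Minimal Fletcher-Reeves CG sketch with backtracking line search
    and periodic restarts (an assumed safeguard, not the paper's scheme)."""
    x = np.asarray(x1, dtype=float)
    n = x.size
    g = grad(x)
    d = -g                                  # initial direction: steepest descent
    for i in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # backtracking (Armijo) line search as a stand-in for an exact search
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves parameter
        d = -g_new + beta * d
        if (i + 1) % n == 0 or g_new @ d >= 0:
            d = -g_new                      # restart with steepest descent
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic this sketch recovers the unique minimizer; the restart every n steps mirrors the restarting criterion mentioned in Algorithm (2.2) below.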

2-The Extended CG-Method
Standard methods for solving problem (1) include the CG-method, which requires 4n locations of computer storage to implement. The CG-method is iterative and generates a sequence of approximations to the minimizer x_min of f(x).
We can define the classical CG-algorithm as follows (Jongen et al., 2004):
Algorithm (2.1):
Step (1): Choose x_1, an initial estimate of the minimizer x*.
Step (2): Set d_1 = -g_1.
In fact, many attempts have been made to investigate functions other than the quadratic one as a basis for the CG-method. Over the years, various authors have published works addressing this problem for many sorts of objective function; see for example (Fried, 1971), (Goldfarb, 1972), (Boland et al., 1979a, 1979b), (Tassopoulos & Story, 1984a, 1984b), (Al-Bayati, 1993), (Al-Bayati & Al-Naemi, 1995), (Andrei, 2006). The most popular extended CG-algorithm is based on the logarithmic model.

Algorithm (2.2): (An extended non-quadratic model algorithm)
For a general function f(x) with gradient g(x) and any starting point x_1 ∈ R^n, follow these steps:
Step (2): Set
Step (3): Compute
Step (4): Compute
Step (5): Test for convergence; if the test is not satisfied, continue.
Step (6): If i = n, or any other restarting criterion is satisfied, go to Step (1); else set i = i+1 and go to Step (2). Now, to ensure that the extended CG-method produces a sequence of approximations identical to that of a standard CG-algorithm, let us consider the following theorem:

Theorem (2.1)
Given an identical starting point x_1, the method of (Fletcher and Reeves, 1964), defined by β_i = ||g_{i+1}||^2 / ||g_i||^2 (where ||·|| is the Euclidean norm) and applied to f(x) = q(x), and the extended CG-method, using the following search direction and applied to f(q(x)), generate identical conjugate directions (within a scalar multiple). It is assumed that the one-dimensional searches are exact. The vectors g*_1, …, g*_i ∈ R^n are the gradients of f(q(x)) at x_1, …, x_i, respectively.

Proof:
The theorem is true for i = 1, because of (14). Now, for i = 2, we have

3-The Self-Scaled VM-Algorithm
A VM-algorithm begins with an estimate x_1 of the minimizer x_min and a numerical estimate H_1 of the inverse Hessian matrix, where H_i is updated by a rank-2 correction from the Broyden family, i.e. the BFGS update. The self-scaled updating can be defined as in (1991). Now, the outline of the standard self-scaling VM-algorithm is:

Algorithm (3.1): (A self-scaling quadratic model algorithm)
Start with any initial point x_1.
Step (1): Set i = 1 and choose H_1 to be any positive definite matrix (usually H_1 = I).
Step (4): Test for convergence; if the test is not satisfied, set i = i+1 and go to Step (2).
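The rank-2 self-scaled BFGS update used in Algorithm (3.1) can be illustrated as below. The Oren–Luenberger scaling factor τ = sᵀy / yᵀHy is an assumed stand-in, since the paper's own scaling parameter is not reproduced in this excerpt.

```python
import numpy as np

def self_scaling_bfgs_update(H, s, y):
    """One self-scaled BFGS update of the inverse-Hessian approximation H.
    s = x_{i+1} - x_i, y = g_{i+1} - g_i; requires s @ y > 0.
    The scaling factor tau is the Oren-Luenberger form (an assumption)."""
    sy = s @ y
    Hy = H @ y
    tau = sy / (y @ Hy)          # self-scaling factor (assumed form)
    H = tau * H                  # scale before the rank-2 correction
    rho = 1.0 / sy
    V = np.eye(len(s)) - rho * np.outer(s, y)
    # standard BFGS rank-2 update applied to the scaled matrix
    return V @ H @ V.T + rho * np.outer(s, s)
```

Whatever the scaling, the updated matrix still satisfies the secant (quasi-Newton) condition H_{i+1} y = s, which is the defining property of the rank-2 family.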

Theorem (3.1)
Assume that f(x) is a quadratic function and that the line searches are exact. If H is any symmetric positive definite matrix and we define an updating

Proof:
The update (23) can be written as:

4-A Modified CD-Algorithm (based on a mixed quadratic & non-quadratic model)
The fundamental strategy we wish to present is the following: it is based on combining the self-scaling VM-restarts of the form (21) with subsequent ECG-steps as defined in (5).
The new self-scaled updating can be redefined as a convex combination of the two updating formulas.
Step (1): Set i = 1 and choose H_1 to be any positive definite matrix (usually H_1 = I).
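As an illustration of the convex-combination pattern only (not the paper's actual hybrid formula, which mixes the quadratic and non-quadratic models), a Broyden-family combination of the DFP and BFGS inverse-Hessian updates with a mixing parameter θ ∈ [0,1] can be sketched as:

```python
import numpy as np

def dfp_update(H, s, y):
    """DFP rank-2 update of the inverse-Hessian approximation."""
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

def bfgs_update(H, s, y):
    """BFGS rank-2 update of the inverse-Hessian approximation."""
    rho = 1.0 / (s @ y)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def convex_combination_update(H, s, y, theta):
    """Convex combination of two rank-2 updates; theta in [0,1] is a
    hypothetical mixing parameter standing in for the paper's hybrid
    scaling parameter."""
    return theta * bfgs_update(H, s, y) + (1.0 - theta) * dfp_update(H, s, y)
```

Since both component updates satisfy the secant condition H_{i+1} y = s, any convex combination of them does too, which is what makes this kind of hybrid update well defined.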

6-The Numerical Results
In this section we compare our new proposed CD-algorithm against the standard, well-known BFGS algorithm, which is widely regarded as the most effective VM-algorithm. Of course, the scalar