Two New Approaches for the PARTAN Method

In this paper, we suggest two new approaches for the parallel tangent (PARTAN) method. The first combines the PARTAN method with Perry's algorithm, and the second combines it with the algorithm of Al-Bayati and Ahmed (1996). The suggested methods are tested on unconstrained optimization problems and compared using statistical tests; the results show that the new methods improve on the original PARTAN method with respect to both time and accuracy.


Introduction:
This paper is concerned with the unconstrained minimization problem

$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$

where f is a reasonably smooth function. Some of the best methods for solving eq.(1) are the quasi-Newton (QN) methods; however, since they rely on matrix computations, difficulties with computer storage arise when the dimension of the problem becomes large. A number of attempts have been made to overcome this situation, either by modifying the QN-methods themselves or by improving conjugate gradient methods.
The advantage of conjugate gradient methods is of course, that they depend on vector computations only (see Khoda and Storey, 1992).
CG-algorithms are iterative techniques that generate a sequence of approximations to the minimizer $x^*$ of a scalar function f(x) of the vector variable x. The sequence $\{x_k\}$ is defined by

$$x_{k+1} = x_k + \alpha_k d_k, \qquad (2)$$

$$d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad d_1 = -g_1, \qquad (3)$$

where $g_k$ is the gradient of f(x) at $x_k$, $\alpha_k$ is a positive scalar chosen to minimize f(x) along the search direction $d_k$, and $\beta_k$ is a coefficient given by one of several standard expressions; the one relevant below is the Hestenes-Stiefel (HS) choice

$$\beta_k = \frac{g_{k+1}^T y_k}{d_k^T y_k}, \qquad y_k = g_{k+1} - g_k, \qquad (4)$$

which makes $d_{k+1}$ conjugate to $d_k$ when the line search is exact.
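To make the iteration concrete, the following is a minimal Python sketch of the generic CG scheme of eqs.(2)-(4). The backtracking line search is a simple stand-in for the cubic fitting search used later in the paper, and all names here are illustrative rather than the authors' code.

```python
import numpy as np

def backtracking(f, x, d, g, t=1.0, rho=0.5, c=1e-4):
    """Armijo backtracking line search; a stand-in for the cubic
    fitting search used in the paper (assumes d is a descent direction)."""
    while f(x + t * d) > f(x) + c * t * (g @ d):
        t *= rho
    return t

def cg_minimize(f, grad, x0, eps=1e-5, max_iter=1000):
    """Generic nonlinear CG: x_{k+1} = x_k + alpha_k d_k with
    d_{k+1} = -g_{k+1} + beta_k d_k and the HS coefficient of eq.(4)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                # d_1 = -g_1
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:
            break
        alpha = backtracking(f, x, d, g)
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                     # y_k = g_{k+1} - g_k
        beta = (g_new @ y) / (d @ y)      # Hestenes-Stiefel, eq.(4)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

For example, `cg_minimize(lambda x: x @ x, lambda x: 2 * x, np.ones(4))` drives the gradient of a simple quadratic below the tolerance in a few steps.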

Conjugate Gradient Algorithms as Memoryless QN-Algorithms:
This type of CG-algorithm was suggested for the first time by Perry (1978) and further analyzed by Shanno (1978a). These algorithms generate descent directions even if inexact line searches (ILS) are used, since the search direction is

$$d_{k+1} = -H_{k+1} g_{k+1}. \qquad (9)$$

Multiplying eq.(9) by $g_{k+1}^T$ yields

$$g_{k+1}^T d_{k+1} = -g_{k+1}^T H_{k+1} g_{k+1} < 0.$$

Since $H_{k+1}$ is positive definite, the quadratic form on the right is positive, which implies that $d_{k+1}$ is a descent direction. $H_k$ is updated through the BFGS update formula (see Bazarra et al., 2000).
Given some approximation $H_k$ to the inverse Hessian matrix, we compute the search direction $d_k = -H_k g_k$ and then construct an updated matrix $H_{k+1}$ satisfying the quasi-Newton condition $H_{k+1} y_k = s_k$, where $s_k = x_{k+1} - x_k$. Moreover, this type of algorithm does not need to store or update the matrix H explicitly: with $H_k$ reset to the identity, applying the update reduces to operations on vectors of order n.
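As an illustration of this order-n computation, here is a sketch of the memoryless BFGS direction: the BFGS update is applied to $H_k = I$ and multiplied into $g_{k+1}$ using inner products only, so the $n \times n$ matrix is never formed. The closed form below is the standard memoryless BFGS expression, stated here as an illustration rather than quoted from this paper.

```python
import numpy as np

def memoryless_bfgs_direction(g_new, s, y):
    """d_{k+1} = -H_{k+1} g_{k+1} with H_{k+1} the BFGS update of H_k = I.
    Only inner products and vector sums of order n are needed
    (assumes the curvature condition s^T y > 0 holds)."""
    sy = s @ y                            # y_k^T s_k
    gs = g_new @ s                        # g_{k+1}^T s_k
    gy = g_new @ y                        # g_{k+1}^T y_k
    return (-g_new
            + (gy * s + gs * y) / sy      # rank-two correction
            - (1.0 + (y @ y) / sy) * (gs / sy) * s)
```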

Perry's Conjugate Gradient Algorithm:
Among the most efficient CG-algorithms is the Perry CG-algorithm. In eq.(3) the scalar $\beta_k$ was chosen to make $d_k$ and $d_{k+1}$ conjugate under an exact line search. In general, line searches are not exact; Perry relaxed this requirement and rewrote eq.(3), with $\beta_k$ defined by eq.(4) but assuming an inexact line search, obtaining

$$d_{k+1} = -\left[ I - \frac{s_k y_k^T}{y_k^T s_k} \right] g_{k+1}. \qquad (14)$$

But this matrix is not of full rank; hence he modified eq.(14) to

$$d_{k+1} = -\left[ I - \frac{s_k y_k^T - s_k s_k^T}{y_k^T s_k} \right] g_{k+1} = -g_{k+1} + \frac{(y_k - s_k)^T g_{k+1}}{y_k^T s_k}\, s_k. \qquad (15)$$

Algorithm (Perry):
An algorithm based on the search direction given in eq.(15) is as follows:
Step 1: Let $x_1$ be an estimate of a minimizer $x^*$ of f, and let $\varepsilon > 0$ be a tolerance.
Step 2: Set k = 1 and compute $d_1 = -g_1$.
Step 3: Line search: compute $x_{k+1} = x_k + \alpha_k d_k$, where $\alpha_k$ is a scalar chosen in such a way that $f_{k+1} < f_k$.
Step 4: If $\|g_{k+1}\| < \varepsilon$, stop.
Step 5: If k = n or the restart criterion is satisfied, then compute the new search direction $d_{k+1} = -g_{k+1}$, set k = 1 and go to Step 3. Else set k = k + 1.
Step 6: Compute the new search direction $d_{k+1}$ from eq.(15) and go to Step 3.
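A compact sketch of the Perry iteration as outlined above, using the vector form of eq.(15) for the direction. The periodic restart of Step 5 is included, while the additional restart criterion that is illegible in the source is omitted; the `backtracking` helper from the first sketch is reused, so the line search here is only a stand-in.

```python
import numpy as np

def perry_cg(f, grad, x0, eps=1e-5, max_iter=1000):
    """Perry's CG iteration: the HS direction plus a correction term
    that remains valid under inexact line searches (eq.(15))."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # Step 2: d_1 = -g_1
    n, k = x.size, 1
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:           # Step 4
            break
        alpha = backtracking(f, x, d, g)      # Step 3: any f_{k+1} < f_k
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if k == n:                            # Step 5: periodic restart
            d, k = -g_new, 1
        else:                                 # Step 6: eq.(15)
            d = -g_new + ((y - s) @ g_new / (y @ s)) * s
            k += 1
        x, g = x_new, g_new
    return x
```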

Single-Step Variable-Storage Conjugate Gradient Algorithm:
Al-Bayati and Ahmed (1996) developed a variable-storage CG-algorithm in which the approximation to the inverse Hessian is reset to the identity matrix at every step; the resulting update formula generates positive definite matrices. It is clear that if an exact line search is used (so that $g_{k+1}^T s_k = 0$), then eq.(19a) reduces to the standard HS-CG-algorithm, and therefore the method has n-step convergence to the minimum of a quadratic function. Thus this CG-algorithm is defined precisely by the new VM-update, with the approximate inverse Hessian reset to the identity matrix at every step.
Step 1: Let $x_1$ be an estimate of a minimizer of f, and let $\varepsilon > 0$ be a tolerance.
Step 2: Set k = 1 and compute $d_1 = -g_1$.
Step 3: Set $x_{k+1} = x_k + \alpha_k d_k$, where $\alpha_k$ is a scalar chosen in such a way that $f_{k+1} < f_k$.
Step 4: Check for convergence: if $\|g_{k+1}\| < \varepsilon$, where $\varepsilon$ is a small positive tolerance, stop.
Step 5: If k = n, set $d_{k+1} = -g_{k+1}$, set k = 1 and go to Step 3. Else set k = k + 1.
Step 6: Compute the new search direction $d_{k+1}$ from the VM-update eq.(19a) and go to Step 3.
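The reduction to HS-CG under an exact line search can be checked numerically. The sketch below uses the memoryless BFGS direction from the earlier sketch as a stand-in for the update eq.(19a), which is not legible in the source: once $g_{k+1}^T s_k = 0$ is enforced, the variable-metric direction collapses to the HS-CG direction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
s, y, g_new = rng.normal(size=(3, n))
g_new -= (g_new @ s) / (s @ s) * s      # enforce g_{k+1}^T s_k = 0 (exact LS)

d_vm = memoryless_bfgs_direction(g_new, s, y)   # variable-metric direction
beta_hs = (g_new @ y) / (s @ y)                 # HS coefficient (s = alpha*d)
d_hs = -g_new + beta_hs * s

print(np.allclose(d_vm, d_hs))          # True: the update collapses to HS-CG
```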

The Parallel Tangent Method (PARTAN):
This procedure proceeds toward the minimum of a differentiable objective function f along successive straight lines. The path directions are alternately determined by the positions of points already reached or by certain gradient directions. The method does not involve the explicit construction of mutually conjugate direction vectors, although mutually conjugate vectors can be constructed from the direction vectors it generates. This property underlies the convergence of the PARTAN method.

A General Outline of the PARTAN Algorithm:
Starting procedure: For the first step, a steepest-descent step is taken from the initial point $x_1$, and a second gradient step follows:

$$x_{k+1} = x_k - \alpha_k g_k, \qquad k = 1, 2. \qquad (22)$$

Then the fourth point is generated by moving in a direction collinear with $(x_3 - x_1)$, so that

$$d_3 = x_3 - x_1. \qquad (23)$$

This is referred to as an acceleration step. Continuing the procedure: After determining $x_4$, the procedure is continued by successively alternating gradient and acceleration steps.
In general, writing $y_k$ for the result of the gradient step from the current iterate $x_k$,

$$y_k = x_k - \alpha_k g_k, \qquad (24)$$

the next iterate is found by an acceleration search along the line through the previous iterate and $y_k$:

$$x_{k+1} = y_k + \beta_k (y_k - x_{k-1}). \qquad (25)$$

This method will reach the minimum of an n-dimensional quadratic surface in no more than 2n steps. The directions $d_i$ that are generated are not mutually conjugate, but the following property holds:
1- The search directions are descent directions, i.e. $d_i^T g_i < 0$.
The PARTAN algorithm stops when $\|g_{k+1}\|$ is sufficiently small, and in perfect arithmetic it should terminate in at most n iterations, whatever the choice of $x_0$. In particular, the algorithm will converge in k (< n) iterations if the Hessian matrix of the function f has only k distinct eigenvalues. These properties follow because the recurrence relation for $d_i$ is designed to ensure that the search directions are conjugate with respect to the Hessian matrix of f. Scalar products appear in the expressions for $d_i$ and the step length $\alpha_i$.
The behavior of the PARTAN algorithm in finite precision arithmetic will therefore depend on how accurately these scalar products are computed.
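The following Python sketch implements the classical PARTAN alternation of eqs.(24)-(25): a gradient step to $y_k$, then an acceleration search along the line through the previous iterate and $y_k$. The reuse of the `backtracking` helper and the descent guard on the acceleration step are implementation choices of this sketch, not details taken from the paper.

```python
import numpy as np

def partan(f, grad, x0, eps=1e-5, max_iter=500):
    """Classical PARTAN: alternate a steepest-descent step with an
    acceleration search along the line through the previous iterate."""
    x_prev = np.asarray(x0, dtype=float)
    g = grad(x_prev)
    x = x_prev - backtracking(f, x_prev, -g, g) * g   # first gradient step
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps:
            break
        y = x - backtracking(f, x, -g, g) * g         # gradient step, eq.(24)
        d = y - x_prev                                # direction y_k - x_{k-1}
        gy = grad(y)
        if gy @ d < 0:                                # acceleration, eq.(25)
            y = y + backtracking(f, y, d, gy) * d
        x_prev, x = x, y
    return x
```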

The Outline of the Modified PARTAN Algorithm (1):
Step (1): Set the initial point $x_0$ and a tolerance $\varepsilon > 0$.
Step (2): Set k = 1 and compute $d_1 = -g_1$.
Step (3): Line search: compute $x_{k+1} = x_k + \alpha_k d_k$, where $\alpha_k$ is chosen so that $f_{k+1} < f_k$.
Step (4): If $\|g_{k+1}\| < \varepsilon$, stop.
Step (5): Compute $\beta_k$, the conjugacy coefficient.
Step (6): If k = n or the restart criterion is satisfied, take a PARTAN acceleration step, set k = 1 and go to Step (3). Else set k = k + 1.
Step (7): Compute the new search direction $d_{k+1}$ from eq.(15) and go to Step (3).

Computational cost: each iteration of the new algorithm with accurate scalar products is approximately ten times as expensive as one without; i.e., if the cost of a "normal" iteration is $sn^2$, the cost of one with accurate scalar products is about $10sn^2$. This penalty should be set against the fact that accurate scalar products will sometimes allow fewer iterations to be taken.
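Since the printed outline is incomplete, the sketch below shows one natural reading of Algorithm (1): Perry directions (eq.(15)) between restarts, with a PARTAN-style acceleration step at each restart. This assembly, the descent guard, and the reuse of the `backtracking` helper are assumptions of this sketch, not the authors' exact procedure.

```python
import numpy as np

def modified_partan_perry(f, grad, x0, eps=1e-5, max_iter=1000):
    """One reading of Modified PARTAN (1): Perry CG steps, with the
    periodic restart replaced by a PARTAN acceleration step."""
    x = np.asarray(x0, dtype=float)
    x_old = x.copy()                 # iterate kept for the acceleration step
    g = grad(x)
    d = -g
    n, k = x.size, 1
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:
            break
        alpha = backtracking(f, x, d, g)
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if k == n:                   # restart: PARTAN acceleration step
            a = x_new - x_old        # line through the stored iterate
            if g_new @ a < 0:
                x_new = x_new + backtracking(f, x_new, a, g_new) * a
                g_new = grad(x_new)
            x_old = x_new.copy()
            d, k = -g_new, 1
        else:                        # Perry direction, eq.(15)
            d = -g_new + ((y - s) @ g_new / (y @ s)) * s
            k += 1
        x, g = x_new, g_new
    return x
```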

Duncan Test:
We used the Duncan test to compare the differences between the means, depending on the value of the Least Significant Range (L.S.R.) (Ronald, 1971), as follows:
1. Estimate the standard error value for each mean.
4. Arrange the means in decreasing or increasing order.
5. Compare the differences between means with the L.S.R. value to decide whether each is significant or not: if a difference is greater than the L.S.R., we say it is significant, and the reverse is true.
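As a toy illustration of steps 4-5, the sketch below orders a set of hypothetical algorithm means and flags the pairwise differences that exceed an assumed L.S.R. table; the means and L.S.R. values are invented for the example, and the computation of the L.S.R. itself from Duncan's tables is not reproduced.

```python
import numpy as np

def duncan_compare(means, lsr):
    """Duncan-style comparison: sort the means, then flag a pair as
    significantly different when its gap exceeds lsr[p], the least
    significant range for a span of p ordered means (assumed given)."""
    order = np.argsort(means)
    m = np.sort(means)
    results = []
    for i in range(len(m)):
        for j in range(i + 1, len(m)):
            span = j - i + 1
            diff = m[j] - m[i]
            results.append((order[i], order[j], diff, diff > lsr[span]))
    return results

# hypothetical mean CPU times of three algorithms and assumed L.S.R. values
means = np.array([12.4, 9.1, 8.7])
lsr = {2: 1.0, 3: 1.05}
for i, j, diff, sig in duncan_compare(means, lsr):
    print(f"mean {i} vs mean {j}: diff = {diff:.2f}, significant = {sig}")
```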

Results and Conclusions:
In order to assess the performance of the new proposed algorithms, three algorithms are tested over 8 generalized test functions. All the algorithms in this paper use the same exact line search strategy, namely the cubic fitting technique directly adapted from Bunday (1984). All the algorithms are taken to have converged when $\|g_{k+1}\| < \varepsilon$, where $\varepsilon = 1 \times 10^{-5}$.
The numerical results are presented in the following two tables. In Table (1), we compare all our CG-algorithms on eight well-known test functions with dimension n = 100. In Table (2), we compare the same algorithms with dimension n = 1000.