A Modified Super-linear QN-Algorithm for Unconstrained Optimization

ABSTRACT In this paper we propose a modified self-scaling QN-algorithm for solving large-scale unconstrained optimization problems, based on a new QN-update. The performance of the proposed algorithm is better than that of the Wei, Li and Yuan algorithm. Our numerical tests show that the new algorithm converges faster than a comparable standard algorithm in many situations.


Introduction
This paper analyzes the convergence properties of self-scaling QN-methods for solving the unconstrained optimization problem, where f is a twice continuously differentiable function. The convergence of QN-methods for unconstrained optimization has been the subject of much analysis. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is generally considered the most effective of the variable-metric methods for unconstrained optimization. One interesting property of the BFGS method is its self-correcting mechanism (a detailed explanation is given, for example, by Nocedal (see [11])). Using this self-correcting property, Powell (see [13]) showed that the BFGS method with an inexact line search satisfying the Wolfe conditions is globally and super-linearly convergent for convex problems, and Byrd, Nocedal and Yuan (see [5]) extended Powell's analysis to the restricted Broyden class excluding the DFP method. Al-Bayati (see [1]) presented a new self-scaling variable-metric algorithm based on a known two-parameter family of rank-two updating formulae. The best of these algorithms have also been modified to employ inexact line searches, with marginal effect; thus Wei, Li and Qi (see [15]) proposed a modified BFGS method whose average performance was better than that of the standard BFGS algorithm.
(Iraqi Journal of Statistical Science (18), 2010.)

Wei et al. (see [14]) proved the super-linear convergence of the Wei, Li and Qi (see [15]) algorithm under some suitable conditions. In this paper a new modified QN-algorithm is proposed. The basic idea is based on the new QN-equation in which y_k is replaced by y_k* , the sum of y_k and A_k s_k, where A_k is some matrix.
This paper is organized as follows. In the next section we present some basic properties of the modified BFGS algorithm. In Section 3 we prove the super-linear convergence of the modified QN-algorithm under some reasonable conditions.

The search direction in a VM-method is the solution of the system of equations d_k = -H_k g_k, where the matrix H_k is an approximation to the inverse Hessian. H_{k+1} is chosen to take account of the new curvature information, which is done by satisfying the QN-condition H_{k+1} y_k = ζ_k s_k, where ζ_k is a scalar (generally ζ_k = 1 for QN-methods). C_k = H_{k+1} - H_k is therefore the update to H_k. There are an infinite number of possible rank-two updates which satisfy the QN-condition, but our main interest is in updates which form the Broyden one-parameter class (see [3]). The matrix H_{k+1}(θ_k) of equation (7), where θ_k is a scalar chosen suitably (see [7]), is defined as in equation (4); the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update corresponds to θ_k = 1 (see [6] and [9]). Oren (see [13]) found that a proper scaling of the objective function improves the performance of algorithms that use the Broyden family of updates. Hence Oren's family of self-scaling VM-updates can be expressed with a scaling parameter η_k; this choice for the scalar parameter η_k was made primarily because in this case η_k requires only the quotient of two quantities which are already computed in the updating formula. Al-Bayati (see [2]) found another interesting family of VM-updates by further scaling Oren's family of updates with a scalar.
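The self-scaling update just described can be made concrete. Below is a minimal NumPy sketch of Oren's scaling η_k = s_k^T y_k / (y_k^T H_k y_k) applied to the BFGS member (θ_k = 1) of the Broyden class; the function name is ours, and the code illustrates the standard formulas rather than the paper's exact routine.

```python
import numpy as np

def self_scaled_bfgs_update(H, s, y):
    """One Oren-style self-scaled BFGS update of the inverse-Hessian
    approximation H, given the step s = x_{k+1} - x_k and gradient
    difference y = g_{k+1} - g_k.  A sketch of the standard formulas,
    not the paper's exact routine."""
    sy = s @ y              # curvature s^T y (must be > 0)
    Hy = H @ y
    yHy = y @ Hy            # y^T H y
    eta = sy / yHy          # Oren scaling: quotient of already-computed quantities
    v = np.sqrt(yHy) * (s / sy - Hy / yHy)
    # Broyden-class update with theta = 1 (BFGS), scaled by eta:
    core = H - np.outer(Hy, Hy) / yHy + np.outer(v, v)
    return eta * core + np.outer(s, s) / sy
```

By construction the scaled update still satisfies the QN-condition H_{k+1} y_k = s_k, since v_k^T y_k = 0.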

So that the updating formula becomes the further-scaled family of updates; for more details see [8].

Modified BFGS Algorithm:
Wei, Li and Qi proposed a new QN-equation (see [15]) in which y_k is replaced by y_k* , where

y_k* = y_k + A_k s_k and A_k is some matrix. By using equation (12) they gave BFGS-type updates (13), with the step-size determined by the following Wolfe-Powell rule, where the line-search constants satisfy 0 < δ < σ < 1.
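Since equations (12)-(13) did not survive in the source, we record here, as an illustration, the modified QN-equation in the explicit form commonly used in the Wei-Li-Qi papers, together with the Wolfe-Powell rule; the particular scalar choice of A_k shown is our hedged reconstruction, not necessarily the original display.

```latex
% Modified QN-equation (12), with y_k^* = y_k + A_k s_k:
B_{k+1} s_k \;=\; y_k^{*}, \qquad
y_k^{*} \;=\; y_k \;+\;
\frac{2\,[f(x_k)-f(x_{k+1})] + (g_{k+1}+g_k)^{\mathsf T} s_k}{\|s_k\|^{2}}\; s_k .
% Wolfe-Powell step-size rule:
f(x_k+\alpha_k d_k) \;\le\; f(x_k) + \delta\,\alpha_k\, g_k^{\mathsf T} d_k ,
\qquad
g(x_k+\alpha_k d_k)^{\mathsf T} d_k \;\ge\; \sigma\, g_k^{\mathsf T} d_k ,
\qquad 0<\delta<\sigma<1 .
```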

Outline of the Modified BFGS Algorithm (MBFGS):
Corresponding to (MBFGS), the outline of the MBFGS algorithm may be listed as follows, where the BFGS-type update (13) uses y_k* in place of y_k and the step-size is chosen by the Wolfe-Powell rule.
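As a concrete (and heavily simplified) sketch of the MBFGS outline, assuming the scalar form of A_k discussed above, one iteration loop might look as follows; all names, tolerances and the simplified step-size search are our illustrative choices, not the paper's FORTRAN implementation.

```python
import numpy as np

def wolfe_step(f, grad, x, d, delta=1e-4, a=1.0):
    """Very simplified step-size search: backtrack until the Armijo part
    of the Wolfe-Powell rule holds; the curvature part is handled by the
    caller, which skips updates with non-positive curvature."""
    fx, gd = f(x), grad(x) @ d
    while f(x + a * d) > fx + delta * a * gd and a > 1e-12:
        a *= 0.5
    return a

def mbfgs(f, grad, x, tol=1e-6, max_iter=500):
    """Sketch of one possible MBFGS loop: an inverse-BFGS update with the
    modified vector y* = y + A s (scalar A, as in the form shown above)."""
    H = np.eye(len(x))
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                       # search direction d_k = -H_k g_k
        a = wolfe_step(f, grad, x, d)
        s = a * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        A = (2.0 * (f(x) - f(x_new)) + (g_new + g) @ s) / (s @ s)
        y_star = y + A * s               # modified QN-equation vector
        if s @ y_star > 1e-12:           # curvature check keeps H positive definite
            rho = 1.0 / (s @ y_star)
            V = np.eye(len(x)) - rho * np.outer(s, y_star)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```

For quadratic objectives the scalar A vanishes identically, so the sketch reduces to standard BFGS there; the modification acts only on non-quadratic curvature.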

Some Properties of the MBFGS Algorithm:
The global convergence of the MBFGS algorithm needs the following three assumptions. Assumption 2.2.1: The level set Ω = {x : f(x) ≤ f(x_1)} is bounded; throughout, G denotes the Hessian matrix of f.

Theorem 2.1: Let {x_k} be generated by the MBFGS algorithm; then the global convergence result holds (see [11]). The super-linear convergence analysis of the MBFGS algorithm needs the following assumptions:

Assumption 2.2.5: G is Hölder continuous at x*; that is, there exist constants ν ∈ (0, 1] and L > 0 such that ||G(x) − G(x*)|| ≤ L ||x − x*||^ν for all x in a neighborhood of x*. Moreover, {f(x_k)} is a decreasing sequence, the sequence {x_k} generated by MBFGS is contained in Ω, and there exists a constant f* such that f(x_k) → f*.

A new Modified QN-Algorithm:
In this section we propose a new QN-method based on the following QN-condition: H_{k+1} y_k* = s_k, where y_k* = y_k + A_k s_k and A_k is some matrix defined by using equation (20), taking H_{k+1} as the Al-Bayati update (see [1]).
Using also the following Armijo condition in (MBFGS), f(x_k + α_k d_k) ≤ f(x_k) + δ α_k g_k^T d_k with δ ∈ (0, 1), yields a new QN-algorithm, given below:
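The Armijo rule just mentioned, as used by the new algorithm in place of the Wolfe-Powell rule, can be sketched as a standard backtracking search; the constants delta and beta are illustrative defaults, not taken from the paper.

```python
import numpy as np

def armijo_step(f, x, d, g, delta=1e-4, beta=0.5, a=1.0, max_halvings=60):
    """Backtracking search for the Armijo condition
       f(x + a d) <= f(x) + delta * a * g^T d.
    delta, beta and the initial trial step a are illustrative defaults."""
    fx, gd = f(x), g @ d          # gd < 0 when d is a descent direction
    for _ in range(max_halvings):
        if f(x + a * d) <= fx + delta * a * gd:
            break
        a *= beta                 # shrink the trial step geometrically
    return a
```

Unlike the Wolfe-Powell rule, this search needs only function values (no extra gradient evaluations per trial step), which is one practical motivation for using it.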

Outline of the Modified QN-Algorithm (NEW):
The outline of the new algorithm may be given as follows. Step 1: choose an initial point x_1 and an initial matrix H_1, with A_k newly defined by (23) and the step-size chosen by the Armijo rule. Step 6: set k := k + 1 and go to Step 1. (The intermediate steps follow the MBFGS outline, with the new update in place of (13).)

Some Theoretical Properties of the New Algorithm:
To show the global and super-linear convergence of the new algorithm we use the same Assumptions 2.2.1, 2.2.2, 2.2.3 and 2.2.4 as above, for all x in a neighborhood of x*; {f(x_k)} is a decreasing sequence, the sequence {x_k} generated by the new algorithm is contained in Ω, and there exists a constant f* such that f(x_k) → f*. The new algorithm satisfies the QN-condition and preserves positive definiteness of the matrices H_k; note that the second condition in (21) guarantees that s_k^T y_k* > 0.

Lemma 3.2.1: Let {x_k} be generated by the new algorithm; then, by Taylor's formula, (26) holds. To prove (27), use equation (26); then, for all k, (27) follows from equations (28) and (29).

Lemma 3.2.2: Let {x_k} be generated by the new algorithm; then x_k tends to x* super-linearly (see [2]).

Lemma 3.2.3: Let {x_k} be generated by the new algorithm, and let G be continuous at x*; then the stated estimate holds. Proof: by using Taylor's formula, the definition of A_k and Lemma 3.2.1, we get the result.
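The Dennis-Moré condition referred to throughout this section, whose displayed form was lost in the source, is the standard characterization of super-linear convergence, stated here under the usual hypotheses that x_k → x*, G(x*) is nonsingular, and unit steps are eventually taken:

```latex
% Dennis-More characterization of super-linear convergence:
\lim_{k\to\infty}\frac{\bigl\|\,(B_k - G(x^{*}))\,s_k\,\bigr\|}{\|s_k\|} \;=\; 0
\qquad\Longleftrightarrow\qquad
\lim_{k\to\infty}\frac{\|x_{k+1}-x^{*}\|}{\|x_k-x^{*}\|} \;=\; 0 .
```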

Lemma 3.2.4: Let {x_k} be generated by the new algorithm, and let ||·||_F denote the Frobenius norm of a matrix, with W_k defined as above. This is the dual form of the DFP-type algorithm in the sense that the corresponding bound holds for some constant, where W_k is defined as before. Moreover, there exists a constant b_7 such that the above inequality holds for all k large enough; this implies that there is a constant C such that, when k is sufficiently large, we can deduce that there are positive constants M_1 and M_2 for which the bound holds for all large k. Summing the above inequality over k, we get the required estimate, where k_0 is a sufficiently large index such that (31) holds for all k ≥ k_0. On the other hand, from equation (36) and Lemma 3.2, together with inequalities (35) and (36), we conclude that the Dennis-Moré condition holds. □
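For completeness, the Frobenius norm used in Lemma 3.2.4 is the standard matrix norm:

```latex
\|A\|_{F} \;=\; \Bigl(\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}^{2}\Bigr)^{1/2}
\;=\; \sqrt{\operatorname{tr}\!\left(A^{\mathsf T} A\right)} .
```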

Theorem: Let {x_k} be a sequence generated by the new algorithm; then x_k tends to x* super-linearly. Proof: we will verify that the Dennis-Moré condition holds.

Numerical Results:
In this section we compare the numerical behavior of the new algorithm with the MBFGS algorithm for different dimensions of test functions. Comparative tests were performed with (41) well-known test functions (specified in Appendices 1 and 2; see [10]). All the results are shown in Tables (1) and (2), while Table (3) gives the percentage of NOI and NOF. All the results were obtained with newly-programmed FORTRAN routines which employ double precision. The comparative performance of the algorithms is measured in the usual way by considering both the total number of function evaluations (NOF) and the total number of iterations required to solve the problem (NOI). In each case the convergence criterion is the value of the Armijo fitting by Frandsen (see [2]), and the Powell line search (see [3]) is used as the common line-search subprogram.
Each of the functions was solved using the following algorithms: (1) the MBFGS algorithm; (2) the new algorithm. The important point is that the new algorithm needs fewer iterations and fewer evaluations of f(x) and g(x) than MBFGS. We can see that the other algorithm may fail in some cases, while the new algorithm always converges. Moreover, the numerical experiments also show that the new algorithm converges stably. Namely, there are about 60-87% improvements in NOI for all dimensions, and 30-78% improvements in NOF for all test functions.
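The percentage improvements quoted above are computed in the usual way, with MBFGS taken as the 100% baseline; a small sketch with hypothetical NOI counts (not the paper's data):

```python
def improvement(baseline, new):
    """Percentage improvement of `new` over `baseline` (baseline = 100%)."""
    return 100.0 * (baseline - new) / baseline

# Hypothetical NOI counts for one test function (illustrative only):
noi_mbfgs, noi_new = 120, 30
print(improvement(noi_mbfgs, noi_new))  # 75.0, within the reported 60-87% band
```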
Table (1): Comparison between the new algorithm and the MBFGS algorithm using different values of 12 < N < 4320 for the 1st test function. Table (3): Percentage performance of the new algorithm against the MBFGS algorithm, with MBFGS taken as 100% in both NOI and NOF.

Conclusions:
In this paper, a new modified self-scaling QN-algorithm for solving large-scale unconstrained optimization problems is proposed. The basic idea is based on a new QN-update which is proved to have the super-linear convergence property. Our numerical results support our claim and also indicate that the new algorithm is competitive with the MBFGS algorithm for most of the test functions.

Appendix 1:

All the test functions used in Table (1) of this paper are from the general literature, among them: 1. Generalized Shallow Function; 4. Generalized Edger Function.

Appendix2:
All the test functions used in Table (2) of this paper are from the general literature. (Since new information has been gained about f only in one dimension (along d_k), H_{k+1} is allowed to differ from H_k by a correction matrix C_k of at most rank two, i.e. H_{k+1} = H_k + C_k.)
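The rank-two correction structure described in the preceding fragment has the generic form (a standard statement, supplied since the original display is missing):

```latex
H_{k+1} \;=\; H_k + C_k , \qquad \operatorname{rank}(C_k)\le 2 , \qquad
C_k \;=\; a\,u u^{\mathsf T} + b\,v v^{\mathsf T} ,
```

with the vectors u, v and scalars a, b chosen so that the QN-condition H_{k+1} y_k = s_k is satisfied.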

In MBFGS the step-size along d_k is determined by the Wolfe-Powell rule, while in the new algorithm the step-size along d_k is determined by the Armijo line-search rule for all large k.

From this and (33) we get (31). Lemma 3: Let {x_k} be generated by the new algorithm; then the following Dennis-Moré condition holds for the new technique, using the Armijo equation (21), for all k sufficiently large.

Table (2): Comparison between the new algorithm and the MBFGS algorithm using different values of 12 < N < 4320 for the 2nd test function.