Learning rate for the back propagation algorithm based on modified secant equation
IRAQI JOURNAL OF STATISTICAL SCIENCES,
2014, Volume 14, Issue 1, Pages 1-11
Abstract: The classical back propagation method (CBP) is the simplest algorithm for training feed-forward neural networks. It uses the steepest-descent direction with a fixed learning rate to minimize the error function E; because the learning rate is fixed at every iteration, the CBP algorithm converges slowly. In this paper we suggest a new formula for computing the learning rate, based on a modified secant equation, to accelerate the convergence of the CBP algorithm. Simulation results are presented and compared with other training algorithms.
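The abstract's idea can be illustrated with a minimal sketch: steepest descent in which the fixed learning rate is replaced by one computed from a secant-type condition. The paper's exact modified-secant formula is not given here, so as an illustrative stand-in the sketch uses the classical Barzilai-Borwein step, eta = (s^T s)/(s^T y), which is derived from the standard secant equation; the function names and parameters below are hypothetical.

```python
import numpy as np

def grad_descent_secant(grad, w0, eta0=0.01, iters=100):
    """Steepest descent where the learning rate eta is updated from a
    secant-equation approximation (Barzilai-Borwein step, used here as
    an illustrative stand-in for the paper's modified secant formula)."""
    w = w0.copy()
    g = grad(w)
    eta = eta0
    for _ in range(iters):
        w_new = w - eta * g      # steepest-descent step
        g_new = grad(w_new)
        s = w_new - w            # change in parameters
        y = g_new - g            # change in gradient
        denom = s @ y
        if abs(denom) > 1e-12:   # secant-based step; keep old eta otherwise
            eta = (s @ s) / denom
        w, g = w_new, g_new
    return w

# Example: minimize the quadratic error E(w) = 0.5 * w^T A w,
# whose gradient is A w and whose minimizer is w = 0.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
w_star = grad_descent_secant(lambda w: A @ w, np.array([4.0, -2.0]))
```

With a fixed learning rate the same loop would need a step small enough for the steepest curvature direction, slowing progress along the flatter one; the secant-based step adapts to the local curvature automatically.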