A new learning rate based on Andrei's method for training feed-forward artificial neural networks
Abstract
In this paper we develop a new method for computing the learning rate of the back-propagation algorithm used to train feed-forward neural networks. The idea is based on approximating the inverse Hessian matrix of the error function, as originally suggested by Andrei. Experimental results show that the proposed method considerably improves the convergence rate of the back-propagation algorithm on the chosen test problem.
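Since the abstract only outlines the approach, the following minimal sketch (Python with NumPy) illustrates the general idea of replacing a fixed back-propagation learning rate with one derived from a scalar approximation of the inverse Hessian of the error function. The network size, the XOR test problem, the helper names (error_and_grad, flatten) and the Barzilai-Borwein-style step eta_k = s^T s / s^T y used below are illustrative assumptions, not the exact Andrei-based formula developed in the paper.

    # Illustrative sketch: back-propagation with a learning rate taken from a
    # scalar (Barzilai-Borwein-style) approximation of the inverse Hessian.
    # This is NOT the paper's exact Andrei-based formula, only the general idea.
    import numpy as np

    rng = np.random.default_rng(0)

    # XOR training set (an assumed small test problem; the paper's may differ).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def init_params(n_in=2, n_hid=4, n_out=1):
        return {
            "W1": rng.normal(scale=0.5, size=(n_in, n_hid)),
            "b1": np.zeros(n_hid),
            "W2": rng.normal(scale=0.5, size=(n_hid, n_out)),
            "b2": np.zeros(n_out),
        }

    def forward(p, X):
        h = sigmoid(X @ p["W1"] + p["b1"])
        y = sigmoid(h @ p["W2"] + p["b2"])
        return h, y

    def error_and_grad(p, X, T):
        """Sum-of-squares error and its gradient via back-propagation."""
        h, y = forward(p, X)
        e = y - T
        E = 0.5 * np.sum(e ** 2)
        delta2 = e * y * (1 - y)                     # output-layer deltas
        delta1 = (delta2 @ p["W2"].T) * h * (1 - h)  # hidden-layer deltas
        g = {
            "W1": X.T @ delta1, "b1": delta1.sum(axis=0),
            "W2": h.T @ delta2, "b2": delta2.sum(axis=0),
        }
        return E, g

    def flatten(d):
        return np.concatenate([d[k].ravel() for k in ("W1", "b1", "W2", "b2")])

    p = init_params()
    eta = 0.5                  # initial learning rate before curvature info exists
    w_prev = g_prev = None

    for k in range(2000):
        E, g = error_and_grad(p, X, T)
        w, gv = flatten(p), flatten(g)
        if w_prev is not None:
            s, ydiff = w - w_prev, gv - g_prev
            denom = s @ ydiff
            if denom > 1e-12:
                # Scalar secant approximation of the inverse Hessian:
                # eta_k = s^T s / s^T y (Barzilai-Borwein step, used here as a
                # stand-in for the Andrei-based learning rate of the paper).
                eta = np.clip((s @ s) / denom, 1e-3, 10.0)
        w_prev, g_prev = w, gv
        for key in p:
            p[key] -= eta * g[key]

    print("final error:", error_and_grad(p, X, T)[0])

The point of the sketch is that the only change relative to standard back-propagation is how eta is chosen before each weight update; the forward and backward passes are unchanged.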
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
Tikrit Journal of Pure Science is licensed under the Creative Commons Attribution 4.0 International License, which allows users to copy, create extracts, abstracts, and new works from the article, alter and revise the article, and make commercial use of the article (including reuse and/or resale of the article by commercial entities), provided the user gives appropriate credit (with a link to the formal publication through the relevant DOI), provides a link to the license, indicates if changes were made, and does not represent the licensor as endorsing the use made of the work. The authors hold the copyright for their published work on the Tikrit J. Pure Sci. website, while Tikrit J. Pure Sci. is responsible for appropriate citation of their work, which is released under CC-BY-4.0, enabling unrestricted use, distribution, and reproduction of an article in any medium, provided that the original work is properly cited.
References
[1] Abbo K. and Hind M. (2012) 'Improving the learning rate of the back-propagation algorithm Aitken process'. Iraqi Journal of Statistical Sciences (to appear).
[2] Abbo K. and Zena T. (2012) 'Minimization algorithm for training feed-forward neural networks'. Journal of Education and Science (to appear).
[3] Barzilai J. and Borwein J. (1988) 'Two-point step size gradient methods'. IMA Journal of Numerical Analysis, 8.
[4] Andrei N. (2005) 'A New Gradient Descent Method with Anticipative Scalar Approximation of Hessian for Unconstrained Optimization'. Scrieri Matematice 1, Romania.
[5] Gong L., Liu G., Li Y. and Yuan F. (2012) 'Training feed-forward neural networks using the gradient descent method with optimal step size'. Journal of Computational Information Systems, 8(4).
[6] Hertz J., Krogh A. and Palmer R. (1991) 'Introduction to the Theory of Neural Computation'. Addison-Wesley, Reading, MA.
[7] Jacobs R. (1988) 'Increased rates of convergence through learning rate adaptation'. Neural Networks, 1(4).
[8] Kostopoulos A., Sotiropoulos D. and Grapsa T. (2004) 'A new efficient learning rate for Perry's spectral conjugate gradient training method'. 1st International Conference From Scientific Computing to Computational Engineering (IC-SCCE), Greece.
[9] Livieris I. and Pintelas P. (2011) 'An advanced conjugate gradient training algorithm based on a modified secant equation'. Technical Report TR11-03, Department of Mathematics, University of Patras, Patras, Greece.
[10] Nguyen D. and Widrow B. (1990) 'Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights'. Biological Cybernetics, 59.
[11] Plagianakos V., Magoulas G. and Vrahatis M. (2002) 'Deterministic nonmonotone strategies for effective training of multilayer perceptrons'. IEEE Transactions on Neural Networks, 13(6).
[12] Rumelhart D., Hinton G. and Williams R. (1986) 'Learning representations by back-propagating errors'. Nature, 323.
[13] Sotiropoulos D., Kostopoulos A. and Grapsa T. (2004) 'Training neural networks using two-point step-size gradient methods'. International Conference of Numerical Analysis and Applied Mathematics, Patras, Greece.