A New Hybrid Grasshopper Optimization–Backpropagation for Feedforward Neural Network Training
Abstract
The Grasshopper Optimization Algorithm (GOA) converges rapidly in the early stages of a global search, but its progress slows considerably once it approaches the global optimum. Gradient-descent methods, by contrast, converge faster in the neighborhood of the global optimum and achieve higher accuracy there. Accordingly, this paper introduces a hybrid algorithm, referred to as GOA–BP, that combines the Grasshopper Optimization Algorithm with the back-propagation (BP) algorithm to train the weights of a feedforward neural network (FNN). The hybrid exploits the strong global search ability of GOA and the strong local search ability of BP. Experimental results show that the proposed GOA–BP algorithm outperforms both GOA alone and BP alone in convergence speed and accuracy.
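To make the division of labor concrete, the following is a minimal Python sketch of the general GOA-then-BP idea for a one-hidden-layer FNN. The grasshopper dynamics are heavily simplified (the full position-update equation of Saremi et al. is not reproduced), and all function names, network sizes, and hyperparameters (goa_search, bp_refine, pop, iters, lr) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch only: a grasshopper-style global search over flattened
# FNN weights, followed by back-propagation (gradient descent) for local
# refinement. Names and hyperparameters are hypothetical.

rng = np.random.default_rng(0)

def forward(w, X, n_in, n_hid):
    """One-hidden-layer sigmoid FNN; w is a flat weight vector."""
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    W2 = w[n_in * n_hid:].reshape(n_hid, 1)
    H = 1.0 / (1.0 + np.exp(-(X @ W1)))         # hidden activations
    return 1.0 / (1.0 + np.exp(-(H @ W2))), H   # output, hidden

def mse(w, X, y, n_in, n_hid):
    out, _ = forward(w, X, n_in, n_hid)
    return np.mean((out - y) ** 2)

def goa_search(X, y, n_in, n_hid, pop=20, iters=50):
    """Simplified grasshopper-style global search (not the full GOA update)."""
    dim = n_in * n_hid + n_hid
    swarm = rng.uniform(-1, 1, (pop, dim))
    best = min(swarm, key=lambda w: mse(w, X, y, n_in, n_hid)).copy()
    for t in range(iters):
        c = 1.0 - t / iters  # shrinking coefficient: exploration -> exploitation
        for i in range(pop):
            # attraction toward the best position plus damped social interaction
            social = np.sum(swarm - swarm[i], axis=0) / pop
            swarm[i] = best + c * (0.5 * social + rng.normal(0, 0.1, dim))
            if mse(swarm[i], X, y, n_in, n_hid) < mse(best, X, y, n_in, n_hid):
                best = swarm[i].copy()
    return best

def bp_refine(w, X, y, n_in, n_hid, lr=0.5, epochs=500):
    """Plain gradient-descent back-propagation on the MSE loss."""
    w = w.copy()
    for _ in range(epochs):
        W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
        W2 = w[n_in * n_hid:].reshape(n_hid, 1)
        out, H = forward(w, X, n_in, n_hid)
        d_out = (out - y) * out * (1 - out)      # output-layer delta
        d_hid = (d_out @ W2.T) * H * (1 - H)     # hidden-layer delta
        W2 -= lr * (H.T @ d_out) / len(X)        # views into w: updates in place
        W1 -= lr * (X.T @ d_hid) / len(X)
    return w

# Usage: learn XOR, using GOA for coarse search, then BP for fine-tuning.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
w0 = goa_search(X, y, n_in=2, n_hid=4)
w = bp_refine(w0, X, y, n_in=2, n_hid=4)
print("final MSE:", mse(w, X, y, 2, 4))
```

The key design point in such a hybrid is the hand-off: the population-based stage only needs to land in the basin of the global optimum, after which gradient descent handles the fine-grained convergence that swarm methods perform slowly.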
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
Tikrit Journal of Pure Science is licensed under the Creative Commons Attribution 4.0 International License, which allows users to copy, create extracts, abstracts, and new works from the article, alter and revise the article, and make commercial use of the article (including reuse and/or resale by commercial entities), provided the user gives appropriate credit (with a link to the formal publication through the relevant DOI), provides a link to the license, indicates if changes were made, and does not represent the licensor as endorsing the use made of the work. Authors retain the copyright for work published on the Tikrit J. Pure Sci. website, while Tikrit J. Pure Sci. is responsible for the appropriate citation of their work, which is released under CC BY 4.0, enabling unrestricted use, distribution, and reproduction of an article in any medium, provided the original work is properly cited.