PERFORMANCE REFINEMENT OF CONVOLUTIONAL NEURAL NETWORK ARCHITECTURES FOR SOLVING BIG DATA PROBLEMS
Abstract
Comparing neural network frameworks on the MNIST database, using as many examples as practical, is a sound research method: this database is the subject of active research and has produced excellent results. However, as discussed in more detail later, neural networks require a sizeable amount of sample data in order to be trained and to deliver accurate results, a problem that Big Data practitioners frequently encounter. This study therefore compared two of the most popular neural network frameworks, Theano and TensorFlow, on a given problem: the recognition of handwritten digits from zero to nine in the MNIST database. As the project description implied, this is not a standard comparison; rather, it evaluates the performance of these frameworks in a Big Data environment using distributed computing. The scope of the comparison also extends beyond MNIST: the Fashion MNIST (FMNIST) database and CIFAR-10 were tested with the same neural network design. Thanks to the higher-level Keras library, the same code with the same structure was run on each supported backend (in our case, Theano or TensorFlow). The high computational cost of training CNNs on large data sets has driven a surge in open-source parallel GPU implementations, yet few studies assess the performance characteristics of those implementations. In this study, these implementations were compared carefully across a wide range of parameter configurations, potential performance bottlenecks were investigated, and a number of areas for further fine-tuning were identified.
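The backend-swapping approach described in the abstract can be sketched as follows: with the classic multi-backend Keras, the backend (Theano or TensorFlow) was selected via an environment variable before import, and the model definition itself stayed unchanged. The layer sizes below are illustrative assumptions, not the architecture used by the authors.

```python
import os

# With classic multi-backend Keras, the backend was chosen before the
# first import, e.g.:
#   os.environ["KERAS_BACKEND"] = "theano"   # or "tensorflow"
# Modern Keras ships with TensorFlow, imported as below.
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(28, 28, 1), num_classes=10):
    """A small CNN of the kind commonly benchmarked on MNIST/FMNIST.

    The same definition works for CIFAR-10 by passing
    input_shape=(32, 32, 3).
    """
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),   # 32 feature maps
        layers.MaxPooling2D(),                     # downsample 2x
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Because the structure is identical across backends, any runtime difference measured on MNIST, FMNIST, or CIFAR-10 can be attributed to the backend rather than the model.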
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
Tikrit Journal of Pure Science is licensed under the Creative Commons Attribution 4.0 International License (CC-BY-4.0), which allows users to copy the article, create extracts, abstracts, and new works from it, alter and revise it, and make commercial use of it (including reuse and/or resale by commercial entities), provided the user gives appropriate credit (with a link to the formal publication through the relevant DOI), provides a link to the license, indicates if changes were made, and does not represent the licensor as endorsing the use made of the work. The authors hold the copyright for their work published on the Tikrit J. Pure Sci. website, while Tikrit J. Pure Sci. is responsible for appropriate citation of their work, which is released under CC-BY-4.0, enabling unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.