Speaker Age and Gender Estimation Based on Deep Learning Bidirectional Long-Short Term Memory (BiLSTM)


Aalaa Ahmed Mohammed
Yusra Faisal Al-Irhayim

Abstract

Estimating a speaker's age and gender has gained great importance in recent years due to its necessity in various commercial, medical, and forensic applications. This work estimates the speaker's gender and age within small ranges of years, where each ten-year span is divided into two subcategories, covering ages from the teens to the sixties. The speaker age and gender estimation system uses Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a Bidirectional Long-Short Term Memory (BiLSTM) network for classification. Two deep neural network models were built, one for speaker age estimation and the other for speaker gender estimation. The experimental results show that the age estimation model achieves an accuracy of 94.008%, while the gender estimation model achieves an accuracy of 90.816%.
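The pipeline described in the abstract (MFCC features fed to a BiLSTM) can be sketched as follows. This is a minimal illustration in plain numpy, not the authors' implementation: the frame sizes, filterbank size, coefficient count, and hidden dimension are hypothetical choices, and the BiLSTM here runs with random weights purely to show how forward and backward passes over the MFCC frames are combined.

```python
import numpy as np

# --- MFCC feature extraction (hypothetical parameter choices; the paper
# does not specify frame sizes or coefficient counts) ---

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, frame_len=400, hop=160,
         n_mels=26, n_ceps=13):
    """Return an (n_frames, n_ceps) matrix of MFCCs."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)          # frame + window
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank mapped onto FFT bins
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_mel = np.log(power @ fbank.T + 1e-10)
    # DCT-II decorrelates the log-mel energies into cepstral coefficients
    n = np.arange(n_mels)
    basis = np.cos(np.pi * np.arange(n_ceps)[:, None] * (2 * n + 1) / (2 * n_mels))
    return log_mel @ basis.T

# --- Minimal BiLSTM forward pass over the MFCC frames ---

def lstm_pass(x, W, U, b, reverse=False):
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    T = x.shape[0]
    h_dim = U.shape[0]
    h = np.zeros(h_dim)
    c = np.zeros(h_dim)
    out = np.zeros((T, h_dim))
    for t in (range(T - 1, -1, -1) if reverse else range(T)):
        i, f, g, o = np.split(x[t] @ W + h @ U + b, 4)   # gate pre-activations
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)                        # cell state update
        h = o * np.tanh(c)
        out[t] = h
    return out

def bilstm(x, fwd_params, bwd_params):
    # Concatenate forward and backward hidden states per frame
    return np.concatenate([lstm_pass(x, *fwd_params),
                           lstm_pass(x, *bwd_params, reverse=True)], axis=-1)

rng = np.random.default_rng(0)
wave = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # 1 s test tone
feats = mfcc(wave)                                         # (98, 13)
h = 32
params = lambda: (rng.normal(0, 0.1, (13, 4 * h)),
                  rng.normal(0, 0.1, (h, 4 * h)),
                  np.zeros(4 * h))
states = bilstm(feats, params(), params())                 # (98, 64)
```

In a trained system such as the one described, the per-frame BiLSTM states would be pooled and passed to a softmax layer, with one such model trained for the age classes and a second for gender.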

Article Details

How to Cite
Aalaa Ahmed Mohammed, & Yusra Faisal Al-Irhayim. (2021). Speaker Age and Gender Estimation Based on Deep Learning Bidirectional Long-Short Term Memory (BiLSTM). Tikrit Journal of Pure Science, 26(4), 76–84. https://doi.org/10.25130/tjps.v26i4.166
Section
Articles
