Cascaded Convolutional Neural Network Architecture for Speech Emotion Recognition in Noisy Conditions
by Youngja Nam 1 and Chankyu Lee 1,2,*
1 Humanities Research Institute, Chung-Ang University, Seoul 06974, Korea
2 Department of Korean Language and Literature, Chung-Ang University, Seoul 06974, Korea
* Author to whom correspondence should be addressed.
Submission received: 25 May 2021 / Revised: 20 June 2021 / Accepted: 24 June 2021 / Published: 27 June 2021
Abstract
Convolutional neural networks (CNNs) are a state-of-the-art technique for speech emotion recognition. However, CNNs have mostly been applied to noise-free emotional speech data, and limited evidence is available for their applicability in emotional speech denoising. In this study, a cascaded denoising CNN (DnCNN)–CNN architecture is proposed to classify emotions from Korean and German speech in noisy conditions. The proposed architecture consists of two stages. In the first stage, the DnCNN exploits the concept of residual learning to perform denoising; in the second stage, the CNN performs the classification. The classification results for real datasets show that the DnCNN–CNN outperforms the baseline CNN in overall accuracy for both languages. For Korean speech, the DnCNN–CNN achieves an accuracy of 95.8%, whereas the accuracy of the CNN is marginally lower (93.6%). For German speech, the DnCNN–CNN has an overall accuracy of 59.3–76.6%, whereas the CNN has an overall accuracy of 39.4–58.1%. These results demonstrate the feasibility of applying the DnCNN with residual learning to speech denoising and the effectiveness of the CNN-based approach in speech emotion recognition. Our findings provide new insights into speech emotion recognition in adverse conditions and have implications for language-universal speech emotion recognition.
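To make the two-stage idea concrete, the sketch below shows a minimal PyTorch-style cascade: a residual-learning DnCNN that predicts the noise component of a spectrogram and subtracts it from the input, followed by a plain CNN classifier on the denoised result. This is an illustrative sketch only; the layer counts, channel widths, class count, and input shape are assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of a cascaded DnCNN–CNN pipeline (assumed hyperparameters).
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """Residual-learning denoiser: the network estimates the noise,
    which is subtracted from the noisy input to yield the clean estimate."""
    def __init__(self, channels=1, num_layers=8, features=64):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        noise = self.body(x)   # residual (noise) estimate
        return x - noise       # denoised spectrogram

class EmotionCNN(nn.Module):
    """Second-stage CNN that classifies emotion from the denoised spectrogram."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

class CascadedDnCNNCNN(nn.Module):
    """Stage 1 denoises the noisy input; stage 2 classifies the emotion."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.denoiser = DnCNN()
        self.classifier = EmotionCNN(num_classes)

    def forward(self, noisy_spec):
        return self.classifier(self.denoiser(noisy_spec))

# Usage example: a batch of 8 single-channel spectrograms (64 bands x 128 frames, assumed shape).
model = CascadedDnCNNCNN(num_classes=4)
logits = model(torch.randn(8, 1, 64, 128))
print(logits.shape)  # torch.Size([8, 4])
```

In this arrangement the denoiser and classifier are separate modules, so each stage can be trained on its own objective (noise regression and emotion classification, respectively) before being chained at inference time.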