In this research, a study of cross-linguistic speech emotion recognition is performed. For this purpose, emotional speech data in six languages (English, Lithuanian, German, Spanish, Serbian, and Polish) were collected, resulting in a cross-linguistic speech emotion dataset of more than 10,000 emotional utterances. Although the gathered databases are bi-modal, our focus is on the acoustic representation only, under the assumption that the speech audio signal carries sufficient emotional information for its detection and retrieval. Several two-dimensional acoustic feature spaces, such as cochleagrams, spectrograms, mel-cepstrograms, and a fractal dimension-based space, are employed as representations of speech emotional features. A convolutional neural network (CNN) is used as the classifier. The results show the superiority of cochleagrams over the other feature spaces utilized. In the CNN-based speaker-independent cross-linguistic speech emotion recognition (SER) experiment, an accuracy of over 90% is achieved, which is close to the monolingual case of SER.
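As a rough illustration of the pipeline the abstract outlines, the sketch below converts an utterance into a two-dimensional time-frequency feature map and passes it through a small CNN classifier. This is a minimal sketch, not the authors' code: it uses a log-mel spectrogram (one of the feature spaces named above) via librosa and a toy PyTorch network; the file path, feature dimensions, and six-class label set are illustrative assumptions.

```python
# Minimal sketch of the described pipeline: 2-D acoustic features + CNN.
# Assumptions (not from the paper): librosa/PyTorch, 64 mel bands,
# 128 time frames, and 6 emotion classes.
import librosa
import numpy as np
import torch
import torch.nn as nn

def utterance_to_melspec(path, sr=16000, n_mels=64, max_frames=128):
    """Load an utterance and return a fixed-size log-mel spectrogram."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel, ref=np.max)
    # Pad or crop along time so every utterance yields the same shape.
    if logmel.shape[1] < max_frames:
        pad = max_frames - logmel.shape[1]
        logmel = np.pad(logmel, ((0, 0), (0, pad)), mode="constant")
    return logmel[:, :max_frames].astype(np.float32)

class EmotionCNN(nn.Module):
    """Toy CNN over a (1, n_mels, max_frames) feature map."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Two 2x2 poolings reduce (64, 128) to (16, 32).
        self.classifier = nn.Linear(32 * 16 * 32, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage (hypothetical file name):
# spec = utterance_to_melspec("utterance.wav")
# logits = EmotionCNN()(torch.from_numpy(spec)[None, None])
```

A cochleagram would replace the mel filterbank with a gammatone filterbank; the CNN side of the pipeline stays the same.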
Authors
- dr Gintautas Tamulevicius,
- Grazina Korvel,
- Anil Bora Yayak,
- dr Povilas Treigys,
- Jolita Bernataviciene,
- prof. dr hab. inż. Bożena Kostek
Additional information
- DOI
- 10.3390/electronics9101725
- Category
- Journal publication
- Type
- journal articles
- Language
- English
- Year of publication
- 2020