Emotion Identification from Spontaneous Communication

dc.contributor.advisor: Getahun, Fekade (PhD)
dc.contributor.author: Kebede, Mikiyas
dc.date.accessioned: 2018-06-21T08:34:39Z
dc.date.accessioned: 2023-11-29T04:06:46Z
dc.date.available: 2018-06-21T08:34:39Z
dc.date.available: 2023-11-29T04:06:46Z
dc.date.issued: 2016-03
dc.description.abstract: This thesis work aimed to design a model for the automatic identification of emotion from spontaneous communication using the acoustic characteristics of human speech. For this purpose, an experimental setup to collect and annotate call center Amharic telephone dialogs containing natural emotions is presented. These dialogs, involving 35 subjects (18 male and 17 female), are first manually decomposed into speaker turns and then segmented into intermediate chunks that serve as the analysis unit for feature calculation. Open-class annotation is carried out by 3 professional human experts, the various emotional states are mapped onto 4 cover classes, and a Majority Voting (MV) technique is applied to decide the perceived emotion in each chunk. A total of 170 acoustic features consisting of prosodic, spectral and voice quality features are then extracted from each chunk. An optimal feature set is selected using a genetic algorithm and used to train a Multilayer Perceptron Neural Network (MLPNN) classifier, and classification performance is evaluated using the extracted feature sets. The experimental results showed that a combined feature vector containing 33 features conveys more emotional information in natural, spontaneous speech communication. Our speech emotion recognition model achieves an accuracy of 72.4% in identifying Anger, Fear, Positive and Sadness emotions. Hence, it can be used in real-world emotion recognition applications, or combined with other speech processing technologies such as speech recognition and speaker identification to improve their performance. To demonstrate this, the proposed speech emotion recognition model is implemented in a prototype application that performs emotion identification in close to real time.

Keywords: Speech emotion recognition; Spontaneous speech emotion; Acoustic features; Feature extraction; Feature selection; Classifier; Multilayer Perceptron Neural Network
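The abstract describes a pipeline of genetic-algorithm feature selection over 170 acoustic features followed by an MLPNN classifier. Below is a minimal, illustrative sketch of that approach, not the thesis implementation: randomly generated placeholder data stands in for the annotated Amharic call-center chunks, scikit-learn's MLPClassifier stands in for the MLPNN, and all sizes, parameters, and names are assumptions chosen for illustration.

```python
# Minimal sketch (assumptions, not the thesis code): genetic-algorithm feature
# selection over a 170-dimensional acoustic feature vector, followed by a
# Multilayer Perceptron classifier. Placeholder random data stands in for the
# annotated chunks and their majority-voted labels.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 170))             # 170 prosodic/spectral/voice-quality features per chunk
y = rng.integers(0, 4, size=300)            # 4 cover classes: Anger, Fear, Positive, Sadness


def fitness(mask: np.ndarray) -> float:
    """Cross-validated accuracy of an MLP trained on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=200, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()


# Binary chromosomes: a 1 keeps the corresponding feature, a 0 drops it.
pop = rng.integers(0, 2, size=(12, X.shape[1]))
for _ in range(5):                          # a handful of generations for illustration
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]  # keep the fittest half
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(0, len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])   # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.01  # low-rate bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"selected {int(best.sum())} of {X.shape[1]} features")
```

In this sketch the fitness of a chromosome is the cross-validated accuracy of a classifier trained on only the selected features, so the search converges toward compact subsets that still carry discriminative information, which is the role the thesis assigns to its 33-feature vector.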
dc.identifier.uri: http://etd.aau.edu.et/handle/123456789/2639
dc.language.iso: en
dc.publisher: Addis Ababa University
dc.subject: Speech Emotion Recognition; Spontaneous Speech Emotion; Acoustic Features; Feature Extraction; Feature Selection; Classifier; Multilayer Perceptron Neural Network
dc.title: Emotion Identification from Spontaneous Communication
dc.type: Thesis

Files

Original bundle
Name: Mikiyas Kebede.pdf
Size: 1.12 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.71 KB
Format: Plain Text