Human conversations contain many emotions, and identifying these emotions correctly is essential for interactive technologies (such as speech-based assistive aids and social robots) to respond to them appropriately. Many emotion recognition studies focus on identifying/classifying stronger emotions such as anger, happiness, and sadness. In real-life conversations, however, people express these stronger emotions less often and tend to use subtle emotions more. These subtle emotions are called secondary emotions. The aim of this project is to identify secondary emotions from speech. The speech waveform will be analysed, and machine learning approaches will be used to identify the emotions it carries. The system will also be evaluated statistically. This system has the potential to be used in human-computer interaction applications where the system has to respond to human emotions.
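As a minimal sketch of the intended pipeline, the example below extracts simple spectral features from a waveform and trains a standard classifier on them. It uses synthetic sine waves as placeholder "speech" and two artificial classes in place of real secondary-emotion labels; the feature choice (low-frequency spectrum magnitudes), class definitions, and classifier are illustrative assumptions, not the project's actual method.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_wave(f0, n=1600, sr=16000):
    # synthetic 0.1 s noisy sine, standing in for a real speech frame
    t = np.arange(n) / sr
    return np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(n)

def features(wave):
    # low-frequency magnitude spectrum (bins 0-39, i.e. 0-400 Hz
    # at 10 Hz resolution); a stand-in for richer speech features
    return np.abs(np.fft.rfft(wave))[:40]

# two placeholder "emotion" classes that differ only in pitch range
X = np.array(
    [features(synth_wave(rng.uniform(100, 150))) for _ in range(50)]
    + [features(synth_wave(rng.uniform(250, 300))) for _ in range(50)]
)
y = np.array([0] * 50 + [1] * 50)

# hold out a test set so accuracy can be assessed statistically
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

In practice the feature extractor would be replaced with speech-specific features (e.g. MFCCs) computed from a labelled emotional-speech corpus, and the held-out accuracy would feed into the statistical evaluation of the system.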
An emotion identification/recognition system for secondary emotions, with potential applications in human-robot interaction.
Lab allocations have not been finalised