Human beings can recognise emotions in others, but the same cannot be said for robots. Although perfectly capable of communicating with humans through speech, robots and virtual agents can only process logical instructions, which greatly restricts human-robot interaction (HRI). Consequently, a great deal of HRI research focuses on emotion recognition from speech. But first, how do we describe emotions?
Categorical emotions such as happiness, sadness, and anger are well-understood by us but can be hard for robots to register. Researchers have therefore focused on ‘dimensional emotions’, which represent the gradual emotional transitions in natural speech. ‘Continuous dimensional emotion can help a robot capture the time dynamics of a speaker’s emotional state and accordingly adjust its manner of interaction and content in real time,’ explains Professor Masashi Unoki from Japan Advanced Institute of Science and Technology (JAIST), who works on speech recognition and processing.
Studies have shown that an auditory perception model simulating the working of a human ear can generate what are called ‘temporal modulation cues’, which faithfully capture the time dynamics of dimensional emotions. Neural networks can then be employed to extract features from these cues that reflect these time dynamics. However, due to the complexity and variety of auditory perception models, feature extraction turns out to be quite challenging.
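To make the idea concrete, here is a minimal, illustrative sketch (not the authors’ implementation) of how temporal modulation cues can be derived: the cochlea is approximated by a bank of band-pass filters, the envelope of each channel is taken, and that envelope is then low-pass filtered to keep only slow temporal modulations. All filter counts and cutoff values below are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def cochlear_channels(signal, fs, n_channels=16, f_lo=80.0, f_hi=7000.0):
    """Split speech into acoustic frequency channels (a crude cochlea stand-in)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    channels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        channels.append(sosfiltfilt(sos, signal))
    return np.stack(channels)                      # (n_channels, n_samples)

def modulation_cues(channels, fs, mod_cutoff=32.0):
    """Envelope of each channel, low-pass filtered to keep slow temporal modulations."""
    envelopes = np.abs(hilbert(channels, axis=-1))  # per-channel amplitude envelope
    sos = butter(2, mod_cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, envelopes, axis=-1)     # smooth modulation trajectories

# Example usage with one second of synthetic audio:
fs = 16000
audio = np.random.randn(fs)
cues = modulation_cues(cochlear_channels(audio, fs), fs)
print(cues.shape)   # (16, 16000): per-channel temporal modulation cues
```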
In a new study published in Neural Networks, Professor Unoki and his colleagues, including Zhichao Peng from Tianjin University, China (who led the study), Jianwu Dang from Pengcheng Laboratory, China, and Professor Masato Akagi from JAIST, have taken inspiration from a recent finding in cognitive neuroscience. It suggests that our brain forms multiple representations of natural sounds, with different degrees of spectral (i.e., frequency) and temporal resolution, through a combined analysis of spectral-temporal modulations.
Accordingly, they have proposed a novel feature called the multi-resolution modulation-filtered cochleagram (MMCG), which combines four modulation-filtered cochleagrams (time-frequency representations of the input sound) at different resolutions to obtain temporal and contextual modulation cues. To account for the diversity of the cochleagrams, the researchers designed a parallel architecture of ‘long short-term memory’ (LSTM) networks to model the time variations of the multi-resolution signals from the cochleagrams, and carried out extensive experiments on two datasets of spontaneous speech.
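A rough sketch, in PyTorch, of what such a parallel LSTM over multi-resolution cochleagram inputs could look like is given below. The number of resolutions, band counts, hidden sizes, and the two-dimensional (e.g. valence and arousal) output are illustrative assumptions, not the authors’ exact configuration.

```python
import torch
import torch.nn as nn

class ParallelLSTMEmotion(nn.Module):
    def __init__(self, n_resolutions=4, n_bands=64, hidden=128):
        super().__init__()
        # One LSTM branch per modulation-filtered cochleagram resolution.
        self.branches = nn.ModuleList(
            nn.LSTM(input_size=n_bands, hidden_size=hidden, batch_first=True)
            for _ in range(n_resolutions)
        )
        # Fuse the branches and regress two dimensional-emotion values per time step.
        self.head = nn.Linear(hidden * n_resolutions, 2)

    def forward(self, cochleagrams):
        # cochleagrams: list of tensors, each of shape (batch, time, n_bands)
        outs = [lstm(x)[0] for lstm, x in zip(self.branches, cochleagrams)]
        fused = torch.cat(outs, dim=-1)   # (batch, time, hidden * n_resolutions)
        return self.head(fused)           # (batch, time, 2)

# Example: four resolutions, 3-second clips at 100 frames/s, 64 bands each.
model = ParallelLSTMEmotion()
inputs = [torch.randn(8, 300, 64) for _ in range(4)]
print(model(inputs).shape)   # torch.Size([8, 300, 2])
```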
The results were encouraging. The researchers found that MMCG delivered significantly better emotion recognition performance than traditional acoustic features and other auditory-based features on both datasets. Furthermore, the parallel LSTM network predicted dimensional emotions more accurately than a plain LSTM-based approach.
Professor Unoki is thrilled and plans to improve upon the MMCG feature in future research. ‘Our next goal is to analyse the robustness against environmental noise sources and investigate our feature for other tasks, such as categorical emotion recognition, speech separation, and voice activity detection,’ he concludes.