Emotions play an essential role in our daily communication. Emotional interaction between speaker and listener is a prerequisite for a successful dialogue. During communication, the speaker uses a set of cues (verbal, vocal, and visual) to signal the intended emotions; in turn, the listener decodes these signals and the intended emotional responses are induced. The expression and perception of emotions in speech are so complex that no consensus has been reached on the encoding and decoding mechanisms of emotions in speech, or on the link between them. In addition, the relative contributions of universal patterns and language-specific factors in interpreting emotions remain inconclusive.
Accordingly, as part of my dissertation research, this project aims to answer three research questions: (1) What are the specific acoustic characteristics of vocal emotions? (2) What acoustic characteristics do languages share when expressing the same vocal emotions? (3) What is the relative weighting of cues in the perception of emotions in speech? To answer these questions, I am currently analyzing a large set of acoustic features extracted from emotional utterances in Japanese, American English, and Mandarin Chinese. After the acoustic analyses, I plan to conduct a perceptual experiment to explore the effects of modality and L1 on emotion perception in speech.
More findings are coming soon, so please stay tuned.
- ASA: Acoustic properties of vocal emotions in American English and Mandarin Chinese. In The 184th Meeting of the Acoustical Society of America, 2023.
- ASA: The acoustic profiles of vocal emotions in Japanese: A corpus study with generalized additive mixed modeling. In The 183rd Meeting of the Acoustical Society of America, 2022.