Abstract: A growing body of evidence indicates that intensity plays a role in emotion perception. However, only a few databases have been explicitly designed to provide emotional stimuli expressed at varying intensities. We developed and validated a Korean audio-only database of emotional expressions. Eighteen actors were recorded producing twenty-five sentences at strong and moderate intensities for the emotions “neutral,” “happiness,” “sadness,” “anger,” “fear,” and “boredom.” Twenty-five native Korean-speaking adults completed emotion identification and naturalness rating tasks. All listeners were presented with the full set of 5,400 recordings in a six-alternative forced-choice paradigm, yielding 135,000 judgements each for identification and for naturalness. Raw and unbiased hit rates were calculated; identification responses were significantly above chance level for every emotion at both intensities. The overall raw hit rates reached 87% for the strong stimuli and 78% for the moderate stimuli, indicating that strong emotional expressions were identified more accurately than their moderate counterparts. This recognition advantage for strong over moderate intensity also held for each emotion category individually. High inter- and intra-rater reliabilities were found for listeners’ emotion-category identifications and naturalness ratings, respectively. Further, identification accuracy was strongly associated with the degree of naturalness: more natural variants of an emotion were identified more accurately than less natural ones. These results confirm that the proposed database will serve as a valuable resource for emotion research. The database is available for research purposes upon request from the corresponding author.
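For reference, the “unbiased hit rate” reported above presumably follows Wagner’s (1993) Hu statistic, the standard bias-corrected accuracy measure in emotion-recognition research; the abstract itself does not state the formula, so the notation below (confusion-matrix counts $n_{ij}$) is ours, and this is a sketch of the conventional definition rather than the authors’ exact computation:

% Assumed standard definition of the unbiased hit rate (Wagner, 1993).
% n_{ij} = number of stimuli of emotion i labelled as emotion j.
\[
  H_u(i) \;=\; \frac{n_{ii}}{R_i} \cdot \frac{n_{ii}}{C_i}
        \;=\; \frac{n_{ii}^{2}}{R_i \, C_i},
  \qquad
  R_i = \sum_{j} n_{ij}, \quad
  C_i = \sum_{j} n_{ji},
\]

where $R_i$ is the number of stimuli of emotion $i$ (row total) and $C_i$ is the number of times response $i$ was given (column total); the raw hit rate is simply $n_{ii}/R_i$. Under the six-alternative forced-choice design described above, the chance level for raw identification is $1/6 \approx 16.7\%$, the baseline against which the reported accuracies were evaluated.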