Emotional prosody
Emotional prosody or affective prosody comprises the various non-verbal aspects of language that allow people to convey or understand emotion. It includes an individual's tone of voice in speech, which is conveyed through changes in pitch, loudness, timbre, speech rate, and pauses. It can be isolated from semantic information and interacts with verbal content.
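As a rough illustration of how such cues can be measured, the sketch below extracts pitch, a loudness proxy, a timbre proxy, and a pause estimate from a recording using the open-source librosa library. The file name, pitch bounds, and pause threshold are placeholder assumptions, and the spectral centroid is only a crude stand-in for timbre.

```python
# Sketch: extracting basic prosodic features from a speech recording.
# Assumes an audio file "utterance.wav" (placeholder) and librosa installed.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=None)

# Pitch (fundamental frequency) track via the pYIN algorithm;
# unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
mean_pitch = np.nanmean(f0)                 # average pitch of voiced frames
pitch_range = np.nanmax(f0) - np.nanmin(f0)

# Loudness proxy: root-mean-square energy per frame.
rms = librosa.feature.rms(y=y)[0]

# Timbre proxy: spectral centroid ("brightness" of the sound).
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

# Pause estimate: fraction of frames below a placeholder energy threshold.
pause_fraction = np.mean(rms < 0.1 * rms.max())

print(f"mean f0: {mean_pitch:.1f} Hz, range: {pitch_range:.1f} Hz, "
      f"mean centroid: {centroid.mean():.0f} Hz, "
      f"pause fraction: {pause_fraction:.2f}")
```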
Emotional prosody in speech is perceived or decoded slightly less accurately than facial expressions, though accuracy varies across emotions. Anger and sadness are perceived most easily, followed by fear and happiness, with disgust being the most poorly perceived.
Production of vocal emotion
Studies have found that some emotions, such as fear, joy, and anger, are portrayed at a higher frequency than emotions such as sadness. The qualitative patterns described in the list below are gathered into a short code sketch after the list.
- Anger: Anger can be divided into two types: "cold anger" and "hot anger". In comparison to neutral speech, cold anger is produced with a lower pitch, higher intensity, more energy across the vocalization, a higher first formant, and faster attack times at voice onset. "Hot anger", in contrast, is produced with a higher, more varied pitch and even greater energy.
- Disgust: In comparison to neutral speech, disgust is produced with a lower, downward-directed pitch, with energy and fast attack times similar to anger, and a lower first formant. Less variation and shorter durations are also characteristic of disgust.
- Fear: Fear can be divided into two types: "panic" and "anxiety". In comparison to neutral speech, fearful speech has a higher pitch, little pitch variation, lower energy, and a faster speech rate with more pauses.
- Sadness: In comparison to neutral speech, sad speech is produced with a higher pitch, less intensity but more vocal energy, a longer duration with more pauses, and a lower first formant.
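The correlates above can be made concrete with a minimal sketch. The feature names, signed directions, and scoring rule below are illustrative assumptions rather than values from the cited studies; the sketch simply encodes each emotion as a set of expected deviations from neutral speech and picks the best-matching profile.

```python
# A minimal sketch: each emotion is encoded as signed deviations from
# neutral speech (+1 = higher than neutral, -1 = lower), following the
# qualitative descriptions above. Feature names are illustrative.
EMOTION_PROFILES = {
    "cold_anger": {"pitch": -1, "intensity": +1, "energy": +1,
                   "first_formant": +1, "attack_time": -1},
    "hot_anger":  {"pitch": +1, "pitch_variation": +1, "energy": +1},
    "disgust":    {"pitch": -1, "energy": +1, "attack_time": -1,
                   "first_formant": -1, "duration": -1},
    "fear":       {"pitch": +1, "pitch_variation": -1, "energy": -1,
                   "speech_rate": +1},
    "sadness":    {"pitch": +1, "intensity": -1, "energy": +1,
                   "duration": +1, "first_formant": -1},
}

def match_emotion(observed):
    """Pick the profile whose directions best agree with the observed
    feature deltas (observed[feature] = measured value minus neutral)."""
    def score(profile):
        return sum(d * observed.get(f, 0.0) for f, d in profile.items())
    return max(EMOTION_PROFILES, key=lambda e: score(EMOTION_PROFILES[e]))

# Example: higher pitch, lower energy, faster speech rate -> "fear"
print(match_emotion({"pitch": 1.0, "energy": -0.5, "speech_rate": 1.0}))
```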
Perception of vocal emotion
On average, listeners are able to perceive intended emotions at a rate significantly better than chance. However, error rates are also high. This is partly because listeners infer emotion more accurately from some voices than from others, and perceive some emotions better than others. Vocal expressions of anger and sadness are perceived most easily, fear and happiness are only moderately well perceived, and disgust has low perceptibility.
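To make "better than chance" concrete: in a forced-choice task with five emotion labels, guessing yields 20% accuracy, and a listener's recognition rate can be compared against that baseline with a binomial test. The numbers in the sketch below are made up for illustration, not results from any cited study.

```python
# Sketch: is a listener's recognition accuracy better than chance?
# Trial counts here are illustrative placeholders.
from scipy.stats import binomtest

n_trials = 100    # stimuli presented
n_correct = 62    # correct emotion labels chosen
chance = 1 / 5    # five response options -> 20% accuracy by guessing

result = binomtest(n_correct, n_trials, chance, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.0%}, p = {result.pvalue:.2e}")
```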
The brain in vocal emotions
Language can be split into two components: the verbal and vocal channels. The verbal channel is the semantic content carried by the speaker's chosen words; it determines the literal meaning of a sentence. The way a sentence is spoken, however, can change its meaning, and this is the vocal channel. The vocal channel conveys the emotions felt by the speaker and gives listeners a better idea of the intended meaning. Nuances in this channel are expressed through intonation, intensity, and rhythm, which combine to form prosody. Usually these channels convey the same emotion, but sometimes they differ. Sarcasm and irony are two forms of humor based on this incongruence.

The neurological processes that integrate the verbal and vocal channels are relatively unclear. However, it is assumed that verbal and vocal content are processed in different hemispheres of the brain. Verbal content, composed of syntactic and semantic information, is processed in the left hemisphere. Syntactic information is processed primarily in the frontal regions and a small part of the temporal lobe, while semantic information is processed primarily in the temporal regions, with a smaller contribution from the frontal lobes. In contrast, prosody is processed primarily along the same pathway, but in the right hemisphere. Neuroimaging studies using functional magnetic resonance imaging (fMRI) provide further support for this hemispheric lateralization and temporo-frontal activation. Some studies, however, show evidence that prosody perception is not exclusively lateralized to the right hemisphere and may be more bilateral. There is some evidence that the basal ganglia may also play an important role in the perception of prosody.
Impairment of emotion recognition
Deficits in expressing and understanding prosody, caused by right hemisphere lesions, are known as aprosodias. These can manifest in different forms and in various mental illnesses or diseases. Aprosodia can also be caused by stroke and alcohol abuse. The types of aprosodia include motor, expressive, and receptive.

It has been found that recognizing vocal expressions of emotion becomes increasingly difficult with age. Older adults have slightly more difficulty than young adults in labeling vocal expressions of emotion, particularly sadness and anger, but have much greater difficulty integrating vocal emotions with corresponding facial expressions. A possible explanation for this difficulty is that combining two sources of emotion requires greater activation of the emotion areas of the brain, in which older adults show decreased volume and activity. Another possible explanation is that hearing loss leads to mishearing of vocal expressions; high-frequency hearing loss is known to begin around the age of 50, particularly in men.
Because the right hemisphere of the brain is associated with prosody, patients with right hemisphere lesions have difficulty varying speech patterns to convey emotion, and their speech may therefore sound monotonous. In addition, studies have found that people with right-hemisphere damage are impaired at identifying the emotion in intoned sentences.
Difficulty in decoding both syntactic and affective prosody is also found in people with autism spectrum disorder and schizophrenia, where "patients have deficits in a large number of functional domains, including social skills and social cognition. These social impairments consist of difficulties in perceiving, understanding, anticipating and reacting to social cues that are crucial for normal social interaction." This has been shown in multiple studies, such as Hoekert et al.'s 2007 study on emotional prosody in schizophrenia, which also indicated that more research is needed to fully confirm the correlation between the illness and emotional prosody. However, people with schizophrenia have no problem deciphering non-emotional prosody.
Non-linguistic emotional prosody
Emotional states such as happiness, sadness, anger, and disgust can be determined solely from the acoustic structure of a non-linguistic speech act, such as a grunt, sigh, or exclamation. Some research supports the notion that these non-linguistic acts are universal, eliciting the same assessments even from speakers of different languages.

In addition, emotion has been shown to be expressed in non-linguistic vocalizations differently than in speech. As Laukka et al. state:
Speech requires highly precise and coordinated movement of the articulators in order to transmit linguistic information, whereas non-linguistic vocalizations are not constrained by linguistic codes and thus do not require such precise articulations. This entails that non-linguistic vocalizations can exhibit larger ranges for many acoustic features than prosodic expressions.
In their study, actors were instructed to vocalize an array of different emotions without words. The study showed that listeners could identify a wide range of positive and negative emotions above chance. However, emotions like guilt and pride were less easily recognized.
In a 2015 study by Verena Kersken, Klaus Zuberbühler, and Juan-Carlos Gomez, non-linguistic vocalizations of infants were presented to adults to see whether the adults could distinguish infant vocalizations indicating a request for help, pointing to an object, or signaling an event. Infants show different prosodic elements in crying depending on what they are crying for, and they produce different outbursts for positive and negative emotional states. The ability to decipher this information was found to apply across cultures and to be independent of the adults' level of experience with infants.
Sex differences
Men and women differ both in how they use language and in how they understand it. Speech rate, pitch range, speech duration, and pitch slope are all known to differ between the sexes. For example, "In a study of relationship of spectral and prosodic signs, it was established that the dependence of pitch and duration differed in men and women uttering the sentences in affirmative and inquisitive intonation. Tempo of speech, pitch range, and pitch steepness differ between the genders". One such illustration is how women are more likely to speak faster, elongate the ends of words, and raise their pitch at the end of sentences.

Women and men also differ in how they neurologically process emotional prosody. In an fMRI study, men showed stronger activation in more cortical areas than female subjects when processing the meaning or manner of an emotional phrase. In the manner task, men showed more activation in the bilateral middle temporal gyri, whereas for women the only area of significant activation was the right posterior cerebellar lobe. Male subjects in this study also showed stronger activation in the prefrontal cortex and, on average, needed a longer response time than female subjects. This result was interpreted to mean that men need to make conscious inferences about the acts and intentions of the speaker, while women may do so subconsciously. Therefore, men needed to integrate linguistic semantics and emotional intent "at a higher stage than the semantic processing stage".