How music parallels spoken language

Music, especially singing, parallels spoken language in many ways; however, the organization of the neural circuits underlying these two skills seems to diverge. Broca's aphasics provide tangible evidence that the brain processes language and musical lyrics differently.

Broca's area is a region of the brain essential to verbal communication. Someone with damage to Broca's area usually exhibits non-fluent speech and a degraded ability to use function words and to pronounce words accurately; comprehension is slightly, but not severely, impaired. The syndrome involves three major speech deficits: agrammatism (difficulty with grammatical constructions), anomia (difficulty finding words), and a deficit in articulation (although Broca's aphasics recognize improper pronunciation, they have difficulty pronouncing words themselves and often invert the sequence of sounds). People who exhibit these symptoms after damage to Broca's area or nearby regions are termed 'Broca's aphasics.'

Oddly enough, Broca's aphasics, although unable to communicate fluently through speech, can often sing the lyrics of familiar songs at a level on par with people without brain damage. Although the brain clearly does not process music and language identically, numerous cases in which patients have regained speech through the systematic use of rhythmic patterning support the idea that some overlap exists. In this therapy, music serves as a rhythmic, patterned cue that helps patients regain control of the initiation and rate of speech. When it succeeds, patients first recover the ability to sing the lyrics and words of familiar songs and eventually regain control over initiating normal speech.

Another link between music and language is prosody: all the changes we make to spoken language to communicate more than literal meaning. The components of prosody are what keep us from speaking in a flat monotone. They include stress, pitch direction (rising pitch at the end of a sentence when you ask a question), pitch height (which communicates information such as how much conviction lies behind a statement), and inflection.
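To make pitch direction concrete, here is a minimal sketch (plain NumPy on a synthetic signal; the sample rate, frame sizes, and pitch range are illustrative choices of our own, not values from any study cited here) that estimates the pitch contour of a toy "question" whose fundamental frequency rises over one second, the way a speaker's voice rises at the end of a question:

```python
import numpy as np

SR = 16000    # sample rate (Hz)
FRAME = 1024  # analysis frame length
HOP = 512     # hop between frames

def f0_autocorr(frame, sr, fmin=70.0, fmax=400.0):
    """Estimate the fundamental frequency of one frame via autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)  # plausible pitch-period lags
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# Synthesize a toy "question": a tone whose pitch rises from 120 Hz
# to 220 Hz over one second (a rising terminal contour).
freq = np.linspace(120.0, 220.0, SR)       # instantaneous pitch per sample
phase = 2 * np.pi * np.cumsum(freq) / SR
signal = np.sin(phase)

contour = [f0_autocorr(signal[i:i + FRAME], SR)
           for i in range(0, len(signal) - FRAME, HOP)]

# The estimated contour climbs steadily: its slope encodes "question".
print(f"start ~{contour[0]:.0f} Hz, end ~{contour[-1]:.0f} Hz")
```

Applied frame by frame to real speech, the same kind of estimate would show pitch height as the overall level of the contour and pitch direction as its slope.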

Rhythm and song are inherently predictable: music and rhythm are composed of patterns, and they generally follow certain melodic and cultural rules and structures. Aniruddh D. Patel at the Neurosciences Institute in California theorizes that this predictability applies, to a lesser extent, to spoken phrases: prosody incorporates rhythm and aspects of song in order to give listeners an indication of what is coming next (Patel, 2003). This can be noticed in everyday conversation, when you understand what someone is saying before they finish a sentence or thought; such anticipation probably would not occur without nonverbal cues such as those prosody provides.
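As a rough illustration of how patterned sequences support prediction, the sketch below fits a toy bigram model to the opening of "Frère Jacques" (the melody encoding and the model are our own illustration, not Patel's method) and asks which note is likely to come next:

```python
from collections import Counter, defaultdict

# A toy melody (note names), standing in for the patterned,
# rule-governed sequences that make music predictable.
melody = ["C", "D", "E", "C", "C", "D", "E", "C",
          "E", "F", "G", "E", "F", "G"]  # opening of "Frère Jacques"

# Count bigram transitions: which note tends to follow which.
transitions = defaultdict(Counter)
for prev, nxt in zip(melody, melody[1:]):
    transitions[prev][nxt] += 1

def predict(prev):
    """Return the most likely next note and its probability under the bigram model."""
    counts = transitions[prev]
    note, n = counts.most_common(1)[0]
    return note, n / sum(counts.values())

note, p = predict("D")
print(f"after D, expect {note} (p = {p:.2f})")
```

The same idea, at a far richer scale, is what lets a listener anticipate both a melody's continuation and, via prosodic cues, the end of a sentence.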

Brain-imaging studies by Dr. Burkhard Maess at the Max Planck Institute of Cognitive Neuroscience showed that areas peripheral to language structures in the left hemisphere are active during the processing of single words that are sung (Maess, 2001). Further studies provide evidence that some aspects of music and language are processed in both the right and left sides of the brain. It is thought that music can help enhance the ability to attend to sounds and to initiate sounds, probably because music and singing activate the mechanisms for these functions. This seems to be supported by the innate and widespread practice of parents singing to their children, especially newborn infants, which will be described in greater detail in the subsequent section.