NSF: The role of prosody in word segmentation and lexical access
Understanding how humans comprehend speech is an unsolved and challenging problem, in part because of the many-to-many mapping between the acoustic properties of the speech signal (i.e., frequency, timing, and amplitude) and the words perceived by the listener. The focus of this research is on the contributions of acoustic properties of speech associated with voice pitch, loudness, and speech rate, collectively termed prosody, to understanding spoken words. These prosodic aspects of the speech signal have previously been assumed to play a minor role in spoken word recognition and its component processes of word segmentation and lexical access. However, recent results from our group and others suggest that speech prosody can strongly affect how words are understood, pointing to new and under-investigated processes by which human listeners use prosodic cues to understand speech.

This research holds potential for significant advances in human health, technology, and science. For example, the perception or production of voice pitch, loudness, and/or speech timing is often highly disrupted in many disorders affecting speech and language, including dyslexia, autism, stuttering, Parkinson's disease, aphasia, and dysarthria. New insights into the mechanisms underlying these disorders will also inform the development of better treatments for those afflicted. Advances in this research may also lead to improved speech technology, from enhanced automatic speech recognition by computers to more natural-sounding computer-generated speech.