Language Processing and the Brain
Comprehending Spoken Language
How do we understand sounds from different speakers?
Lack of invariance! The speech signal lacks invariance: the same sound varies acoustically from speaker to speaker, yet our speech perception mechanisms are designed to overlook this variability. Listeners adjust their perception of the speech signal to account for differences in how speakers produce sounds. This adjustment happens quickly; it may take up to one minute to adapt to non-native speech patterns. Categorical perception, the ability to perceive physically distinct stimuli as belonging to the same category, also aids in identifying the sounds of a language even when they are produced by different speakers.
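As a rough illustration of categorical perception, the Python sketch below maps a continuous acoustic cue, voice onset time (VOT), onto the discrete English categories /b/ and /p/. The boundary value and stimulus values are assumptions made for illustration, not measured data; the point is only that physically distinct stimuli on the same side of the boundary are heard as the same sound.

```python
# Toy model of categorical perception: a continuous acoustic cue
# (voice onset time, in milliseconds) is perceived as one of two
# discrete phoneme categories. The ~25 ms boundary is illustrative.

BOUNDARY_MS = 25  # assumed category boundary between /b/ and /p/

def perceive_stop(vot_ms: float) -> str:
    """Map a continuous VOT value onto a discrete phoneme category."""
    return "/b/" if vot_ms < BOUNDARY_MS else "/p/"

# Stimuli 10 ms apart on the same side of the boundary are heard as
# the same category; an equal 10 ms step across the boundary is heard
# as a category change.
for vot in (5, 15, 35, 45):
    print(f"VOT {vot:2d} ms -> heard as {perceive_stop(vot)}")
```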
Language comprehension requires parallel processing. Understanding the speech signal requires the human brain to segment continuous speech into individual sounds, morphemes, and phrases; access the mental lexicon, the mental dictionary of known words; analyze the meanings of unknown words and resolve syntactic ambiguities; update interpretations as new information arrives; and use knowledge of pragmatics to understand what is heard. Psycholinguists believe that in order to process this stimulus, listeners make "guesses" as to what should come next. They believe that listeners must use both top-down and bottom-up processing to comprehend spoken language.
In the bottom-up processing model, incoming stimuli are processed as they are received: phonemes are assembled into morphemes, morphemes into words, words into phrases, and finally a semantic interpretation is applied. Listeners use the sounds to build a phonological representation of a word, which can then be accessed in the mental lexicon.
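A minimal sketch of this bottom-up route, assuming a toy three-word lexicon and a greedy longest-match lookup (both simplifications for illustration, not a claim about the actual mechanism): a continuous stream of phonemes is turned into candidate phonological forms and matched against the mental lexicon.

```python
# Toy bottom-up model: segment a continuous phoneme stream into words
# by greedy longest-match lookup in a tiny, assumed mental lexicon.

LEXICON = {"DH-AH": "the", "K-AE-T": "cat", "S-AE-T": "sat"}  # illustrative entries

def segment(phonemes: list[str]) -> list[str]:
    """Build words bottom-up: sounds -> phonological form -> lexical entry."""
    words, start = [], 0
    while start < len(phonemes):
        # Try the longest candidate phonological representation first.
        for end in range(len(phonemes), start, -1):
            form = "-".join(phonemes[start:end])
            if form in LEXICON:
                words.append(LEXICON[form])
                start = end
                break
        else:
            raise ValueError(f"no lexical entry matches at position {start}")
    return words

# "the cat sat" arriving as an unsegmented stream of phonemes
print(segment(["DH", "AH", "K", "AE", "T", "S", "AE", "T"]))  # ['the', 'cat', 'sat']
```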
The top-down processing model requires listeners to use semantic, syntactic, and contextual information to analyze the speech signal. Listeners draw on what they know about the grammar of the language, and on the context of where and what is being said, to guess at what may be said next.
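The following toy sketch shows top-down information at work: when the bottom-up signal is consistent with more than one word, as with "writer" and "rider" in casual American speech (where both can be pronounced with the same flapped consonant), contextual expectations choose between the candidates. The expectation table and its weights are invented for illustration.

```python
# Toy top-down model: when the acoustic signal is consistent with more
# than one word, contextual expectations pick the likelier candidate.
# The weights below are illustrative assumptions, not real data.

EXPECTATIONS = {
    "novel": {"writer": 0.9, "rider": 0.1},   # "the novel's famous ..."
    "horse": {"writer": 0.1, "rider": 0.9},   # "the horse and its ..."
}

def resolve(candidates: list[str], context_word: str) -> str:
    """Choose among bottom-up candidates using top-down expectations."""
    weights = EXPECTATIONS.get(context_word, {})
    return max(candidates, key=lambda w: weights.get(w, 0.0))

# "writer" and "rider" can sound nearly identical; context decides.
print(resolve(["writer", "rider"], "novel"))  # writer
print(resolve(["writer", "rider"], "horse"))  # rider
```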
How do we access words from our mental dictionaries?
Syntactic processing also plays a role in how our mental lexicon is accessed: ambiguous words require syntactic knowledge before the intended meaning can be retrieved within phrases and sentences.
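As a toy illustration, the sketch below uses the classic ambiguity of "duck", which names either a bird (noun) or an action (verb): the syntactic category the parse expects next selects which lexical entry is retrieved. The lexicon entries and the lookup interface are assumptions for illustration only.

```python
# Toy sketch of syntax-guided lexical access: the syntactic category
# expected by the parse selects among a word's lexical entries.

LEXICON = {
    "duck": {"N": "a water bird", "V": "to lower the head quickly"},
}

def access(word: str, expected_category: str) -> str:
    """Retrieve the lexical entry matching the expected syntactic category."""
    return LEXICON[word][expected_category]

# After a determiner ("the duck"), the parse expects a noun;
# after an auxiliary ("will duck"), it expects a verb.
print(access("duck", "N"))  # a water bird
print(access("duck", "V"))  # to lower the head quickly
```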
Speech production requires on-demand processing using all the features of language, such as segmentation rules, morphemes, words, and phrases. Analysis of speech errors has also demonstrated that mental lexicon access, word choice, and sequencing occur before actual articulation, and that planning occurs at the sentence level rather than word by word.
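Exchange errors such as spoonerisms make this concrete: the swapped sounds come from words that have not yet been spoken, so those words must already have been retrieved and sequenced in a sentence-level plan. The sketch below simulates an onset-exchange error over a planned phrase; the orthographic onset-splitting rule is a crude simplification of the phonology.

```python
# Toy simulation of an onset-exchange error (a spoonerism). The swap
# operates over a whole planned phrase, which is only possible if both
# words were retrieved and sequenced before articulation began.

VOWELS = "aeiou"

def split_onset(word: str) -> tuple[str, str]:
    """Split a word into its initial consonant cluster (onset) and the rest."""
    for i, ch in enumerate(word):
        if ch in VOWELS:
            return word[:i], word[i:]
    return word, ""

def exchange_onsets(phrase: list[str], i: int, j: int) -> list[str]:
    """Swap the onsets of the words at positions i and j in the plan."""
    plan = phrase.copy()
    onset_i, rest_i = split_onset(plan[i])
    onset_j, rest_j = split_onset(plan[j])
    plan[i], plan[j] = onset_j + rest_i, onset_i + rest_j
    return plan

# Intended: "you have missed all my history lectures"
intended = ["you", "have", "missed", "all", "my", "history", "lectures"]
print(" ".join(exchange_onsets(intended, 2, 5)))
# -> "you have hissed all my mistory lectures"
# ("mistory" is pronounced like "mystery", as in the famous error)
```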
The Brain and Language
Localization of Language in the Brain
In the 1860s, Paul Broca proposed that language is localized to a region in the frontal lobe of the brain's left hemisphere, now called Broca's area. Carl Wernicke, a German neurologist, later described another region, now known as Wernicke's area, located in the temporal lobe of the left hemisphere. Through this research it was learned that language is lateralized to the left hemisphere; lateralization refers to the localization of a function to a specific hemisphere of the brain.
Early evidence of lateralization comes from the study of aphasia. Injury to Broca's area results in impaired syntax and agrammatism. Damage to Wernicke's area results in the speaker producing semantically meaningless utterances. Damage to other areas of the left hemisphere can lead to anomia, a form of aphasia in which the individual has difficulty finding the correct words. These deficits are also found in deaf individuals who suffer damage to similar areas of the brain.
Lateralization of language to the left hemisphere occurs in the early stages of life. Infants already show neural correlates of many of the linguistic categories found in adults. However, there is evidence that the brain's plasticity allows children who have undergone a left hemispherectomy to re-acquire a linguistic system similar to that of typically developing children.
Is there a "Critical Period" for acquiring a first language?
Bilingualism and the Brain