Language Processing and the Brain

Psycholinguistics


Comprehending Spoken Language

Understanding language requires deep analysis on many levels. At its most basic level, the human brain must process each individual sound that is heard. The sounds we hear differ in fundamental frequency (pitch), amplitude (loudness), and quality, which is determined by the position of the tongue, lips, and vocal cords. The listener must decode the speech signal in order to derive meaning from what is heard.

How do we understand sounds from different speakers?

This is the lack-of-invariance problem. The human brain and our speech perception mechanisms are designed to overlook the variability of speech across different speakers. Listeners adjust their perception of the speech signal to account for variation in how sounds are produced, and they do so quickly: adapting to non-native speech patterns may take no more than about a minute. Categorical perception, the ability to perceive physically distinct stimuli as belonging to a particular category, also helps listeners identify sounds of the same language even when they are produced by different speakers.
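
To make categorical perception concrete, here is a minimal Python sketch. It assumes a single acoustic cue, voice onset time (VOT) in milliseconds, and an invented 25 ms category boundary; the cue, the boundary value, and the /b/ vs. /p/ labels are illustrative assumptions, not values from the text.

# Toy model of categorical perception: a continuously varying acoustic cue
# is mapped onto one of two discrete phoneme categories.
def perceive_stop_consonant(vot_ms: float, boundary_ms: float = 25.0) -> str:
    """Map a continuous VOT value onto a discrete category (/b/ vs. /p/)."""
    return "/b/" if vot_ms < boundary_ms else "/p/"

if __name__ == "__main__":
    # A continuum of physically distinct stimuli is heard as only two
    # categories, despite the gradual change in the signal.
    for vot in range(0, 61, 10):
        print(f"VOT = {vot:2d} ms -> perceived as {perceive_stop_consonant(vot)}")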

Comprehension Models

Language comprehension requires parallel processing. Understanding the speech signal requires the human brain to segment continuous speech into individual sounds, morphemes, and phrases; access the mental lexicon, or mental dictionary, of known words; analyze the meaning of unknown words and resolve syntactic ambiguities; update meaning as new information arrives; and use knowledge of pragmatics to understand what is heard. Psycholinguists believe that in order to process this stimulus, listeners make "guesses" about what should come next. They believe that listeners must use both top-down and bottom-up processing to comprehend spoken language.

The bottom-up processing model processes incoming stimuli as they are received, building from phonemes to morphemes, words, and phrases, and finally applying semantic interpretation. Listeners use sounds to construct a phonological representation of words, which can then be accessed in their mental lexicon.

The top-down processing model requires listeners to use semantic, syntactic, and contextual information to analyze the speech signal. Listeners draw on what they know about the grammar of the language and the context of where and what is being said to predict what may come next.
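
The contrast between the two models can be sketched in a few lines of Python. Everything here is invented for illustration: the tiny lexicon, the unsegmented letter string, and the word-prediction table stand in for the mental lexicon, the continuous speech signal, and grammatical/contextual knowledge.

LEXICON = {"the", "dog", "chased", "cat"}                      # toy mental dictionary
NEXT_WORD = {"the": ["dog", "cat"], "dog": ["chased"], "chased": ["the"]}

def bottom_up_segment(signal: str) -> list[str]:
    """Bottom-up: build words from the incoming signal, left to right,
    by matching ever-longer chunks against the mental lexicon."""
    words, start = [], 0
    while start < len(signal):
        for end in range(len(signal), start, -1):              # longest match first
            if signal[start:end] in LEXICON:
                words.append(signal[start:end])
                start = end
                break
        else:
            start += 1                                         # skip unanalyzable material
    return words

def top_down_guess(previous_word: str) -> list[str]:
    """Top-down: use grammatical and contextual knowledge to predict
    what is likely to come next, before the signal arrives."""
    return NEXT_WORD.get(previous_word, [])

if __name__ == "__main__":
    print(bottom_up_segment("thedogchasedthecat"))             # ['the', 'dog', 'chased', 'the', 'cat']
    print(top_down_guess("dog"))                               # ['chased']

In real comprehension the two directions run in parallel and constrain each other; the sketch separates them only to make each direction visible.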

How do we access words from our mental dictionaries?

Extensive research has been conducted to learn how listeners retrieve the meanings of words from their mental lexicon. Researchers have found that words may be accessed through semantic priming or morphological priming. Semantic priming means that words with similar meanings, or words belonging to similar categories, are activated, or primed, when a related word is heard. Morphological priming is a related type of priming in which words that share a morpheme prime one another.
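
Semantic priming is often described in terms of activation spreading through a network of related words. The sketch below is a minimal, assumed version of that idea: the word list, the links, and the activation numbers are invented, and the values merely stand in for the faster recognition times measured in real priming experiments.

# Toy semantic network: hearing a word spreads activation to related entries.
RELATED = {
    "doctor": ["nurse", "hospital", "patient"],
    "bread":  ["butter", "bakery"],
}

def prime(heard_word: str, resting: float = 0.1) -> dict[str, float]:
    """Return activation levels for lexical entries after hearing one word."""
    activation: dict[str, float] = {heard_word: 1.0}   # the heard word is fully active
    for neighbor in RELATED.get(heard_word, []):
        activation[neighbor] = resting + 0.5           # related entries get a head start
    return activation

def activation_of(word: str, state: dict[str, float], resting: float = 0.1) -> float:
    """Unprimed entries stay at their resting activation level."""
    return state.get(word, resting)

if __name__ == "__main__":
    state = prime("doctor")
    # "nurse" (related) is easier to access than "butter" (unrelated).
    print(activation_of("nurse", state), activation_of("butter", state))   # 0.6 0.1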


Syntactic processing also plays a role in how the mental lexicon is accessed. Ambiguous words require syntactic knowledge in order to derive the intended meaning of phrases and sentences.

Speech Production

In order to convey a message, a speaker must encode what they are thinking into a speech signal that is comprehensible to their listener.

Lexical Selection

Evidence from speech production also supports semantic priming. Slips of the tongue show that when errors are made, word substitutions are rarely random; the substituted words usually share sound or meaning similarities with the intended words. Speech errors also show that rules of morphology, syntax, and inflection are applied along with allophonic rules.


Speech production requires on-demand processing that uses all the features of language, such as segments, morphemes, words, and phrases. Error analysis has also demonstrated that mental-lexicon access, word choice, and sequencing occur before actual articulation. Planning occurs at the sentence level rather than word by word.

The Brain and Language


Localization of Language in the Brain

Franz Joseph Gall proposed the theory that different areas of the brain are responsible for different cognitive abilities and behaviors. This theory has been upheld in part through extensive study of brain disorders, particularly the study of aphasia, a neurological term for any language disorder that results from brain damage caused by trauma or disease.


Paul Broca proposed in the 1860s that language is localized to an area in the frontal lobe of the left hemisphere of the brain, now called Broca's area. Carl Wernicke, a German neurologist, later described another area, now known as Wernicke's area, located in the temporal lobe of the left hemisphere. Through this research it was learned that language is lateralized to the left hemisphere; lateralization refers to the localization of a function to a specific hemisphere of the brain.


Early evidence of lateralization comes from the study of aphasia. Injury to Broca's area results in impaired syntax, or agrammatism. Damage to Wernicke's area results in the speaker producing semantically meaningless utterances. Damage to other areas in the left hemisphere can lead to anomia, a form of aphasia in which the individual has difficulty finding the correct words. These deficits are also found in deaf individuals who suffer damage to similar areas of the brain.


Lateralization of language to the left hemisphere occurs in the early stages of life. Infants show many neural correlates of linguistic categories similar to those found in adults. However, there is evidence that the brain's plasticity allows children who have undergone a left hemispherectomy to re-acquire a linguistic system similar to that of typically developing children.


Is there a "Critical Period" for acquiring a first language?

In most cases, children are exposed to language from the moment they are born. Children who lack this early exposure have been found to fail to develop the grammatical structure of their language. Furthermore, brain imaging shows that delayed exposure to language has a direct impact on how the brain organizes itself for language. These findings support the critical-age hypothesis, which states that the ability to learn a first language develops within a fixed period of time, estimated to span from birth to middle childhood. Children exposed to language during this period acquire it much more easily and quickly; children not exposed to language during this critical period are unable to acquire language fully.

Bilingualism and the Brain

Video: "The benefits of a bilingual brain" by Mia Nacamulli (TED-Ed)

References

Fromkin, V., Rodman, R., & Hyams, N. (2014). An introduction to language (10th ed.). Boston, MA: Wadsworth.