
Comprehension

Psycholinguists (scientists who study language processing) focus on three aspects of language competence: acquisition, comprehension, and production.

Language acquisition is the learning of language, in babyhood or later. Language comprehension is the ability to extract intended meanings from language. Language production is the ability to speak or write fluently.

What topics are included in psycholinguistics?

As a rule, comprehension develops faster than production. A three-year-old can understand more than the same child can speak.

A non-native speaker of English can understand more than he or she can say. A student new to a discipline understands the professional jargon before being able to produce it.

What is ambiguity?

Language comprehension would be easy if a particular combination of words always meant the same thing. But expressions can often be interpreted more than one way. Such expressions are ambiguous (having more than one meaning).

Unintended puns in newspaper headlines ("Housing for Elderly Still Not Dead") are examples of ambiguous language. The same words can lead to more than one interpretation or meaning.

One source of ambiguity in language is the existence of multiple meanings for individual words. By one estimate, over half the commonly used English words either (1) sound like other words or (2) look like other words.

Words that sound alike are called homophones. For example, bear and bare sound alike, although they are spelled differently.

What are homophones? Homographs? Homonyms?

Words that look alike but sound different are called homographs. An example of a homograph that affects psychology majors is the word "affect," which can be pronounced a-FECT, meaning to influence, or AFF-ect, meaning emotion.

Either type of ambiguous word, homophone or homograph, can be called a homonym, which means "same name." Humans interpret a homonym by using the context to select a meaning.

A psychologist who reads, "The patient had flat affect" will know to pronounce the word AFF-ect and will interpret this sentence as meaning "The patient showed little emotion."

How do people pick the correct meaning of a homonym?

If there is no helpful context, people pick the most common or personally relevant meaning of an ambiguous term. If you hear the word plane by itself you might think of an airplane. If you work a lot with wood, you might think of a wood plane.

If you are a math major hearing "plane," you might think of a flat surface. If you live in Nebraska and you hear the word instead of seeing it, you might think of a grassy landscape (a plain). Your past experience and current setting bias your interpretation of a homonym.

Usually there is a context that helps us determine the intended meaning of a word. The surrounding words disambiguate (remove ambiguity from) a homonym.
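To make this concrete, here is a minimal Python sketch of context-based sense selection, loosely in the spirit of the Lesk algorithm. The sense inventory, cue words, and default sense are invented for illustration; this is not a model anyone has proposed for human comprehension.

    # A minimal sketch of context-based sense selection. The sense
    # inventory, cue words, and default sense are invented examples.

    SENSES = {
        "plane": {
            "aircraft":     {"fly", "airport", "pilot", "wing", "ticket"},
            "wood tool":    {"wood", "carpenter", "shave", "smooth", "blade"},
            "flat surface": {"geometry", "point", "line", "angle", "math"},
        }
    }

    def pick_sense(word, context_words, default="aircraft"):
        """Choose the sense whose cue words overlap most with the context."""
        best_sense, best_overlap = default, 0
        for sense, cues in SENSES[word].items():
            overlap = len(cues & set(context_words))
            if overlap > best_overlap:
                best_sense, best_overlap = sense, overlap
        return best_sense

    print(pick_sense("plane", "the carpenter used a plane to smooth the wood".split()))
    # -> 'wood tool'
    print(pick_sense("plane", ["plane"]))
    # -> 'aircraft' (no disambiguating context, so the default wins)

With surrounding words, the overlap score selects the intended sense; with no context, the function falls back on a default, much as people fall back on the most common or personally relevant meaning.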

One experiment tested whether words snipped out of audio recordings of spoken sentences could be identified. Words were identified with 90% accuracy only when accompanied by an average of six other words from the same sentence.

For example, the sound "duh" might not be recognized as a sloppy version of "done" until preceded by "I won't give up 'til I'm..." Even five words might not be enough. "Won't give up 'til I'm done" might sound like "woki fupp tlam" until suddenly the clearly spoken word "done" makes the pattern click into place: "I won't give up 'til I'm done."

Because context usually disambiguates homonyms quickly and subconsciously, we are unaware of how many unintended words are hidden in normal speech. Cole (1979) gave this example.

"Ream ember, us poke in cent tense all Moe stall ways con tains words knot in ten did."

What does the "Ream ember" example show?

This example shows there are at least 18 unintended word sounds in a simple sentence. "Remember, a spoken sentence almost always contains words not intended."

If you speak that sentence to another person, that person will typically be aware of none of those extra 18 words. The context set up by preceding words biases a listener toward conventional interpretations.

Sentence Comprehension and the Inner Model of the Universe

In the introduction to this chapter, we referred to Roger Schank's statement, "We have in our minds a model of the world" (Schank, 1983).

That quote was from an informal source, an interview in Psychology Today. I wrote to Roger Schank in 2012 to ask him if he stood by that quotation 30 years later.

He replied, "I haven't changed my point of view... In fact my latest book lists the 12 fundamental cognitive processes (and) modeling is one of them. Is it the key one?"

I would substitute universe for world, because that inner model includes schematic knowledge of how the earth goes around the sun, how we are located in a galaxy, and so forth. "Multiverse" might be even better, especially for physicists, who have models of multiverses in their minds, not just a model of this universe.

We need an inner model of the universe to understand the referents of language: the real-world events (or imaginary events) to which language refers. An example comes from Terry Winograd's 1971 dissertation at the Massachusetts Institute of Technology.

Winograd asked how a computer would figure out the referent of the word they in the following two sentences:

a) The city councilmen refused to give the women a permit for a demonstration because they feared violence.

b) The city councilmen refused to give the women a permit for a demonstration because they advocated revolution.

To figure out who "they" refers to, the computer would require world knowledge. As Winograd put it:

To make the decision, it has to have more than the meanings of words. It has to have the information and reasoning power to realize that city councilmen are usually staunch advocates of law and order, but are hardly likely to be revolutionaries. (Winograd, 1971, p. 11)

World knowledge is needed even to understand simple sentences. "I stubbed my toe on a rock and it broke" would be ambiguous to a computer. The word it could refer to the toe or the rock.

Humans have the knowledge that toes can break when hitting a rock, but not vice versa. Knowledge of elementary physics and how things work is used to understand language.
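To make the point concrete, here is a toy Python sketch of resolving "it" in that sentence. The rule table below is a hand-coded assumption standing in for the lifetime of physical knowledge a human brings to the task.

    # Toy resolution of "it" in "I stubbed my toe on a rock and it broke."
    # The rule below is a hand-coded assumption standing in for a
    # lifetime of knowledge about what breaks when things collide.

    BREAKS_WHEN_STRUCK = {
        ("toe", "rock"): "toe",  # a toe can break against a rock,
        ("rock", "toe"): "toe",  # but not vice versa
    }

    def resolve_it(candidates, event):
        """Pick the referent of 'it broke' using the world-knowledge table."""
        likely_breaker = BREAKS_WHEN_STRUCK.get(event)
        if likely_breaker in candidates:
            return likely_breaker
        return candidates[0]  # no knowledge applies: fall back to first mention

    print(resolve_it(["toe", "rock"], event=("toe", "rock")))  # -> 'toe'

The hard part, as the next paragraphs explain, is not the look-up itself but supplying and selecting the right knowledge in the first place.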

Why do we need an inner model of the universe to understand language?

How can a computer be provided with this much knowledge? Humans spend a lifetime building up knowledge about the world, and when we hear language, we use that accumulated knowledge to interpret what we are hearing.

Computers can retrieve knowledge from the internet. But specifying which information to retrieve and how to use it to understand a sentence is not simple, especially when you do not know ahead of time what knowledge will be needed.

This is what stopped computer translation in its tracks in the 1960s. To translate from one language to another, a translator uses world knowledge as an intermediate step, a bridge between the two languages.

The translator listens to language A and relates it to an inner model of the universe. Then this meaning, this model, is used to generate a sequence of words in language B that means approximately the same thing.

Computer scientists initially assumed language translation would require only the equivalent of automated dictionary look-ups. But that fails whenever world knowledge is required to disambiguate a word, which happens frequently.

Another problem is that many languages are full of idioms: expressions not meant to be taken literally. If we say, "This person ended up with egg all over their face," a computer would not know this meant the person was publicly embarrassed.
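A small sketch shows why dictionary look-ups fall short. The English-to-French glossary below is invented for illustration; word-by-word substitution keeps the literal words but loses the idiomatic meaning entirely.

    # A sketch of 1960s-style dictionary look-up translation, using an
    # invented English-to-French glossary. Word-by-word substitution
    # keeps the literal words but loses the idiomatic meaning.

    DICTIONARY = {
        "this": "cette", "person": "personne", "ended": "fini", "up": "haut",
        "with": "avec", "egg": "oeuf", "all": "tout", "over": "sur",
        "their": "leur", "face": "visage",
    }

    def naive_translate(sentence):
        """Translate word by word, with no model of meaning or idiom."""
        return " ".join(DICTIONARY.get(w, w) for w in sentence.lower().split())

    print(naive_translate("This person ended up with egg all over their face"))
    # -> 'cette personne fini haut avec oeuf tout sur leur visage'
    # Literal nonsense: nothing signals 'was publicly embarrassed'.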

Idioms are used very commonly by people speaking informally, and they wreak havoc with machine translation. I found that out when I used Google Translate to render a Russian friend's Facebook postings into English.

The translations were better than nothing. But when his friends reacted to something he posted (everybody writing in Russian), Google Translate produced gibberish in its English translations.

Friends writing informally on social media use all sorts of idioms and slang terms. Poor Google Translate took it all literally when converting it to English, and the results made no sense at all.

Ironically, AI researchers in the 1960s expected to tackle language translation as one of their first goals. The U.S. Defense Department wanted this capacity. They had a flood of Russian-language information from intelligence sources, and they did not have enough human translators to deal with it all.

However, competent language translation by computers never happened, and it still hasn't. The reason is described above: extensive world knowledge is sometimes necessary just to understand or translate simple sentences.

What difficulties did computer language translation run into?

In examining visual scene analysis, we saw that the meaning of one line segment or vertex propagates to other related segments and vertices. The same is true in language, and it can help resolve some ambiguity. If a person is in a forest, the word "bear" will usually refer to an animal.

But if a person is walking through a forest carrying a heavy backpack and says, "I can't bear this load much longer," a computer will stumble by relying on the forest context to assign meaning to the word bear. The computer will need knowledge of common phrases to realize "bear this load" refers to carrying something, even if the context is a forest with bears in it.

Humans use all these different sources of knowledge: word meanings, phrase familiarity, and general world knowledge. Usually we figure out sentence meanings quickly, on the fly, using the constraints provided by these types of knowledge to converge on a model of what a sentence refers to: its meaning.
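Here is a hypothetical sketch of two of those knowledge sources interacting. The phrase table and scene defaults are invented, but they show how familiarity with a phrase can override scene context, as with "bear this load" in a forest.

    # A sketch of two knowledge sources interacting. The phrase table and
    # scene defaults are invented; the point is that phrase familiarity
    # can override scene context when both apply.

    KNOWN_PHRASES = {("bear", "this", "load"): "carry"}  # phrase-level knowledge
    SCENE_DEFAULTS = {"forest": "animal"}                # context-level knowledge

    def sense_of(word, words, scene):
        """Let a familiar phrase win over the scene-based default sense."""
        for i in range(len(words) - 2):
            phrase = tuple(words[i:i + 3])
            if word in phrase and phrase in KNOWN_PHRASES:
                return KNOWN_PHRASES[phrase]
        return SCENE_DEFAULTS.get(scene, "animal")

    print(sense_of("bear", "i cannot bear this load much longer".split(), "forest"))
    # -> 'carry' (phrase knowledge wins, even in a forest with bears)
    print(sense_of("bear", "we saw a bear near the trail".split(), "forest"))
    # -> 'animal' (no familiar phrase matches, so scene context applies)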

Rumelhart (1976) noted that researchers of the mid-1970s had converged on an agreed-upon or modal model of comprehension. He described the modal model as involving two steps: (1) words activate schemata in the brain; (2) when the listener or reader finds a combination of schemata that accounts for all the words, comprehension is achieved.

What is the modal model of language comprehension? How does it resemble visual scene analysis?

This is much like the process of visual scene analysis. To comprehend a passage, the listener must find one large-scale interpretation that makes sense out of all the parts of a passage. Bower and Morrow (1990) described language comprehension as constructing mental models of a speaker's or writer's intended meaning.
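As a rough illustration, and not Rumelhart's actual formalism, the two steps can be sketched in a few lines of Python; the schemata and their word lists are invented.

    # A rough sketch of the two-step modal model as summarized above:
    # (1) words activate schemata; (2) comprehension is achieved when the
    # activated schemata together account for every word.

    SCHEMATA = {
        "restaurant": {"waiter", "menu", "ordered", "tip"},
        "birthday":   {"cake", "candles", "presents"},
    }

    def comprehend(words):
        """Return the activated schemata if together they cover all the words."""
        content = set(words)
        activated = {name for name, covers in SCHEMATA.items() if covers & content}
        if not activated:
            return None
        covered = set.union(*(SCHEMATA[s] for s in activated))
        if content <= covered:
            return activated  # every word accounted for: comprehension
        return None           # leftover words: no coherent interpretation yet

    print(comprehend(["waiter", "ordered", "cake"]))  # -> {'restaurant', 'birthday'}
    print(comprehend(["waiter", "spaceship"]))        # -> None (a word is unaccounted for)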

---------------------
References:

Bower, G. H., & Morrow, D. G. (1990). Mental models in narrative comprehension. Science, 247, 44-48.

Cole, R. A. (1979, April). Navigating the slippery stream of speech. Psychology Today, pp. 77-84.

Rumelhart, D. (1976). Toward an interactive model of reading (Technical Report No. 56). San Diego, CA: Center for Human Information Processing, University of California, San Diego.

Schank, R. (1983, April). A conversation with Roger Schank. Psychology Today, pp. 28-36.

Winograd, T. (1971). Procedures as a representation for data in a computer program for understanding natural language (PhD dissertation, Massachusetts Institute of Technology). Retrieved from https://hdl.handle.net/1721.1/7095

