
Other Problems with Polls

Another set of problems with survey research involves the response options given to respondents. Years ago, Bayer Aspirin ran an ad reporting that "more doctors recommended Bayer."

The doctors were given a choice among Nuprin, Tylenol, Tylenol II, and Bayer. Of course, Bayer was a brand of aspirin (more accurately, it is a huge pharmaceutical company that manufactures aspirin, among other products).

To the skeptical consumer, the poll might raise an obvious question: what if doctors were given a different set of options, including a cheap generic aspirin?

The results might not be so favorable to Bayer. In a poll like this, the options given to respondents make a huge difference.

Researchers also distinguish between open and closed questionnaires. Closed questionnaires require a respondent to pick from a list of items, as in the Bayer poll. Open questionnaires ask people to come up with their own response options.

The two procedures produce dramatically different results. One study found that 61.5% of a sample given a closed questionnaire about "the most important thing for children to prepare them for life" endorsed "To think for themselves" when that was one of the choices. When people were asked to come up with their own list, they mentioned it only 4.6% of the time (Schuman & Presser, 1996).

What are "open" and "closed" questionnaires? What are examples of how this can affect results of a survey?

Similarly, when people were asked to list historically important events of the 20th century, they seldom mentioned the invention of the computer. However, when the invention of the computer was included in a list of choices, a high percentage of people selected it.

People may make different responses to a poll or survey depending upon the scale that is used. Schwarz (1999) gives the following example:

"When asked how successful they had been in life, 34% of a representative sample reported high success when the numeric value of the rating scale ranged from -5 to 5, whereas only 13% did so when the numeric values ranged from 0 to 10."

Wording of Questions

Another well-known problem in polling and surveys relates to the wording of the question. Results may vary considerably depending on how a question is worded.

The New York Times/CBS News polltakers did an experiment in which they asked people the following question:

Do you think there should be an amendment to the Constitution prohibiting abortions, or shouldn't there be such an amendment?

Only 29% replied yes to that question, while 62% said no and the rest were undecided. Later, the same people were asked the following question:

Do you believe there should be an amendment to the Constitution protecting the life of the unborn child, or shouldn't there be such an amendment?

How did the wording of a question influence an abortion poll?

This time, 50% of the respondents said yes and 39% said no (Dionne, 1980). Clearly the wording of the question influenced the outcome.

Biasing Context

Polltakers can influence results in subtle ways by how they behave, even if they do not intend to. A biasing context can be established before a poll even begins. A telephone pollster who starts by saying, "Hello, I'm calling from Fox News..." will elicit different responses than one who starts by saying, "Hello, I'm calling for the Washington Post."

As a rule, pollsters do not identify themselves for that reason. If pressed, they might give the name of the company hired to make the call ("Jones and associates") instead of the news organization sponsoring them.

The makers of Tylenol created a useful biasing context in the mid-20th century. They gave free samples of their products (lots of free samples) to hospitals all over the United States.

Hospital patients did not have to pay for non-prescription pain relievers; they received free Tylenol. Then the makers of Tylenol ran a television ad campaign pointing out, with complete honesty, that Tylenol was used more often than any other pain medication in hospitals all over the country.

Demand Characteristics

Martin Orne specialized in studying measurement effects due to the general, unspoken atmosphere, setting, and details of data collection. He recognized in the early 1960s that all data collection except the unobtrusive type creates unspoken demands on research subjects. He called these effects demand characteristics and compared them to the suggestions of a hypnotist.

Typically the pressure is not overt. No researcher says, "Respond this way, because this is what I expect." However, people often wish to please a researcher. Whether asked to or not, they try to figure out how they are supposed to respond (even if the researcher has no agenda).

What are demand characteristics?

The opposite of trying to please the experimenter may occur if subjects resent being included in research and try to sabotage it. This is a problem at some colleges and universities where introductory students must participate in a subject pool.

In essence, such students are conscripted as research volunteers, whether or not they want to participate. Some students deliberately produce useless data, such as marking all their answers the same way. When I was in graduate school, this was called the "F— You" effect.

What problems can be introduced by subject pools?

One solution is to make research participation voluntary or to reward it with extra credit. However, this introduces new biasing factors. The poorest students may be the most desperate for extra credit, so they may be disproportionately represented in subject pool samples.

Demand characteristics can be produced by the setting of an experiment, or the type of person collecting the data. Little details may imply that certain responses are expected, even if that is not the case.

Once a 14-year-old student wrote to me asking for help with a science fair project. She wanted to test the ability of fellow students to identify odors.

I suggested that she explore the effects of demand characteristics by having people sit down in front of a table with fragrant objects like fruit and perfume bottles. Then (after they put on a blindfold) she could ask them to identify completely different things that were not on the table.

How did a young experimenter create a biasing context?

The student wrote back a month later and said the experiment produced interesting results. People's answers depended on what they saw on the table, not what they actually sniffed during the experiment.

Their expectations dominated. They assumed (wrongly) that they would be offered objects from the table as odor samples, so they responded accordingly.

Uninformed Opinions

Another problem with survey research is that people may offer opinions about things they really know nothing about. George F. Bishop, a senior research associate at the Behavioral Sciences Laboratory of the University of Cincinnati, asked people for their opinions on the repeal of a non-existent act of Congress, "the 1975 Public Affairs Act."

About a third of the respondents expressed an opinion for or against the imaginary law. When respondents were given the option of saying they had "no opinion," up to 10 percent still had one (Rice, 1980).

What happened when people were asked for opinions on the 1975 Public Affairs Act?

If a third of respondents are willing to express viewpoints on imaginary issues, how many more might be willing to express opinions on genuine issues about which they know very little? Similar questions are raised by research conducted by a magazine publisher.

Respondents were shown articles and asked if they had read any of them. About 20% of the subjects claimed to have read one or more of the articles, although all of them were scheduled for an upcoming issue and none had yet been published.

Push Polls

Sometimes questionnaires are thinly disguised attempts to sway the opinions of respondents. These are called "push polls." In a push poll, people may be asked questions that introduce damaging information about an opposing candidate.

For example, "Do you approve of the fact that Candidate X voted to increase taxes 20 times during the past two decades?" or "How important is it to you that Candidate Y fired a secretary for taking two days off when she had a baby?" The objective of this poll is not to gather data, but to influence voters.

What is a push poll?

Similar to the push poll is the questionnaire about some important issue that turns out, in the end, to be a fundraising letter. Usually the envelope bears a message such as "You have been selected to participate in an important national survey."

Needless to say, such polls are useless for providing scientific data, for at least two reasons. First, the sample is not randomly chosen from any larger population. Second, answers are likely to be influenced by the obvious agenda of the organization conducting the poll.

Push polls are becoming very common during U.S. political campaigns. Once I received over 20 telephone calls in the two months before an election, each pretending to ask how I felt about political issues.

In each case, the questions eventually revealed a clear preference for one candidate. Sooner or later I was asked if I knew about the heroic exploits of candidate X or the bad deeds of candidate Y.

This left me wondering: What happens when people get disgusted, or catch on to the intentional deceptions? What if they simply refuse to participate in any telephone polls at all?

That would create a bias of its own. The people most skeptical of telephone polls, or most disgusted by them, might no longer contribute data.
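To make that concern concrete, here is a minimal Python sketch of this kind of nonresponse bias. All of the numbers are hypothetical, invented purely for illustration: the sketch assumes that people who distrust telephone polls both answer less often and lean somewhat differently on the issue being polled, then compares the poll's average with the true population average.

import random

random.seed(42)

# Hypothetical population: each person has an opinion score (roughly 0-100)
# and a level of distrust toward telephone polls (0.0 to 1.0).
# Every parameter below is made up for illustration.
population = []
for _ in range(100_000):
    distrust = random.random()
    # Assume (hypothetically) that distrustful people lean a bit
    # differently on the issue being polled.
    opinion = random.gauss(50 + 20 * distrust, 10)
    population.append((opinion, distrust))

# True average opinion, counting everyone.
true_mean = sum(op for op, _ in population) / len(population)

# Simulated poll: the more someone distrusts polls,
# the less likely they are to answer the phone.
respondents = [op for op, distrust in population
               if random.random() > distrust]

poll_mean = sum(respondents) / len(respondents)

print(f"True population mean:   {true_mean:.1f}")
print(f"Mean among respondents: {poll_mean:.1f}")
# The poll underrepresents distrustful people, so its average
# drifts away from the population average: nonresponse bias.

The size of the gap depends entirely on the made-up parameters, but the general point stands: any link between willingness to respond and actual opinion pulls a poll's result away from the truth.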

---------------------
References:

Dionne, E. J. (1980, August 18). Abortion poll: Not clear-cut. New York Times, p. A15.

Rice, B. (1980, May). Pseudo-opinions about reading. Psychology Today, p. 16.

Schuman, H., & Presser, S. (1996). Questions and answers in attitude surveys: Experiments on question form, wording, and context. Thousand Oaks, CA: Sage Publications.

Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93-105.

