More About Polls

Types of Polls

The first check a reporter makes of polling material is a determination of how the poll was conducted. There are a variety of ways to elicit information from the public, some of them unreliable, others only somewhat less so.

Coupon Poll: The coupon poll is just about worthless. People are asked to clip, fill out and mail a coupon that appears in the publication. The results are distorted because the person who goes to all that trouble—including paying for postage—is usually someone with strong feelings about the subject and hardly representative of the general population.

Call-In Poll: The call-in poll, in which readers or viewers are asked to express their sentiments about an issue by telephoning the newspaper or station, is subject to the same criticism as the coupon poll. When the San Francisco Chronicle conducted a call-in poll on the death sentence, it headlined the results on page 1. Not surprisingly, three-fourths of those calling favored the death penalty. After USA Today reported that its call-in poll showed that Americans loved the financier Donald Trump, it learned that 5,640 of the 7,800 calls came from a single source, another financier. Shamefaced, the newspaper said its polls are "strictly for fun."

The 1-900 call-in poll is useless as a measure of public opinion. "You have to be willing to pay for a phone call in order to participate, and that very seriously limits the representativeness of the people whose responses are being reported," says Harry O'Neill, president of the National Council on Public Polls.

Straw Poll: The straw poll is no more reliable than the other types of polls already mentioned, although some newspapers have used the technique for many years. For a straw poll, a person hands out ballots at one or more locations, and people drop their ballots into a box on the spot. It is difficult to keep a straw poll from overrepresenting the kinds of people who frequent the one or two locations selected, usually supermarkets, factory gates and the like. The Daily News, which conducts straw polls for New York and national elections, uses a professional pollster to organize its straw locations.

Man-in-the-Street Poll: The man-in-the-street poll is probably the poll most frequently used to gather opinions at the local level. Newspapers and stations without access to their own polling apparatuses will send reporters out to interview people about a candidate or an issue. Those who have had to do this kind of reporting know how nonrepresentative the people interviewed are. Reporters may seek out those who look as if they can supply a quick answer, those who do not need to have the question explained to them.

These polls can be used as long as the story says precisely what they are—the opinions of a scattering of people, no more than that, no more scientific than astrology. The sample of people interviewed in such polling is known as a "nonprobability sample," meaning that the conclusions cannot be generalized to the population at large.

Proper Polling

To reach its conclusions about how well adults around the country thought the president was handling his problems, The New York Times/CBS News Poll spoke to 1,422 people by telephone. At first glance, this effort seems like madness. To pass off the opinions of fewer than 1,500 people as representative of more than 100 million adults seems folly. Yet if the sample is selected carefully, the questions properly put, the results stated fairly and completely, and the interpretations made with discernment, a sample of around 1,500 people can reflect the opinions of people across the country, 95 percent of the time, within a margin of error of about 3 percentage points.
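The arithmetic behind that claim can be checked with a few lines of Python. This is a sketch of the standard margin-of-error formula for a simple random sample; the 1.96 multiplier corresponds to the 95 percent confidence level:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a simple random sample of size n.

    Uses the worst case p = 0.5, the convention behind the familiar
    "plus or minus 3 points" figure.
    """
    return z * math.sqrt(p * (1 - p) / n)

# For the Times/CBS sample of 1,422, about 2.6 percentage points;
# samples near 1,100 produce the familiar 3-point margin.
print(round(margin_of_error(1422) * 100, 1))
```

Note that the formula applies only to properly drawn random samples; no sample size rescues a coupon or call-in poll.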

The Sample: Obviously, if a small group is to speak for many, that group must be carefully chosen so that it is representative of the larger group. The key word here is representative.

If we want to know what people think of an incumbent governor, we can interview voters and nonvoters. But if we want to know whether people will vote to re-elect him or her, it is common sense to interview only eligible voters, those who have registered. But we cannot stop there. If we went out in the daytime to supermarkets, laundromats and parks where mothers gather with their young children, our sample would be skewed heavily toward women. That would be a nonrepresentative sample because men constitute a large percentage of voters. Also, because only a third to a half of eligible voters actually go to the polls in most contests, questions must be asked to determine whether the person being polled will actually vote.

Samples are selected in a number of ways. The New York Times/CBS News Poll sample of telephone exchanges was selected by a computer from a complete list of exchanges around the country. The exchanges were chosen in such a way that each region of the United States was represented in proportion to its population.

Once the exchanges were selected, the telephone numbers were formed by random digits. This guaranteed that unlisted as well as listed numbers would be included in the sample.
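The random-digit procedure can be sketched in a few lines of Python. The exchanges and population weights below are invented for illustration; the principle is simply that an exchange is drawn in proportion to population, then four random digits are appended so unlisted numbers are reachable too:

```python
import random

# Hypothetical exchanges with rough population weights (illustrative only)
exchanges = {"212-555": 45, "415-555": 30, "312-555": 25}

def random_number(rng=random):
    """Draw an exchange in proportion to population, then append
    four random digits so unlisted numbers can be reached."""
    codes = list(exchanges)
    weights = [exchanges[c] for c in codes]
    exchange = rng.choices(codes, weights=weights, k=1)[0]
    return f"{exchange}-{rng.randint(0, 9999):04d}"
```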

The making of The New York Times/CBS News Poll sample demonstrates how pollsters try to eliminate the human factor from the choice of those who will be interviewed. A good sample gives everyone in the population being studied a chance to be represented in the poll. This is what is meant by the term random sample. To the average person, the word random usually means haphazard, without plan. Pollsters use it to mean that anyone in the larger population group has as good a chance of being polled as anyone else in that group.

Once the sample has been drawn, it is then weighted to adjust for sample variations. In the Times/CBS Poll of the president's popularity, the sample consisted of 445 people who told the pollsters making the calls that they were Democrats, 482 who said they were Republicans, and 495 who said they were independents. Using the breakdown in the sample would overrepresent Republicans. To reflect the known proportion of party members in the voting population, the groups of voters were weighted by party identification.

The poll also weighted the results to take account of household size and to adjust for variations in the sample relating to religion, race, age, sex and education. This weighting was done in accordance with what is known of the characteristics of the voting population from the results of the last election, the proportion of men and women in the population, census figures on nationality, religion, income and so on. The raw figures were adjusted to eliminate distortions from the norm.
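The weighting step can be illustrated with the party-identification counts reported for the poll. The target shares below are assumed for illustration only, not the actual figures the Times/CBS poll used:

```python
# Party-identification counts reported in the Times/CBS sample
sample = {"Democrat": 445, "Republican": 482, "Independent": 495}

# Hypothetical target shares of the voting population (illustrative only)
target = {"Democrat": 0.36, "Republican": 0.30, "Independent": 0.34}

def weights(sample, target):
    """Weight for each group: target share divided by observed share."""
    total = sum(sample.values())
    return {g: target[g] / (sample[g] / total) for g in sample}

w = weights(sample, target)
# A group overrepresented in the sample gets a weight below 1,
# so each of its responses counts for a little less.
```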

Despite the inclination to think that the more people interviewed, the more accurate the results, the statistical truth is that it is the quality of the sample that determines the accuracy of the poll. After a critical number of interviews have been conducted, little additional accuracy is achieved. Results based on a good sample successfully interviewed are adequate for most purposes.
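The diminishing return comes from the square root in the margin-of-error formula: quadrupling the number of interviews only halves the margin, as a quick sketch shows:

```python
import math

def margin(n, z=1.96, p=0.5):
    """95 percent margin of error, in percentage points."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Each fourfold increase in interviews halves the margin of error
for n in (400, 1600, 6400, 25600):
    print(n, round(margin(n), 1))
```

Going from 400 interviews to 1,600 cuts the margin roughly from 5 points to 2.5; getting below 1 point would take tens of thousands of interviews.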

Margin of Error

The page one type was huge:

LAZIO
BY A
NOSE

The race between Rick Lazio and Hillary Clinton for New York's senate seat was a national story. This headline in the New York Post gave Republicans heart in what was seen as a probable Clinton victory. But the headline was dead wrong.

Lazio had 47.4 percent of the respondents, and Clinton had 45.1 percent. But only 613 people were polled, which means the poll carried a margin of error of four percentage points, plus or minus. In other words, Lazio could have been ahead by an even bigger count, 51.4 percent to Clinton's 41.1 percent. That's Lazio's plus. But it could just as well have been a minus: Lazio 43.4 percent to Clinton's 49.1 percent.

In other words, the race, according to the poll, was at that stage a tossup.
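The tossup conclusion follows from the overlap of the two candidates' confidence intervals, which a short calculation confirms (a sketch, assuming a simple random sample of 613):

```python
import math

def interval(share, n, z=1.96):
    """95 percent confidence interval for a reported percentage."""
    moe = z * math.sqrt(0.5 * 0.5 / n) * 100
    return share - moe, share + moe

lazio = interval(47.4, 613)    # roughly 43.4 to 51.4
clinton = interval(45.1, 613)  # roughly 41.1 to 49.1

# The intervals overlap, so the poll cannot separate the candidates
print(lazio[0] < clinton[1] and clinton[0] < lazio[1])  # True
```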

What happened on election day? It was a Clinton runaway: Clinton, 55 percent; Lazio, 44 percent.

Exit Polls: The exit poll is based on interviews with voters as they emerge from polling places. For national campaigns, the AP and the networks use the Voter News Service. In the 2000 presidential race, exit polls called 8 of the 50 states incorrectly, a major reason for the network chaos in calling the presidential race.

A study of the election night performance found that because of non-responses and the increase in absentee balloting, exit polls are unreliable and should be scrapped.

The Answer Depends on the Question

During the last months of the Clinton administration, a budget surplus developed and the president suggested that two-thirds of it be set aside to fix the Social Security system. Others suggested using the money to support a tax cut. How did the public feel? It depends on how the polling question was phrased:

No. 1: Should the money be used for a tax cut, or should it be used to fund new government programs?

The response: 60 percent for a tax cut; 25 percent for new programs; 11 percent other purposes; 4 percent don't know.

No. 2: Should the money be used for a tax cut, or should it be spent on programs for education, the environment, health care, crime-fighting and military defense?

The response: 22 percent for a tax cut; 69 percent for new programs; 6 percent other purposes; 3 percent don't know.

Online Help for Interpreting Polls

For more help understanding polls, visit the Web site of the American Association for Public Opinion Research [www.aapor.org]. The site includes a page of resources just for journalists:

http://www.aapor.org/journalist_resources.asp

AAPOR's Web site also contains a seven-page primer on election polling [www.aapor.org/pdfs/varsource.pdf] by Cliff Zukin, the group's president-elect and a professor of public policy at Rutgers University.

Other online resources for reporting about polls include:

"20 Questions A Journalist Should Ask About Poll Results," by the National Council on Public Polls

http://www.ncpp.org/?q=node/4

News Reporting and Writing Online Learning Center
