As the calendar turns toward Election 2016, a thickening storm of pre-election polls already has begun, covering every possible angle of contests that remain almost a year away. Despite all the lavish attention, however, polls are only as valid as their design, execution and analysis.
The best polls are produced by independent, nonpartisan polling organizations with no vested interest in the outcome of the findings. These include organizations such as Gallup and the Pew Research Center, as well as media partnerships such as CBS News/New York Times, ABC News/Washington Post and NBC News/Wall Street Journal. Many surveys are conducted by partisan actors — political consulting firms, industry groups and candidates. In some cases, the findings are biased by factors such as respondent selection and question wording. Partisan-based polls need to be carefully scrutinized and, when possible, reported in comparison with nonpartisan poll results.
It’s important to remember that polls are a snapshot of opinion at a point in time. Despite 60 years of experience since Truman defied the polls and defeated Dewey in the 1948 presidential election, pollsters can still miss big: In the 2008 Democratic primary in New Hampshire, Barack Obama was pegged to win, but Hillary Clinton came out on top. A study in Public Opinion Quarterly found that “polling problems in New Hampshire in 2008 were not the exception, but the rule.” In a fluid political environment, it is risky to assume that polls can predict the distribution of opinion even a short time later.
Here are some polling concepts that journalists and students should be familiar with:
- In a public opinion poll, relatively few individuals — the sample — are interviewed to estimate the opinions of a larger population. The mathematical laws of probability dictate that if a sufficient number of individuals are chosen truly at random, their views will tend to be representative.
- A key for any poll is the sample size: as a general rule, the larger the sample, the smaller the sampling error. A properly drawn sample of 1,000 individuals has a sampling error of about plus or minus 3 percentage points at the standard 95% confidence level, which means that the proportions of the various opinions expressed by the people in the sample are likely to be within plus or minus 3 points of those of the whole population.
- In all scientific polls, respondents are chosen at random. Surveys with self-selected respondents — for example, people interviewed on the street or who just happen to participate in a web-based survey — are intrinsically unscientific.
- The form, wording and order of questions can significantly affect poll results. With some complex issues — the early debate over human embryonic stem cells, for example — pollsters have erroneously measured “nonopinions” or “nonattitudes,” as respondents had not thought through the issue and voiced an opinion only because a polling organization contacted them. Poll results in this case fluctuated wildly depending on the wording of the question.
- Generic ballot questions test the mood of voters prior to the election. Rather than mentioning candidates’ names, they ask whether the respondent would vote for a Republican or a Democrat if the election were held that day. While such questions can give a sense of where things stand overall, they miss how respondents feel about specific candidates and issues.
- Poll questions can be asked face-to-face or by telephone…
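The sample-size rule of thumb in the list above follows from the standard formula for the margin of error of a proportion under simple random sampling. A minimal sketch (the function name and the worst-case assumption p = 0.5 are illustrative, not from the article):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n.

    p=0.5 is the worst case (largest variance), which is why pollsters
    quote a single figure; z=1.96 is the 95%-confidence z-score.
    Assumes simple random sampling with no design effects.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 500, 1000, 2000):
    print(f"n={n:>4}: ±{margin_of_error(n):.1%}")
# n=1000 gives about ±3.1%, matching the "plus or minus 3 points" rule.
```

Note that the error shrinks with the square root of the sample size: quadrupling the sample only halves the margin of error, which is why polls rarely go far beyond 1,000 to 2,000 respondents.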
by Leighton Walter Kille | Last updated: November 10, 2015