Why Do Cell Phones Present A Problem For Pollsters

How Do Pollsters Know Their Telephone Surveys Accurately Capture U.S. Public Opinion?

Public Editor’s Note: Almost every day, in our Public Broadcasting Service Public Editor’s email box, we receive written questions and concerns from confused or misdirected audience members who believe we share a platform with National Public Radio. And, at NPR, the public editor’s office routinely nets commentary about PBS programming. Sometimes, though, the correspondence is an actual two-fer, hitting on collaborative news or programming presented by the two public media giants. Below is a column born of recent audience concern about the results of joint political telephone polling done for NPR and PBS.

The looming midterm elections have once again brought the quality of polling methods into focus. The New York Times chief political analyst Nate Cohn this week pointed out that Democratic candidates are seeing rosy poll numbers in the exact same places that the polls missed so badly in 2020. Are we looking at another mirage?

– Ricardo Sandoval-Palos, PBS Public Editor.

In the years since George Gallup introduced the presidential approval rating in 1938, a profusion of competing polls measuring the public’s opinion of the commander-in-chief’s job performance has come into being.

PBS, along with National Public Radio, uses the NPR/PBS NewsHour/Marist poll, conducted by Marist College in Poughkeepsie, NY.

Recently, we received an email message from a news consumer criticizing our reliance on telephone polling. While the complaint was not about the actual reporting of the poll findings, the headlines it generated spurred him to complain. One CNN banner read, “A brutal poll number for Joe Biden.” It appeared over an opinion piece by regular political commentator Chris Cillizza.

The email’s author, Dennis Stevens, who described himself as a consultant with a doctorate in education, said such headlines made things look more dire than they actually were.

“It is shameful,” Stevens wrote, “that a survey of 1,264 adults that were willing to answer a random phone call in 2022 became a national headline propagated as a conclusive representation of the sentiments of the entire American population.”

This is an important statement, especially now that all manner of media are busy conducting polls to measure public opinion amid a contentious political season, and ahead of a critical mid-term election this fall.

Headlines by their nature boil things down to a single statement. They are not meant to be the story. One mustn’t judge a book by its cover, and one mustn’t judge news of the day by reading headlines alone.

It is easy to see how a reader may have swiped past the CNN headline – perhaps it made them think of the clichéd man in a barrel riding the rapids and heading straight for the falls – and not read the article. Others surely read the essay and perhaps found that it tempered the drama given in the headline.

Stevens argued that “headlines and articles of this nature … are propaganda because the data is inconclusive.”

We disagree.

But his comments caused us to confront a question about polling methodology that has been simmering for a while:

Is it time to rethink polling methods to make them more attuned to new digital-age behaviors of voters?

Less ‘Cooperation’ These Days

“There’s a dirty little secret that we pollsters need to own up to,” wrote polling expert David Hill, president of Hill Research Consultants and a 2020 fellow at the University of Southern California’s Dornsife Center for the Political Future, in The Washington Post in 2020. “People don’t talk to us anymore, and it’s making polling less reliable.”

Now, that is a strong statement.

The PBS Public Editor’s office caught up with Hill to discuss the email we’d received from Stevens.

“There’s a whole class of Americans who don’t answer calls with caller IDs that they don’t know,” Hill said. “If we’re honest … this is all a complete mess today because we can’t really have a true random sample anymore because we can’t get a random sample or anything close to that to cooperate.”

This partially mirrored what Stevens told us. “My opinion is that polling data is generally useful,” Stevens wrote in a followup message to PBS. “I just have concerns about who answers phone calls in 2022.” Poll data “does not account that most people do not pick up the phone anymore,” he added. “Who does? And, why do these people get to represent the sum of American sentiment?”

Some Americans are very picky about which calls they answer, and those who are not selective tend to have different traits than those who are, a skew the polling institution needs to correct, Hill said. People who pick up any and all phone calls tend to be older, more conservative, less wealthy and less educated, he said. So if you rely only on the opinions you gather from people who always answer their phones, you fail to capture younger, more affluent, progressive and educated voters, he said.

But that’s not how it works in real life, countered Barbara Carvalho, director of the Marist College Poll. “Attempts to reach people are deliberate and the surveyors’ calls are identified as being from a research entity,” she said. They do not say “unidentified,” “unavailable” or “unknown.” And pollsters follow up, she added.

“It is not just one contact to a phone number,” Carvalho said. “Messages are left and numbers are re-tried. In addition, we are registered as a research organization with providers to minimize our calls being treated or flagged as spam by call blocking technology.”

An Explosion of Telephone Numbers — and Cost

The fast-escalating cost of telephone surveying has led to the proliferation of many non-probability methods of polling, Carvalho said, describing them as “cheaper but less dependable.”

Marist – and other polls that use probability sampling – must be able to say that, under their calling procedures, every adult in the United States has a known chance of being selected. To uphold that standard, Carvalho said, poll workers must dial every phone number in a random sample, whether it turns out to be a landline, a cell phone, or even a line used by a security camera, an apartment building’s intercom system, a fax machine or a recorded message. Because of laws restricting the use of robocalls, pollsters must dial each number by hand. In the last 20 years, she said, there has been an explosion in phone numbers, and the cost has skyrocketed.

The Associated Press guidelines, to which PBS NewsHour adheres, use the known probability standard — the one Marist uses. Polls that rely on a random sample of a population in which every member has a known probability of inclusion, “are known as probability-based polls, and they are the best method of ensuring the results of a survey reflect the true opinion of the group being surveyed,” the AP Stylebook 2022 edition states.

The AP, and Marist, recognize that a majority of American adults do not have a landline. Polls that are “conducted by telephone must include people interviewed on their cellphones,” according to the Stylebook. “Those that only include landline interviews have no chance of reaching the more than half of American adults who have only a mobile phone.”

But Hill said that with the spread of cell phones has come an erosion of standards of true probability. “We have slowly but surely abandoned many rules of the road when it comes to accepted principles of polling that attempt to maintain some linkage to probability theory,” he said. He argued that a call being identified as being from a research organization or a university, as Carvalho cited, is no help.

People see a call that is identified as a “research organization” or a university or possibly “anonymous” or “unavailable” and they don’t take the call. “Only a university person would assume that a caller ID mentioning research or a university would be a universal positive,” he said. “For some, it may be a negative and cause them not to pick up the phone. And I’ve been a university person, so I’m not trying to be hard on universities, but sometimes they don’t know what they don’t know.”

Sampling the Soup

If you do it right, probability sampling allows you to make conclusions about a lot of people using a small sample. “If you’ve ever tasted something you’re cooking, you’ve employed probability sampling,” Carvalho said. “You didn’t have to eat the whole pot of soup to know what it was, assess what was in it, or decide if ‘it needs just a little more of something else.’”

There are four standard techniques for gathering a random sample: simple random, systematic, stratified and cluster sampling. A sample is “random” if every subject in the target population has a known chance – in the simplest case, an equal chance – of being selected. If that sounds familiar, it is the definition Marist and the AP cite, and it is not the controversial part of polling.
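That equal-chance property is easy to check in miniature. The sketch below is a toy simulation of my own, not any pollster’s actual methodology: it repeatedly draws simple random samples and confirms that every subject’s inclusion rate settles near n/N.

```python
import random
from collections import Counter

# Draw many simple random samples of 5 from a population of 100 and
# verify each subject is included about equally often: n/N = 5/100 = 5%.
population = list(range(100))
counts = Counter()
trials = 20_000
rng = random.Random(42)

for _ in range(trials):
    counts.update(rng.sample(population, 5))

inclusion_rates = [counts[s] / trials for s in population]
# Every rate lands close to 0.05 - the hallmark of an equal-chance sample.
```

Non-probability polls fail exactly this check: some subjects’ chance of inclusion is unknown, or zero.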

Methodology is the controversial part, and no poll is perfect. That is why polls include a margin of error, and reporters must include them in their stories. “The margin of sampling error for all respondents is plus or minus 3.7 percentage points,” is the example given in the AP Stylebook.
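The arithmetic behind such figures is standard. Here is a minimal sketch – my own illustration, not AP or Marist code – of the 95 percent margin of sampling error for a simple random sample. Applied to the 1,264 respondents Stevens cited, it gives roughly 2.8 points; published figures often run somewhat higher because weighting adjustments widen the interval.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error for a simple random sample of size n.

    Uses the worst case p = 0.5; published polls often report a larger
    number because weighting adjustments (the design effect) widen it.
    """
    return z * math.sqrt(p * (1 - p) / n)

# The 1,264 adults in the poll Stevens cited:
moe = 100 * margin_of_error(1264)  # ~2.8 percentage points
```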

Sampling error is not the only source of polling errors, the AP Stylebook points out, “but it is one that can be quantified. Other methodology errors could include the wording of questions or the effectiveness of interviewers.”

Hill’s firm uses various stratifications – one of the four standard techniques of random sampling – to get the right mix. An example of a stratified set that Hill cited was: age, gender, geographic location, and party affiliation.
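As a rough illustration of the idea – a sketch with made-up strata, not Hill’s actual procedure – proportionate stratification splits the calling frame by a trait such as age group and samples each stratum in proportion to its share of the whole:

```python
import random
from collections import defaultdict

def stratified_sample(frame, key, n, seed=0):
    """Proportionate stratified sample: draw from each stratum in
    proportion to its share of the frame. A sketch only; real polls
    also weight results to correct for nonresponse."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for subject in frame:
        strata[key(subject)].append(subject)
    sample = []
    for members in strata.values():
        take = round(n * len(members) / len(frame))
        sample.extend(rng.sample(members, min(take, len(members))))
    return sample

# Hypothetical frame: 300 younger adults, 600 older ones.
frame = [{"id": i, "age": "18-34" if i % 3 == 0 else "35+"} for i in range(900)]
picked = stratified_sample(frame, key=lambda s: s["age"], n=90)
# picked keeps the frame's 1:2 age mix: 30 from "18-34", 60 from "35+".
```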

‘Polls of the Willing’

Hill acknowledged that while surveyors try various ways of correcting for sampling errors, call screening – a shift in people’s behavior as they respond to the technology available to them – is a deep, looming problem for pollsters, including his own firm. “The incidence of people cooperating has dropped so low that polls are now ‘polls of the willing,’” he said. “They’re not random samples. It’s the dirty little secret that the cooperation rate has collapsed.”

“This is a very sensitive subject,” said Hill, whose namesake firm was founded in 1988, according to its website. “I’m at the end of my career so I can be more candid.”

The Margin of Error – Cite it!

Whenever the term “margin of error” appears in a poll story, it refers to sampling error alone.

Perhaps the single most important way to mitigate the effects of overreliance on polling (of any sort) is to temper audience members’ expectations of polls’ reliability (the larger the margin of error, the less precise the result) and of what they really mean. Polls cannot predict the future, for instance, and should never purport to.

Every reputable poll cites a margin of error, expressed in terms of percentage points.

No matter whose practices are best, polls never have been, nor will they ever be perfect, and AP guidelines are clear on what reporters must do, in these two points:

  • “When writing or producing stories that cite survey results, take care not to overstate the accuracy of the poll. Even a perfectly executed poll does not guarantee perfectly accurate results.
  • “It is possible to calculate the potential error of a poll of a random sample of a population, and that detail must be included in a story about a poll’s results.” (Emphasis added.)

Does all of this mean that it is okay for PBS to rely on telephone-based polling?

PBS should (and does) rely on the highest standard of methodology that is accepted in the polling industry. To the extent the polling industry starts to question itself, PBS should pay attention. Methodologies will no doubt evolve – they have to. But this does not mean PBS or any other news organization should suddenly abandon current and accepted practices: Science takes time. It takes time for a community, an industry, a profession to arrive at new statistical conventions. A sudden departure from current practice is not the way to go; what is important now is paying attention to new trends, new ideas and, above all, the data.

______________________

Daniel Macy is a Senior Associate in the office of the Public Editor at PBS. He has been a Washington, DC-based journalist since 2007.
