
In this file photo, Chicago Mayor-elect Brandon Johnson participates in a public safety forum back in March, when most polls showed him trailing Paul Vallas heading into the April 4 runoff for mayor.

Teresa Crawford

What was up with mayoral polling in Chicago's runoff election?

Although Mayor-elect Brandon Johnson won Chicago’s mayoral runoff election with 52% of the vote, most election polls on the candidates, sometimes referred to as “horse-race” polls, indicated that more Chicagoans favored former Chicago Public Schools CEO Paul Vallas.

WBEZ reviewed mayoral polls released prior to both the February general and April runoff elections, contacted several of the companies and organizations that conducted those polls, and talked to experts to understand what polling can and can’t tell us.

Why the polls were off

Most polls rely on random sampling to find potential respondents, which in principle yields a representative sample of the population. But declining response rates to traditional polling methods have made it increasingly difficult for pollsters to reach a representative sample, especially because some groups are more responsive to polls than others.

Younger voters, Black voters, Latino voters and low-income voters are more difficult to reach and have voting patterns that are harder to predict than white or wealthier voters, said Trevor Tompson, senior vice president of public affairs and media research at NORC, an independent research institution affiliated with the University of Chicago.

Those harder-to-poll voters were a key part of Johnson’s base in the runoff and were likely undersampled, collectively, compared to Vallas supporters in the polls leading up to the runoff election.

“When you look at all these polls put together, it does seem that they all sort of tended to overstate the support that Vallas would get in the election over Johnson, and it probably has to do with the demographic makeup of the bases of the two different candidates,” said Tompson. “Johnson’s voters, in particular, are also those kinds of voters I mentioned are harder to get to respond to surveys.”

The winning weights

Rigorous pollsters will go the extra mile to oversample for hard-to-reach groups. But even then, samples will not be perfectly representative, so most pollsters employ various statistical weighting techniques based on different demographic traits (e.g., race, age, gender, etc.) to fit their sample demographics to the target population’s demographics.
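The adjustment described above can be sketched in a few lines. This is a simplified illustration with made-up group names and numbers, not the method of any pollster cited in this story:

```python
# Sketch of post-stratification weighting (hypothetical numbers).
# Each respondent is weighted by the ratio of their group's share of the
# target population to the group's share of the raw sample.

sample_counts = {"group_a": 600, "group_b": 250, "group_c": 150}         # raw respondents
population_shares = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}  # assumed targets

n = sum(sample_counts.values())
weights = {g: population_shares[g] / (sample_counts[g] / n) for g in sample_counts}

# group_b makes up 25% of the sample but an assumed 30% of the population,
# so each of its respondents is counted 1.2 times; the oversampled group_a
# gets a weight below 1.
print(weights)
```

After weighting, the groups’ effective sizes match the assumed population shares, which is the goal; the catch, as the next paragraphs explain, is that for an election no one knows the true target shares in advance.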

But what are the demographics of an election pollster’s target population? That’s anyone’s best guess, because an electorate — the group of registered voters who will actually cast a ballot in an election — is a population that doesn’t yet exist. It’s a fact that makes election polling very tricky.

Pollsters need to make several assumptions about who will cast a ballot in an election. They use a mix of methods to narrow their sample to “likely voters,” from screening out respondents who say they are not likely to vote to using more complex voter turnout modeling techniques.

“This is something that we have to make sure that we get right, because if you have a slightly different model of likely voters that has a lot more white voters in it and not enough Black voters, you’ll get to a different answer of who’s going to win the election,” said Matt A. Barreto, co-founder of the polling firm BSP Research and professor of political science at UCLA. BSP Research was the pollster behind a Northwestern University poll released March 28 that had Johnson and Vallas tied, with both at 44% and 12% undecided. The poll was funded by Northwestern University’s Center for the Study of Diversity and Democracy, Hispanic Federation, Illinois Black Advocacy Initiative, Latino Policy Forum and the Latino Victory Project.
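As a toy example of the simpler end of that spectrum, a likely-voter screen based on self-reported intent and past turnout might look like the following. The cutoffs and field names are invented for illustration; real likely-voter models are far more involved:

```python
# Hypothetical likely-voter screen (illustrative cutoffs, not a real model).
# Keep respondents who say they will almost certainly vote, or who report a
# fairly high chance of voting and also turned out in a recent election.

respondents = [
    {"chance_of_voting": 10, "voted_in_last_primary": True},   # kept
    {"chance_of_voting": 9,  "voted_in_last_primary": False},  # kept
    {"chance_of_voting": 5,  "voted_in_last_primary": True},   # screened out
]

def is_likely_voter(r):
    if r["chance_of_voting"] >= 9:   # strong self-report alone is enough
        return True
    # otherwise require both a decent self-report and a turnout history
    return r["chance_of_voting"] >= 7 and r["voted_in_last_primary"]

likely = [r for r in respondents if is_likely_voter(r)]
```

Even small changes to those cutoffs change who is counted in the electorate, which is exactly the modeling choice Barreto describes.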

In addition, there’s no guarantee that a pollster’s target electorate, and the respondents they’ve reached, are actually representative of the candidate preferences of the broader public.

“Let’s say [a poll has] a percentage of Black respondents that we think is sort of commensurate with who we think the electorate is going to look like. There’s no guarantee that those Black voters who we’ve reached in the survey are representative of Black voters, in general,” said David Doherty, a professor of political science at Loyola University.

One example of how differences in modeling can lead to different results is a 2016 experiment where the New York Times gave four pollsters the same raw interview data from a New York Times Upshot/Siena College poll of 867 likely voters conducted during the 2016 presidential election. But each pollster made different decisions in adjusting the sample and identifying likely voters, resulting in four very different margins of victory for then-presidential candidates Hillary Clinton and Donald Trump.

Which polls can I trust?

Determining whether a poll is trustworthy is often not easy, and troublesome polls can be hard to spot. For instance, some polls aren’t truly polls at all but “push polls”: negative campaigning disguised as a public opinion poll and designed to push voters toward or away from a certain candidate.

And there are also issues like “herding,” when pollsters adjust their results to match what existing polls show.

Experts say a lack of transparency itself is a key reason to be skeptical of a poll, independent of the poll’s methodology.

“If there’s open disclosure, part of what that means is that the polling firm feels confident enough that they’re making reasonable choices that will withstand scrutiny from other pollsters [and] other experts in the field,” said Doherty.

But oftentimes polls, and the news reports of poll results, don’t share all the information recommended as industry best practices, such as those established by the American Association for Public Opinion Research (AAPOR).

“Unfortunately, there’s still a lot of companies and a lot of media organizations that don’t do what they really should be doing in terms of disclosure,” said Tompson, who has served as a member of the professional standards committee of AAPOR.

The following are key details experts recommend looking for in a poll’s release:

  • Who sponsored and/or conducted the poll: Who is paying for the poll? Does the pollster have a political leaning? Are they a reputable organization that has a track record?

  • How the sample was recruited: Was the sample recruited with random digit dialing or randomly selected from a third-party online panel of previously vetted participants? A combination of both?

  • How the data were collected: Were the data collected via live calls or robocalls to landlines, cellphones, a link in a text message, or a combination of methods?

  • Dates the data were collected

  • Poll questionnaire: Does it include the exact wording of the questions asked in the poll?

  • Sample size and population: How many people were polled, and was it a poll of likely voters or of all registered voters?

  • Margin of error: Experts say the real margin of error is likely double the reported margin of sampling error.

  • Sample demographics: Weighting techniques are not perfect, and knowing the number of people that responded by race, education level, age and gender can help determine how representative the underlying sample was before adjustments.

  • Likely voter model: What criteria, such as demographics or historical voter turnout, were used to weight the sample? How were likely voters determined?
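The reported figure behind the margin-of-error bullet above is the margin of sampling error for a proportion, which at a 95% confidence level can be computed as follows. The 867-respondent sample size is borrowed from the Upshot/Siena example earlier in this story; the 44% figure is illustrative:

```python
import math

# Margin of sampling error for a proportion at 95% confidence (z = 1.96).
# This covers sampling error only; coverage, nonresponse and weighting
# errors, which experts say roughly double the total, are not included.

def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# A candidate at 44% in a poll of 867 likely voters:
moe = margin_of_error(0.44, 867)   # about 0.033, i.e. plus or minus 3.3 points
```

A consequence worth noting: when two candidates sit within roughly twice this figure of each other, as in the runoff polls above, the result is better read as a statistical tie than as a lead.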

Additionally, pollsters may themselves be partisan and work on behalf of only Democratic or Republican clients, or they may conduct “internal” polls, which are commissioned by candidates to get a sense of where they stand with potential voters.

Sometimes, these “internal” polls are released to the public, although it’s not often easy to identify such polls. A FiveThirtyEight analysis of publicly released “internal” polls in congressional elections from 1998 to 2014 found they tended to overstate the leads held by their candidate or party by an average of four to five percentage points.

Doherty says that, on average, polls fielded by an academic institution or sponsored by a media organization, working with a polling firm that serves both Democratic and Republican clients, are likely to be higher quality than polls fielded by a campaign. But there’s really “no magic bullet” and no one type of pollster or poll that is reliably more accurate than another.

What can the polls really tell us?

Experts say that polls should be viewed as an approximate snapshot in time of public opinion and not exact predictions of the outcome of an election.

With the mayoral runoff polls showing that Johnson and Vallas were within single digits of one another and the percentage of undecided voters in the double digits, Tompson said there wasn’t enough there for voters to expect a clear outcome in the race.

“The correct interpretation for just about all of these polls is that the race was very close and that there’s no one clearly ahead,” said Tompson.

Even ahead of the February general election, which featured a crowded field of nine candidates, Tompson said the polls could suggest what “pack of candidates are ahead of the others, but that’s about all that these polls are really useful in telling you.”

In early February, a WBEZ/Chicago Sun-Times/Telemundo Chicago/NBC5 Poll had Mayor Lori Lightfoot, Congressman Jesus “Chuy” García and Vallas leading the pack and essentially locked in a statistical dead heat. All within the poll’s margin of error, García led with 20%, followed by Vallas with 18% and Lightfoot with 17%. Businessman Willie Wilson and Johnson trailed closely with 12% and 11%, respectively.

There’s no clear consensus on whether “horse-race” election polls provide a public benefit; it depends on whom you ask. But the experts WBEZ interviewed agree that public opinion polls about issues, rather than candidates, are the most valuable, because they’re one of the best ways to learn what people want from their government.

Perhaps more than any other method, issue polls provide a “better sense of what the public thinks about the issues of the day,” said Doherty. “Are they imperfect? Yes, they are imperfect. But do we have a better way of figuring out, say, whether the residents of Chicago support reforms to policing in the city?”

Amy Qin is a data reporter for WBEZ. Follow her at @amyqin12.
