Polling Quality Guide: Key Metrics to Evaluate Any Poll
| Metric | Low Quality | Medium Quality | High Quality | Red Flag |
|---|---|---|---|---|
| Sample size (n) | <400 | 400-799 | 800+ | <200 (statistically useless) |
| Methodology | Online opt-in only | Online panel | Live phone + online | No methodology disclosed |
| Likely voter (LV) screen | None | Self-reported filter | Validated vote history | Claims 'all adults' for horse-race |
| Sponsor | Campaign internal | Media/advocacy | Independent pollster | No sponsor disclosed |
| Field dates | 1 day | 2-3 days | 4-7 days | Weeks-old data presented as current |
| Crosstabs | None provided | Partial | Full release | Selective crosstab cherry-picking |
| Weighting | Age/sex only | + education | Full demographics + past vote | No weighting described |
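Where the rubric needs to be applied at scale, it can be encoded directly. The sketch below is a minimal illustration with hand-picked tiers and point values; the field names (`sample_size`, `lv_screen`, and so on) are hypothetical, not drawn from any polling organization's standard:

```python
# Minimal poll-quality screen based on the rubric above.
# All field names, tiers, and point values are assumptions for this sketch.

def score_poll(poll: dict) -> str:
    """Classify a poll as 'red flag', 'low', 'medium', or 'high' quality."""
    n = poll.get("sample_size", 0)
    # Red flags from the rubric: tiny sample, or undisclosed methodology/sponsor.
    if n < 200 or not poll.get("methodology") or not poll.get("sponsor"):
        return "red flag"
    points = 2 if n >= 800 else (1 if n >= 400 else 0)
    points += {"online opt-in": 0, "online panel": 1, "live phone + online": 2}.get(
        poll["methodology"], 0)
    points += {"none": 0, "self-reported": 1, "validated": 2}.get(
        poll.get("lv_screen", "none"), 0)
    if points >= 5:
        return "high"
    return "medium" if points >= 3 else "low"

poll = {"sample_size": 1067, "methodology": "live phone + online",
        "lv_screen": "validated", "sponsor": "independent"}
print(score_poll(poll))  # -> high
```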
The Margin of Error Is Not a Buffer: What MOE Actually Means
Margin of error is one of the most frequently misunderstood concepts in political polling. When a poll shows Candidate A at 48% and Candidate B at 44% with a stated margin of error of plus or minus 3 points at 95% confidence, many readers assume the true result lies somewhere between 45% and 51% for Candidate A. That is correct for the single candidate, but the margin of error on the spread between two candidates in the same poll is nearly twice the individual margin, because the two shares are negatively correlated: a respondent who moves toward Candidate A is usually moving away from Candidate B. (The familiar square-root-of-2 rule applies only when comparing estimates from two independent polls.) In this example the margin of error on the 4-point lead is close to 6 points, so the lead is statistically indistinguishable from a tie. The 95% confidence level means that if the same survey were repeated 100 times, roughly 95 of the resulting estimates would fall within the margin of error of the true population value, and about 5 would not.

Surveys of registered voters tend to show Democratic candidates performing roughly 2 points better than surveys of likely voters, because the pool of people who actually vote in any given election skews older, more educated, and somewhat more Republican than the full pool of registered voters. This LV/RV gap is not constant: it was smaller in 2020, a high-turnout presidential election, and larger in 2022, a more typical midterm. Pollsters who build likely voter screens from validated voting history produce more accurate results than those relying on simple self-reported screens, but validated vote files require expensive data licensing that smaller polling firms may not be able to afford.
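A short calculation makes the spread arithmetic concrete. This is a sketch under simple-random-sampling assumptions (n = 1,067 is roughly the sample size that yields a 3-point margin of error at 95% confidence); real polls carry design effects from weighting that inflate these figures:

```python
import math

Z = 1.96  # z-score for 95% confidence

def moe_single(p: float, n: int) -> float:
    """Margin of error for a single candidate's share, simple random sample."""
    return Z * math.sqrt(p * (1 - p) / n)

def moe_lead(p_a: float, p_b: float, n: int) -> float:
    """Margin of error for the lead (p_a - p_b) within the SAME poll.

    The two shares are negatively correlated (no respondent picks both), so
    Var(p_a - p_b) = [p_a(1 - p_a) + p_b(1 - p_b) + 2 * p_a * p_b] / n,
    which works out to nearly double the single-candidate margin.
    """
    var = (p_a * (1 - p_a) + p_b * (1 - p_b) + 2 * p_a * p_b) / n
    return Z * math.sqrt(var)

n = 1067  # sample size giving about a 3-point MOE at p = 0.5
print(f"{moe_single(0.50, n) * 100:.1f}")      # 3.0 points for one candidate
print(f"{moe_lead(0.48, 0.44, n) * 100:.1f}")  # 5.7 points for the 4-point lead
```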
Herding, Mode Effects, and Why Aggregates Beat Single Polls
Two of the most important systematic biases in modern polling are herding and mode effects. Herding occurs when pollsters look at other published polls and, consciously or not, adjust their methodological choices to avoid being the outlier. The result is that published polls cluster more tightly than the true underlying uncertainty would warrant, creating false confidence in the consensus. Analysts identified significant evidence of herding in the 2020 and 2022 cycles, when final polling averages were substantially off in several states.

Mode effects refer to the way the medium of data collection shapes responses: live phone interviews produce systematically different results from automated phone calls (IVR), which differ from online panels, which differ from text-message surveys. Live phone interviews generally produce slightly more Democratic-leaning results because Republicans are more likely to hang up on unknown callers, while opt-in online panels can lean Republican because of selection bias among participants. A properly constructed polling aggregate, the kind produced by 538, RealClearPolitics, or The Economist's model, weights polls by quality, recency, and the historical accuracy of each pollster, substantially reducing the impact of any single outlier. The track record is clear: in 2022 and 2024, quality polling aggregates outperformed any single poll in virtually every competitive race. Readers who cite individual polls, especially partisan internals, to make confident electoral predictions are misusing the data.
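To show the general shape of such an aggregate, here is a minimal sketch assuming an exponential recency decay and a 0-to-1 pollster quality rating. Both the half-life and the ratings are invented parameters for illustration; real models such as 538's layer house-effect and mode corrections on top:

```python
import math

def weighted_average(polls, half_life_days=14.0):
    """Quality- and recency-weighted average of poll margins.

    polls: list of (margin_pct, age_days, quality_rating) tuples, where
    quality_rating is an assumed 0-to-1 score for the pollster.
    """
    num = den = 0.0
    for margin, age_days, rating in polls:
        # Halve a poll's influence for every `half_life_days` days of age.
        weight = rating * math.exp(-math.log(2) * age_days / half_life_days)
        num += weight * margin
        den += weight
    return num / den

polls = [
    (+4.0, 2, 0.9),   # recent, high-rated live-phone/online poll
    (+1.0, 10, 0.6),  # week-old online panel
    (+9.0, 25, 0.3),  # stale, low-rated partisan internal (the outlier)
]
print(f"{weighted_average(polls):+.1f}")  # +3.5: the outlier barely moves it
```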
What This Means for 2026
For consumers of political polling data heading into 2026, the golden rules are: always look at the aggregate, not the single poll; check whether the survey covers likely voters or registered voters; identify the sponsor; verify the sample size and methodology; and remember that even a perfect poll is a snapshot, not a prediction. When a poll diverges sharply from the aggregate, assume the aggregate is more likely to be correct.