SnoutCounter is a poll aggregation site compiling polling averages on figure approval, favorability, and electoral intent. It currently tracks presidential approval (including approval on specific issues) and the generic ballot. I also plan to use this site to host other poll aggregates, as well as predictive models for elections and potentially other statistical modeling projects unrelated to politics.
Methodology
SnoutCounter aggregates polling data via a weighted average. Every scientific poll from a professional pollster in the relevant dataset is included in the aggregation, with the exception of polls conducted by :banned pollsters. All polls are collected manually; I generally check the Silver Bulletin, FiftyPlusOne, The New York Times, and Polling USA for anything I missed. When a poll has surveyed two samples of different population types (for example, one likely voter sample and one registered voter sample), we use the results of only one of those samples. For the various presidential approval averages, "all adult" samples are preferred to registered voter samples, and registered voter samples are preferred to likely voter samples. When measuring job approval among registered voters, polls that draw from a sample of all adults but include crosstab results for registered voters are included, using the registered voter crosstab results. For the generic ballot, likely voter samples are preferred to registered voter samples, which are preferred to "all adult" samples. Tracking polls in our dataset are dynamically selected and weeded out such that all tracking polls from the same pollster are non-overlapping in fielding dates; I always include the most recent tracking poll from each pollster. Our weights are determined by the following four factors:
- Sample size: Polls with larger sample sizes are more likely to accurately estimate the population parameter in question, and generally carry less uncertainty than polls with smaller samples. However, sample size is subject to diminishing returns - a poll with a sample size of 10,000 won't be much more accurate than one with a sample size of 5000. I cap sample size at 5000 to prevent polls with particularly large sample sizes from dominating the averages. For generic ballot polls, I additionally :winsorize the sample sizes to counter the effect of extreme outlier sample sizes on the sample size weight. The weight function I use is the square root of the sample size divided by the square root of the median sample size of all polls in the dataset - this is similar to the function used by :538 for their averages before they shut down.
- Pollster rating: Not all pollsters are created equal; some are more reliable in producing accurate and precise results than others. I use the Silver Bulletin's pollster ratings, specifically the predictive plus-minus, a measure of how accurate a pollster is expected to be, as the input for the pollster rating weight function. For predictive plus-minus, lower is better. Pollsters with a predictive plus-minus above 1 are assigned a flat weight of 0.2, as are pollsters without a predictive plus-minus rating from Silver Bulletin. All other pollsters are assigned a weight according to a square root function of their rating.
- Time since poll was conducted: Of course, polls conducted more recently are more likely to reflect the current state of public opinion. I utilize an exponential decay function as the recency weight, with the decay being more aggressive for more frequently polled topics (e.g. general approval, versus issue-specific approval).
- Multiple polls in short window: If there are multiple polls from the same pollster and sponsor in a two-week window, each poll is downweighted based on the number of polls from that pollster/sponsor in this window. This is to ensure that one pollster/sponsor doesn't dominate the averages simply through frequency, and to counter pollsters who attempt to "flood the zone." The formula utilized for this is borrowed from Strength In Numbers' polling averages; it is calculated as 1 over the square root of the number of polls from the pollster/sponsor pair within a two-week window.
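As a rough sketch, the four factor weights above might look like the following. The decay rate and the exact square-root mapping for pollster ratings are my own placeholder assumptions - SnoutCounter's actual constants aren't published here.

```python
import math

def sample_size_weight(n, median_n, cap=5000):
    """Square root of (capped) sample size over square root of the median."""
    return math.sqrt(min(n, cap)) / math.sqrt(median_n)

def pollster_weight(ppm):
    """Weight from Silver Bulletin predictive plus-minus (lower is better).
    The exact square-root mapping isn't specified; sqrt(1 - ppm) is a
    placeholder. Unrated pollsters and those above 1 get a flat 0.2."""
    if ppm is None or ppm > 1:
        return 0.2
    return math.sqrt(1 - ppm)  # assumed functional form

def recency_weight(days_old, decay=0.05):
    """Exponential decay; the rate varies by topic (0.05/day is assumed)."""
    return math.exp(-decay * days_old)

def frequency_weight(k):
    """1/sqrt(k) for k polls from the same pollster/sponsor pair
    within a two-week window (per Strength In Numbers)."""
    return 1 / math.sqrt(k)

# Example: a 2,000-person poll from a well-rated pollster, fielded 3 days
# ago, the second poll from this pollster/sponsor pair in two weeks.
w = (sample_size_weight(2000, median_n=1000)
     * pollster_weight(-0.2)
     * recency_weight(3)
     * frequency_weight(2))
```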
All these weights are combined into a final weight, calculated as the product of these four weights. The weights are normalized to sum to 1. This is then used to calculate a weighted average for each rating or variable we are trying to measure, and for each day. The weighted standard deviation is also calculated for each day and used to determine :confidence intervals.
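The combination step - normalize the final weights, take a weighted mean, and compute a weighted standard deviation - can be sketched as below. The 1.96 multiplier for the interval is my assumption; the text only says the weighted standard deviation is used to determine confidence intervals.

```python
import math

approvals = [44.0, 47.0, 45.5, 43.0]  # hypothetical poll results
weights   = [1.2, 0.8, 1.5, 0.5]      # product of the four factor weights

total = sum(weights)
norm = [w / total for w in weights]   # normalize weights to sum to 1

# weighted average for the day
mean = sum(w * x for w, x in zip(norm, approvals))

# weighted standard deviation, used to build the interval
var = sum(w * (x - mean) ** 2 for w, x in zip(norm, approvals))
std = math.sqrt(var)

# assumed 95% interval from the weighted standard deviation
ci_low, ci_high = mean - 1.96 * std, mean + 1.96 * std
```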
In addition to these weights, I apply four different adjustments to each poll result, each of them calculated by a :mixed effects model. These adjustments are calculated relative to the average calculated from the previous steps. The adjustments are as follows:
- House effect adjustment: Many pollsters have unique biases stemming from differences in fielding, weighting, question wording, etc., that may not be captured by the other adjustments. Thus, I calculate the "house effect" of each pollster. Like all other adjustments, house effects are calculated relative to the weighted average of all polls, rather than relative to election results - this is because systematic, industry-wide polling bias (and, by corollary, the bias of any given pollster relative to actual results) is not predictable and changes from cycle to cycle.
- Mode adjustment: How a pollster chooses to field a survey will often have a significant effect on its results, as different fielding methods reach different audiences and different types of people. For example, online panels often reach a younger demographic compared to live phone surveys, and probability panels often produce significantly different results compared to non-probability-based methods. I correct for this by introducing an adjustment for methodology.
- Population adjustment: Each polling average is trying to measure approval or horse-race standings for a specific population, but often we will have multiple surveys polling different populations for the same measurable variable. To address this, I calculate three population adjustments - likely voter, registered voter, and all adult sample adjustments - and adjust the results of polls surveying different populations towards the population that the average is trying to measure. For general and issue-specific presidential approval, the average attempts to measure approval among all American adults, while for approval among registered voters, the average attempts to measure approval among, well, registered voters. For generic ballot, we measure electoral preferences among registered voters until Labor Day, at which point we start measuring electoral preferences among likely voters. We measure registered voter preferences prior to Labor Day as likely voter samples may not be representative of the actual electorate long before the election. Thus, the model adjusts results towards the population being measured in each of these cases.
- Partisanship adjustment: Some pollsters are very explicitly partisan in nature, often working with certain political parties or groups affiliated with political parties, consistently polling for candidates of one party, and/or being funded by a certain party or partisan group. For these pollsters, a partisanship adjustment is applied to correct for potential biases introduced by strong partisan affiliation. As an additional measure to counteract partisanship, I employ a partisanship downweight - all else held equal, a poll conducted by a partisan pollster is given 70% of the weight of a poll conducted by a non-partisan pollster.
These adjustments are summed to get the total adjustment for each poll, which is then added to measured approvals/disapprovals or two-party horse race estimates. After these adjustments are applied, the weighted average and confidence intervals are recalculated to get the final average for each rating/variable.
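To illustrate the direction of these corrections, here is a deliberately simplified house-effect calculation: each pollster's effect is its average deviation from the overall average, and the adjustment subtracts that deviation back out. SnoutCounter actually estimates all four adjustments jointly with a mixed effects model; this unweighted, fixed-effects toy version is only a sketch of the idea.

```python
polls = [
    # (pollster, approval) - hypothetical data
    ("A", 46.0), ("A", 45.0),
    ("B", 42.0), ("B", 41.0),
    ("C", 44.0),
]

# overall average (unweighted here for simplicity)
overall = sum(x for _, x in polls) / len(polls)

# house effect = pollster's mean residual relative to the overall average
residuals = {}
for name, x in polls:
    residuals.setdefault(name, []).append(x - overall)
house_effect = {name: sum(r) / len(r) for name, r in residuals.items()}

# the adjustment subtracts each pollster's house effect from its polls
adjusted = [(name, x - house_effect[name]) for name, x in polls]
```

After adjustment, each pollster's polls center on the overall average rather than on that pollster's own lean, which is the intended effect of the correction.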
Updates
- February 2, 2026: Modified mixed effects model used for calculating adjustments. Instead of calculating and applying adjustments to the net/spread, adjustments are calculated for and applied to each individual target feature (e.g. for generic ballot, Democrats and Republicans each have an individual adjustment).
- January 20, 2026: Fixed bug in model which caused different population screens of the same poll to not be chosen properly.
- January 16, 2026: Added new, updated generic ballot averages, with new methodology (as part of the methodological overhaul).
- January 14, 2026: Updated pollster ratings with new 2026 ratings, as per Silver Bulletin.
- January 8, 2026: Added partisanship downweight.
- January 4, 2026: Completely overhauled the methodology for polling averages.
:Click to expand for a log of updates prior to the methodological overhaul conducted in January 2026.
Data Download
You can download the polling data used in SnoutCounter averages at the links below.
Acknowledgments & Support
Data
While the polling data is collected manually, the datasets collected by Silver Bulletin, FiftyPlusOne, The New York Times, RealClearPolitics, and Polling USA have made this work significantly easier for me. Pollster ratings from the Silver Bulletin.
Site Design Tools
Nutshell by Nicky Case.
California Gothic font by Matt Lag.
IBM Plex Sans font can be found at Google Fonts.
Support
If you want to, you can throw money at me on Ko-Fi.
:x banned pollsters
Some polling outfits are excluded from SnoutCounter's aggregation. The following pollsters are banned for methodological misconduct, lack of methodological transparency, and/or other methodological issues.
- Rasmussen Reports
- Trafalgar Group
- TIPP Insights
- ActiVote
Additionally, the following pollsters are banned for having received an 'F' quality rating from the Silver Bulletin.
- Strategic Vision LLC
- Pharos Research Group
- Research 2000
- Big Data Poll
- Overtime Politics
- Rethink Priorities
- Blumenthal Research Daily
- CSP Polling
- KG Polling
- OurProgress (The Progress Campaign)
- TCJ Research
:x 538
RIP :(
:x pre overhaul updates
- September 26, 2025: Fixed bug in sample size weights where, in the calculation of the median sample size, poll sample sizes were capped at 2000 instead of the intended 3000.
- September 18, 2025: Added a new issue to issue-approval poll averages: crime.
- September 16, 2025:
- Polling averages now include an additional weight to account for multiple polls from the same pollster being conducted in a short window.
- Unrated pollsters are now assigned a flat pollster quality weight of 0.2 instead of 0.1.
- July 22, 2025: Added a new issue to issue-approval poll averages: healthcare policy.
- July 21, 2025: Added graphs measuring presidential job approval among registered voters.
- June 23, 2025: Started including polls from pollsters without a Silver Bulletin pollster rating. For these pollsters, a flat pollster quality weight of 0.1 is assigned.
- June 21, 2025: Unbanned McLaughlin, for similar reasons to the recent unbanning of other partisan pollsters. While McLaughlin is a particularly extreme case of partisanship, I am unaware of any significant methodological concerns beyond their bias, which can be rectified via house effect adjustment.
- June 20, 2025: Unbanned OnMessage Inc. and North Star from use by SnoutCounter averages, for similar reasons to the unbanning of Civiqs and co/efficient.
- June 17, 2025:
- Tweaked time weights for presidential approval polling (both overall and issue-specific) to be more aggressive. This should make the averages more responsive.
- Unbanned co/efficient and Civiqs from use by SnoutCounter averages. These are both partisan pollsters, whose partisan bias is already largely rectified via house effect adjustment, and there really isn't much wrong with these polling outfits besides the aforementioned partisanship.
- June 16, 2025: Slightly tweaked population type weighting for generic ballot polling. This leads to a slight decrease in weights for polls utilizing all adult samples.
- June 15, 2025: Added a "Featured Charts" section, so I can show other neat visualizations without cluttering up the main sections.
- May 31, 2025: Modified code for calculating population weights for generic ballot polling averages. As generic ballot polling aims to measure voting intent, it makes more sense to value LV > RV > A.
- May 20, 2025: Added a chart tracking net presidential approval rating.
- May 13, 2025: Added a new issue to issue-approval poll averages: trade and tariffs.
- May 10, 2025: Fixed bug in pipeline function that caused registered voter samples in polls to be chosen over all adult samples.
- April 19, 2025: Added chart showcasing net issue-specific approval ratings.
- April 18, 2025: Adjusted pollster quality weights to be slightly less aggressive, thus lowering the chance that an unduly small number of polls dominate the averages.
- April 17, 2025: Adjusted the linear time weight to be somewhat more aggressive. This helps averages be more responsive and less sluggish, especially the Congressional and SCOTUS approval averages.