<?xml version="1.0" encoding="UTF-8"?>
<rss  xmlns:atom="http://www.w3.org/2005/Atom" 
      xmlns:media="http://search.yahoo.com/mrss/" 
      xmlns:content="http://purl.org/rss/1.0/modules/content/" 
      xmlns:dc="http://purl.org/dc/elements/1.1/" 
      version="2.0">
<channel>
<title>SnoutCounter</title>
<link>https://snoutcounter.works/</link>
<atom:link href="https://snoutcounter.works/index.xml" rel="self" type="application/rss+xml"/>
<description>Statistical modeling and poll aggregation</description>
<generator>quarto-1.9.36</generator>
<lastBuildDate>Fri, 27 Mar 2026 07:00:00 GMT</lastBuildDate>
<item>
  <title>Tracking the 2026 California governor primary</title>
  <link>https://snoutcounter.works/posts/ca-gov-primary-2026.html</link>
  <description><![CDATA[ 




<p>With incumbent governor Gavin Newsom unable to run for a third term due to term limits, this year’s California governor’s top-two primary race has become rather competitive. The sheer number of Democrats in the race has prompted <a href="https://fairvote.org/democrats-could-be-locked-out-of-race-for-california-governor/">some concern</a> that the Democratic candidates would act as spoilers for each other, vaulting a Republican to the governor’s mansion. In the midst of all this, I have decided to release my averages tracking electoral intent in the race. I will update this average regularly until the primary election on June 2.</p>
<p>You can read how I compute my average in the methodology section below. You can also find the underlying polling dataset and model source code at the <a href="https://github.com/Hackquantumcpp/snoutcounter-backend">GitHub repo</a>.</p>
<section id="average-and-polls" class="level1">
<h1>Average and Polls</h1>
<iframe title="Who is going to win the 2026 California gubernatorial primary?" aria-label="Line chart" id="datawrapper-chart-emANK" src="https://datawrapper.dwcdn.net/emANK/1/" scrolling="no" frameborder="0" style="width: 0; min-width: 100% !important; border: none;" height="499" data-external="1"></iframe><script type="text/javascript">window.addEventListener("message",function(a){if(void 0!==a.data["datawrapper-height"]){var e=document.querySelectorAll("iframe");for(var t in a.data["datawrapper-height"])for(var r,i=0;r=e[i];i++)if(r.contentWindow===a.source){var d=a.data["datawrapper-height"][t]+"px";r.style.height=d}}});</script>

<iframe title="2026 California Gubernatorial Primary Polls" aria-label="Table" id="datawrapper-chart-rmXtD" src="https://datawrapper.dwcdn.net/rmXtD/1/" scrolling="no" frameborder="0" style="width: 0; min-width: 100% !important; border: none;" height="808" data-external="1"></iframe><script type="text/javascript">window.addEventListener("message",function(a){if(void 0!==a.data["datawrapper-height"]){var e=document.querySelectorAll("iframe");for(var t in a.data["datawrapper-height"])for(var r,i=0;r=e[i];i++)if(r.contentWindow===a.source){var d=a.data["datawrapper-height"][t]+"px";r.style.height=d}}});</script>
</section>
<section id="methodology" class="level1">
<h1>Methodology</h1>
<p>All professional polls measuring electoral intent in the California gubernatorial primary are used, with the exception of those conducted by banned pollsters - see <a href="../posts/poll-avg-methodology.html">here</a> for a list of pollsters excluded from use in SnoutCounter’s averages. I generally check <a href="https://fiftyplusone.news/polls/governor/nonpartisan-primary/california">FiftyPlusOne</a>, <a href="https://www.nytimes.com/interactive/polls/california-governor-election-polls-2026.html">The New York Times</a>, and Twitter/Bluesky poll collectors (<a href="https://bsky.app/profile/usapolling.bsky.social">Polling USA</a>, <a href="https://xcancel.com/iapolls2022">Interactive Polls</a>, <a href="https://xcancel.com/PollTracker2024/">Politics &amp; Poll Tracker</a>) for new polls and anything I missed. When a poll surveys two samples of different population types (for example, one likely voter sample and one registered voter sample), we use the results from only one of those samples. Likely voter samples are preferred to registered voter samples, which are preferred to “all adult” samples. Tracking polls in our dataset are dynamically selected and weeded out such that all tracking polls from the same pollster are non-overlapping in fielding dates; I always include the most recent tracking poll from each pollster.</p>
<p>I aggregate polls via a weighted average. The following factors are utilized to determine weights:</p>
<ul>
<li><strong>Sample size</strong>: Polls with higher sample size are more likely to accurately estimate the population parameter in question, and generally have less uncertainty than polls with smaller sample sizes. However, sample sizes are subject to diminishing returns - a poll with a sample size of 3000 won’t be much more accurate than one with a sample size of 2000. I cap sample size at 2000 to prevent polls with particularly large sample sizes from dominating the averages. I additionally <a href="https://en.wikipedia.org/wiki/Winsorizing">winsorize</a> the sample sizes to counter the effect of extreme outlier sample sizes on the sample size weight. Some polls do not reveal their sample sizes; in these cases I assume that the sample size for that poll is equal to the median sample size for all California gubernatorial primary polls conducted by that pollster. If there are no other governor primary polls conducted by that pollster, I set the sample size equal to the median sample size for all governor primary polls utilizing the same mode; if there are no polls with the same mode, I simply set the sample size equal to the median sample size for all California governor primary polls. The weight function I use is the square root of the sample size over the square root of the median sample size of all polls in the dataset.</li>
<li><strong>Pollster rating</strong>: Not all pollsters are created equal; some are more reliable in producing accurate and precise results than others. I use the <a href="https://www.natesilver.net/p/pollster-ratings-silver-bulletin">Silver Bulletin’s pollster ratings</a>, specifically the predictive plus-minus, a measure of how accurate a pollster is expected to be, as the input for the pollster rating weight function. For predictive plus-minus, lower is better. Pollsters with a predictive plus-minus above 1 are assigned a flat weight of 0.2, as are pollsters without a predictive plus-minus rating from Silver Bulletin. All other pollsters are assigned a weight according to a square root function.</li>
<li><strong>Time since poll was conducted</strong>: Of course, polls which were conducted more recently are more likely to be reflective of the state of public opinion. I utilize an exponential function as a weight for recency.</li>
<li><strong>Multiple polls in short window</strong>: If there are multiple polls from the same pollster and sponsor in a two week window, each poll is downweighted, based on the number of polls from that pollster/sponsor in this window. This is to ensure that one pollster/sponsor doesn’t dominate the averages just out of frequency, and to counter pollsters who attempt to “flood the zone.” The formula utilized for this is borrowed from <a href="https://www.gelliottmorris.com/p/democrats-lead-house-generic-ballot">Strength In Numbers’ polling averages</a>; it is calculated as 1 over the square root of the number of polls from the pollster within a two-week window.</li>
<li><strong>Partisan-sponsored and internal polls</strong>: A large portion of California governor polls have either been commissioned by a partisan organization or are internal polls commissioned by one of the candidates in the race. Despite the media hullabaloo that tends to surround internal polls, they actually tend to be <a href="https://www.natesilver.net/p/why-you-should-mostly-ignore-internal"><em>less</em> accurate</a> than publicly released polls. I downweight both partisan and internal polls accordingly to prevent them from having an excessive impact on the average. All else held equal, partisan polls are assigned 70% of the weight of a non-partisan poll, and internal polls are assigned 50% of the weight of a publicly released poll.</li>
</ul>
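<p>Putting the factors above together, a single poll’s weight can be sketched as the product of the individual factor weights. The snippet below is a minimal illustration, not the production code: the median sample size, the exponential half-life, and the exact functional forms are hypothetical placeholders, and the winsorizing step and pollster rating weight are omitted.</p>

```python
import numpy as np

def poll_weight(n, days_old, polls_in_window, partisan=False, internal=False,
                median_n=800.0, half_life=14.0):
    """Illustrative combined poll weight.

    median_n and half_life are hypothetical placeholder constants,
    not the values the actual model uses.
    """
    # Sample size: sqrt(n) / sqrt(median n), with n capped at 2000
    w_size = np.sqrt(min(n, 2000)) / np.sqrt(median_n)
    # Recency: exponential decay in the days since the poll was fielded
    w_time = 0.5 ** (days_old / half_life)
    # Zone-flooding: 1 / sqrt(# polls from this pollster/sponsor in a two-week window)
    w_freq = 1.0 / np.sqrt(polls_in_window)
    # Partisan-sponsored polls get 70% weight; internal polls get 50%
    w_sponsor = (0.7 if partisan else 1.0) * (0.5 if internal else 1.0)
    return w_size * w_time * w_freq * w_sponsor
```

<p>Because the factors are multiplicative, each penalty scales the weight independently: under these placeholder constants, a fresh 800-respondent, non-partisan poll gets a weight of 1, while the same poll fielded as one of four in a two-week window would be cut to half that.</p>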
<p>After calculating these weights, I compute the weighted average for each candidate. However, we aren’t done here - I apply a series of adjustments to each poll result, calculated by a <a href="https://www.bayesrulesbook.com/chapter-17">multilevel regression model</a>. I use the previously calculated weighted average as a regressor in this model. The adjustments calculated and applied are as follows:</p>
<ul>
<li><strong>House effect adjustment</strong>: Many pollsters have unique biases stemming from differences in fielding, weighting, question wording, etc., that may not be captured by the other adjustments. Thus, I calculate the “house effect” of each pollster. Like all other adjustments, house effects are calculated relative to the weighted average of all polls, rather than the biases of each pollster relative to election results - this is because systematic, industry-wide polling bias (and, by corollary, the biases of many pollsters relative to actual results) are not predictable and change from cycle to cycle.</li>
<li><strong>Mode effects adjustment</strong>: How a pollster chooses to field its surveys will often have a significant effect on the results of the poll, as different fielding methods will often reach different audiences and different types of people. For example, online panels often reach a younger demographic compared to live phone surveys, and probability panels often present significantly different results compared to non-probability based methods. I correct for this by introducing an adjustment for methodology.</li>
<li><strong>Likely voter adjustment</strong>: Each polling average is trying to measure approval or horse-race standings for a specific population, but often we will have multiple surveys polling different populations for the same measurable variable. To address this, I calculate three population adjustments - likely voter, registered voter, and all adult sample adjustments - and adjust the results of polls surveying different populations towards the likely voter population.</li>
<li><strong>Partisanship adjustment</strong>: Some pollsters are very explicitly partisan in nature, often working with certain political parties or groups affiliated with political parties, consistently polling for candidates of one party, and/or being funded by a certain party or partisan group. For these pollsters, a partisanship adjustment is applied to correct for potential biases introduced by strong partisan affiliation.</li>
<li><strong>Candidate sponsor adjustment</strong>: Similar to the partisanship adjustment, for internal polls I adjust for the candidate that has commissioned the poll, to correct for the biases that these polls may present.</li>
<li><del><strong>Minor candidates adjustment</strong>: I track the major candidates in my averages, but they are not the only candidates who are participating in the race. More marginal candidates, such as Tony Thurmond, Betty Yee, Butch Ware, Leo Zacky, and others have also filed for the race, and have taken their own (very small) slice of the electorate. Not all polls include these candidates as possible options, so I adjust for their absence in polls that exclude these candidates. As these candidates will most likely show up on the ballot come June and win a small slice of the vote share, I apply the adjustment to polls that exclude these candidates, the post-adjustment topline of each of these polls essentially being equivalent to what the toplines would most likely look like <em>if</em> these candidates were included in that specific poll.</del> After some further exploratory analysis, I have made the determination that polling on minor candidates is insufficient, and thus this adjustment may unduly skew the averages. Therefore I have disabled this unless and until further polling on minor candidates is conducted.</li>
<li><del><strong>Drop-in and drop-out adjustments</strong>: The California gubernatorial primary has been a rather chaotic race to keep track of, with major and minor candidates alike dropping in and out of the race continuously over the course of the past year. Thus, I adjust for the inclusion and exclusion of candidates that have “dropped in” or “dropped out” of the race in various polls.</del> Applying the drop-in and drop-out adjustments seems to produce rather funky and anomalous effects in the poll averages, which imply associated methodological issues; thus I will be excluding these adjustments from the overall computation of my average.</li>
</ul>
<p>These adjustments are summed to get the total adjustment for each poll, which is then added to measured candidate polling estimates. After these adjustments are applied, the weighted average is recalculated to get the final average for each candidate. 95% confidence intervals are also computed in order to measure and represent uncertainty in the average for each candidate; they are computed as <img src="https://latex.codecogs.com/png.latex?%5Cbar%7Bx%7D%5Cpm%201.96s_x">, where <img src="https://latex.codecogs.com/png.latex?%5Cbar%7Bx%7D"> is the weighted polling average and <img src="https://latex.codecogs.com/png.latex?s_x"> is the weighted standard deviation.</p>
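<p>As a concrete sketch, the final weighted average and its 95% confidence interval can be computed as follows. This is a minimal, self-contained illustration with made-up toy inputs, not the model’s actual code:</p>

```python
import numpy as np

def weighted_average_ci(values, weights):
    """Weighted polling average with a 95% CI of mean ± 1.96 * weighted std."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize weights to sum to 1
    mean = float(np.sum(w * x))                       # weighted average
    sd = float(np.sqrt(np.sum(w * (x - mean) ** 2)))  # weighted standard deviation
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

# Toy example: two equally weighted polls showing a candidate at 20% and 30%
avg, (low, high) = weighted_average_ci([20.0, 30.0], [1.0, 1.0])
```

<p>Note that this interval reflects dispersion among the polls, not sampling error within any single poll, so it widens when pollsters disagree.</p>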
</section>
<section id="updates" class="level1">
<h1>Updates</h1>
<ul>
<li><strong>March 30, 2026</strong>: Disabled the minor candidates adjustment due to insufficient polling data on these candidates.</li>
</ul>


</section>

 ]]></description>
  <category>Polling Averages</category>
  <category>2026 Elections</category>
  <guid>https://snoutcounter.works/posts/ca-gov-primary-2026.html</guid>
  <pubDate>Fri, 27 Mar 2026 07:00:00 GMT</pubDate>
  <media:content url="https://snoutcounter.works/assets/blog-images/State_flag_of_California_-_June_2025_-_Sarah_Stierch.jpg" medium="image" type="image/jpeg"/>
</item>
<item>
  <title>Methodology for Our Polling Averages</title>
  <link>https://snoutcounter.works/posts/poll-avg-methodology.html</link>
  <description><![CDATA[ 




<p>Here you can find the methodology for (most of) SnoutCounter’s polling averages. Some polling averages deviate from this substantially - in those cases the methodology will be listed on the same post as the average.</p>
<p>The source code for all poll aggregation models can be found at the <a href="https://github.com/Hackquantumcpp/snoutcounter-backend">GitHub repo</a>. If you find a bug in the code, or have useful feedback to report, then feel free to do so by opening an issue on GitHub.</p>
<section id="methodology" class="level1 page-columns page-full">
<h1>Methodology</h1>
<p>SnoutCounter aggregates polls via a weighted average. I include all professional polls, with the exceptions of those conducted by pollsters that have been banned from usage in SnoutCounter averages. The following pollsters are excluded from usage in our averages due to methodological misconduct, lack of transparency, or other reasons:</p>
<ul>
<li>Rasmussen Reports</li>
<li>Trafalgar Group</li>
<li>ActiVote</li>
</ul>
<p>Additionally, the following pollsters have been assigned an “F” quality rating from Silver Bulletin and thus are also excluded:</p>
<ul>
<li>Strategic Vision LLC</li>
<li>Pharos Research Group</li>
<li>Research 2000</li>
<li>Big Data Poll</li>
<li>Overtime Politics</li>
<li>Rethink Priorities</li>
<li>Blumenthal Research Daily</li>
<li>CSP Polling</li>
<li>KG Polling</li>
<li>OurProgress (The Progress Campaign)</li>
<li>TCJ Research</li>
</ul>
<p>All polls are collected manually. I generally check the <a href="https://www.natesilver.net/p/trump-approval-ratings-nate-silver-bulletin">Silver Bulletin</a>, <a href="https://fiftyplusone.news/polls/approval/president">FiftyPlusOne</a>, <a href="https://www.nytimes.com/interactive/polls/donald-trump-approval-rating-polls.html">The New York Times</a>, <a href="https://www.realclearpolling.com/">RealClearPolitics</a>, and <a href="https://ropercenter.cornell.edu/">The Roper Center</a>, as well as the Twitter/Bluesky poll collectors <a href="https://bsky.app/profile/usapolling.bsky.social">Polling USA</a>, <a href="https://xcancel.com/iapolls2022">Interactive Polls</a>, and <a href="https://xcancel.com/PollTracker2024/">Politics &amp; Poll Tracker</a> for new polls and anything I missed. When a poll surveys two samples of different population types (for example, one likely voter sample and one registered voter sample), we use the results from only one of those samples. “All adult” samples are preferred to registered voter samples, and registered voter samples are preferred to likely voter samples, for the various presidential approval averages. When measuring job approval among registered voters, polls that draw from a sample of all adults, but include crosstab results for registered voters, are included - specifically the results for the registered voter sample. For the generic ballot, likely voter samples are preferred to registered voter samples, which are preferred to “all adult” samples. Tracking polls in our dataset are dynamically selected and weeded out such that all tracking polls from the same pollster are non-overlapping in fielding dates; I always include the most recent tracking poll from each pollster. Our weights are determined by the following four factors:</p>
<ul>
<li><strong>Sample size</strong>: Polls with higher sample size are more likely to accurately estimate the population parameter in question, and generally have less uncertainty than polls with smaller sample sizes. However, sample sizes are subject to diminishing returns - a poll with a sample size of 5000 won’t be much more accurate than one with a sample size of 3000. I cap sample size at 5000 to prevent polls with particularly large sample sizes from dominating the averages. For generic ballot polls, I additionally <a href="https://en.wikipedia.org/wiki/Winsorizing">winsorize</a> the sample sizes to counter the effect of extreme outlier sample sizes on the sample size weight. The weight function I use is the square root of the sample size over the square root of the median sample size of all polls in the dataset.</li>
<li><strong>Pollster rating</strong>: Not all pollsters are created equal; some are more reliable in producing accurate and precise results than others. I use the <a href="https://www.natesilver.net/p/pollster-ratings-silver-bulletin">Silver Bulletin’s pollster ratings</a>, specifically the predictive plus-minus, a measure of how accurate a pollster is expected to be, as the input for the pollster rating weight function. For predictive plus-minus, lower is better. Pollsters with a predictive plus-minus above 1 are assigned a flat weight of 0.2, as are pollsters without a predictive plus-minus rating from Silver Bulletin. All other pollsters are assigned a weight according to a square root function.</li>
<li><strong>Time since poll was conducted</strong>: Of course, polls which were conducted more recently are more likely to be reflective of the state of public opinion. I utilize an exponential function as a weight for recency, with the function being more aggressive with more frequently polled topics (e.g.&nbsp;general approval, versus issue-specific approval).</li>
<li><strong>Multiple polls in short window</strong>: If there are multiple polls from the same pollster and sponsor in a two week window, each poll is downweighted, based on the number of polls from that pollster/sponsor in this window. This is to ensure that one pollster/sponsor doesn’t dominate the averages just out of frequency, and to counter pollsters who attempt to “flood the zone.” The formula utilized for this is borrowed from <a href="https://www.gelliottmorris.com/p/democrats-lead-house-generic-ballot">Strength In Numbers’ polling averages</a>; it is calculated as 1 over the square root of the number of polls from the pollster within a two-week window.</li>
</ul>
<p>All these weights are combined into a final weight, calculated as the product of these four weights. The weights are normalized to sum to 1. This is then used to calculate a weighted average for each rating or variable we are trying to measure, and for each day. The weighted standard deviation is also calculated for each day and used to determine <a href="https://en.wikipedia.org/wiki/Confidence_interval">confidence intervals</a>.<sup>1</sup></p>
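<p>A minimal sketch of this combination step, assuming the four per-poll factor weights have already been computed (the numbers below are made up for illustration):</p>

```python
import numpy as np

def combine_weights(size_w, rating_w, time_w, freq_w):
    """Final poll weights: the product of the four factor weights,
    normalized so they sum to 1 across the polls in the average."""
    w = (np.asarray(size_w, dtype=float) * np.asarray(rating_w, dtype=float)
         * np.asarray(time_w, dtype=float) * np.asarray(freq_w, dtype=float))
    return w / w.sum()

# Toy example with three polls: a recent high-quality poll, a stale low-rated
# poll from a frequent pollster, and an older poll from a top-rated pollster
final = combine_weights(
    size_w=[1.1, 0.7, 1.3],
    rating_w=[0.8, 0.2, 1.0],
    time_w=[0.95, 0.6, 0.3],
    freq_w=[1.0, 0.71, 1.0],
)
```

<p>Because the combined weight is a product, a poll that scores poorly on any single factor is penalized regardless of how well it scores on the others.</p>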
<div class="no-row-height column-margin column-container"><div id="fn1"><p><sup>1</sup>&nbsp;This works due to the Central Limit Theorem - as a certain question or election is polled more, the distribution of toplines for that question or election will approach the normal distribution.</p></div></div><p>In addition to these weights, I apply four different adjustments to each poll result, each of them calculated by a <a href="https://en.wikipedia.org/wiki/Mixed_model">mixed effects model</a>. These adjustments are calculated relative to the average calculated from the previous steps. The adjustments are as follows:</p>
<ul>
<li><strong>House effect adjustment</strong>: Many pollsters have unique biases stemming from differences in fielding, weighting, question wording, etc., that may not be captured by the other adjustments. Thus, I calculate the “house effect” of each pollster. Like all other adjustments, house effects are calculated relative to the weighted average of all polls, rather than the biases of each pollster relative to election results - this is because systematic, industry-wide polling bias (and, by corollary, the biases of many pollsters relative to actual results) are not predictable and change from cycle to cycle.</li>
<li><strong>Mode adjustment</strong>: How a pollster chooses to field its surveys will often have a significant effect on the results of the poll, as different fielding methods will often reach different audiences and different types of people. For example, online panels often reach a younger demographic compared to live phone surveys, and probability panels often present significantly different results compared to non-probability based methods. I correct for this by introducing an adjustment for methodology.</li>
<li><strong>Population adjustment</strong>: Each polling average is trying to measure approval or horse-race standings for a specific population, but often we will have multiple surveys polling different populations for the same measurable variable. To address this, I calculate three population adjustments - likely voter, registered voter, and all adult sample adjustments - and adjust the results of polls surveying different populations towards the population that the average is trying to measure. For general and issue-specific presidential approval, the average attempts to measure approval among all American adults, while for approval among registered voters, the average attempts to measure approval among, well, registered voters. For generic ballot, we measure electoral preferences among registered voters until Labor Day, at which point we start measuring electoral preferences among likely voters. We measure registered voter preferences prior to Labor Day as likely voter samples may not be representative of the actual electorate long before the election. Thus, the model adjusts results towards the population being measured in each of these cases.</li>
<li><strong>Partisanship adjustment</strong>: Some pollsters are very explicitly partisan in nature, often working with certain political parties or groups affiliated with political parties, consistently polling for candidates of one party, and/or being funded by a certain party or partisan group. For these pollsters, a partisanship adjustment is applied to correct for potential biases introduced by strong partisan affiliation. As an additional measure to counteract partisanship, I employ a <strong>partisanship downweight</strong> - all else held equal, a poll conducted by a partisan pollster is given 70% of the weight of a poll conducted by a non-partisan pollster.</li>
</ul>
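<p>As a toy illustration of the first of these, a house effect can be sketched as a pollster’s mean deviation from the overall weighted average, which is then subtracted from that pollster’s toplines. The real model estimates all adjustments jointly via the mixed effects model rather than this simple two-pass arithmetic, and the data below is made up:</p>

```python
import pandas as pd

# Hypothetical approval toplines from two pollsters
polls = pd.DataFrame({
    "pollster": ["A", "A", "B", "B", "B"],
    "approve":  [44.0, 45.0, 48.0, 49.0, 47.0],
})
weighted_avg = 45.5  # the weighted average computed in the previous steps

# House effect: each pollster's mean deviation from the weighted average
polls["house_effect"] = (
    polls.groupby("pollster")["approve"].transform("mean") - weighted_avg
)
# Adjust each topline back toward the average
polls["adjusted"] = polls["approve"] - polls["house_effect"]
```

<p>Here pollster A runs a point low and pollster B two and a half points high, so A’s toplines are shifted up and B’s down before the average is recomputed.</p>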
<p>These adjustments are summed to get the total adjustment for each poll, which is then added to measured approvals/disapprovals or two-party horse race estimates. After these adjustments are applied, the weighted average and confidence intervals are recalculated to get the final average for each rating/variable.</p>
</section>
<section id="updates" class="level1">
<h1>Updates</h1>
<ul>
<li><strong>February 14, 2026</strong>: Unbanned TIPP Insights from use in SnoutCounter averages. TIPP was initially banned due to an <a href="https://threadreaderapp.com/thread/1844824902198554628.html?utm_campaign=topunroll">incident</a> during the 2024 election cycle where the likely voter model introduced by TIPP in a Pennsylvania poll cut out the vast majority of respondents from Philadelphia, thus leading to results more favorable to Trump in the topline. This was viewed with suspicion by both myself and <a href="https://xcancel.com/lxeagle17/status/1844581842034491471">other poll watchers and analysts</a>, as it was equivalent to a particularly egregious under-sampling of a heavily Democratic area with significant influence over PA election results; as a precautionary measure, I banned TIPP from use in my averages. As far as I know, however, there haven’t been any subsequent methodological incidents in TIPP’s polling. Thus, I made the decision to allow TIPP polls to be included in our averages. As with all methodological changes, poll averages have been retroactively recalculated to account for this.</li>
<li><strong>February 8, 2026</strong>: Optimized models to run faster. Also, modified mixed effects model by eliminating the global intercept term.</li>
<li><strong>February 2, 2026</strong>: Modified mixed effects model used for calculating adjustments. Instead of calculating and applying adjustments to the net/spread, adjustments are calculated for and applied to each individual target feature (e.g.&nbsp;for generic ballot, Democrats and Republicans each have an individual adjustment).</li>
<li><strong>January 20, 2026</strong>: Fixed bug in model which caused different population screens of the same poll to not be chosen properly.</li>
<li><strong>January 16, 2026</strong>: Added new, updated generic ballot averages, with new methodology (as part of the methodological overhaul).</li>
<li><strong>January 14, 2026</strong>: Updated pollster ratings with new 2026 ratings, as per Silver Bulletin.</li>
<li><strong>January 8, 2026</strong>: Added partisanship downweight.</li>
<li><strong>January 4, 2026</strong>: Completely overhauled the methodology for polling averages.</li>
</ul>
<p>The following updates were made before the methodological overhaul in January 2026, thus the corresponding update notes may lack the context behind them.</p>
<ul>
<li><strong>September 26, 2025</strong>: Fixed bug in sample size weights where, in the calculation of the median sample size, poll sample sizes were capped at 2000 instead of the intended 3000.</li>
<li><strong>September 18, 2025</strong>: Added a new issue to issue-approval poll averages: crime.</li>
<li><strong>September 16, 2025</strong>:
<ul>
<li>Polling averages now include an additional weight to account for multiple polls from the same pollster being conducted in a short window.</li>
<li>Unrated pollsters are now assigned a flat pollster quality weight of 0.2 instead of 0.1.</li>
</ul></li>
<li><strong>July 22, 2025</strong>: Added a new issue to issue-approval poll averages: healthcare policy.</li>
<li><strong>July 21, 2025</strong>: Added graphs measuring presidential job approval among registered voters.</li>
<li><strong>June 23, 2025</strong>: Started including polls from pollsters without a Silver Bulletin pollster rating. For these pollsters, a flat pollster quality weight of 0.1 is assigned.</li>
<li><strong>June 21, 2025</strong>: Unbanned McLaughlin, for similar reasons to the recent unbanning of other partisan pollsters. While McLaughlin is a particularly extreme case of partisanship, I am unaware of any significant methodological concerns beyond their bias, which can be rectified via house effect adjustment.</li>
<li><strong>June 20, 2025</strong>: Unbanned OnMessage Inc.&nbsp;and North Star from use by SnoutCounter averages, for similar reasons to the unbanning of Civiqs and co/efficient.</li>
<li><strong>June 17, 2025</strong>:
<ul>
<li>Tweaked time weights for presidential approval polling (both overall and issue-specific) to be more aggressive. This should make the averages more responsive.</li>
<li>Unbanned co/efficient and Civiqs from use by SnoutCounter averages. These are both partisan pollsters, whose partisan bias is already largely rectified via house effect adjustment, and there really isn’t much wrong with these polling outfits besides the aforementioned partisanship.</li>
</ul></li>
<li><strong>June 16, 2025</strong>: Slightly tweaked population type weighting for generic ballot polling. This would lead to slight decrease in weights for polls utilizing all adult samples.</li>
<li><strong>June 15, 2025</strong>: Added a “Featured Charts” section, so I can show other neat visualizations without cluttering up the main sections.</li>
<li><strong>May 31, 2025</strong>: Modified code for calculating population weights for generic ballot polling averages. As generic ballot polling aims to measure voting intent, it makes more sense to value LV &gt; RV &gt; A.</li>
<li><strong>May 20, 2025</strong>: Added a chart tracking net presidential approval rating.</li>
<li><strong>May 13, 2025</strong>: Added a new issue to issue-approval poll averages: trade and tariffs.</li>
<li><strong>May 10, 2025</strong>: Fixed bug in pipeline function that caused registered voter samples in polls to be chosen over all adult samples.</li>
<li><strong>April 19, 2025</strong>: Added chart showcasing net issue-specific approval ratings.</li>
<li><strong>April 18, 2025</strong>: Adjusted pollster quality weights to be slightly less aggressive, thus lowering the chance that an unduly small number of polls dominate the averages.</li>
<li><strong>April 17, 2025</strong>: Adjusted the linear time weight to be somewhat more aggressive. This helps averages be more responsive and less sluggish, especially the Congressional and SCOTUS approval averages.</li>
</ul>
</section>
<section id="data-download" class="level1">
<h1>Data Download</h1>
<p>All polling data and averages can be downloaded at the <a href="https://github.com/Hackquantumcpp/snoutcounter-backend">GitHub repo</a>. Polling data can be found <a href="https://github.com/Hackquantumcpp/snoutcounter-backend/tree/main/data">here</a>, while averages can be found <a href="https://github.com/Hackquantumcpp/snoutcounter-backend/tree/main/averages">here</a>.</p>


</section>


 ]]></description>
  <category>Methodology</category>
  <category>Statistics</category>
  <guid>https://snoutcounter.works/posts/poll-avg-methodology.html</guid>
  <pubDate>Thu, 26 Mar 2026 07:00:00 GMT</pubDate>
</item>
<item>
  <title>2026 Midterm Elections Portal</title>
  <link>https://snoutcounter.works/posts/2026-elections.html</link>
  <description><![CDATA[ 




<p>This page serves as the home for SnoutCounter’s generic ballot averages, tracking electoral intent for what is set to be one of the most contentious midterm elections in recent history. This average will update regularly over the course of the year. As the midterms approach, I will also be constructing and releasing a predictive model for the House, Senate, and governor’s races.</p>
<p>You can read how our polling averages work <a href="../posts/poll-avg-methodology.html">here</a>. You can find the underlying polling data and model code at the <a href="https://github.com/Hackquantumcpp/snoutcounter-backend">GitHub repo</a>.</p>
<iframe title="Which party do Americans want to be in control of Congress?" aria-label="Line chart" id="datawrapper-chart-93kHt" src="https://datawrapper.dwcdn.net/93kHt/247/" scrolling="no" frameborder="0" style="width: 0; min-width: 900px !important; border: none;" height="573" data-external="1"></iframe><script type="text/javascript">window.addEventListener("message",function(a){if(void 0!==a.data["datawrapper-height"]){var e=document.querySelectorAll("iframe");for(var t in a.data["datawrapper-height"])for(var r,i=0;r=e[i];i++)if(r.contentWindow===a.source){var d=a.data["datawrapper-height"][t]+"px";r.style.height=d}}});</script>

<iframe title="Generic Ballot Polls" aria-label="Table" id="datawrapper-chart-5N5wz" src="https://datawrapper.dwcdn.net/5N5wz/242/" scrolling="no" frameborder="0" style="width: 0; min-width: 900px !important; border: none;" height="752" data-external="1"></iframe><script type="text/javascript">window.addEventListener("message",function(a){if(void 0!==a.data["datawrapper-height"]){var e=document.querySelectorAll("iframe");for(var t in a.data["datawrapper-height"])for(var r,i=0;r=e[i];i++)if(r.contentWindow===a.source){var d=a.data["datawrapper-height"][t]+"px";r.style.height=d}}});</script>



 ]]></description>
  <category>Polling Averages</category>
  <category>2026 Elections</category>
  <guid>https://snoutcounter.works/posts/2026-elections.html</guid>
  <pubDate>Sat, 04 Apr 2026 22:25:44 GMT</pubDate>
  <media:content url="https://snoutcounter.works/assets/blog-images/Capitol_Building_Full_View.jpg" medium="image" type="image/jpeg"/>
</item>
<item>
  <title>Presidential Approval Portal</title>
  <link>https://snoutcounter.works/posts/president-approval.html</link>
  <description><![CDATA[ 




<p>Here you can find all of the live-updating polling averages measuring Trump’s approval throughout his second term. SnoutCounter computes general approval, approval among registered voters, and issue-specific approval.</p>
<p>You can read how our averages work <a href="../posts/poll-avg-methodology.html">here</a>. You can find the underlying polling data and model code at the <a href="https://github.com/Hackquantumcpp/snoutcounter-backend">GitHub repo</a>.</p>
<section id="trumps-second-term-approval" class="level1">
<h1>Trump’s Second Term Approval</h1>
<iframe title="Do Americans approve or disapprove of Donald Trump?" aria-label="Interactive line chart" id="datawrapper-chart-ezoju" src="https://datawrapper.dwcdn.net/ezoju/12/" scrolling="no" frameborder="0" style="width: 0; min-width: 900px !important; border: none;" height="523" data-external="1"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(a){if(void 0!==a.data["datawrapper-height"]){var e=document.querySelectorAll("iframe");for(var t in a.data["datawrapper-height"])for(var r,i=0;r=e[i];i++)if(r.contentWindow===a.source){var d=a.data["datawrapper-height"][t]+"px";r.style.height=d}}}))}();
</script>

<iframe title="What is Donald Trump's net approval rating?" aria-label="Interactive line chart" id="datawrapper-chart-Hs5C6" src="https://datawrapper.dwcdn.net/Hs5C6/2/" scrolling="no" frameborder="0" style="width: 0; min-width: 900px !important; border: none;" height="400" data-external="1"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(a){if(void 0!==a.data["datawrapper-height"]){var e=document.querySelectorAll("iframe");for(var t in a.data["datawrapper-height"])for(var r,i=0;r=e[i];i++)if(r.contentWindow===a.source){var d=a.data["datawrapper-height"][t]+"px";r.style.height=d}}}))}();
</script>

    <iframe title="Job Approval Polls" aria-label="Table" id="datawrapper-chart-c21sL" src="https://datawrapper.dwcdn.net/c21sL/4/" scrolling="no" frameborder="0" style="width: 0; min-width: 900px !important; border: none" height="640" data-external="1"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(a){if(void 0!==a.data["datawrapper-height"]){var e=document.querySelectorAll("iframe");for(var t in a.data["datawrapper-height"])for(var r,i=0;r=e[i];i++)if(r.contentWindow===a.source){var d=a.data["datawrapper-height"][t]+"px";r.style.height=d}}}))}();</script>
</section>
<section id="issue-approval" class="level1">
<h1>Issue Approval</h1>
<iframe title="Do Americans approve or disapprove of Donald Trump's handling of the economy?" aria-label="Line chart" id="datawrapper-chart-iQjHQ" src="https://datawrapper.dwcdn.net/iQjHQ/" scrolling="no" frameborder="0" style="width: 0; min-width: 900px !important; border: none;" height="582" data-external="1"></iframe><script type="text/javascript">window.addEventListener("message",function(a){if(void 0!==a.data["datawrapper-height"]){var e=document.querySelectorAll("iframe");for(var t in a.data["datawrapper-height"])for(var r,i=0;r=e[i];i++)if(r.contentWindow===a.source){var d=a.data["datawrapper-height"][t]+"px";r.style.height=d}}});</script>

<iframe title="Trump's Net Issue-Specific Approval Ratings" aria-label="Line chart" id="datawrapper-chart-ybMn7" src="https://datawrapper.dwcdn.net/ybMn7/282/" scrolling="no" frameborder="0" style="width: 0; min-width: 900px !important; border: none;" height="574" data-external="1"></iframe><script type="text/javascript">window.addEventListener("message",function(a){if(void 0!==a.data["datawrapper-height"]){var e=document.querySelectorAll("iframe");for(var t in a.data["datawrapper-height"])for(var r,i=0;r=e[i];i++)if(r.contentWindow===a.source){var d=a.data["datawrapper-height"][t]+"px";r.style.height=d}}});</script>

<iframe title="Issue Approval Polls" aria-label="Table" id="datawrapper-chart-s3YFJ" src="https://datawrapper.dwcdn.net/s3YFJ/314/" scrolling="no" frameborder="0" style="width: 0; min-width: 900px !important; border: none;" height="907" data-external="1"></iframe><script type="text/javascript">window.addEventListener("message",function(a){if(void 0!==a.data["datawrapper-height"]){var e=document.querySelectorAll("iframe");for(var t in a.data["datawrapper-height"])for(var r,i=0;r=e[i];i++)if(r.contentWindow===a.source){var d=a.data["datawrapper-height"][t]+"px";r.style.height=d}}});</script>


</section>

 ]]></description>
  <category>Polling Averages</category>
  <guid>https://snoutcounter.works/posts/president-approval.html</guid>
  <pubDate>Sat, 04 Apr 2026 22:25:44 GMT</pubDate>
  <media:content url="https://snoutcounter.works/assets/blog-images/Donald_Trump_takes_the_oath_of_office_(2025)_(alternate).jpg" medium="image" type="image/jpeg"/>
</item>
</channel>
</rss>
