Part of the problem with the polls is their error bars. 538 hopes to shrink those error bars through a number of different tactics - like averaging the polls, which effectively increases the N and decreases the sampling error. But that assumes the methods used to generate the surveys are themselves unbiased and representative of the underlying population. On the other hand, seasoned pros are saying that if the polls are wrong this year, it's time to throw in the towel on the polling profession entirely - some of the gaps are wide enough to be significant even with large error bars.
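To see why averaging helps with sampling error but not with shared bias, here's a minimal simulation sketch (Python; all numbers are illustrative, not real polling data):

```python
# Minimal sketch: averaging many polls shrinks random sampling error,
# but a bias shared by every pollster's method survives the average.
# All numbers are illustrative.
import random

random.seed(0)

TRUE_SUPPORT = 0.50   # hypothetical true vote share
SHARED_BIAS = 0.02    # hypothetical bias common to all pollsters
POLL_SIZE = 800       # respondents per poll

def run_poll() -> float:
    """One poll: binomial sampling around the biased target."""
    target = TRUE_SUPPORT + SHARED_BIAS
    hits = sum(random.random() < target for _ in range(POLL_SIZE))
    return hits / POLL_SIZE

polls = [run_poll() for _ in range(20)]
average = sum(polls) / len(polls)

# The average is much more stable than any single poll, but it still
# sits ~2 points from TRUE_SUPPORT: averaging does not remove bias.
print(f"one poll: {polls[0]:.3f}  average of 20: {average:.3f}  truth: {TRUE_SUPPORT:.3f}")
```

The averaging only buys you a smaller variance; if every pollster misses the same slice of the population, the average inherits that miss intact.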
Election polls are almost always right for the only ultimate answer that matters: who is going to win[0]. They also accurately predict the margin of victory. Same link. Polls, in general, are typically measuring tendencies or preferences, which they also (tend to) accurately capture. So saying they are always wrong doesn’t really make sense, as they are not trying to be right in an absolute sense. The error is with the reader who misconstrues what the poll is measuring and how it is doing so.
> Election polls are almost always right for the only ultimate answer that matters: who is going to win[0]. They also accurately predict the margin of victory.
This isn't true in the US.
Your own link shows not a single result since 1984 where the error was under 1% for both candidates.
They are usually somewhat accurate for the overall vote, but the results in state races haven't been great. That didn't matter until 2016, when it became a critical issue.
Even state polling in 2016 was pretty good, historically. The reason to think otherwise is that there was a specific error (non-college white voters swinging to R) that happened not to be tracked well (those voters don't respond as well to polling) nor accounted for (most pollsters didn't weight for education). And it happened to be most concentrated in a few states in the upper midwest that were close to the tipping point. AND it happened to be JUST ENOUGH to have predicted a very narrow (sub 1% in PA/WI/MI) win incorrectly.
To wit: 2016 was a perfect storm. Swap any of those factors above and 538 would have been predicting a 50:50 race (instead of 70:30) at election day and Trump's very narrow victory would have been "correctly predicted".
Assuming that's going to happen again seems a bit like magical thinking to me. Remember that in 2012 the polls were "wrong" in the opposite direction (undercounting Obama votes by a bit) and no one cared.
> Even state polling in 2016 was pretty good, historically. The reason to think otherwise is that there was a specific error (non-college white voters swinging to R) that happened not to be tracked well (those voters don't respond as well to polling) nor accounted for (most pollsters didn't weight for education). And it happened to be most concentrated in a few states in the upper midwest that were close to the tipping point. AND it happened to be JUST ENOUGH to have predicted a very narrow (sub 1% in PA/WI/MI) win incorrectly.
Yes, this is a good summary, and I'm well aware of what went wrong. I ran a prediction project for a few years so I realise how hard this job is.
But just because it's hard doesn't mean we shouldn't recognize the issues.
My argument is that historically state level polls have always had issues but have rarely mattered.
> Remember that in 2012 the polls were "wrong" in the opposite direction (undercounting Obama votes by a bit) and no one cared.
Exactly my point!
If you look at the data on this, the state-level polls were even less accurate: the average error was 1.6%, compared to 0.8% for national polls.
If that's the case, the pollsters have an easy way to regain trust in their methodologies. They can simply reweight their 2016 data to ensure education sampling matches the population and show that they now predict 2016 correctly. Have any of them done so?
People have done this analysis repeatedly; you just aren't reading it. And polling in 2018 was generally quite good. But again, there's nothing wrong with their "methodologies". Polling errors in 2016 were in fact not far from historical norms. Numerate consensus (cf. 538) picked a likely but not certain Clinton victory, and the result was an extremely narrow loss instead. That stuff happens.
Unfortunately a bunch of people whose political desires hinge on this kind of error being present in all elections and always to the advantage of "the other side" never want to hear this analysis and so repeat the nonsense that "polls are always wrong".
But they're not. They're just not as precise as we want. If the polls are exactly as wrong this year, and in the same direction, as they were in 2016, then Biden wins a solid victory. If they're as wrong as they were in 2012, then he wins a landslide, with Texas and Georgia going blue.
You're arguing against a straw man. I never said that polls are always wrong. My political desires don't hinge on the current polls being wrong.
I'm simply asking a question about whether the pollsters have corrected their methodologies. There was clearly something wrong in 2016 when they repeatedly failed to predict the GOP primaries and failed to predict swing states and even some states that weren't considered swing states in the general election.
You're right that I haven't read the analysis that you say exists. That's why I'm asking for it. Which pollsters have done the reweighting on their 2016 data and shown that they then align with reality?
As this report documents, the national polls in 2016 were quite accurate, while polls in key battleground states showed some large, problematic errors (which is my claim: national polling is OK, state polls aren't great).
The education weighting issue is commonly accepted as the reason for the inaccuracy, but the evidence for that isn't as strong as people assume.
Section 3.4 discusses the weighting issue and shows some cases where reweighting worked:
Following the election, two different state-level pollsters acknowledged that they had not adjusted for education and conducted their own post-hoc analysis to examine what difference that would have made in their estimates. Both pollsters found that adjusting for education would have meaningfully improved their poll’s accuracy by reducing over-statement of Clinton support.
but also:
Despite this, it is not clear that adjusting to a more detailed education variable would have universally improved polls in 2016. Analysis of the effect from weighting by five education categories rather than three categories in four national polls (Appendix A.H) yielded an average change of less than 0.4 percentage points in the vote estimates and no systematic improvement.
It's worth noting that this had never been the case in previous elections (which is why education weighting wasn't standard practice).
> I'm simply asking a question about whether the pollsters have corrected their methodologies. There was clearly something wrong in 2016 when they repeatedly failed to predict the GOP primaries
Polls don't predict anything. Poll-based forecasts are different than polls, and done by different people.
The major polls conducted close to the 2016 election were well within the MoE of the actual results, so as far as the polls themselves go, there is very little evidence that anything was wrong.
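For reference, here's a minimal sketch of the standard 95% margin-of-error formula for a sample proportion, which is what "within the MoE" refers to; the sample size below is illustrative, not from any specific 2016 poll:

```python
# Minimal sketch: the usual 95% margin of error for a sample proportion.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the ~95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical ~1,000-respondent poll at p = 0.5:
print(f"{margin_of_error(0.5, 1000):.3f}")  # ~0.031, i.e. about +/-3 points
```

So a top-line that misses by a point or two is inside a single poll's sampling error to begin with.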
Some of the poll-based forecasts made errors, like assuming state-level deviations from polling averages were independent rather than linked, which gave Trump a very small chance of victory; that doesn't accord with history, was called out by 538 before the election, and I doubt anyone is making that mistake again.
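To make the independence error concrete, here's a minimal Monte Carlo sketch (the leads and error sizes are illustrative, not 2016 numbers) showing how a shared national error component raises the chance of several narrowly-led states flipping together:

```python
# Minimal Monte Carlo sketch: if state polling errors are modeled as
# independent, a simultaneous upset in several states looks nearly
# impossible; add a shared national error component and it doesn't.
# Leads and error sizes are illustrative, not 2016 data.
import random

random.seed(0)

STATE_LEADS = [0.01, 0.01, 0.01]  # hypothetical 1-point leads in 3 states
STATE_SD = 0.02                   # per-state polling noise (std dev)
TRIALS = 50_000

def upset_probability(shared_sd: float) -> float:
    """P(all three leads flip) with errors = shared component + state noise."""
    upsets = 0
    for _ in range(TRIALS):
        shared = random.gauss(0, shared_sd)
        if all(lead + shared + random.gauss(0, STATE_SD) < 0
               for lead in STATE_LEADS):
            upsets += 1
    return upsets / TRIALS

print(f"independent errors: {upset_probability(0.00):.3f}")  # small
print(f"correlated errors:  {upset_probability(0.02):.3f}")  # several times larger
```

The same average polling miss is far more dangerous to a forecast when it hits every close state at once, which is exactly what a shared demographic error does.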
Semantic nitpicking. Clearly, my comment was about how well the polls reflected reality.
> Some of the poll-based forecasts made errors, like...
You're making a different statement than the person I'm responding to, who said that the polls didn't align with reality because they did not weight for education. That is something that is easy to verify. I was simply asking if it was verified by showing that weighting by education made the 2016 polls align with reality.
> I was simply asking if it was verified by showing that weighting by education made the 2016 polls align with reality.
To try and answer your technical question: yes and no.
I know why you think this would be easy to do. But this is harder to do than you're thinking because the problem with education was both weighting and sampling.
So, yes, you can play with the weights and get a corrected result. (That's literally just tautological though -- of course in a parameterized model you can wiggle weights around to give the perfect result when you already know what the result should be...)
But, no, that doesn't tell you about what the effect of increasing the sample of these voters would be, and improving sampling is the only real way to get closer to ground truth.
So, some pollsters are now sampling larger numbers of less-educated people and also weighting those responses differently. But no, we can't know what the combined effect of those two things would have been in 2016. Why? Because we can wiggle the weights post hoc but can't go back in time and resample. And the question about correcting without resampling is kind of uninteresting, because getting the correct result just by wiggling the weights is, as above, a mathematical tautology.
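To illustrate the tautology: with one known outcome and one free weight, a perfect post-hoc "fit" is guaranteed by algebra, not insight. A minimal sketch with made-up numbers:

```python
# Minimal sketch of the post-hoc "fit": one known outcome, one free
# weight, so an exact match is guaranteed. All numbers are made up.
known = 0.48             # the outcome we want to reproduce after the fact
s_nc, s_c = 0.45, 0.55   # sample shares: no-college, college
p_nc, p_c = 0.40, 0.60   # raw candidate support within each group

# Weighted estimate with no-college upweighted by w (college fixed at 1):
#   (s_nc*w*p_nc + s_c*p_c) / (s_nc*w + s_c) = known
# One equation, one unknown -- solve for w:
w = (s_c * (p_c - known)) / (s_nc * (known - p_nc))
print(f"w = {w:.3f}")    # ~1.833: a perfect "fit" was never in doubt
```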
The weights can be computed by comparing how often different educational levels appear in the samples vs. the population. They should not be wiggled in order to get a particular outcome.
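That is, the weight for each group is just the ratio of its population share (e.g. from Census figures) to its sample share; the election outcome never enters the calculation. A minimal sketch with made-up proportions:

```python
# Minimal sketch of post-stratification weighting: each group's weight
# is its population share divided by its sample share. Proportions here
# are made up for illustration; real margins would come from the Census.
population_share = {"no_college": 0.60, "college": 0.40}
sample_share = {"no_college": 0.45, "college": 0.55}

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # no_college ~1.33 (underrepresented), college ~0.73

# Effect on the top-line if 40% of no-college and 60% of college
# respondents back candidate A:
support = {"no_college": 0.40, "college": 0.60}
raw = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(f"raw: {raw:.3f}  weighted: {weighted:.3f}")  # 0.510 vs 0.480
```

Unlike the outcome-fitting in the previous sketch, these weights depend only on sample vs. population composition, so applying them to 2016 data is a legitimate test rather than a tautology.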
nl linked to a paper showing that the analysis I asked for was done and did not have the effect that newacct583 claimed it would have.
> You're making a different statement than the person I'm responding to, who said that the polls didn't align with reality because they did not weight for education.
Yes, I'm disagreeing; the polls in 2016 reflected the actual outcome very well (they were off, but not unusually far; the entire idea that there was an unusual error is based on overemphasis of fairly new poll-based predictors, especially one at the NYTimes, that painted an extremely high probability of a Clinton win based on a flawed prediction methodology which was separate from any problem with the polls.)
> I was simply asking if it was verified by showing that weighting by education made the 2016 polls align with reality.
If you want to fit a particular single result you can change how you handle any factor and get a perfect match. To validate the adjustment, you'd need to test it against a different set of real data than the data you used to derive it, not the same single result.
> If you want to fit a particular single result you can change how you handle any factor and get a perfect match.
That's not how it works. Weighting education samples to match the population they were sampled from is not fitting a particular result.
> Yes, I'm disagreeing
Then your point is irrelevant to the question I was asking. newacct583 said there was a particular problem with the polls, and I pointed out that if the polls had that particular problem, there is an easy way to check. I asked if that check had been done.
Uh... polls reliably predicted Trump's win in the primaries long before anyone in the pundit class believed it. For months and months it was explained away as simple name recognition. But he was ahead the whole time.
Honestly, you're misremembering this. Polls were not wrong in 2016 like you think they were. Just browse through the article history at 538, they really are the best source for this stuff.
You yourself said the polls were wrong because they didn't weight for education. Now you say they weren't. Are you walking back your original statement? That statement is what this whole discussion is about.
To be clear: that's a Republican pollster giving a message to Republican-dominated media, which was quite clearly intended to be encouraging to the overwhelmingly Republican audience.
Luntz is a data guy. He believes polls; they're what he does. If he didn't believe the polls he would have said so. Instead he said that if they were wrong then polling must not work, which is what his audience wants to hear.