I know I just raised this issue, but a new way to speak to it came to mind.
The August 12th poll by Ipsos predicts that 77% of the voters in the City of Victoria will cast a ballot in the November referendum on borrowing for the Johnson Street Bridge replacement project. This astonishing result would be the highest turnout for an election in Victoria in more than a generation. Just under 50,000 people are predicted to vote in this referendum.
Based on how they state their results, this figure is accurate to within plus or minus 5.3 percentage points, 99 times out of 100. In other words, they are predicting that at least roughly 72% of the electorate will vote.
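For anyone who wants to check the arithmetic, here is a rough back-of-envelope sketch in Python using the standard margin-of-error formula for a proportion and assuming a simple random sample. The implied sample sizes are my own inference from the stated figures, not numbers from the poll release.

```python
# Back-of-envelope check of the figures quoted above, assuming a simple
# random sample. The release gives 77% +/- 5.3 points, 99 times out of 100.
p = 0.77          # reported turnout intention
moe = 0.053       # reported margin of error
z_99 = 2.576      # z-score for a 99% confidence level

# Lower bound of the stated interval: roughly 72%, as claimed above.
print(f"lower bound on predicted turnout: {p - moe:.1%}")  # ~71.7%

# Implied sample size if +/-5.3 points really is a 99% interval. This is an
# inference, not a number from the poll release; pollsters usually quote the
# worst case at p = 0.5, so both assumptions are shown.
for p_assumed in (0.77, 0.50):
    n = (z_99 / moe) ** 2 * p_assumed * (1 - p_assumed)
    print(f"implied n at p = {p_assumed:.2f}: about {n:.0f} respondents")
```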
Now, if anyone believes this is an accurate prediction of how many people will vote, I will happily lay a bet with you that the actual turnout will not fall within that 99% confidence interval.
So if we can all agree that this one result is completely and totally wrong, and it is the only one we can check against a real-world baseline, how can we assume that all the other results, for which we have no baseline at all, are accurate?
Either you accept that all the results are correct or you have to assume all of them are wrong. Picking and choosing is not a valid option. You can do it, but then you can pick any result you like and claim to believe it. The survey is either scientifically sound or it is not. I contend that it is fundamentally flawed.
2 comments:
You are wrong, Bernard. Kyle Braid from Ipsos Reid explained this phenomenon earlier this Spring when the first poll was released.
They ask this question all the time in political polls and get back the same predictable answers. They know it's skewed high because people want to appear cooperative and interest in voting wanes over time.
The other anomaly Braid mentioned for that poll (but not the latest one) was the under-represented youth sample. Those respondents were weighted more heavily, so the accuracy goes down somewhat, but not so much that they felt the results were unrepresentative.
Ipsos (and the other major pollsters like Mustel etc.) predict election results, often with astonishing accuracy, within the margin of error.
Now, you can fairly criticize the poll questions themselves and how the City interprets the results, but that's another topic.
I have heard similar answers from pollsters, and they are scientifically and statistically unsound. If you are getting a result that is not what is expected, you have to be able to explain why.
We are all certain that the numbers for voter turnout are wrong, and there are some lame attempts to explain why, but there is no attempt to explain why the rest of the answers are skewed (or why they should be considered not to be skewed). The one answer we can actually test against reality tends to be inaccurate, so what data is there to show that the other answers are anywhere close to correct?
Pollsters do weighting but rarely seem to account for it in the stated margin of error.
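There is a standard way to measure how much weighting costs you: Kish's effective sample size. The sketch below is purely illustrative; the weights and sample sizes are invented for the example, not the ones Ipsos actually used.

```python
# Why weighting widens the real margin of error: Kish's effective sample
# size. All weights and counts here are hypothetical.
import math

def kish_neff(weights):
    """Effective sample size after weighting: (sum w)^2 / sum(w^2)."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

def moe_95(p, n):
    """95% margin of error for a proportion with sample size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Hypothetical sample: 500 respondents at weight 1.0, plus 100 under-sampled
# younger respondents weighted up by a factor of 3 to match the population.
weights = [1.0] * 500 + [3.0] * 100

n_raw = len(weights)
n_eff = kish_neff(weights)

print(f"raw n = {n_raw}, effective n = {n_eff:.0f}")
print(f"quoted MOE (raw n):     +/-{moe_95(0.5, n_raw):.1%}")   # ~4.0%
print(f"honest MOE (effective): +/-{moe_95(0.5, n_eff):.1%}")   # ~4.6%
```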
As to the accuracy of polls versus elections, they actually tend not to be that close at all. In 2008 only a single polling company came close to accurately measuring the results on election day, that is, within its 95% confidence interval. All the companies should have been within range of the election results, but they were not. In 2006, once again only one company was within the margin of error, and it was a different one than in 2008.
It is more common for the pollsters to fall outside the 95% confidence interval in an election. In fact, the pollsters should be within a very small range of each other and of the final result; we are talking about tenths of a percentage point in federal elections.
If the methodology is correct, the pollsters should be right in every election with only one anomalous poll every few elections.
So why are the pollsters wrong much more often than they are correct? Because they give the opinions of people who will not vote the same weight as the opinions of people who will. Federally this is about 1/4 to 1/3 of the respondents. Taking responses from these people skews the results in a largely unpredictable way. If we reduce the sample size to reflect the people who will actually vote, and then add an error factor for the people who are not answering honestly, we get a margin of error that captures all of the polling results.
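As a rough illustration of that argument, here is what happens to the margin of error if only the likely voters in a hypothetical 1,000-person sample are counted, with an arbitrary extra allowance tacked on for dishonest answers. None of these numbers come from an actual poll, and the honesty allowance is a made-up adjustment, not a standard formula.

```python
# Sketch of the likely-voter argument: if only likely voters count, the
# usable sample shrinks and the margin of error grows. All numbers are
# illustrative assumptions.
import math

def moe_95(p, n):
    """95% margin of error for a proportion with sample size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

n_total = 1000            # hypothetical poll sample
nonvoter_share = 0.30     # ~1/4 to 1/3 of respondents, per the comment above
n_likely = int(n_total * (1 - nonvoter_share))

print(f"quoted MOE (all respondents): +/-{moe_95(0.5, n_total):.1%}")   # ~3.1%
print(f"MOE for likely voters only:   +/-{moe_95(0.5, n_likely):.1%}")  # ~3.7%

# Flat extra allowance for dishonest answers, chosen purely for illustration.
dishonesty_allowance = 0.02
print(f"with a 2-point honesty allowance: "
      f"+/-{moe_95(0.5, n_likely) + dishonesty_allowance:.1%}")
```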
In the case of the JSB poll, the number of people who did not answer honestly is even higher than in the federal polls. The error introduced by false answers is so large that it creates a margin of error that makes any and all of the answers meaningless.