
but his methods do lean towards being relatively unverifiable on a short timeline.

Perhaps. But I think a lot of his critics fall into the trap of assuming Silver believes he is offering some sort of revolutionary and perfect prediction, or doing some sort of groundbreaking science.

He's just trying to make a prediction based on all of the available information. If election predictions are inherently "relatively unverifiable", Nate Silver isn't capable of changing that - just working with what he is given (and doing a great job of explaining all of the ins and outs of this stuff to the layman).



In fact he says a good amount of this himself. I think some of his readers impute a greater level of "scientificness" to his numbers than he himself claims. He's had many posts throughout the fall explaining where his model rests on assumptions that could turn out to be incorrect, and where key parameters are fit from relatively limited data.

For example, an important one is how you translate current poll leads into the likelihood of winning on election day, i.e. why does an x% lead a week before the election give you a y% chance of winning? His method is to look at the empirical distribution of poll misses in the 11 elections from 1968 to 2008, make some normality assumptions, and use that to estimate the poll->results mapping, which serves as a single estimate of a whole bunch of miscellaneous sources of error (the likelihood that the polls are systematically biased this year, the likelihood of a last-minute change, etc.). But of course that's a small number of data points, and not IID ones either, all of which he acknowledges. All he really claims is that this model is a reasonable attempt to integrate the available data.
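The general idea can be sketched in a few lines. This is not Silver's actual model, just an illustration of the approach described above: treat the poll miss as roughly normal around the observed lead, with a standard deviation estimated from historical misses. The sigma value below is made up for illustration, not Silver's fitted parameter.

```python
from statistics import NormalDist

# Assumed standard deviation of final-week poll error, in percentage
# points. In Silver's case this would be estimated from the empirical
# poll misses of the 1968-2008 elections; 3.5 here is a placeholder.
POLL_MISS_SIGMA = 3.5

def win_probability(lead_points: float, sigma: float = POLL_MISS_SIGMA) -> float:
    """P(actual margin > 0), modeling the true margin as normally
    distributed around the current poll lead with the given sigma."""
    return 1.0 - NormalDist(mu=lead_points, sigma=sigma).cdf(0.0)
```

Under this toy model a tied race maps to a 50% chance, and a lead of one sigma maps to roughly an 84% chance, which is why modest-looking poll leads can translate into lopsided-looking probabilities.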



