I’m lazily reclining, with my iBook on my lap, enjoying a hot cup of gunpowder green, and looking forward to spending a sunny Sunday in the garden, pruning my raspberries and planting peas, when I surf on over to The Seattle Times and… ahhh shit! I see the headline: “State’s election accuracy called into question.” Looks like I’m going to have to waste my morning refuting another bullshit hack job.
Then I read Eric Pryne’s article, and his companion piece (“Idea of closer scrutiny met with mixed reaction”), and he’s actually done quite a good job explaining a rather complicated subject. (I don’t know under what headlines these articles appear in hard copy, but whoever edits the home page deserves a rhetorical beating.)
Pryne actually cites the authoritative research conducted by the Caltech/MIT Voting Technology Project, and while he doesn’t use the terminology, he discusses the two most common metrics for measuring the accuracy of elections, the “residual vote rate” and the “tabulation validation rate.” (Some of you may remember that these studies were the subject of a protracted pissing match between me and the Snark.)
The residual vote rate measures the percentage of ballots on which no vote was recorded in a top-of-the-ticket race like president or governor. Some of these “undervotes” are surely intentional, but since similar precincts using different voting technology can show dramatically different residual vote rates, it’s reasonable to conclude that some of the undervote is attributable to the voting technology itself, primarily the way the voter interacts with it.
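For the numerically inclined, here’s roughly how the residual vote rate falls out of precinct-level returns. This is just my own illustrative sketch, with invented numbers and names, not anything from the Caltech/MIT data:

```python
# A back-of-the-envelope sketch of the residual vote rate, using made-up
# precinct-level numbers (the data and field names here are hypothetical,
# not drawn from the Caltech/MIT study).

def residual_vote_rate(ballots_cast: int, votes_recorded: int) -> float:
    """Percentage of ballots with no valid vote recorded in a given race.

    Residual votes lump together undervotes (no choice recorded, whether
    intentional or not) and overvotes (too many choices, so none counts).
    """
    if ballots_cast <= 0:
        raise ValueError("ballots_cast must be positive")
    return 100.0 * (ballots_cast - votes_recorded) / ballots_cast

# Two hypothetical precincts, alike except for their voting equipment.
precincts = [
    ("Precinct A (punch card)", 1200, 1164),
    ("Precinct B (optical scan)", 1150, 1139),
]

for name, cast, recorded in precincts:
    print(f"{name}: residual vote rate = {residual_vote_rate(cast, recorded):.2f}%")
```

In this made-up example the two precincts are the same size and voting in the same race, but the punch-card precinct “loses” roughly three times as many votes as the optical-scan one; gaps like that are what researchers attribute to the technology rather than to voter intent.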
And as the researchers found, the numbers varied widely across the nation by ballot type.