I started blogging in May of 2004, and within my first two weeks I had already been driven near the point of aneurysm by our local media’s refusal to do simple math. Far from a recent obsession, my focus on numbers, and on the press’s failure to consistently present them accurately and in proper context, has been a recurring theme here on HA since the early days of the blog.
Yes, it’s true that even accurate numbers can mislead (“lies, damn lies and statistics” and all that), and so it’s not always easy to separate the truth from the facts. But what frustrates me most is when journalists simply regurgitate the numbers fed them, without ever bothering to run the equations for themselves.
That’s what happened with the early reports on R-71 signature verification, which created the false impression that the invalidation rate started off low, well below the maximum error threshold, only to rise steadily as the count continued. As a result, some R-71 backers now suspect foul play on the part of the Secretary of State’s office, accusing it of toughening the standards in an effort to keep the measure off the ballot, when in fact the projected invalidation rate has, from the very first batch, consistently remained in the 14.5 to 15 percent range, well above (statistically speaking) the 12.43 percent maximum.
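The arithmetic behind that maximum is not complicated, which is rather the point. A quick sketch, using illustrative signature totals chosen to reproduce the 12.43 percent figure cited above (not official counts):

```python
# Back-of-the-envelope check: given signatures submitted and signatures
# required, what error rate can a measure afford before it fails to
# qualify? The inputs below are illustrative, picked to match the
# 12.43 percent maximum discussed in the post, not official totals.

def max_invalidation_rate(submitted: int, required: int) -> float:
    """Highest share of signatures that can fail verification while
    still leaving enough valid signatures to qualify."""
    return 1 - required / submitted

rate = max_invalidation_rate(137_689, 120_577)
print(f"Maximum tolerable invalidation rate: {rate:.2%}")  # about 12.43%
```

Any projected invalidation rate comfortably above that line means the measure fails, no matter how “clean” an early raw count looks.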
Yes, I know, it was the SOS who initially reported a “clean” count, and who misleadingly juxtaposed the early raw rate against a supposed 14 percent cushion. But those numbers simply didn’t add up if you took the time to add, subtract, multiply and divide them; and even when I did the math for them, and showed my work, I was mostly ignored by reporters who apparently assumed the SOS had more credibility on these matters than some partisan blogger.
No, I’m not a statistician, and my formal math education never extended much beyond Algebra II & Trigonometry. But I know how to use a calculator, and I have some experience with the process stemming from the drama over Tim Eyman’s I-917: I knew that duplicate signatures always make up a significant portion of the total errors, and that the number of duplicates always increases at a predictable rate as the sample size is expanded. I also knew that total invalidation rates never fall outside a certain historical range, and that there was absolutely no reason to expect R-71 to be the exception. These facts were indisputable.
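The reason duplicates grow at a predictable rate is a birthday-problem effect: a duplicate pair is only detected once both of its signatures have been examined, so the count of duplicates found grows roughly with the square of the fraction checked. A minimal sketch of that extrapolation, with hypothetical numbers (not actual R-71 figures):

```python
# Sketch of duplicate extrapolation, under the standard assumption
# that detected duplicate pairs scale with the *square* of the
# fraction of signatures checked. All numbers are hypothetical.

def project_duplicates(dupes_found: int, checked: int, total: int) -> float:
    """Project the total number of duplicates from those found so far.

    If a fraction f = checked/total of the pile has been examined, a
    duplicate pair is detected only when both of its signatures fall
    in the checked portion (probability roughly f**2), so the observed
    count is scaled up by 1/f**2.
    """
    f = checked / total
    return dupes_found / (f ** 2)

# Hypothetical example: 20 duplicates found after checking 10% of a
# 137,000-signature pile projects to roughly 2,000 duplicates overall.
print(project_duplicates(20, 13_700, 137_000))
```

This is why an early raw invalidation rate always understates the final one: the duplicate share of the errors hasn’t fully surfaced yet.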
Darryl could run simulations showing a 92 percent chance of R-71 failing to qualify after the first batch, and a near 100 percent chance of failing thereafter, but I didn’t need a PhD in statistics to know what I knew. R-71’s failure was apparent from the very first batch, even if HA was the only site to report it. Okay, maybe my intuition, my expertise and my math weren’t enough to convince newspapers to write headlines declaring R-71’s failure, but they should have been enough to discourage headlines and ledes implying the opposite.
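For readers curious what a simulation like that looks like, here is a bare-bones Monte Carlo sketch of the general approach. To be clear, this is my own illustrative toy, not Darryl’s actual model, and every figure in it is a hypothetical placeholder rather than a real R-71 number:

```python
import random

def chance_of_qualifying(submitted, required, est_invalid_rate,
                         rate_uncertainty, trials=10_000, seed=1):
    """Estimate the probability a measure qualifies for the ballot.

    Each trial draws a plausible final invalidation rate from a normal
    distribution centered on the current estimate, then checks whether
    the surviving signatures still clear the requirement.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        rate = rng.gauss(est_invalid_rate, rate_uncertainty)
        valid = submitted * (1 - rate)
        if valid >= required:
            wins += 1
    return wins / trials

# Hypothetical inputs: 137,000 submitted vs. 120,000 required (about a
# 12.4% cushion), invalidation estimated at 14.7% plus or minus 1%.
p = chance_of_qualifying(137_000, 120_000, 0.147, 0.01)
print(f"Chance of qualifying: {p:.1%}")
```

With an estimated invalidation rate a couple of standard deviations above the cushion, the qualification probability collapses toward zero, which is exactly the shape of the result described above.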
While my complaints may come off as petty bitching at best, or gloating at worst, as I’ve written before, numbers do matter, especially when it comes to elections. Since the excruciatingly close gubernatorial election of 2004, and the highly contentious dispute that followed, public faith in the integrity of our electoral process has been undermined by hyperbolic, selective and downright erroneous reporting. And unfortunately, misleading reports like those we’ve seen regarding R-71 do absolutely nothing to restore public confidence.
It is ironic that a press corps so often cynical about government, and about the words and actions of government officials, can at the same time be so credulous about the numbers those same officials feed it. And it is an unfortunate disservice to the public as well.