As part of an analysis of PolitiFact’s (PF) “bias,” I made a list of possible items to consider. One was: is there a particular PolitiFact writer/researcher who is more biased, or more prone to errors favoring one side, than the others? I thought Bryan White’s Grading PolitiFact series might provide some insight into that. I also wondered, now that I had a population figure for PolitiFact rulings (2,788), whether Bryan had amassed enough grades to constitute a sufficient sample. And I was curious whether any other characteristics of his sample (who made the statements, etc.) revealed trends of interest.
So I went back to his blog and started adding up the rulings. He “officially” started his Grading PolitiFact series in 2009, coinciding with the beginning of Barack Obama’s presidential term. Counting two years forward from that, to January 20, 2011, I got a total of 97 rulings. By standard sampling arithmetic, he would need a sample of at least 142 rulings, all with the same finding of “bias,” to conclude at a 95% confidence level that the bias exists in the rest of the population. Bryan also “graded” PolitiFact in 2008, so there are additional rulings on top of his 97, but I would make an educated guess that he doesn’t have the 45 more needed to reach the 142 benchmark. We are also undone by an important (and perhaps the most crucial) requirement of sampling: the sample must be unbiased, and we know this one is already tainted, because Bryan openly admits to his conservative bias.
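For readers who want to check the 142 figure: the post does not state the margin of error used, but assuming the standard formula for estimating a proportion (z = 1.96 for 95% confidence, worst-case p = 0.5) with a ±8% margin of error and a finite-population correction for N = 2,788 reproduces a required sample of roughly 142. The margin-of-error value is my assumption, chosen because it makes the arithmetic come out to the number cited.

```python
import math

def sample_size(population, z=1.96, margin=0.08, p=0.5):
    """Required sample size for estimating a proportion.

    Uses the standard (Cochran-style) formula for an infinite population,
    then applies the finite-population correction. The margin value of
    0.08 is an assumption, not stated in the original post.
    """
    # Infinite-population sample size: n0 = z^2 * p * (1 - p) / e^2
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    # Finite-population correction: n = n0 / (1 + (n0 - 1) / N)
    return n0 / (1 + (n0 - 1) / population)

n = sample_size(2788)
print(round(n))  # 142
```

With a stricter ±5% margin the requirement rises to well over 300, so the 97 rulings counted fall short under any reasonable choice of margin.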
Bryan predominantly selected Republican rulings for analysis (60%), with the highly inflammatory Michele Bachmann and similarly noisy, opinionated pundits like Glenn Beck and Rush Limbaugh topping his list. His choice of Democrats follows a pattern similar to PolitiFact’s own, with Barack Obama receiving roughly one third of the Democratic ruling analyses.
Who’s the most slanted? It would be pointless to average or otherwise analyze Bryan’s grades, since almost 97% were D+ or lower (87% were “flunked” with an F or lower). Bill Adair received the most F’s, but as the talking “head” of PolitiFact, and chiefly an editor rather than a writer or researcher, he warrants less consideration. Among the writers/researchers graded, Robert Farley and Louis Jacobson tied for the top spot at 23 grades each; Louis Jacobson, however, never received a grade higher than F. Of course, Farley, Jacobson, and Drobnic-Holan are the three main writers of PolitiFact (National and Florida) rulings.
[Image: More recent F grades from Bryan, this time dated; note the comments, one calling the fact check “awful.”]
But his selection of rulings and grades also points to a much more obvious conclusion: his data cannot readily serve as actual proof of PolitiFact bias. A single large, biased sample of ruling critiques does not make, as he has claimed, a “preponderance of anecdotal evidence.” It would take a statistically relevant number of OBJECTIVE analyses with across-the-board agreement on the bias: in other words, agreement between Conservatives and Liberals and, perhaps even more essential, agreement on the “bias” from a third, moderate or independent appraisal.
My blog began as criticisms of Bryan’s critiques. Many of his criticisms boiled down to “judgment calls” dependent on the reader’s beliefs: in other words, on his bias or the reader’s bias. I also started a series called “Grading PF Liberal Style” (although I did not assign grades, which I feel are done only for the drama of looking “official”), which showed that Liberals might have reason to feel PF was conservatively biased, undercutting Bryan’s position. I did agree with his findings in some cases, roughly 25% of the time (probably closer to 20%, but I will be “charitable” here); those 25% are the cases that could be considered “objective.” However, my “Liberal Style” gradings would also have to be assessed, and they would offset those cases as well.
So, if 75% of his rulings are most likely not objective, and the remaining 25% can possibly be offset by Liberal claims of Conservative bias by PolitiFact, where does that leave us? Through the free web-tracking service StatCounter.com, Bryan undoubtedly discovered that I was tallying his “Grading PolitiFact” series, and he added this to the end of one of his critiques (one more on Michele Bachmann):
Unlike PolitiFact, I will remind readers that totaling my grades for individual PolitiFact staffers tells you little more than how I have graded them. I evaluate stories that I expect will exhibit problems, so selection bias colors the grade averages. Don't waste time playing statistics with the grades if it's not to learn something about me.
Well, Bryan should know his grades mean nothing. But here we have one more admission of “bias.” And that color of bias is what gives his efforts a flunking grade. And he tells ME to go on sabbatical?