There are some who claim that my Politi-Score is far too generalizing, and there’s some truth to that, particularly from an individual viewpoint. Bryan White recently wrote that, unfortunately, PolitiFact (PF) does not include a “Disclaimer: These ratings should not be taken as a reliable guide to the subject's truthfulness.” Perhaps I should include that disclaimer with my Politi-Score. As Bryan stated:
Many are tempted to look at the collected data and draw ill-founded conclusions, perhaps like "Kay Bailey Hutchison is just as likely to lie as to tell the truth!"
If this is true, then I would point out that Obama had received 268 PF rulings as of November 1, the largest number of anyone, and my Politi-Score says the same about him: he averages just over Half True, which means that he, like Kay Bailey Hutchison, is just as likely to lie as to tell the truth. It seems the higher the number of rulings an individual receives, the more the averaged Truth-o-meter rating (the Politi-Score) becomes a generalizing, middling, gray, almost equivocating measure that ultimately doesn’t tell you much.
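To make the averaging concrete, here is a minimal sketch of how a Politi-Score might be computed. The 0-to-5 numeric mapping of the Truth-o-meter ratings below is my own assumption for illustration, not a published scale, and the sample record is hypothetical:

```python
# Assumed numeric mapping of Truth-o-meter ratings (0-5 scale is illustrative).
RATING_VALUES = {
    "True": 5,
    "Mostly True": 4,
    "Half True": 3,
    "Barely True": 2,
    "False": 1,
    "Pants on Fire": 0,
}

def politi_score(rulings):
    """Average the numeric values of a list of Truth-o-meter rulings."""
    return sum(RATING_VALUES[r] for r in rulings) / len(rulings)

# Hypothetical record: a mix of rulings pulls the average toward Half True (3).
sample = ["True", "Half True", "Mostly True", "Barely True", "Half True", "False"]
print(round(politi_score(sample), 2))  # prints 3.0
```

As the sketch suggests, the more rulings that pile up on both sides of the middle, the more the average drifts toward Half True regardless of the individual.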
On the other hand, I think an example supporting “a reliable guide to” individual truthfulness might be Bryan's favorite anecdotal example of PolitiFact’s bias, Michele Bachmann. While I have found that PolitiFact has eased up on the "snark" factor he has so lamented since late 2009, she is still the biggest liar with pants on fire (she has the lowest Politi-Score) among individuals with a large number of rulings, whether by selection bias or not. Her statements are often re-confirmed as false or as misrepresentations by more objective sources such as Factcheck.org. Now I wonder, after looking at the infant state-by-state ruling compilations, whether Bachmann might get a fairer shake if there were a Republican- or Independent-leaning “PolitiFact Minnesota.” In the meantime, yes, she looks to be a “chronic liar,” although it doesn’t seem to bother her constituency, since she keeps getting re-elected.
My recent compilation of PF rulings and their analysis did NOT look at individual truthfulness. However, the 414 rulings for the two-month period, as well as the 1,000 rulings compiled through August from the individual records at the PolitiFact website, do reveal some patterns in PolitiFact's rulings:
- They try to divide the number of rulings (without regard to their Truth-o-meter rating) as evenly as possible between Democrats and Republicans, with roughly 5-10% independents.
- As already noted, the more rulings an individual has, in general, the more “Half True” they get.
- In general, the parties also average around “Half True,” though they devolved toward “Barely True” during the 2010 campaign season, as Bill Adair recently noted in a PF article.
As for the Democrats appearing to consistently score higher on the Truth-o-meter, it seems that Bill Adair’s foray into PolitiFact state partnerships may take some of that edge off. I don’t know how the publication of the rulings is organized state by state, but it appears some of the states may be using different approaches to the rulings. For example, the high number of False rulings at PolitiFact Texas, along with an overall Politi-Score there favoring the Republicans, leads me to believe their approach is similar to Factcheck.org’s, in that they are looking at statements that can more easily be proven False rather than verifying statements across the full True-to-Pants-on-Fire spectrum. But I will only be able to verify that with continued data checking (or when Bill Adair decides he wants it “his” way?).
In my Part Four analysis I concluded with a Facebook comment by one Vic Pilkington, and Bryan White noted the same comment as well, with a different response:
“Did you know that 90% of the Politifact Pants-on-Fire and False statements come from the Right. It’s true, count them!” [Bryan quoting Pilkington] Pilkington's other comments discourage offering him the charitable interpretation that he feels he has discovered an indication of PolitiFact's selection bias. Mounting anecdotal and other evidences compound the criticism by showing a pattern of bias in PolitiFact's stories.
First off, the data show Pilkington’s statement is not true. Secondly, Bryan’s interpretation is not quite correct. While he has discovered some anecdotal evidence, it is only anecdotal, which, as a fallacy connoisseur, he would have to admit is itself considered fallacious.
Many of my reviews have found conflicts with this "anecdotal evidence" as well. So while waiting for the “other” patterns of PolitiFact bias that Bryan has yet to produce, I’ve come up with this, for what it’s worth. But it's too soon to pronounce it evidence of such bias.