Wednesday, February 8, 2012

Sidebar: Fear of Fact-Checking

Recently I got into a huff with my conservative counterpart on PolitiFact’s Facebook page. I was so frustrated that I ended up blocking him and deleting my comments in the thread, because his reaction convinced me it was useless to have any discussion at all.

In the comments under a Facebook link to an article about PolitiFact (PF) by Nate Silver at his New York Times blog FiveThirtyEight, I told Bryan White that in 2011 PolitiFact’s Truth Index for Republicans was more “truthier” in the context of local rulings, as well as advocacy groups and PACs. I thought I might get a partisan reaction of (in so many words) “of course Republicans are more truthful, why are you telling me, you idiot!” or “This proves their inherent liberal bias (you idiot!).” Instead his response was, as I recall, a summary dismissal that “it didn’t follow (you idiot!).”

I now realize I’m getting an answer from him in this Jeff Dyberg post at PolitiFact Bias. The fact is he didn’t want to hear this conclusion. He wants to portray Republicans as the victims of PolitiFact’s bias. He wants others to believe the Truth Index is “shtick” and “junk,” and therefore all those stats must be irrelevant and bad, whether they favor the Republicans or not. In fact, he doesn’t want to hear that in certain circumstances they favor the Republicans, because it goes against what he’s trying to do: if the Truth Index at a local level says Republicans are truthier, it belies the notion that “PolitiFact's ratings provides far more information about the ideological bias of PolitiFact's editors.” But that ideological bias must be to the left, or his “findings” go out the window.

Furthermore, because I’ve done a lot of data manipulation of the PF Truth-o-Meter stats (along with others like Steve at Quibbling Potatoes), it would make sense to trash me. I’ve noticed that when he comes across the same sort of data analysis by others, he notes it in his blog; although I’ve done probably the most extensive analysis of the Truth-o-Meter database, I get ignored (apart from an occasional peek revealed every now and then by my StatCounter account, unless of course he uses a proxy server). I guess he doesn’t like my criticisms of his Grading PolitiFact series (which he also trashes in a separate blog).

As I recall, he kept asking for a conclusion based on the data and would neither recognize nor accept the conclusions I had given him. So I tried to soft-pedal it, saying these conclusions could only be taken with a grain of salt, at which point he accused me of flip-flopping. Well, damned if I do and damned if I don’t, so to hell with him. It was clearly a waste of time.

Yes, there probably is a certain amount of selection bias, to which I say SO WHAT. As I've heard before, if a politician says “The sky is blue” (and it could be Mitt Romney or Barack Obama), PolitiFact doesn’t check it out. That’s because the bias is not to check out statements that are obviously true, impossible to verify, or simply insignificant. Neither Bryan nor Jeff has ever satisfactorily answered why other fact-checkers, the great majority of the time, select many of the same statements and reach the same conclusions. In other words, their entire argument relies on this selection bias claim. To me it’s like a person who has a fear of flying: PF is the jet, but PF Bias wants you to use their car because they have evidence of a few plane crashes, even though statistically you have a far greater chance of being in a car crash than a plane crash.

Bryan and Jeff complain that there’s a “vast ocean” of rulings aside from what PF has selected. But among close to 5,000 PolitiFact Truth-o-Meter rulings, they keep repeating the few anecdotal exceptions they’ve “uncovered” thus far… so why don’t they have anything new, or a larger series? Jeff repeats these ten or so examples: that’s two tenths of one percent of everything PF has subjected to the Truth-o-Meter during its existence. Does that make PF misleading? Bryan White has done somewhere around 200 or so “Grading PolitiFact” entries, tainted by his own bias, which he has admitted are weak as far as evidence goes: does that make PF misleading/biased?

Early in 2011 Bryan White claimed he would be publishing some important research on PolitiFact’s bias, and he made that claim again this year. If you go to the link on his PolitiFact Bias page called “Research” you get a “Coming Soon” with nothing else. At another blog, “Content In Reality”—hard on my eyes, but with lots of great material about PolitiFact Bias—Bryan White commented on January 16:
…I've been busy conducting a study that objectively verifies PolitiFact's ideological bias. I've got all the information for the national operation collected and sorted, and I'm in the process of readying it for publication
Okay, I’m still waiting!

Let’s take a look at recommendations for avoiding sampling bias for researchers and see how it can be applied to PolitiFact's rulings:

1. Does PolitiFact “determine what types of bias might impair or compromise its research into the rulings it makes and take measures to prevent it or fend it off”? I’m going to give Bryan a little insight into Bill Adair that he might already be aware of: I know for a fact he occasionally reads Bryan’s blogs. Doesn’t Bryan think Adair has enough awareness of such bias and is of course trying to take measures to fend it off? In fact, the recent series that PF has just started, called “In Context,” makes no Truth-o-Meter judgment at all. My feeling is that this was meant to replace many of the “Half True” cases and those where the underlying subjectivity overwhelmed the ability to make an appropriate Truth-o-Meter call.

2. Understanding that it cannot be perfectly unbiased, does PolitiFact try to “include as many variables as possible”? Yes, and we see that PolitiFact Bias actually uses that against PF; Dyberg poked fun at the types of fact-checks it had done. I do it myself (“Dogs, Blogs and Groundhogs”), although I don’t scorn PolitiFact for it. PolitiFact is the only fact-checker that tackles web “facts” from blogs and chain e-mails. They do national fact-checks, and the state sites mainly concentrate on fact-checks of politicians representing their states, generally on local issues. They look at winning candidates’ political promises (Obameter, etc.); they look at flip-flops (Flip-o-Meter). Remember, my brouhaha with Bryan revolved around finding that Republicans scored higher on the Truth Index on local issues.

3. “Is there a large enough number in the sample? Larger and more varied samples will reduce omission and over-inclusion biases.” So now, as we approach the 5,000 mark for number of rulings, is that sufficient? If PF is around roughly another two and a half years, there will be 10,000. Ten thousand! How much is enough? I have often said that fact-checks are not a reliable indicator of individual truthfulness, which I call the “micro” level, so in this respect Dyberg is right about the Truth Index apps. When I’ve graphed or listed individual comparisons, I set a minimum number of rulings per person. One of my most recent was at 10 rulings, but that is clearly not enough. Very few people, however, surpass whatever number of PF rulings might be deemed a sufficiently large sample. Obama has had the most rulings, at over 300; it’s interesting that his Truth Index is very close to the overall average of a little over Half True (13). It seems the more rulings on a person, the more they slide toward the middle (come within the standard deviation) of the Truth Index.
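The kind of comparison described above can be sketched in a few lines. This is a hypothetical illustration only: the numeric scale assigned to each Truth-o-Meter rating, the function name, and the sample data are all my own assumptions, not PolitiFact’s actual methodology or numbers.

```python
from collections import defaultdict

# Hypothetical numeric weights for the Truth-o-Meter ratings;
# the real weights behind any "Truth Index" are not published here.
SCALE = {
    "True": 5, "Mostly True": 4, "Half True": 3,
    "Mostly False": 2, "False": 1, "Pants on Fire": 0,
}

def truth_index(rulings, min_rulings=10):
    """Average the scale value of each person's rulings, reporting
    only people with at least `min_rulings` rulings, since small
    samples are unreliable at the "micro" (individual) level."""
    by_person = defaultdict(list)
    for person, rating in rulings:
        by_person[person].append(SCALE[rating])
    return {p: sum(vals) / len(vals)
            for p, vals in by_person.items()
            if len(vals) >= min_rulings}

# Made-up sample data: only "A" clears the threshold of 3 rulings.
sample = [("A", "True"), ("A", "Half True"), ("A", "False"), ("B", "True")]
print(truth_index(sample, min_rulings=3))  # {'A': 3.0}
```

With a low threshold like this, one or two extreme rulings can swing a person’s index wildly, which is exactly why the minimum-rulings cutoff matters.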

I’ve never seen the app up close, but it should be noted that PolitiFact makes no distinction between Democrats and Republicans as a group. In fact, it seems they try to avoid it, letting readers come to their own conclusions. Making comparisons between Republicans and Democrats as a whole has never been promoted by PolitiFact.

As I’ve already noted, Dyberg says there’s a “vast ocean” of statements for PolitiFact to select. Other similar fact-checkers, such as FactCheck.Org and the Washington Post Fact Checker, check the same statements (and come to the same conclusion about 98% of the time) and cover many statements that PolitiFact does not (with their own “truth indexes” averaging more negative than PolitiFact’s).

4. “Is there interview bias?” Sometimes Bryan White will complain about it when critiquing a ruling in his Grading PolitiFact series, but often there are a lot of interviews listed in the sources for the ruling, and many of them are not reported in the ruling or only partially. So this is something that can’t be totally validated.
[Image: WaPo's Fact-Checker Summary shows Bachmann as an "outlier" (outliar?) as well.]

5. “Are outlying results given the appropriate attention?” I’m not sure what “outlying results” would be. It could be conflicts in rulings or a disproportion between Republicans and Democrats for a given variable. For state governors I counted 465 rulings, of which only 25 were for Democrats. But that disproportion is not really evidence of bias: the Republican governors (think PF’s Florida [Scott], Texas [Perry], Ohio [Kasich] and Wisconsin [Walker]) were not only greater in number but were making more news.

An outlier could even be a “rulee” like Michele Bachmann, who at the beginning of 2011 had one of the most “outlying” negative Truth Indexes due to receiving an unusually high number of Pants on Fire awards. More of her statements received attention, however, when she decided to become a presidential candidate, and as time went on her Truth Index started to improve, to the point where she was exceeded in “untruthiness” by presidential competitor Herman Cain and by congressional competitor Nancy Pelosi.

Dyberg provides anecdotal examples to support his PolitiFact bias contentions. When he discusses PolitiFact’s “selection bias” he states:
This "things we're curious about" method may explain why Senate hopeful Christine O'Donnell (Rep-RI) garnered four PolitiFact ratings, while the no less comical Senate hopeful Alvin Greene (Dem-SC) received none.
Jeff Dyberg must know there is no PolitiFact South Carolina in either Charleston or the capital, Columbia. As for PolitiFact National, I challenge him to find an interview with Greene in which he made a controversial or significant statement that deserved checking out. Yes, Greene was hardly on anyone’s radar, but that doesn’t make it a requirement that PolitiFact check out statements he made. It just seems Greene’s “quirky outsiderism and quixotic bid to defeat Sen. Jim DeMint made him something of a public hero/laughingstock,” especially to the snarky Dyberg when it came to the laughingstock part.

Then there’s this “source” explanation, which sounds suspiciously like a complaint of “unfairness” about who “gets hung” with the rating:
…when Barack Obama made the claim that preventative health care is an overall cost saver, and Republican David Brooks wrote a column explaining Obama is wrong; PolitiFact gave a True to Brooks. This spares Obama a demerit in his column* while granting Republicans an easy True. Another example of this source selection is evident in the rating about $16 muffins for a Department of Justice meeting. Despite the claim being made in an official Inspector General report and being repeated by several media outlets, including the New York Times and PolitiFact partners NPR and ABC news, PolitiFact hung a False rating around Bill O'Reilly's neck. PolitiFact refrained from judging the nominally liberal media outlets--and the source of the claim--all while burdening O'Reilly with a negative mark in his file.
I’ve already covered why Bill O’Reilly got the Mostly False: he made a big stink about it, and he repeated it. PF rarely rates media outlets or government offices (Dyberg pointed out to me that PF does rate “governments,” but could only show it at the local level). But this “we’ve got to spare Obama a False” and “oh boy, we can hang a Mostly False on O’Reilly” is pure, unadulterated conspiratorial whining.

As KnocksvilleE (Eric L.) at "Content in Reality" responded to Bryan's suggestion that the Truth Index app contain a disclaimer:
I would agree that a disclaimer would not be a bad idea when publishing these report cards. However, it isn't necessary. Practically every article written by Politifact makes it clear they are not randomly choosing statements by using statements like "our readers asked us to look into this" or "this statement caught our attention." In fact, it is unclear why a reasonable person would think the statements are actually selected randomly. What sense would it make for Politifact to select a bunch of random, mostly inconsequential statements. Given the fact that statements can't exactly be doled out into discrete sets and thrown into a hat for a reporter to randomly pull, what sense does it make for any reasonable to think this way?
Or, to put it another way, what is Bryan White’s reasonable alternative? I haven’t really seen it.

The last comment posted reminds me of something I’ve published before, although it adds some new insights:

In general, I am done dealing with this article. I give you well-thought out critiques and you respond with absurdities. I have broken down too many things barney style for you. And I have no doubt you can go on forever responding with these absurdities. I seriously feel sorry for anyone who wastes their time on this site. You may have a few good points in here but they are buried in a pile of logical fallacies and poorly researched articles. If you must still hold to the delusion that Politifact is liberally biased, it is no surprise. Delusions can be a hard thing to escape. Have fun single sourcing conservative sites with the cognitive dissonance needed to pretend they are no less trustworthy than serious neutral fact checking sites. This poisoning the well fallacy is the worst habit of republicans, but there are some so committed to the delusion, you cannot shake them. I hope whoever reads your article goes over our conversations and sees just how poor some of your work is, as well as the absurdities you hold onto just to continue to live in your delusions.
Yes, I do see how poor some of his work is... oh, and join the crowd.

2 comments:

patriotic vet said...

Karen, not to be too harsh here, I cannot believe you put up with bw & jd's nonsense for so long! You should get a medal for service "above and beyond the call of duty!"
Perhaps I should have said something many months ago, when I felt strongly as you do now (though without your exhaustive data in support).

Do you think they are (have been) paid for purposely trying to skew things?

And your epiphany warrants this next question: now what?

Karen S. said...

I must say I had an epiphany when I read Eric Levine's blog...a "breakthrough" of sorts. He used the expression "Gish Gallop," and if you search that term, it is clear this is the method Bryan White uses when he's on Facebook. My original theory was that, because he lived in the Tampa area, he was a disgruntled ex-employee of the Tampa Bay Times, and Lou Jacobson got the job he wanted (since he seems to single him out).

Yes, I'm retired, and I have trouble keeping up with Bryan; assuming he works, and given that he posts a lot of stuff in the middle of the night, he must either not sleep or....he's being paid to do this. As you know he won't reveal anything about himself, so it's almost as if he doesn't want anyone to know who he really is. For all we know he could be a night janitor (who failed to get a job at the Tampa Bay Times) somewhere in a closet with a laptop; or he's a professor who prefers that his students (and his school) not know what he's up to.

But if I decide to leave Bryan behind at a certain milestone (say 50 of his Grading PFs), I will continue to accumulate and gauge the PF rulings. There will be more sites (such as Steve's and Eric's) smacking down Bryan and Jeff.

By the way, Jeff (if you ask me) is just riding Bryan's coat tails. If they ARE paid, Bryan is full time and Jeff is part time.

Thanks again for your comments.

Post a Comment