Tuesday, January 25, 2011

Politi-Score: Any Way You Shake It

During some recent internet research on PolitiFact, I came across something that answered a question I had always had about my Politi-Score: has someone out there already done something similar? An article at Daily Kos, published in October 2009 by “cinematical” and entitled “Analysis of PolitiFact: GOP/Conservatives Significantly Less Honest,” had analyzed PolitiFact rulings and done essentially the same thing: counted up the rulings, assigned a score to each ruling (scaled according to True, Mostly True, etc.), added them together, and divided by the total number of rulings to get a “relative score.”

But the methodology was completely different from mine (in fact, almost the opposite), although it produced similar results and reached similar conclusions.

“Limitations.” When I initially did the scoring, I wanted a relatively large sample, because when sampling a population (according to what I learned in Statistics), the larger the sample, the more closely the results should resemble those of the population. And I didn’t have a lot of time to go through individual listings (PolitiFact’s Personalities page has over 800 individuals/groups/blogs/e-mails listed), so I only counted those who had a lot of rulings. My cutoff was to include only those with 5 or more rulings. Cinematical did the opposite: he excluded the scores of those who had 13 or more rulings. His reasoning was thus: “Inculsion [sic] of these individuals would overwhelmingly weight their honesty over the collective honesty of the group.”

One of the commenters to this article said it well:
The fact that you limited the incidents to 13 per input person kind of skews it too much towards the "be kind" side. Why? Because the people with the most input likely have the most media presence. Therefore their propensity for truthfulness or lack thereof has a greater impact. An evaluation of the totality might prove interesting.
As I was looking at this in terms of rulings rather than individuals, and was also initially looking at individual ruling averages, I don’t necessarily agree with this limitation. If you are looking at overall “truthiness,” and the politicians with a lot of rulings are typically the leaders who speak for the party, why would you want to exclude them? To me it would be critical TO include them. IMHO. As you know, since the time I did my initial data-gathering, I have compiled ALL PolitiFact rulings through the end of 2010. So, just for sh*ts and giggles, I cobbled this chart together using my larger database through 2010 and cinematical’s cutoff of 13, comparing the political “low-rollers” (those with fewer than 13 rulings) with the “high rollers” (those with 13 or more). Lo and behold, there may be some truth to cinematical’s contentions, particularly when it comes to statements ruled True:

Red is Republican, blue is Democrat; dotted lines are those with 13 or more rulings per "person."
A few statistics gleaned from my pile of data are of note here: of the 2,788 rulings, 488 individuals, organizations, blogs and e-mails received a single ruling each. A total of 1,173 rulings (out of 2,788) represented the 740 individuals, organizations, blogs and e-mails (out of 814 total) with five or fewer rulings. In other words, capturing the rulings of only 74 individuals, organizations, blogs and e-mails gave you the majority (58%) of the rulings, although with a slightly more honest “result.”
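The percentages above follow directly from the counts quoted in this post; a quick sketch confirms the arithmetic:

```python
# Figures from the post: totals for rulings and for rated "persons"
# (individuals, organizations, blogs and e-mails).
total_rulings = 2788
total_persons = 814
low_volume_rulings = 1173    # rulings belonging to persons with <= 5 rulings
low_volume_persons = 740

# The remaining "high-volume" persons account for the majority of rulings.
high_volume_persons = total_persons - low_volume_persons
high_volume_rulings = total_rulings - low_volume_rulings
share = high_volume_rulings / total_rulings

print(high_volume_persons)   # 74
print(high_volume_rulings)   # 1615
print(round(share * 100))    # 58
```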

Cinematical did include an intriguing “disclaimer” among his limitations:
The data is limited to the statements chosen out of the totality of American political discourse by PolitiFact for evaluation. Therefore, conclusions cannot extend to groups in their entirety; rather, all conclusions are based on the limited, subjective sample chosen by PolitiFact.
Limited and subjective? Oh my! I guess you might call it selection bias en masse.

Scoring: Cinematical also had a different method for computing the “Politi-Score.” I had calculated it according to a “school grade” system and then changed to a more intuitive type of grading called “Wide Measure”; I also included ALL rulings and scaled the grades so that honesty earned a higher score. Cinematical did the opposite here as well. First, he excluded all True ratings, saying, “As I don't believe that making a correct, truthful statement should positively effect [sic] the score (such should be expected from elected officials), I immediately dropped all ‘True’ evaluations.” Second, after throwing out the Trues, he assigned the scores as 1 for Mostly True, 2 for Half True, etc., so honesty earns a lower score. You can calculate it either way, as long as you use a sliding scale, so I have no problem with how he does it. His reasoning may be that the closer a score is to zero, the closer it is to being “thrown out” like a True ruling.
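The two scoring schemes can be sketched side by side (a hypothetical illustration; the exact weights each of us used may differ):

```python
# Scheme A (this blog's approach): include every ruling; more honest
# rulings earn HIGHER weights, like a school grade.
# Scheme B (cinematical's approach): drop "True" rulings entirely; the
# remaining rulings earn LOWER weights the more honest they are.

GRADE_UP = {"True": 5, "Mostly True": 4, "Half True": 3,
            "Barely True": 2, "False": 1, "Pants on Fire": 0}
GRADE_DOWN = {"Mostly True": 1, "Half True": 2,
              "Barely True": 3, "False": 4, "Pants on Fire": 5}

def score_include_trues(rulings):
    """Average over ALL rulings; higher = more honest."""
    return sum(GRADE_UP[r] for r in rulings) / len(rulings)

def score_drop_trues(rulings):
    """Average after discarding Trues; lower = more honest."""
    kept = [GRADE_DOWN[r] for r in rulings if r != "True"]
    return sum(kept) / len(kept) if kept else 0.0

sample = ["True", "Mostly True", "Half True", "False"]
print(score_include_trues(sample))  # (5+4+3+1)/4 = 3.25
print(score_drop_trues(sample))     # (1+2+4)/3, roughly 2.33
```

Either direction works because both preserve the same ordering along the sliding scale; only the interpretation of "high" versus "low" flips.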

While cinematical and I agree that a “score” of PolitiFact’s rulings should measure honesty to a calculated degree, I believe the Trues should be included, regardless of their “positive effect.” Statements ruled True are like any other ruling: if the statement is being investigated at all, the fact-checker is questioning its veracity, so I think a finding of True should count for something.

The “Frequency of Evaluations” charts gave me pause: cinematical did not explain how “Adjusted for number of evaluations” caused “the distribution” to “look like this” in the graphs he presented, and I wasn’t sure where he got his vertical-axis labels. I assumed this meant each ruling category as a percent of the total, which would also serve to “adjust” the numbers into a relative distribution. That must have been the answer, because using his numbers, here are our charts side by side:
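If “adjusted for number of evaluations” does mean each category as a percent of the group’s total, the adjustment is just a normalization (a sketch with made-up counts):

```python
# Normalize raw per-category counts into a relative distribution so that
# groups with different numbers of rulings can be compared on one chart.
# The counts below are made up for illustration.
counts = {"True": 18, "Mostly True": 14, "Half True": 22,
          "Barely True": 9, "False": 22, "Pants on Fire": 5}

total = sum(counts.values())
distribution = {cat: 100 * n / total for cat, n in counts.items()}

for cat, pct in distribution.items():
    print(f"{cat}: {pct:.1f}%")
```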

Everyone’s thinking is a little different on something like this, and since cinematical compiled his statistics in October, 2009, he was onto this long before me. However, with PolitiFact’s partnerships and the 2010 campaigning, many more rulings were added, which one might think would establish a “trend.”

In view of that, here is a chart of cinematical’s 2009 stats compared to the overall average from my total data through the end of 2010. It indicates a relatively solid consistency.


Conclusions: The Daily Kos is a “progressive” blog:
Daily Kos (pronounced /ˈkoʊs/) is an American political blog that publishes news and opinions from a progressive point of view. It functions as a discussion forum and group blog for a variety of netroots activists, whose efforts are primarily directed toward influencing and strengthening the Democratic Party….
And so this conclusion should come as no surprise (along with the use of the word "significant" to suggest major variances where there are none):
The conclusion of this analysis is unavoidable: based on the PolitiFact evaluations, Republicans and Conservatives are significantly more likely to make false or dishonest statements than Democrats and liberals, both as politicians and non-politicians. If I were to add in the "True" scores with a point value of 0, the conclusion would remain essentially the same – indeed, it would exaggerate the difference slightly…
But how can one genuinely reach this “unavoidable conclusion” given cinematical’s earlier disclaimer that “all conclusions are based on the limited, subjective sample chosen by PolitiFact”? What might it say about PolitiFact, both in how it makes its rulings and in how it selects statements? My conclusion has been, and remains, that Republicans and conservatives appear to make more dishonest statements, but that this may also be evidence of PolitiFact’s bias skewing the results more favorably toward Democrats and liberals. There is really no good way to tell, given the inherent subjectivity of every facet of the project PolitiFact has taken on: selecting statements, who writes them, who edits them, the research, the interpretations, the “framing” of the statement and the “underlying argument” (as its detractors often point out), the final ruling, and everything in between. Even blatant mistakes can point in one direction.

My research indicates that while both sides have complained about PolitiFact’s rulings (which some contend confirms PolitiFact’s impartiality), the right-wing has complained more vociferously. I will be taking a closer look at that in another post.

1 comment:

Unknown said...

Seems there's many trying the same approach
http://blog.lib.umn.edu/cspg/smartpolitics/2011/02/selection_bias_politifact_rate.php

The %age graphs are visually the best. Smartpolitics broke down 2010, maybe you can create a similar relational graph.
