Friday, January 11, 2008

CT Discusses Iraq Numbers

I'm going to go without links on this because I'm lazy, but in October 2004 (roughly), a statistical study of the number of "excess deaths" (a term of art that, I'm learning, may actually explain some of the discrepancy) that had occurred in Iraq because of the invasion appeared in the British medical journal The Lancet. Note, this was literally weeks before the US presidential election, and the journal's editor stated that he rushed it to print because the findings would/should/could have an impact on American voters' preferences between Kerry and Bush. That article (now called "Lancet 1" or L1, to distinguish it from a second study done two years later, called L2) estimated roughly 100,000 excess deaths since the invasion; the 2006 follow-up, L2, put the figure at about 650,000. To give you a sense of the magnitude, the only other data available about mortality was something called the Iraq Body Count, a running tally of reported deaths that amounted to a small fraction of either estimate (around 40,000 as of 2006). So, big difference.

When the first study was published in The Lancet, the proverbial shit hit the fan, at least on economics and policy blogs like Crooked Timber and Deltoid, not to mention ones right of center, which basically immediately dismissed the numbers as too high. You could see what appeared to be people choosing sides, too, based entirely on their prior beliefs about the relative justice of the war. If you favored the war, you thought the numbers were too high and that something was horribly wrong with the survey's methodology, even if you knew nothing whatsoever about survey methodology or statistics, mind you. If you opposed the war, you were absolutely convinced the study was right. And if you favored the war but weren't convinced the study was flawed, you couldn't make sense of it at all, because so many different arguments about the methodology were flying around, and I am no methodologian (is that a word? It is now).

This week, another study was published in the New England Journal of Medicine. It used different data, a different survey instrument, and a different sampling strategy. Which is better is totally beyond my area of expertise, but the point is, it finds the number of "violent deaths" (note the difference) from 2003 to 2006 was about 150,000. Again, the proverbial fecal matter is being slung so hard at the fan that it's passing through the blades and hitting the wall on the other side. You can read the article here. You can read some blogging opinions here, here, here, here, and here. If you really want to get that feeling - you know, the one where you want to stab yourself in the eyes with pencils - then feel free to wade through the hundreds of comments left on each of those blogs. You know the phrase "more heat than light"? L1, L2, and now the new IFHS study (as it's being called) have tended to generate a blazing furnace of heat, and very little light about what we actually know about mortality in Iraq because of the invasion.
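Since that "note the difference" carries a lot of weight, it's worth making "excess deaths" versus "violent deaths" concrete: excess deaths compare all deaths, from any cause, against a pre-invasion baseline, while violent deaths count only those directly caused by violence, so the two kinds of estimates are not measuring the same thing. Here is a minimal back-of-the-envelope sketch of that arithmetic; every number in it is invented for illustration and is not taken from L1, L2, the IFHS, or IBC.

```python
# Back-of-the-envelope illustration of "excess deaths" vs. "violent deaths".
# All numbers below are hypothetical, chosen only to show the arithmetic.

population = 26_000_000      # illustrative population
years = 3.25                 # length of the post-invasion study window

baseline_rate = 5.5 / 1000   # hypothetical pre-invasion crude death rate (per person-year)
postwar_rate = 7.5 / 1000    # hypothetical post-invasion crude death rate (per person-year)
violent_share = 0.6          # hypothetical share of the increase due directly to violence

person_years = population * years
expected_deaths = baseline_rate * person_years     # deaths we'd expect with no invasion
observed_deaths = postwar_rate * person_years      # deaths a survey actually estimates

excess_deaths = observed_deaths - expected_deaths  # all-cause increase over baseline
violent_deaths = violent_share * excess_deaths     # only the directly violent portion

print(f"expected (baseline):   {expected_deaths:,.0f}")
print(f"observed (post-war):   {observed_deaths:,.0f}")
print(f"excess (all causes):   {excess_deaths:,.0f}")
print(f"violent deaths only:   {violent_deaths:,.0f}")
```

The only point of the sketch is that an excess-deaths figure will generally sit above a violent-deaths figure drawn from the same underlying data, so some of the gap between the headline numbers is definitional rather than a contradiction.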

So, I thought it was worth pointing you to this post at CT as a good place to take a break and hear some comparisons made between the studies. At the very least, I don't find myself wanting to stab myself in the eyes when I read it, and that's a big plus, all things considered. I do take issue, though, with this point he makes:
I’d add that to have been sceptical of Lancet 1 (when it was the high number) but not to have a word of criticism for this study (now that it isn’t the high number) goes really badly for the old credibility.
Not sure what to say here, except that this seems a bit extreme. I found the initial L1 study impossible to fathom. I wasn't saying it was wrong; I just couldn't understand how the Iraq Body Count could have that level of an undercounting bias. It's not that I don't believe IBC could undercount, but an undercount of that magnitude would practically qualify as fraud. And maybe it was fraud on the part of whoever collected the IBC data, but the point is, not knowing anything else, the number just seemed inflated. Call it having priors. Plus, the paper was hurried through peer review for what seemed like political purposes - that is, to influence the 2004 election. That's just plain weird, and very hard to look past for someone trying to be objective. So you take a number that feels too big, a methodology you don't understand, and the appearance of political bias, you add in the intense disagreements, and, to say the least, you just don't yet feel like you can call those numbers fact. Then this paper comes out in a very credible medical journal - arguably one of the most selective - and does not appear to have been rushed through peer review. It finds a number far below the 650,000 figure, and it uses a methodology that would seem appropriate. Why should I be so incredulous towards it, exactly?
