This week, another study was published in the New England Journal of Medicine. It used different data, different survey instruments, and a different sampling strategy. Which is better is totally beyond my area of expertise, but the point is, they find the number of "violent deaths" (note the difference) from 2003 to 2006 was 150,000. Again, the proverbial fecal matter is being slung so hard at the fan that it's passing through the blades and hitting the wall on the other side. You can read the article here. You can read some blogging opinions here, here, here, here, and here. If you really want to get that feeling - you know the one where you want to stab yourself in the eyes with pencils - then feel free to wade through the hundreds of comments left on each of those blogs. You know the phrase "more heat than light"? The L1, L2, and now the new IHIS study (as it's being called) have tended to generate a blazing furnace of heat, and very little light on what we actually know about Iraq mortality caused by the invasion.
So, I thought it was worth pointing you to this post at CT as a good place to take a break and hear some comparisons made between the two studies. At the very least, I don't want to stab myself in the eyes when I read it, and that's a big plus, considering. I do take issue, though, with this point he makes:
I’d add that to have been sceptical of Lancet 1 (when it was the high number) but not to have a word of criticism for this study (now that it isn’t the high number) goes really badly for the old credibility.

Not sure what to say here, except that this seems a bit extreme. I found the initial L1 study impossible to fathom. I wasn't saying it was wrong; I just couldn't understand how the Iraqi Body Count could have that level of undercounting bias. It's not that I don't believe IBC could undercount, but the implied undercount was of an order of magnitude that would qualify as fraud. And maybe it was fraud on the part of whoever collected the IBC data, but the point is, not knowing anything else, the number just seemed inflated. Call it having priors. Plus, the paper was hurried through peer review for what seemed like political purposes - that is, to influence the 2004 election. That's just plain weird, and very hard to swallow for someone trying to be objective. So you take a number that feels too big, a methodology you don't understand, and the appearance of political bias, and you see the intense disagreements, and, to say the least, you just don't yet feel like you can call those numbers fact. Then this paper comes out in a very credible health journal - arguably one of the most selective health journals - and does not appear to have been rushed through peer review. It finds a lower number and also uses a methodology that seems appropriate. Why should I be so incredulous toward it, exactly?