THAT LIBERAL MEDIA notes the constant repetition of the debunked Lancet study claiming 100,000 civilian casualties in Iraq. Now that they can’t say the war was a failure, they’ll try to claim it wasn’t worth it.
UPDATE: Reader Dave Ujeio emails that it’s Fred Kaplan’s debunking that’s wrong:
Let me start off by saying I love your blog. Though I do not always agree with the ideas presented, they are always thought-provoking, and I appreciate that as truly rare in this day and age.
On a side note, I wanted to help a bit with the fact checking – we studied that piece in one of my courses. Slate has the statistical analysis of the piece wrong – though the confidence interval is 8,000-194,000, the median/mean in this case is actually far more likely to be true than either of the tails. These studies are conducted under the premise that the data fit a normal (bell-shaped) curve – imagine a mountain with low hills leading to a peak, then descending back to low hills. 8,000 and 194,000 are at the very ends of the tails, and are thus FAR more unlikely to occur than the instances in the middle of the curve. What is most likely, and in this study statistically significant at the 95% level, is that 101,000 civilians have died as a result of violence attributable to the war.

Another interesting part of the study is that though Fallujah came up in the sample, the authors purposely excluded it because it might bias the data in an unrepresentative way.
If my account of the study sounds wrong, please check with a statistics professor – I am admittedly a lowly grad student, and I only got an A- in that class. However, my understanding is that 8,000, and for that matter 194,000, would be extremely rare events were the study to be repeated 100 or 1,000 times. The most likely (and most likely to be true) count is approximately 100,000 at the time of the study (remember, excluding Fallujah).
I am not saying that the war isn’t worth it – I think the number of civilian casualties is lamentable, and that is something you and your readers can debate. I just wanted to let you know that the debunking piece is almost certainly wrong.
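For what it’s worth, here is a back-of-the-envelope sketch of the point the reader is making, assuming (as he does) a normal sampling distribution centered at 101,000 with the reported 8,000-194,000 interval as its 95% range. The numbers are illustrative only, and the interpretation itself is disputed in the updates below.

```python
import math

# Back-of-the-envelope sketch of the reader's argument, NOT the study's actual
# analysis: assume a normal sampling distribution centered at 101,000 whose
# 95% interval runs from 8,000 to 194,000, so the implied standard error is
# roughly (194,000 - 8,000) / (2 * 1.96), about 47,000.
center = 101_000
std_error = (194_000 - 8_000) / (2 * 1.96)

def normal_density(x, mean, sd):
    """Probability density of a normal distribution at x."""
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

peak = normal_density(center, center, std_error)
for x in (8_000, 101_000, 194_000):
    ratio = normal_density(x, center, std_error) / peak
    print(f"{x:>7,}: density is {ratio:.2f} of the peak")
```

Under that assumed curve, each endpoint of the interval carries only about 15% of the density of the midpoint, which is the reader’s point about the tails.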
I certainly don’t know, though I’m deeply skeptical of this sort of thing because so many of them (e.g., Marc Herold) have been wrong in the past. Meanwhile, reader Hugh Thorner nets out the analysis and pronounces the war a life-saver!
There’s no need to debunk the 100,000 civilian casualty figure being cited so often by war opponents. In progressive circles it’s an article of faith that pre-war sanctions killed 5000 Iraqis per month. Cost of the war two years later? 20,000 Iraqi civilians saved! And counting…
So there you are. And you should probably net out the number that Saddam was killing, too.
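Spelled out, the netting-out runs on the reader’s own assumed figures (5,000 sanctions deaths per month, roughly two years of war, and the Lancet headline number taken at face value):

```python
# The reader's netting-out, using his own assumed figures at face value.
sanctions_deaths_per_month = 5_000   # the pre-war "article of faith" figure
months_of_war = 24                   # roughly two years
lancet_estimate = 100_000            # the disputed study's headline number

expected_under_sanctions = sanctions_deaths_per_month * months_of_war  # 120,000
net_saved = expected_under_sanctions - lancet_estimate                 # 20,000
print(f"Implied net lives saved: {net_saved:,}")
```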
ANOTHER UPDATE: Debunking the debunking of the debunking:
Sorry to burst that grad student’s bubble, but there are a few problems with his debunking of the debunking of the Lancet article.
1) The distribution of probable dead is not normal. It more likely resembles a Poisson distribution (a toy sketch of this point follows after this email).
2) The study’s 95% confidence range covers so much of the possible range that the distribution is nearly flat (at least relatively speaking).
3) Even if the statistics were acceptable, there are serious questions about the sampling, as pointed out in the original debunking.
4) The author of the original study is known to have biases related to the research.
Aron S. Spencer, Ph.D.
Assistant Professor, School of Management
New Jersey Institute of Technology
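Here is the toy sketch referenced in point (1), using a made-up per-cluster rate rather than anything from the study: for count data with a small expected value, a Poisson distribution is right-skewed rather than the symmetric bell curve the earlier email describes.

```python
import math

# Toy illustration only: the expected count per surveyed cluster is invented.
lam = 2.5  # hypothetical expected violent deaths per cluster

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Crude text histogram: note the hard floor at zero and the long right tail.
for k in range(10):
    bar = "#" * round(40 * poisson_pmf(k, lam))
    print(f"{k:2d} | {bar}")
```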
See, this is why I hate “studies” of this sort. Meanwhile, reader John Mattaboni wants more people to look at the numbers:
This needs to be debunked. It is as absurd as it is false. And unfortunately, the media are up to their old tricks: It’s being reported as “fact” on newscasts across the country.
Are we honestly to believe that twice as many non-combatants have died as a result of the liberation of Iraq as American combatants died in eight years of Vietnam? In a war designed and fought to minimize civilian casualties with things like GPS-guided bombs?
Please, you have the power to unleash the internet on this wholesale fabrication with a call to factual arms. This fraud cannot go unchallenged or in 30 days from now, it will simply be cited as irrefutable “fact” that “George Bush killed 100,000 Iraqis.”
Most people, of course, will either believe such statements because they want to, or assume that, like so many expert pronouncements from war opponents, this is just another lie.
YET ANOTHER UPDATE: Click “read more” for more.
Here’s another comment:
Please allow me first to congratulate you on providing the most addictive form of information I have found on the Internet, as well as one of the primary reasons my dissertation is progressing so slowly :).
Regarding the update about the confidence interval for the 100,000 casualty figure in Iraq: Mr. Ujeio is close, but not entirely correct, in his assessment. In fact, his interpretation of confidence intervals is a common misconception, one which I often find in teaching and tutoring econometrics.
In non-Bayesian statistics, it is the interval that is random, not the population parameter of interest. The correct interpretation of, say, a 95% confidence interval around a given unknown parameter (in this case, the number of casualties) is that, over repeated sampling, intervals constructed this way contain the true number about 95% of the time.
One cannot correctly claim that there is a 95% probability of the true number of casualties lying between the bounds of the interval. These bounds are now fixed, and thus the probability that the true parameter lies between these bounds is either 0 or 1 — in other words, it is in there or it is not.
Based on this information, it is technically incorrect to claim that 8,000 or 194,000 would be “rare” events. Instead, the correct conclusion, as in the “debunking” article by Kaplan, is that we can be 95% confident that the true number of casualties lies between the bounds. It says nothing of the probability of any of these outcomes.
Another way to think about this is in the form of a hypothesis test, in which the null hypothesis is that the true number of casualties (call it y) equals x. Based on the confidence interval, we would not reject the null hypothesis that y = x (against the alternative y ≠ x), based on our sample evidence, so long as x lies between the 8,000 and 194,000 bounds. In other words, our test produces exactly the same result for testing the hypothesis that the true number of casualties is 8,000, 100,000, or 194,000…namely, do not reject.
Note that we do not “accept” the null hypothesis, either…in statistics, one looks for evidence against the null, but there are many null hypotheses consistent with the data. A great quote on this that might provide some illumination to an illustrious law professor comes from Jan Kmenta in his 1971 Elements of Econometrics book…
“…just as a court pronounces a verdict as ‘not guilty’ rather than ‘innocent’, so the conclusion of a statistical test is ‘do not reject’ rather than ‘accept.’”
I should note that this is turned on its head in a Bayesian framework…but I’m not as familiar with these methods. In closing, yes I know I’m an econ geek and should be doing something much more fun on a rainy Saturday afternoon in CA.
Cheers,
Craig A. Bond
Ph.D. Candidate
Department of Agricultural and Resource Economics
University of California, Davis
Trust me, the dissertation is more important than any blog-reading.
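Here is a quick simulation of the interpretation Mr. Bond describes, with every number invented for illustration rather than drawn from the study: over many hypothetical repetitions of such a survey, roughly 95% of the intervals produced would contain the true count, but any single interval that has already been computed either contains it or it does not.

```python
import random

# Illustration of the frequentist reading of a 95% confidence interval.
# All figures here are invented; this is not the study's data or design.
random.seed(0)
true_count = 100_000   # pretend, for the simulation, that this is the truth
std_error = 47_000     # assumed standard error of a single survey's estimate
trials = 10_000

covered = 0
for _ in range(trials):
    estimate = random.gauss(true_count, std_error)   # one hypothetical survey
    lower, upper = estimate - 1.96 * std_error, estimate + 1.96 * std_error
    if lower <= true_count <= upper:
        covered += 1

# Roughly 95% of the intervals, over repeated surveys, contain the true value...
print(f"Coverage over repeated surveys: {covered / trials:.1%}")

# ...but one particular, already-computed interval is fixed: it either
# contains the true value or it does not.
fixed_interval = (8_000, 194_000)
print("This particular interval contains it:",
      fixed_interval[0] <= true_count <= fixed_interval[1])
```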
EVEN MORE: The indefatigable Tim Lambert has done 41 (!) posts on this study. Here’s his 41st.