MEGAN MCARDLE: What We Don’t Know About False Claims Of Rape.
The number of false rape reports is obviously a number we’d like to have. Whether that number is large or small alters how vigorously police interrogate victims’ stories, how the media treats accusations of rape, and how juries decide tricky cases. Maybe it shouldn’t, you’d argue, but humans are imperfect, and it inevitably does. It’s not surprising that in the wake of the Rolling Stone debacle, we’ve had a lot of feminists claiming that we should draw no wider lessons from this case because statistics show that false reports are rare, and a lot of people on the other side arguing that they can show beyond a reasonable doubt that false reports are epidemic. Both sides should stop, because they are wrong.
I don’t mean merely that they disagree with me. I don’t mean that I think they are wrong. I mean that they are wrong.
Any number of pieces have recently been written suggesting that we actually know — or have a pretty good idea — how many rape reports are false. Deadspin, for example, wrote of the Jameis Winston case: “There’s no doubt that being falsely accused of rape is a dreadful thing that no one should have to endure. One of the reasons it is such a dreadful thing is that false accusations of rape basically do not happen. Statistically, between 2% and 8% of reported rapes are found to be false, but only about 40% of rapes are reported. Do a little math and that means that, for every false accusation of rape, there are up to 100 actual rapes that take place.” When I pointed out on Twitter that the author did not know the percentage of false rape reports, and therefore could not possibly calculate the ratio of false reports to rapes, he suggested that this was a matter of opinion: Maybe I liked one study better, but he thought his was pretty good. This is not a difference of opinion; it is simply a misunderstanding about the data. He has substituted a number he knows — which is, presented in its absolute best light, the percentage of reports that can definitely be shown to be false by investigators using stringent criteria — for a number he does not know, which is how many reports of rape are actually false.
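To make the substitution concrete, here is the arithmetic Deadspin appears to be doing, sketched in Python (my own illustration, not anyone’s published calculation; the 2-to-8-percent and 40-percent figures are Deadspin’s inputs, and the whole exercise hinges on the disputed assumption flagged in the comments):

```python
# Deadspin's implied arithmetic, made explicit. The load-bearing assumption --
# the one in dispute -- is that the share of reports *proven* false equals
# the share that *are* false.

reports = 100.0  # any number of rape reports; the scale cancels out

for proven_false_rate in (0.02, 0.08):  # Deadspin's "2% to 8%" figure
    false_reports = reports * proven_false_rate
    true_reports = reports * (1 - proven_false_rate)  # assumes every unproven report is true
    # "only about 40% of rapes are reported" -> total rapes = true reports / 0.40
    total_rapes = true_reports / 0.40
    ratio = total_rapes / false_reports
    print(f"proven-false rate {proven_false_rate:.0%}: "
          f"{ratio:.0f} actual rapes per false accusation")
```

Run as written, this prints roughly 122 and 29 rapes per false accusation at the 2 percent and 8 percent figures — which is where “up to 100” comes from. But both numbers inherit the assumption in the first comment, and that assumption is exactly what is not known.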
Perhaps a parallel will make what I mean clearer. Every year, it’s virtually certain that some number of people get away with killing their spouses. More than occasionally, investigators think a particular husband or wife did it. They’re maybe even pretty sure. But they can’t prove it. In the statistics, this will not show up as “spousal murder” or “intimate partner violence”; it will show up as an unsolved case. But that person still killed their spouse. How often does this happen? We have absolutely no idea.
I’ve now spent quite a bit of time reading the research on rape prevalence from the last few decades. What you see in the literature on false reports is a general move away from methods designed to minimize false negatives (counting a rape report as true when in fact it was false), toward methods designed to minimize false positives (counting a rape report as false when in fact it was true). The newer studies use quite stringent criteria: you basically need a confession, or strong evidence that the attack could not have happened as described, to declare a report false.
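To see why a rate produced under such criteria is a floor rather than an estimate, consider a stylized breakdown, again in Python (every category count below is invented purely for illustration; nothing here comes from the actual studies):

```python
# Stylized breakdown (all counts invented). Under stringent criteria a report
# is classified "false" only when proven false -- a confession, or an attack
# that demonstrably could not have happened as described.

proven_true = 30    # corroborated reports
proven_false = 5    # meets the strict proof standard
unknown = 65        # neither provable nor disprovable; typically the bulk

total = proven_true + proven_false + unknown

measured = proven_false / total                  # what such a study reports: 5%
floor = measured                                 # actual rate if every unknown report is genuine
ceiling = (proven_false + unknown) / total       # actual rate if every unknown report is false: 70%

print(f"measured false-report rate: {measured:.0%}; "
      f"actual rate lies somewhere in [{floor:.0%}, {ceiling:.0%}]")
```

The measured 5 percent is real information, but it bounds the truth only from below; where the actual rate falls inside that interval is exactly the thing nobody knows.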
First determine the outcome you want, then develop your methodology. It’s not social science. It’s “social science.”