REWRITING HISTORY? If true, this is an enormous scandal:
. . . replication is impossible if someone else has changed the dataset since the original analysis was conducted. But that would never happen, right? Maybe not. In an interesting paper, Alexander Ljungqvist, Christopher Malloy, and Felicia Marston take a look at the I/B/E/S dataset of analyst stock recommendations “made” during the period from 1993 to 2000. Here is what they found:
Comparing two snapshots of the entire historical I/B/E/S database of research analyst stock recommendations, taken in 2002 and 2004 but each covering the same time period 1993-2002, we identify tens of thousands of changes which collectively call into question the principle of replicability of empirical research. The changes are of four types: 1) The non-random removal of 19,904 analyst names from historic recommendations (“anonymizations”); 2) the addition of 19,204 new records that were not previously part of the database; 3) the removal of 4,923 records that had been in the data; and 4) alterations to 10,698 historical recommendation levels. In total, we document 54,729 ex post changes to a database originally containing 280,463 observations.
. . . Not surprisingly, they find that these changes typically make it appear as if analysts were (a) more cautious and (b) more accurate in their predictions. The clear implication from the paper is that analysts and their employers had a vested interest in selectively editing this particular dataset; while I doubt that anyone cares enough about most questions in political science to do something similar, it is an important cautionary tale. The rest of their paper, “Rewriting History,” is available from SSRN. (Hat tip: Big Picture)
The I/B/E/S database keeps track of analyst recommendations for 35,000 companies. It’s used in research into financial markets, as well as by people who rank analyst performance. Altering the database is pretty major, though it’s not clear whether this is something like grade-grubbing, where analysts only correct the mistakes that make them look bad, or whether it’s actual fraud.
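The four change types the paper documents can be illustrated as a simple diff between two snapshots of the same database. This is only a toy sketch of the idea, not the authors' actual methodology: the record ids, field names, and example data below are all hypothetical, not drawn from I/B/E/S.

```python
# Hedged sketch: classify differences between two snapshots of a
# recommendations database into the paper's four change types.
# Records, field names, and data here are invented for illustration.

def diff_snapshots(old, new):
    """Classify changes between two snapshots keyed by record id.

    Each value is a dict with 'analyst' (name, empty if stripped)
    and 'rec' (recommendation level). Returns counts of the four
    change types: anonymized, added, removed, altered.
    """
    changes = {"anonymized": 0, "added": 0, "removed": 0, "altered": 0}
    for rid, rec in new.items():
        if rid not in old:
            changes["added"] += 1           # record not previously in the data
        else:
            o = old[rid]
            if o["analyst"] and not rec["analyst"]:
                changes["anonymized"] += 1  # analyst name removed from history
            if o["rec"] != rec["rec"]:
                changes["altered"] += 1     # historical recommendation changed
    changes["removed"] = sum(1 for rid in old if rid not in new)
    return changes

# Toy snapshots: one example of each change type.
old = {
    1: {"analyst": "A. Smith", "rec": "buy"},
    2: {"analyst": "B. Jones", "rec": "hold"},
    3: {"analyst": "C. Lee",   "rec": "sell"},
}
new = {
    1: {"analyst": "",         "rec": "buy"},   # anonymized
    2: {"analyst": "B. Jones", "rec": "buy"},   # altered level
    4: {"analyst": "D. Kim",   "rec": "hold"},  # added
}  # record 3 removed

print(diff_snapshots(old, new))
# → {'anonymized': 1, 'added': 1, 'removed': 1, 'altered': 1}
```

The point of the sketch is that such a diff is only possible if someone kept an earlier snapshot; anyone working solely from the current vendor copy has no way to detect the revisions.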
I always read these things with a slightly admiring air—not for the researchers, though this is great work, but for the criminals. I get all nervous and blushing when I lie to telemarketers in order to get them off the phone. I would never in a zillion years have the guts to bribe someone to alter my past recommendations in a database. I don’t admire it, exactly, but I’d like to know where I could buy some of that nonchalance.