ACADEMIC FRAUD: How Social Scientists, and the Rest of Us, Got Seduced by a Good Story

“I do not excuse those who resort to cheating. But as consumers of these publications, we should be worried, because this system essentially selects for bad data handling. The more you manipulate your data (and there are lots of ways to massage your data so that it shows what you’d like, even without knowing you’re doing it), the more likely you are to come up with a publishable result. Peer review acts as something of a check on this, of course. But your peers don’t know if, for example, you decided to report only the one time your experiment worked, instead of the seven times it didn’t. It would be much better if we rewarded replication: if journals were filled not only with papers describing novel effects, but also with papers by researchers who replicated someone else’s novel effects. But replicating an effect that someone else has found has nowhere near the prestige, or the publication value, of something entirely new. Which means, of course, that it’s relatively easy to make up numbers and be sure that no one else will try to check.”