JOHN LOTT UPDATE: My last post on John Lott produced a critical email from Ben Zycher, an economist at RAND who thought I was being unfair to Lott. I offered Zycher space to respond, and what he sent is set out below (click “more” to read it). I was hoping for something that went into more detail regarding the statistical issues involved, but . . . .
In a related development, Clayton Cramer responds to an email from Tim Lambert regarding Lott.
UPDATE: Another economist has emailed in with comments. Click “More” to read them, too.
ANOTHER UPDATE: James Lindgren has emailed a lengthy response to Zycher’s comments, also reachable by clicking “more.” There’s also more from Tim Lambert, and an update from Lindgren, now below.
Zycher’s email:
Herewith, a few observations on the virtues and vices of John Lott, the focus once again September 19 on Instapundit:
First, a full disclosure: I have known John Lott for over 25 years, from graduate school days, and I know him to be a careful, honest, and indeed scrupulous scholar, and, more generally, a truly honorable person. Yes, I think his use of a pseudonym in the chatroom discussions was highly unwise—and I have told him that in no uncertain terms—but that has nothing to do with the substance of the issues.
With respect to that substance, the source of Jim Lindgren’s “substantial doubts whether Lott ever did the supposed 1997 study” is wholly unclear. Does he believe that computers never crash, or, in Lott’s case, that bookshelves never collapse onto electronic equipment, or that such events never afflict scholars with whom Lindgren disagrees? Or does he simply not believe the written testimony of contemporaries who observed the survey process as it was conducted? Does Lindgren believe that Lott would invent an entire survey for the purpose of adding a sentence or two to a book of well over 200 pages?
And is it merely an accident, as Pravda in its glory years would have put it, that the more recent survey conducted by Lott—about the veracity of which there is no dispute at all—yielded virtually the same answers?
No, what is really going on is that Lindgren agrees “with almost every point in Ayres and Donohue’s two critiques of Lott’s work.” And that agreement is a good deal more revealing than Lindgren surmises, except about Lindgren rather than Lott. To put it bluntly: Any undergraduate student receiving a B or better in introductory Econometrics would be able to pick the Ayres/Donohue work apart. This is for a number of reasons, the most fundamental of which is—and this is an error more appropriate for freshman Statistics 1—that their own interpretation of their estimated coefficients is simply wrong. They discuss two variables purporting to measure the effect of concealed-carry laws, but then fail to understand that it is the joint effect of the two variables, rather than merely one of them, that represents the estimated effect in the model. That Lindgren finds their (claimed) findings “plausible” by itself carries no weight at all, and his view that their work is “high quality” demonstrates only that Lindgren needs a refresher course or two.
There is no need here to delve into a mini-course on econometrics, however lacking for sleep your readers may be. Anyone interested simply can read the paper by Plassmann and Whitley, utterly devastating in its critiques of the Ayres/Donohue paper. I believe that no honest reader with even a small amount of expertise in this area can read the two papers and conclude that Ayres and Donohue have cast even trivial doubt on Lott’s work. I assume that the Stanford Law Review, in which the Ayres/Donohue paper was published, is edited by law students, a group hardly inspiring confidence with respect to prowess in econometric analysis. Alternatively, perhaps the Ayres/Donohue paper indeed was reviewed by referees; if so, the referees, typically paid little or nothing at all, delivered full value.
The modern art of blogging—of which you are one of the truly prominent practitioners—has many virtues, among them the stimulation of discussion and the ability to correct errors and set records straight quickly. But among those virtues one searches in vain for carefulness; the familiar tradeoffs are heavily weighted toward edginess and speed. And so it really was a bit snotty of you to complain that “Lott supporters [are] complaining that I’m picking on him,” followed by the irrelevant truism that “you can’t please everyone.” No, you cannot; but you can make a real effort to be fair regardless of the speed with which you are trying to get each hour’s blog onto the web. It may indeed seem to you that “this time Lott’s critics have him dead to rights, and [that] he’s failed to mount a convincing response,” but that is inconsistent with your later plea for an “authoritative look by a disinterested party,” which you note that you are “not qualified to provide.” No need for a correction there.
Benjamin Zycher
Senior economist, RAND Corporation
Well, I’m still not qualified. But I don’t find this very persuasive, I’m afraid, since it consists mostly of assertions that people are idiots, but without much actual exposition. I don’t think it has contributed much to the debate, but I promised to run Zycher’s response, so here it is.
I find Jim Lindgren quite credible, and I don’t think that assertions that he is biased are very persuasive. Assertions that he doesn’t understand elementary statistics would be more persuasive if accompanied by explanations.
UPDATE: More mail:
Ben Zycher is right of course when he states “There is no need here to delve into a mini-course on econometrics, however lacking for sleep your readers may be.” He wrote to a blog. And he knows that his argument prior to making that statement, if incorrect, will be torpedoed in an instant by one of Lott’s critics who is also skilled at econometrics. Perhaps Mark Duggan of Chicago. I doubt that will happen.
Zycher’s criticisms of the tradeoffs involved in blogging are sound. You may not find his blog-criticism of Ayres and Donohue persuasive, and indeed this critique would require the full assembly of details to be published in an academic journal (please refer back to my first sentence). As an economist who has been around the block a few times, I trust Zycher. I think you goofed here, and your “rebuttal” compounds your earlier mistake.
Raymond Sauer
Professor of Economics
Clemson University
Well, it’s not a “rebuttal” because I don’t know enough to “rebut” these statements. I find Lindgren credible; Ayres and Donohue, too, though they’re anti-gun. Are they wrong? Maybe. I certainly can’t say. But the above — an assertion that they’re wrong — isn’t likely to persuade me since it’s, well, just an assertion.
I offered to set up a separate page for Zycher if he needed it, and from our correspondence I expected a more complete explanation than I got. I would, of course, prefer to have it turn out that Lott is correct and that his critics are mistaken. The problem with this entire affair is that it has been a back-and-forth of dueling experts in a field in which I lack the expertise to determine the answer.
But I’ve certainly gotten plenty of criticism from all concerned. That’s okay — I can take it! Here’s an earlier post with links to the Stanford Law Review articles by Ayres and Donohue, and Plassmann and Whitley.
ANOTHER UPDATE: James Lindgren responds to Zycher, at length:
Despite the angry tone of Mr. Zycher’s email, I will respond.
The first few arguments that Mr. Zycher responds to are straw men. About me, Mr. Zycher writes:
Does he believe that computers never crash, or, in Lott’s case, that bookshelves never collapse onto electronic equipment, or that such events never afflict scholars with whom Lindgren disagrees?
If Mr. Zycher had read my report online (Link), he would have seen that I tracked down the leads that John Lott gave me and found solid support for Lott’s claim that he had a major computer crash, losing important data in the summer of 1997. I never seriously doubted this. Indeed, it was my report that seemed to settle in Lott’s favor the limited issue of whether there was a crash with data loss. I have noticed Mr. Zycher is only one of several scholars who have since made public statements based on the assumption that people are doubting that Lott had a major data loss in 1997, which is not really disputed by those who know much about this issue.
Mr. Zycher continues:
Or does he simply not believe the written testimony of contemporaries who observed the survey process as it was conducted?
I am not quite sure what Mr. Zycher is referring to here. Perhaps he is referring to the people who have come forward to support the story of a computer crash with a major data loss, as if that were seriously disputed. Or perhaps I missed some recent developments on this matter. But as far as I know, no one has come forward who says that they observed the survey process, with the possible exception of David Gross, who might have been surveyed by Lott. I spoke to two people whom Lott said could confirm the process at the time it was done in 1997 (David Mustard and Russell Roberts), and neither remembered when they first heard about the supposed 1997 study.
Mustard wrote me twice in December 2002, ultimately clarifying his lack of knowledge of the 1997 survey:
As we finished the concealed carry paper John talked about working on other projects related to guns. So the first sentence of my previous response should be more accurately “… after our concealed carry paper had been finished (about Sep 1996)…”. Once it was finished he started to work on a number of extensions, including the book. This is about the extent of my knowledge about John’s activities and the timing of those activities from the fall of 1996 when we finished our JLS paper through the summer of 1997, when I left Chicago.
Then, several weeks later David Mustard told me that he was “fairly confident” that Lott had told him in 1997 about the study. When I discovered that Mustard had told Frank Zimring on the telephone in the summer of 2002 that he knew “nothing” about Lott’s 1997 survey, I called Mustard and we had a series of long talks.
Mustard confirmed the substance of his conversation with Zimring, but said that his general statement of knowing nothing about Lott’s 1997 survey followed a series of specific questions from Zimring about the survey, which he couldn’t answer. Mustard said that he meant that he knew nothing specific about the survey since he was not involved in it. In Mustard’s conversations with me, he also backed off his claim that he was fairly confident that he heard in 1997 about the survey, saying that he was certain that he learned about it before his October 1999 testimony, but he couldn’t remember whether he heard about it weeks, months, or years earlier. He said that his memory of talking with Lott about follow-ups in 1996 was firm and his memory of what he knew in 1999 was firm, but between late 1996 and late 1999 he did not know when he first learned of the 1997 survey. Nonetheless, Mustard then released a public statement covering much the same ground as he had covered with me, but adding claims about both 1998 and 1997. About 1997, Mustard wrote: “I believe it likely that John informed me of the completed survey in 1997.” I have not talked with Mustard since, so I never learned the basis for his recovered belief that Lott informed him in 1997 or his statement about 1998. I can only say that Mustard did not have either of those recollections when I spoke with him at length a few weeks before.
But perhaps Mr. Zycher is referring to David Gross’s account. As people may have heard, but may have not quite understood, I found Gross a credible witness. Unfortunately, that is a lawyer’s term of art. It is possible to have credible witnesses on both sides of a case telling inconsistent stories. What I meant is my opinion that most people who heard him would give credence to his account (and that I found him generally believable), not that his account would necessarily trump any other evidence.
The part of Mr. Gross’s written public statement that was slightly different from what he told me concerned who called him for the interview. When I asked him if he remembered anything about who called, he said that he “was beginning to think” that the call came from students in Chicago, perhaps at Northwestern or the University of Chicago, but he was very uncertain about whether the call came from a Chicago area source. In his public statement issued after he talked with me more than once, however, Gross’s very uncertain memory became a bit more certain, suggesting that the call probably came from the University of Chicago. That and the timing (which he was also not certain about) were the only things that pointed to him having been called by Lott as opposed to another survey organization.
As I delved into the other studies being done in the 1996-97 period, I found that Gross’s description of the questions that he was asked fit a 1996 Harvard study by Hemenway & Azrael better than Lott’s account of his study questions. First, Gross said that the person who called him was interested in a defensive gun use that happened a few years before he was surveyed, but was not interested in a defensive use that occurred many years before that. This would not fit Lott’s survey, since Lott asked only about DGUs in the prior year. It would fit the Harvard study perfectly, which asked about DGUs in the prior 5 years, but excluded events before that. Further, Gross said that he gave a narrative account of the event, which the caller was interested in. Lott’s study had asked closed-end questions, which would make the narrative superfluous, while the Harvard study was one of the first to ask for a narrative account of DGUs. Last, Gross reported that there was a question about state gun laws, which Lott did not ask, but the Harvard study did.
Mr. Zycher continues:
Does Lindgren believe that Lott would invent an entire survey for the purpose of adding a sentence or two to a book of well over 200 pages?
No, I don’t, nor does any academic that I know. Mr. Zycher again repeats a common argument on Lott’s behalf based on a misinterpretation of what Tim Lambert and Dudley Duncan are suggesting. As you can see from a close reading of my report, the first documented time that John Lott claimed to have done a 1997 survey himself was when Dudley Duncan confronted him in May 1999 about a probable error concerning the 98% brandishing figure. Lott did not claim to have done such a study in the first (1998) edition of More Guns, Less Crime, instead pointing to “national surveys.” In the July 16, 1997 Wall Street Journal, Lott appeared to attribute the 98% number to three specific survey organizations:
“Other research shows that guns clearly deter criminals. Polls by the Los Angeles Times, Gallup and Peter Hart Research Associates show that there are at least 760,000, and possibly as many as 3.6 million, defensive uses of guns per year. In 98% of the cases, such polls show, people simply brandish the weapon to stop an attack.” John R. Lott Jr., Childproof Gun Locks: Bound to Misfire, Wall Street Journal, 7/16/97 Wall St. J. A22
Over the years, Lott referred publicly many times to the 98% figure, without once hinting that it came from his own study. In my report, I wrote:
“In May 1999, Duncan informed Lott that he was writing an article calling the 98% a ‘rogue number’ and then sent him a draft of an article containing these words, ‘The ’98 percent’ is either a figment of Lott’s imagination or an artifact of careless computation or proofreading.’
Lott then called Duncan on May 21, 1999 and, for the first time, told Duncan that he had conducted a hitherto unrevealed study in 1997. Not long after that phone call, Duncan received a letter dated May 13, 1999, which also mentioned a 1997 study.”
Accordingly, most of those who don’t believe that Lott did a 1997 study do not think that he just made up the 98% figure to put in his 1998 book. If he had done so, he might have taken credit for it in the 1998 book, in op-eds, and in testimony before 1999. Rather, they think it possible that, when his back was to the wall, Lott was unwilling to admit even an honest, unintentional mistake in the 1998 book, such as the possibility that he misread the 98% figure from Kleck or relied on others who misread Kleck.
Mr. Zycher continues in his dismissive tone:
And is it merely an accident, as Pravda in its glory years would have put it, that the more recent survey conducted by Lott—about the veracity of which there is no dispute at all—yielded virtually the same answers?
But did it? As I noted in January, “Even more than with the earlier study, however, I don’t see how one can get an estimate of something that Lott says happened to about 1 out of every 4,800 people each year (2% of 1.05% [experiencing DGUs]) with a sample size of just over 1,000 people, asking about their experiences over the last year.”
I have not gone through the data for the 2002 study myself, but those who have tell me that about 9% of the respondents with DGUs reported more than brandishing, which is 4.5 times higher than the 2% figure in the earlier study (of course, the confidence interval for any estimate is huge). Lott weights this down to 5%, but I have never heard the account of how that was done.
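The sample-size arithmetic behind this point can be checked on the back of an envelope. A minimal sketch, using only the figures quoted above (a sample of roughly 1,000, a 1.05% annual DGU rate, and a 2% more-than-brandishing share); the expected-count calculation itself is just illustration:

```python
# Rough check of the rare-event sample-size point (figures from the text).
n = 1000                   # approximate sample size of the survey
dgu_rate = 0.0105          # ~1.05% report a defensive gun use in the prior year
more_than_brandish = 0.02  # ~2% of those DGUs involve more than brandishing

# How rare is the event being estimated?
rarity = 1 / (dgu_rate * more_than_brandish)        # ~1 in 4,762 people per year

# Expected counts in a sample of 1,000:
expected_dgu = n * dgu_rate                          # ~10.5 respondents with a DGU
expected_fired = expected_dgu * more_than_brandish   # ~0.21 respondents

print(f"event rate: about 1 in {rarity:.0f} people per year")
print(f"expected DGU respondents: {expected_dgu:.1f}")
print(f"expected more-than-brandishing cases: {expected_fired:.2f}")
```

With only about ten or eleven DGU respondents expected in the whole sample, a single extra “yes” answer moves the more-than-brandishing percentage by roughly nine points, which is why any confidence interval around such an estimate is enormous.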
Whitley was kind enough to share his weighting method with me, and it is based on a mathematical error. It will systematically understate any counts of behavior experienced by small groups, especially when there are a lot of demographic groups. Perhaps you might ask Whitley to share his email to me on weighting cases, so that you can see for yourself the problem. Remember, Lott claimed that he used 36 weighting categories in every state in 1997, even though small states would have had fewer than 15 respondents. If Lott had used Whitley’s mistaken weighting method in 1997 with 36 categories for each state, he would have gotten unusable nonsense.
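The actual weighting method at issue is not shown here, so the following is only a hypothetical sketch of one way post-stratification weighting can systematically understate a rate when there are many sparse demographic cells: if cells with no respondents are treated as contributing a rate of zero, rather than being dropped or collapsed, the population-weighted average is pulled below the raw rate. All numbers below are invented for illustration:

```python
# Hypothetical illustration: 36 equal-sized demographic cells, but only
# 15 respondents, so most cells are empty. One respondent reports the behavior.
n_cells = 36
pop_share = 1.0 / n_cells                 # each cell is 1/36 of the population

respondents_per_cell = [1] * 15 + [0] * 21  # 15 respondents, 21 empty cells
positives_per_cell = [1] + [0] * 35         # the single "yes" sits in cell 0

raw_rate = sum(positives_per_cell) / sum(respondents_per_cell)  # 1/15 ~ 6.7%

# Flawed scheme: empty cells contribute a rate of 0 instead of being
# dropped, so the covered population shares no longer sum to 1 and the
# weighted average is dragged toward zero.
weighted_rate = sum(
    pop_share * (p / r if r else 0.0)
    for p, r in zip(positives_per_cell, respondents_per_cell)
)                                           # 1/36 ~ 2.8%

print(f"raw rate:      {raw_rate:.3f}")
print(f"weighted rate: {weighted_rate:.3f}")
```

Under this (invented) scheme the weighted estimate is less than half the raw rate, and the understatement grows as the number of cells grows relative to the number of respondents, which is the shape of the problem Lindgren describes for 36 categories per state with fewer than 15 respondents in small states.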
Last, Mr. Zycher raises problems with Ayres and Donohue’s econometric analysis. I was under the impression that they ran many of the same models that Lott did in many different ways, just as Lott had done, as well as with the data corrected and with problems with Lott’s demographic controls corrected.
I will look into Mr. Zycher’s argument about Ayres and Donohue’s misunderstanding coefficients, but I would also ask Mr. Zycher to look into some of the arguments of Ayres, Donohue, and Lambert. In particular, Ayres and Donohue point out that important parts of Lott’s results are driven by using 36 highly multi-collinear demographic controls. Without those controls, or with a more relevant subset of them (such as the percentages of African-Americans in various groups), the results are quite different. Further, is it correct (as has been claimed) that Lott fit some models that predicted negative crime rates, an impossibility? If so, what does Mr. Zycher think of models specified in this way? Last, what does Mr. Zycher think of Tim Lambert’s arguments about the suitability of Lott dropping the control for clustering that he used last Spring in his models, when retaining the clustering control would have rendered Lott’s results statistically insignificant, once his miscodings were corrected?
James Lindgren
Professor of Law
Director, Demography of Diversity Project
Northwestern University
I appreciate Lindgren’s comments, and in particular their clarity and their civil tone. I hope that people will find this discussion useful.
MORE: Tim Lambert has comments, too.
STILL MORE: Jim Lindgren sends further comments:
Update:
I promised to look into Mr. Zycher’s only specific claim of an Ayres/Donohue error (not combining two gun law predictors to get an overall or net effect), which I have done. On this point, Mr. Zycher appears to be dead wrong, not once, but repeatedly. Tim Lambert found even more evidence on this point than I did, so I include Lambert’s comments at the end of this email.
As I wrote my response to Mr. Zycher last night, I wondered whether he had actually worked through the exchanges in the Stanford Law Review or whether he was just taking things mostly on faith, much as Professor Sauer does in his email. I read the Stanford exchange very carefully many months ago. Now that I’ve had a chance to examine the Stanford exchange again quickly this morning, Zycher’s only specific claim of error in the Ayres/Donohue paper appears to me to be false. I hope that Zycher will either support his claim that Ayres and Donohue did not combine the effects of two relevant gun law variables in their discussion or withdraw it as a well-meaning but careless attempt to help a friend. I do make mistakes, and perhaps Mr. Zycher meant something other than what I and others understand him to say.
Mr. Zycher also states:
Anyone interested simply can read the paper by Plassmann and Whitley, utterly devastating in its critiques of the Ayres/Donohue paper. I believe that no honest reader with even a small amount of expertise in this area can read the two papers and conclude that Ayres and Donohue have cast even trivial doubt on Lott’s work.
But the Plassmann and Whitley paper (from which Lott removed his name before publication) was based on substantially mistaken and miscoded data, as John Lott eventually admitted to Glenn Reynolds. When the errors are corrected, important effects collapse into insignificance. So the idea that anyone competent would find Plassmann and Whitley’s paper “utterly devastating” is flatly false. Only someone who believed that it didn’t matter if their results depended for significance on (unintentionally) bogus data would find Plassmann and Whitley devastating, which I certainly hope doesn’t describe most of the people that Messrs. Sauer and Zycher would credit with expertise in the area. Although some coding errors are inevitable in any database, most economists care whether their data are so substantially erroneous that the miscodings are determining their results.
Of course, Zycher must know that the Plassmann/Whitley paper is based on false data if he actually read the full exchange in the Stanford Law Review. Since Zycher appears not to know this and refers only to the “two papers,” not the three actually published, there is a good chance that Mr. Zycher’s dismissive comments in his email were based on his not having read the full exchange in the Stanford Law Review. Accordingly, his professional judgment may have been asserted before he had even read the relevant discussions of evidence. I was also initially somewhat impressed with the Plassmann/Whitley paper until I found out from reading the next paper in the exchange that their evidence was false because their data were false.
That Mr. Zycher may not have read the third paper is the charitable explanation for his comments. Otherwise, one would conclude that Zycher was trying to persuade people that “no honest reader with even a small amount of expertise” would fail to credit a work based on admittedly false data, with errors substantial enough to make important Plassmann/Whitley effects appear. Either Zycher did not read the full exchange or he is apparently endorsing a paper that he knows to be based substantially on false evidence. I hope and assume that it is the former and not the latter.
I would suggest to Mr. Zycher that people’s reputations are at stake on all sides, which is why he should either make amends or explain his assertions. There may be some explanation for his comments that I fail to see or misunderstand. Yet one problem with the sort of dismissive approach that Mr. Zycher took to this exchange is that, when you are wrong in your only substantive points (as Mr. Zycher now appears to be), it leaves you in the embarrassing position of having made condescending and insulting comments about people who appear to be correct on the only specific points you raise. Of course, this does not mean that the Ayres and Donohue articles are free of other problems, which Mr. Zycher should be encouraged to point out, assuming that he carefully works through the full exchange.
On his website, Tim Lambert provides some of the evidence that seems to refute Zycher’s supposed Ayres/Donohue error on the need to combine two predictors to get a net effect:
http://cgi.cse.unsw.edu.au/~lambert/cgi-bin/blog/2003/10
Ayres and Donohue have extensive discussions on the interpretation of the two variables in their paper. Those discussions appear on pages 1220–1222, pages 1264–1268 and pages 1277–1280. For example (page 1277, my emphasis):
To calculate the five-year impact of the shall-issue law under the hybrid specification it is necessary to add together the impacts of the intercept and trend terms for individual years and then sum the yearly impacts.
Or page 1264 (my emphasis):
according to the hybrid model, in the year after passage the main effect of the shall-issue law is a 6.7% increase in violent crime, which is dampened by the 2% drop associated with the negative trend variable, for a net effect of 4.7% higher crime. After three and a half years, the conflicting effects cancel out at which point crime begins to fall.
I don’t understand how Zycher managed to miss all of this.
Nor do I.
Jim Lindgren
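For readers who want to see the arithmetic in the passage Lambert quotes spelled out, it is easy to verify. A quick sketch using only the figures from the quote (a 6.7% main effect at passage and a 2%-per-year downward trend); the script itself is just illustration:

```python
# Hybrid-model arithmetic from the quoted Ayres/Donohue passage:
# a one-time "main effect" jump at passage plus a per-year trend.
# The net effect in year t is the sum of the two.
main_effect = 6.7   # % increase in violent crime at passage (intercept dummy)
trend = -2.0        # % change per year after passage (trend variable)

for t in range(1, 6):
    net = main_effect + trend * t
    print(f"year {t}: net effect {net:+.1f}%")   # year 1: +4.7%, as quoted

# The two effects cancel when main_effect + trend * t = 0:
crossover = -main_effect / trend
print(f"crossover after about {crossover:.1f} years")  # ~3.35, i.e. "three and a half"
```

This is the point both sides agree the hybrid specification requires: looking at either the intercept dummy or the trend variable alone, rather than their sum, misstates the estimated effect.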
Stay tuned.
YET MORE: Economist Tyler Cowen offers perspective, and Randy Barnett comments. Both are worth reading.