"Courts continue to accept the testimony of forensic experts, despite the fact that they use unproven techniques," Donald Kennedy, then editor in chief of the journal Science, once observed. Courts have only recently begun to recognize the scientific limitations of firearms identification, the discipline in which an examiner visually compares fired bullets or cartridge cases and opines on whether the items were fired by the same gun. It is a field built on smoke and mirrors.

Firearms examiners claim to be able to match a bullet to a specific gun and thus solve a case. But science isn't on their side. Few studies of firearms identification exist, and those that do suggest that examiners cannot reliably determine whether bullets were fired by the same gun. Firearms identification must be held to evidence-based standards; fundamental justice requires no less. Without such standards, the chances of convicting the innocent and letting the guilty go free are too great. Perhaps it is this realization that has led some courts to restrict firearms testimony.

Firearms examiners present themselves as experts, and in one sense they are: they have expertise in applying forensic techniques, just as a nurse or doctor has expertise in administering drugs or vaccines. But that expertise is crucially different from the expertise of a research scientist, who is professionally trained in experimental design, statistics and the scientific method, and who manipulates inputs and measures outputs to confirm that a technique is valid. The two forms of expertise serve different purposes. If you need a vaccine, the nurse has the right expertise. If you want to know whether the vaccine is effective, you don't ask the nurse; you ask the research scientists who developed and tested it.

Courts rarely hear testimony from classically trained research scientists who can explain the basic principles and methods of science, yet only such scientists can properly evaluate the claims of practitioners. What courts need, in other words, are anti-expert experts. We count ourselves among this group, which is appearing more and more often in courts across the country.

Skepticism of firearms identification is not new. A report by the National Research Council criticized the field for lacking a defined process and for nonetheless allowing examiners to declare a definitive match between a bullet and a gun.

In 2016 the President's Council of Advisors on Science and Technology (PCAST) reported that the reasoning behind firearms identification is circular and that empirical studies were required to establish its validity. PCAST concluded that more than one appropriately designed study would be needed to confirm the field.

Firearms examiners attacked the NRC and PCAST reports. The reports had little impact on judicial rulings, but they did inspire additional tests of firearms identification accuracy. The low error rates reported in these studies embolden examiners to testify that their methodology is nearly flawless. But the way the studies arrive at those error rates has bamboozled courts and juries into accepting specious claims.

In fieldwork, firearms examiners typically reach one of three conclusions: the bullets came from the same source (an identification), the bullets came from different sources (an elimination), or the comparison is inconclusive. It is the inconclusive category, and the way it has been presented in court, that is deeply misleading.

The research poses a problem for that third category. Researchers studying firearms identification in laboratory settings create the bullets and cartridge cases used in their studies, so they know whether each comparison involves the same gun or different guns. Because the ground truth is known, every comparison in these studies has only two correct answers; the question is how to score an inconclusive response.

Existing studies count inconclusive responses as correct, without explanation or justification, and those responses dramatically affect the reported error rates. Consider the false positive error rate reported in the Ames I study. Of the 2,178 comparisons examiners made between nonmatching cartridge cases, 65 percent were correctly called eliminations; most of the rest were called inconclusive and scored as correct. If those inconclusive responses are instead counted as errors, the error rate becomes roughly 35 percent. Seven years later the Ames Laboratory conducted another study, Ames II, using the same methodology, and reported false positive error rates for bullet and cartridge case comparisons of less than 1 percent. Here, too, the overall error rate soars when inconclusive responses are counted as errors.
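To make the arithmetic concrete, here is a minimal sketch of how the choice of scoring rule changes the reported error rate. The total of 2,178 comparisons and the 65 percent elimination rate come from the figures above; the exact split of the remaining responses between false positives and inconclusives is an assumption for illustration, not a number from the study.

```python
# Illustrative only: 2,178 nonmatching comparisons from Ames I,
# with 65% correctly called eliminations. The split of the remaining
# responses between false positives and inconclusives is assumed.
total = 2178
eliminations = round(0.65 * total)   # correct "different source" calls
false_positives = 22                 # assumed ~1% "same source" errors
inconclusives = total - eliminations - false_positives

# Scoring rule used by the studies: inconclusives count as correct.
rate_lenient = false_positives / total

# Alternative scoring rule: inconclusives count as errors.
rate_strict = (false_positives + inconclusives) / total

print(f"error rate, inconclusives as correct: {rate_lenient:.1%}")  # ~1%
print(f"error rate, inconclusives as errors:  {rate_strict:.1%}")   # ~35%
```

The same comparisons thus yield either a reassuring 1 percent or an alarming 35 percent, depending entirely on how inconclusive responses are scored.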

The Ames II researchers also sent the same items back to the same examiners to be examined a second time, and then to different examiners to see whether the results could be reproduced. Often an examiner looking at the same bullets a second time reached a different conclusion, and different examiners looking at the same bullets frequently disagreed with one another. That is not much of a basis for a second opinion. Yet firearms examiners continue to appear in court and testify that studies demonstrate an exceedingly low error rate.
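As a sketch of what these two checks measure, the following uses made-up responses (not the Ames II data) to show how repeatability (same examiner, same items) and reproducibility (different examiner, same items) are scored as simple agreement rates.

```python
# Hypothetical responses for five comparison sets; not the Ames II data.
# Each entry: (examiner A, first pass; examiner A, second pass; examiner B).
responses = [
    ("same", "same",         "inconclusive"),
    ("diff", "diff",         "diff"),
    ("same", "inconclusive", "diff"),
    ("diff", "diff",         "inconclusive"),
    ("same", "same",         "same"),
]

# Repeatability: does an examiner repeat his or her own conclusion?
repeat = sum(first == second for first, second, _ in responses) / len(responses)

# Reproducibility: does a different examiner reach the same conclusion?
reproduce = sum(first == other for first, _, other in responses) / len(responses)

print(f"repeatability:   {repeat:.0%}")    # 80% in this toy example
print(f"reproducibility: {reproduce:.0%}") # 40% in this toy example
```

Note that in such designs an examiner can look perfectly consistent with himself while still disagreeing sharply with colleagues, which is why both measures matter.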


Judges, in most contexts, display an uncommon degree of common sense. But they need the help of scientists to translate science for courtroom use, and that help cannot come only from scientific reports and published articles. Scientists are needed in the courtroom itself, to serve as anti-expert experts.

The views expressed by the author or authors are not necessarily those of Scientific American.