Forensic DNA Laboratory Error Rates
(Last updated 22 April 2002)

On this page I review the current status of error rates in forensic DNA typing and provide documents detailing false matches. Viewing and printing the documents below requires Adobe Acrobat Reader 3.0 or higher. Some of these files do not always open properly with Microsoft Internet Explorer or Netscape 6.0; Netscape Navigator 4.7 seems to be the most reliable.

I. The use of error rates in forensic DNA statistics.
II. Cellmark CACLD errors.
III. Genelex paternity error.
IV. California DOJ proficiency test error.
V. Cellmark Kocak case error.
VI. SERI proficiency test error.
VII. APEX proficiency test results.
VIII. BCA sample switching.
IX. Philadelphia Police Department false match.
X. BCA case work false match.
XI. Las Vegas case false match.


I. The use of error rates in forensic DNA statistics.

The results of DNA typing may either qualitatively include or exclude a potential suspect as the donor of DNA in an evidence sample. Exclusions require no statistical analysis since they are absolute. Inclusions require some assessment of how likely it would be to observe the reported genetic match if, in fact, the suspect did not leave the evidence DNA. Completing this statistical assessment requires quantifying the chance of two events: (i) How likely is it that the suspect coincidentally has a DNA profile that matches the profile of the unknown person who really did leave the DNA evidence? (ii) How likely is it that the laboratory would declare a match between the evidence and the suspect when in fact their DNA profiles do not match? Either event could lead to a declared match when the suspect was not the source of the DNA, and thus the chance of each must be statistically evaluated.
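
Put a bit more formally, the two chances combine as follows (a minimal sketch, assuming the coincidental-match and lab-error events are independent; the symbols p_c and p_e are my own shorthand, not taken from any report):

```latex
% p_c : chance of a coincidental genetic match (question i)
% p_e : chance the laboratory falsely declares a match (question ii)
% Assuming the two events are independent:
P(\text{declared match} \mid \text{suspect not the source}) = p_c + p_e - p_c\,p_e
% When p_e is much larger than p_c, this is approximately p_e:
% the laboratory error rate dominates the calculation.
```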

The product rule answers only the first of the two important statistical questions in a forensic DNA case. The second question requires some estimate of laboratory false-positive rates. All forensic laboratories I am familiar with fail to provide any estimate of their error rate. The only circumstance under which the lab error rate would be unimportant is when it is much smaller than the chance of a coincidental genetic match. In many STR cases using Profiler Plus, for instance, the chance of a coincidental genetic match may be in the range of 1 in billions to 1 in several quadrillion. Only when lab errors are rarer than this can they be safely ignored.
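
Plugging representative magnitudes into that expression makes the point (an illustrative sketch; neither figure comes from an actual case):

```python
# Illustrative sketch: combine a coincidental-match probability with a
# laboratory false-positive rate, assuming the two events are independent.

def combined_false_match(p_coincidence, p_lab_error):
    """Chance of a declared match when the suspect is not the source."""
    return p_coincidence + p_lab_error - p_coincidence * p_lab_error

p_c = 1e-15   # ~1 in a quadrillion: a coincidental multi-locus STR match
p_e = 1e-3    # a hypothetical lab false-positive rate of 1 in 1000

print(combined_false_match(p_c, p_e))  # ~0.001: the lab error rate dominates
```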

The second National Research Council committee (NRC II), in choosing to disagree with the first NRC committee, never argued that lab error rates are this small. They made several other arguments. These arguments have been criticized by the scientific community and, as I will document, are flawed either in logic or in light of scientific evidence that the NRC II committee was apparently unaware of.

There were four basic arguments offered by the NRC II committee to support their conclusion that lab error rates can be ignored. (i) Errors in a specific forensic case depend on many variables that can't be accounted for by proficiency tests or any single number. (ii) Estimates of lab error rates would require excessively large numbers of proficiency tests. (iii) While the total number of proficiency tests could be increased by pooling tests from many labs, this would unnecessarily punish good labs for the errors of bad labs. (iv) When labs make errors they take steps to improve, and time has shown, as in the case of Cellmark, that errors are now less common than they were in 1988 and 1989.

To evaluate these comments we need to step back and reassess what we are really trying to do. As stated above, what we need to know is: are false matches likely to occur at a rate of about 1 in several billion or less, or is it more likely that the rate is, say, 1 in 1000? While the points raised by NRC II and summarized in (i) and (ii) above clearly prevent accurate discrimination between error rates like 1 in 800 vs. 1 in 1000, they certainly don't prevent one from distinguishing between rates of 1 in 1000 vs. 1 in 200 billion! This point is even more obvious when one notes that almost all proficiency tests, where errors are typically observed, are much less demanding than actual forensic cases. Thus, it would be ludicrous to assume that if errors occur 1 in 1000 times on simple proficiency tests they will occur less often in real casework.

Point (iii) is true to a limited extent. Some labs might be penalized by using an industry-wide error rate while others would have their apparent error rate diminished. While it stands to reason that no two labs will have exactly the same error rate, there is no reason to believe that labs will differ over a range that really matters, like 1 in 1000 to 1 in 200 billion. This last conclusion is supported by the observation that lab errors are not generated by only one lab and usually involve human error.

The final argument made by the NRC II committee (iv) is simply wrong. For instance, the NRC II committee used Cellmark as an example. They cited the fact that the laboratory had made 2 false matches in 1988 and 1989 but, based on 450 additional tests through 1994, had made no additional errors. For some reason the NRC II committee was not aware that on 17 November 1995, seven months before their report was released, Cellmark discovered a false match that occurred in an actual case (People of the State of California vs. John Ivan Kocak, No. SCD110465). Apparently, in reaching their conclusion about the improved stature of DNA testing, the NRC II committee was not aware of (i) the false match in a proficiency test by a technician in the California Department of Justice in July 1993, (ii) the false conclusion of paternity made in a DNA test by Genelex in October 1993, or (iii) the Kocak case. Since the NRC II report, additional evidence of these types of problems has continued to appear, such as (i) the false match made in a proficiency test by SERI in California in September 1997, (ii) the switching of evidence samples that occurred at the Minnesota BCA in October 1997, (iii) the errors in the APEX proficiency tests, and (iv) the Philadelphia Police Department false positive.
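
The gulf between those two rates can be made concrete with a simple binomial calculation (a sketch only, using the 450-test Cellmark figure cited by NRC II):

```python
# Sketch: how plausible are observed proficiency-test errors under
# two very different hypothetical false-positive rates?

def prob_at_least_one_error(rate, n_tests):
    """Binomial chance of seeing one or more errors in n_tests."""
    return 1.0 - (1.0 - rate) ** n_tests

n = 450  # the number of Cellmark tests cited by NRC II

# If the true rate were 1 in 200 billion, an error would essentially never appear:
print(prob_at_least_one_error(1 / 200e9, n))  # ~2e-09

# If the true rate were 1 in 1000, at least one error is entirely plausible:
print(prob_at_least_one_error(1 / 1000, n))   # ~0.36
```

Even a few hundred tests are enough to tell those two hypotheses apart, whatever their limitations for pinning down a rate precisely.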

II. Cellmark CACLD errors. There were two different proficiency tests administered by the California Association of Crime Laboratory Directors in 1988 and 1989. Cellmark made one false match in each of these tests. These false positives are discussed in more detail in a paper by W. Thompson and S. Ford [In: Forensic DNA Technology (Proceedings of the American Chemical Society, Science and Law Conference, Los Angeles, CA, 1988), M. Farley and J. Harrington (eds), Lewis Publishers, Chelsea, Michigan].


III. Genelex paternity error. A lab error at Genelex, a private paternity and forensic laboratory, led to a false declaration of paternity. The accompanying complaint lays out the details of this case. In another affidavit the president of Genelex agreed to points III.5-7, which are the important issues regarding the lab error.


IV. California DOJ proficiency test error. The transcript of testimony by the technician who committed the error is provided here. An interesting twist on this error is that the supervisor initially failed to catch it, and when the technician was asked to correct the error she made additional errors.


V. Cellmark Kocak case error. This was an actual sexual assault case analyzed by Cellmark. The vaginal swab apparently contained DNA from only the victim. However, Cellmark switched labels on the known samples from the victim and suspect and thus declared a match between the suspect and the DNA in the sperm fraction of the vaginal swab. Charlotte Word at Cellmark caught the error while testifying in court. Cellmark's spin on this is that (i) they ultimately caught the mistake and (ii) the DNA technology worked fine; there was only a transcription error. Does this mean that court testimony is now considered a regular part of Cellmark's review process? What if there had been a plea bargain before the Cellmark employee was asked to testify? In the end, a report that erroneously says a person matches some evidence DNA does as much damage whether the error was clerical or had some deeper scientific genesis.


VI. SERI proficiency test error. This is simply a classic false match due to human error. How will improved technology that permits us to look at more loci or smaller samples ever counteract the kinds of lapses in concentration exhibited here?


VII. APEX proficiency test results. The document here provides a summary of results from a proficiency testing agency. The names of the participating laboratories are not revealed, but the results were shocking to me. The test is a relatively simple mock sexual assault: a known sample from the victim, a known sample from the suspect, and a mock vaginal swab made up of 0.5 million white blood cells from the victim and 1.4 million sperm from the suspect. As an example of how proficiency tests differ from real casework, consider how the evidence sample was handled after preparation (air dried, vacuum sealed with desiccant, and stored at -20° C). There was also a hair sample from the victim provided for mitochondrial DNA analysis. In this test it is clear that the major error that can be committed when comparing the sperm fraction of the evidence to the suspect is a false negative, NOT a false positive. Nevertheless, 3 out of 135 labs reported that the sperm fraction was consistent only with the victim. Obviously these labs reveal how a switch of the knowns (as in the Kocak case) could then lead to a false positive. Since the hair sample could theoretically match either the victim or the suspect, this test provided an opportunity for a false positive (e.g., if the lab declared a match to the suspect). Only 14 laboratories did mitochondrial DNA tests, but among this group one lab made a false match. For those of you without a calculator, this is an error rate of 7.1%!
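
For the record, the arithmetic (using only the counts reported in the summary):

```python
# Rates implied by the APEX summary discussed above.
false_negative_rate = 3 / 135  # labs reporting the sperm fraction as victim-only
false_positive_rate = 1 / 14   # mitochondrial DNA false match among 14 labs

print(f"false negative rate: {false_negative_rate:.1%}")  # 2.2%
print(f"false positive rate: {false_positive_rate:.1%}")  # 7.1%
```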

VIII. BCA sample switching. In this example, evidence samples from different cases were actually switched. While there was no false match as a direct consequence of this error, it is again not difficult to imagine that under different circumstances a false match might have resulted.

IX. Philadelphia Police Department false match. This error occurred in an actual case and was detected by my colleague at U. C. Irvine, William Thompson. This error is similar to the Kocak case in that the known samples had apparently been switched. At the time Bill Thompson was asked to review this evidence, the defense attorney had already convinced his client to take a plea bargain because of this "damaging" DNA evidence. The link is to a PowerPoint file with documents detailing this case.

X. BCA case work false match. Sometime in 1999 the Minnesota BCA sent out a report that contained a false match. The report had been reviewed by three people at the BCA. A routine audit of this report discovered the error, and two to three weeks later an amended report was sent to the Warren Police Department. An interesting side issue revealed by this testimony is Ann Gross' answer to Pat Sullivan's question about how often the defense requests the entire DNA file the lab keeps for each case. Ms. Gross estimates that, except in Hennepin County (Pat Sullivan's county), it happens less than 5% of the time. Thus the defense is giving little scrutiny to these types of test results, at least in Minnesota.

XI. Las Vegas case false match. This case involved both a sample-switching error and a database match. A man in jail for an alleged immigration violation was accused by his cellmate of rape. Samples were taken from both men but were apparently mislabeled. The known samples were then compared to evidence samples from unsolved assaults, and several hits were made, but to the wrong person.