
G o o d   M o r n i n g !
Monday, September 15, 2008

The purpose of the Detail is to help keep you informed of the current state of affairs in the latent print community, to provide an avenue to circulate original fingerprint-related articles, and to announce important events as they happen in our field.
Breaking NEWz you can UzE...
by Kasey Wertheim
Prosecution: Race had weapons, live ammunition, maps
Plattsburgh Press Republican, NY - Sep 13, 2008
The qualified latent-print examiner also described lifting prints off the interior windows of Manor's truck. Those prints were later matched to Race, ...
Fingerprint on basement window leads officers to men accused of ...
Gaston Gazette, NC - Sep 12, 2008
Officers got a break in the case when fingerprints taken from a basement window were matched to Christian Allen Schiller, 19, of Remount Road in Charlotte ...
'Fingerprints lead to triple-rapist'
The Age, Australia - Sep 12, 2008
Fingerprints left on a bedside lamp and fuse box in the 1980s have led to the arrest of a 44-year-old truck driver accused of raping three women at ...
Fingerprints link Ark. man to fatal 2003 Peters home invasion
Pittsburgh Tribune-Review, PA - Sep 9, 2008
By Karen Roebuck An Arkansas man's fingerprint and DNA were found at a Peters home following a 2003 home invasion and murder, Washington County prosecutors ...

Recent CLPEX Posting Activity
Last Week's Board topics containing new posts
Moderated by Steve Everist and Charlie Parker

Public CLPEX Message Board
Moderated by Steve Everist

Black Electrical Tape
by Alphabrit on Fri Sep 12, 2008 9:51 am 9 Replies 120 Views Last post by Ernie Hamm
on Sat Sep 13, 2008 6:21 pm

Tribunal for McKie print expert
by charlton97 on Thu Sep 11, 2008 1:00 am 3 Replies 263 Views Last post by Outsider
on Sat Sep 13, 2008 11:20 am

Photoshop method to print fingerprints 1:1
by antonroland on Fri Sep 12, 2008 8:41 am 11 Replies 108 Views Last post by antonroland
on Sat Sep 13, 2008 2:28 am

Dr. Henry Lee makes life interesting for the rest of us.
by Cindy Rennie on Thu May 22, 2008 5:57 am 32 Replies 4179 Views Last post by Alphabrit
on Fri Sep 12, 2008 10:19 am

Ridge Count: ridge detail or holistic attribute
by Boyd Baumgartner on Wed Jul 23, 2008 10:20 am 7 Replies 548 Views Last post by antonroland
on Fri Sep 12, 2008 9:15 am

The Lockerbie Connection.
by Iain McKie on Wed Jun 20, 2007 11:10 am 220 Replies 43593 Views Last post by Big Wullie
on Thu Sep 11, 2008 6:47 pm

Evidence Fabrication in South Africa
by Pat A. Wertheim on Fri Nov 30, 2007 12:48 pm 281 Replies 32161 Views Last post by Truthseeker
on Thu Sep 11, 2008 1:59 pm

Statistics and Misidentifications - The week's Detail
by Michele on Mon Feb 12, 2007 11:31 am 206 Replies 46331 Views Last post by Outsider
on Thu Sep 11, 2008 10:01 am

Heads-Up to Yahoo Email Users
by Steve Everist on Mon Sep 08, 2008 9:40 pm 3 Replies 242 Views Last post by Boyd Baumgartner
on Tue Sep 09, 2008 2:47 pm

New Technique for Firearms Prints
by L.J.Steele on Fri Sep 05, 2008 9:39 am 3 Replies 312 Views Last post by Veronica Rauch
on Mon Sep 08, 2008 12:30 pm

Bottom Up Analysis
by Charles Parker on Sat Aug 30, 2008 10:39 pm 12 Replies 476 Views Last post by Gerald Clough
on Mon Sep 08, 2008 9:12 am

Other forensic comparison methodologies
by Boyd Baumgartner on Fri Sep 05, 2008 8:58 am 2 Replies 219 Views Last post by Charles Parker
on Sun Sep 07, 2008 5:19 pm

IAI Conference Topics -
Louisville, Kentucky 2008:
Moderator: Steve Everist

No new posts

Documentation issues as they apply to latent prints
Moderator: Charles Parker

No new posts

Historical topics related to latent print examination
Moderator: Charles Parker

by gerritvolckeryck on Tue Sep 09, 2008 1:51 pm 2 Replies 29 Views Last post by gerritvolckeryck
on Wed Sep 10, 2008 5:05 pm



Updated the Fingerprint Interest Group (FIG) page with FIG #61; Movement/Smudging; submitted by Charlie Parker.  You can send your example of unique distortion to Charlie Parker:  For discussion, visit the forum FIG thread.

Updated the forum Keeping Examiners Prepared for Testimony (KEPT) thread with KEPT #35; Accreditation - the Meaning of Accreditation, submitted by Michelle Triplett.  You can send your questions on courtroom topics to Michelle Triplett:

Updated the Detail Archives

Last week

we looked at how Dr. Bond's technique helped a Georgia cold case. 

This week

we look at an article by Simon Cole from a recent edition of the Tulsa Law Review entitled "Symposium: Daubert, Innocence, and the Future of Forensic Science".  Although Simon brings up some good points, he continues to spin things against the discipline to discount the work that has been done to bring our science into higher legal regard.  One example, in reference to a recent JFI-published pilot study on examiner accuracy and error rates, is his characterization of the attendees of Ridgeology Science Workshop courses as just "trainees" and his dismissal of the results as "far from ideal".  Anyone who has attended those workshops knows they are put on to help experts become better, and while there were some trainees in the courses, they were eliminated from the data pool.  While Simon's article further confirms that the discipline's response will never be enough for him, he does bring up a few good points.  Perhaps we could do better at admitting the possibility of error, explaining the 3-D to 2-D clarity bridge, or acknowledging the presence of close non-identifications that could mislead an examiner presented with limited quality and quantity of ridge detail.  Unfortunately, Simon's antics cause the real message to become somewhat lost in the smell of his usual spillage, but for what it's worth, here is his latest.

Toward Evidence-Based Evidence: Supporting Forensic Knowledge Claims in the Post-Daubert Era
by Simon Cole

As legal scholars begin to take stock of what we might call the “Daubert regime,” the treatment of expert evidence in law in the period following the United States Supreme Court’s watershed decision in Daubert v. Merrell Dow Pharmaceuticals, Inc. in 1993, one of the most persistent issues has been a perceived lack of rigor in the application of Daubert’s gatekeeping requirement to forensic evidence.  Scholars have observed that, at least when it comes to civil law, Daubert and its progeny decisions did not have the liberalizing effect on the admission of evidence that early readers of the opinion thought it might.  To the contrary, some scholars have described an “exclusionary ethos” surrounding the Daubert regime.  Scholars who focus on criminal law, however, have detected the opposite situation; they have decried the weakness with which Daubert has been applied in criminal law, particularly in regard to forensic evidence.  What accounts for this disparity?  Professor Risinger’s comparative examination of outcomes of admissibility decisions across criminal and civil law is disconcertingly consistent with what a legal realist would predict: Trial judges operating under the Daubert regime are extremely unlikely to exclude expert evidence proffered by the government in criminal cases, and in civil cases they are far more likely to exclude expert evidence proffered by plaintiffs than by defendants.


[read the full online version for the detailed information on Daubert and the admissibility of evidence, Evidence-Based Medicine (EBM) and its extension to policy, Applying the Evidence-Based metaphor to the law, and Applying the notion of Evidence-Based Evidence to forensic evidence.]


Latent print individualization is a forensic assay by which a trained analyst, commonly known as a “latent print examiner” (LPE), seeks to determine whether a print of unknown origin, commonly known as a “latent print” but which we will here, adopting Champod et al.’s parlance, call a “mark,” was made by a particular individual (often a suspect, but sometimes a victim or other person).  The claim that the targeted individual made the mark is an inference based on the determination that the mark is not excessively inconsistent with an area of what is known as “friction ridge skin” found on that individual’s body.  “Friction ridge skin” is an anatomical term for the corrugated skin found on primate fingertips, palms and soles.  Examination of these areas shows that they are traversed by lines (known as “ridges”) and that these ridges often curve, branch, and end abruptly.  The result is that these anatomical areas are covered with a complex weave of curving, branching, and connecting ridges that bears the appearance of an extremely intricate railroad switching yard.


The inference that the mark came from the targeted individual is typically not made from friction ridge skin itself, but rather from what is commonly known as an “inked print,” but which we will call, again adopting Champod et al.’s terminology, simply a “print,” to be distinguished from a “mark.”  A “print” is a deliberately recorded image of the friction ridge skin.  Historically, prints were typically made using ink pressed onto paper; today they are often digitally scanned.  In either case, the print is, of course, not an exact replica of the friction ridge skin, but an imperfect two-dimensional representation of a three-dimensional structure.


Thus, the examiner seeks to determine whether the mark is consistent with the print.  Since the origin of the print is factually known – because it was taken from an individual in custody or from an individual (such as a cooperating witness) whose identity is otherwise, for practical purposes, unquestioned – if the mark and print are consistent, the examiner infers that the individual who is factually known to be the source of the print is also the source of the mark.  Reasoning thusly, the examiner testifies that a particular individual is the source of a mark, which can be enormously powerful testimony in resolving a legal matter.


One obvious question raised by this description of the process is: What is a finding of consistency?  It is important to note that latent print identification is not based on a finding of identicality – that the mark and the print are identical.  In fact, while the (rather unspecific, as the reader of the above several paragraphs will now recognize) truism “no two fingerprints are exactly alike” is well known, latent print examiners hold equally to the truism that no two prints, even from the same source, are exactly alike. [60]  And, indeed, an examination of marks and prints from cases shows that they are not identical at all.  The claim, after all, is not that they are identical, but that they derive from a common source.


In short, latent print analysis generates evidence that a particular area of skin is the source of a particular mark.  What sort of evidence would allow a court to assess the reliability of this form of evidence?  (It should be noted that this is quite a different question from asking whether or not the evidence is “science,” “useful,” or generally “good.”)  The most obvious answer would be some sort of measurement of the accuracy of LPEs’ source attributions – a test of their ability to make correct source attributions.  How often are LPE source attributions correct and how often incorrect?


Such a measurement cannot be derived from casework because in casework we lack access to “ground truth,” knowledge of the true source of any particular mark.  Not even the corroboration of a second expert, known in the trade as “verification,” or even corroboration by an adversarial expert, hired on behalf of the accused, can provide us with ground truth.  Nor does a jury’s conclusion that a defendant is guilty beyond a reasonable doubt of a particular criminal offense constitute ground truth that that defendant was, in fact, the source of a mark found at the scene of that offense.  Indeed, this is even more so when the jury’s conclusion is based, in whole or in part, on an LPE’s opinion that the defendant was the source of the mark.  Because accuracy cannot be measured through casework, accuracy measurement requires deliberately conducting a simulation in which ground truth can be controlled by the experimenter.  The obvious method would be to manufacture marks deliberately so that their true origin is known to the experimenter.


Of course, a sophisticated study of this empirical question would not yield a simple binary answer, such as “95% correct.”  Rather, a sophisticated study would presumably yield accuracy rates that varied according to certain parameters.  The most obvious ones are the quality and quantity of information available in the latent print and the skill level of the examiner, but other parameters might also have an impact on the accuracy of latent print analysis.  For example, Professors Denbeaux and Risinger argue that they are essentially willing to assume that identifications made from good-quality impressions of all ten fingers are always correct. [61]  They are also willing to assume that this finding of absolute correctness would extend down to some smaller amounts of information, though how much smaller they do not know.  Professors Denbeaux and Risinger correctly point out that Kumho Tire’s “task at hand” requirement dictates that courts should distinguish inquiries into reliability according to the difficulty of various tasks.  That is to say: The question “How accurately can LPEs make source attributions for complete sets of ten prints of good quality?” is quite different from the question “How accurately can LPEs make source attributions for single partial latent prints of marginal quality?”  The two questions clearly should not yield a single common (or “global,” as Professors Denbeaux and Risinger put it) answer.


In this, Professors Denbeaux and Risinger are undoubtedly correct, but the more vexing question is what to do in the current situation in which the proponent of the evidence has not differentiated its claim into appropriate subtasks.  In latent print admissibility hearings, the government has put forward a “global” claim: That latent print source attributions are reliable for all items of evidence from which latent print examiners choose to make source attributions.  In earlier historical periods, and in some countries still, latent print examiners limited their claims by a number of corresponding ridge characteristics, or “points.” [62]  That is, the claim was “latent print identification is accurate for latent prints containing more than twelve [or some other number] ridge characteristics.”  Today, for most U.S. practitioners and law enforcement agencies, the claim is no longer limited in this fashion.  Instead, latent print examiners are expected to report conclusions of identity only for those latent prints for which they believe accuracy is assured.  This yields the rather vaguer claim: “latent print identification is accurate for those latent prints which examiners believe are ‘identifiable.’”  In other words, the proponent of the evidence does not concede the seemingly self-evident notion that there must be a gradation of accuracy according to the amount of information in the object being analyzed.


The situation is further complicated by the fact that no scale exists upon which the amount of information in a mark can be specified.  Whereas, in the document examination area that is the primary focus of Professor Denbeaux and Risinger’s work, the subdivision of tasks into subtasks is seemingly self-evident, it is not obvious how to subdivide latent print source attribution tasks, especially without a scale with which to measure the amount of information in a mark.


One question in this situation is how a court should respond when presented with a global claim of this sort.  It seems to me that a court would have difficulty imposing a differentiation of tasks upon the proponent of the evidence, and the court would simply have to evaluate the evidentiary claim as it is given by the proponent.  Another question is how a scholar should respond when presented with a global claim of this sort.  Here Professors Denbeaux and Risinger and I part company in that they appear to feel a greater obligation to differentiate tasks and concede the reliability of latent print source attributions at the easier end of the continuum of task difficulty.  I tend to think that it is the responsibility of the expert making a knowledge claim to specify their claim and have it evaluated as they specify it.  I, therefore, feel less obligated to differentiate latent print examiners’ tasks since, in the face of all reason, they make global claims to accuracy for all tasks.  One possible rationale for such a stance is that the expert community should bear a cost for making what, as Professors Denbeaux and Risinger correctly point out, is an excessively global claim.


In any case, an accuracy measurement, preferably gradated according to the amount of information contained in the latent print and perhaps other variables as well, is the sort of evidence about latent print evidence that the court might expect to find.  The reason there is currently a legal controversy over the admissibility of latent print evidence, however, is that no such evidence has yet been proffered by the government in response to any challenge to the admissibility of latent print evidence.


This is not the same as saying that no such evidence exists.  The accuracy data that does exist is quite poor, but some data from simulations in which ground truth was known does, in fact, exist.  One source of such data derives from proficiency tests conducted between 1983 and the present by Collaborative Testing Services (CTS) in conjunction with the American Society of Crime Laboratory Directors. [63]  This is not ideal data from which to generate accuracy measurements.  First, the proficiency tests were conducted by mail.  The amount of time taken to complete the test and the number of individuals who completed each test are not known.  The qualifications of the individuals who completed each test are not known.  The difficulty level of the test items is not known.  Finally, the proficiency tests were not “masked.”  In other words, the test takers knew that they were taking a test.  A masked proficiency test would arguably better replicate the accuracy of actual casework.  For all of these reasons, it can be argued that the CTS proficiency tests provide only a very crude accuracy measurement for actual latent print casework.  Nonetheless, in the absence of any other data, some researchers have compiled the accuracy rate on CTS tests. [64]


Another source of accuracy data is a study conducted by Wertheim et al., of the accuracy of trainees during instruction in latent print analysis. [65]  Again, the data is far from ideal.  The examiners were trainees, with varying levels of experience in latent print casework.  They were able to choose the difficulty of the prints they undertook to attribute.  They were given “hints” by the instructors.  The study’s authors characterized many apparent errors as “clerical errors.” [66]  Again, these are good reasons to argue that this study provides only a very crude accuracy measurement for actual latent print casework.


The stereotypical contours of argument in legal battles over expert evidence typically consist of studies being put forward by one party followed by methodological critique of those studies by the opposing party.  Actors from both sides of the controversy agree that the proficiency test data cited above suffers from numerous flaws.  Were the data to be offered as the “evidence” from which the accuracy of latent print identification should be inferred, it would surely be attacked for those flaws.  It is important to note, however, that this is not the nature of the legal battle over the admissibility of latent print evidence.  Instead, the government has not put forward the above potential sources of accuracy data in defending against admissibility challenges to latent print evidence.  Indeed, LPEs have publicly criticized defendants’ experts for mentioning these sources of data in such hearings. [67]  Further, both sources of data contain disclaimers that essentially inoculate them against being used as sources of accuracy data for latent print analysis.  Each CTS Report states: “Since it is the laboratory’s option how the samples are to be used (e.g. training exercise, known or blind proficiency testing, research and development of new techniques, etc.), the results compiled in the Summary Report are not intended to be an overview of the quality of work performed in the profession and cannot be interpreted as such.” [68]  Similarly, the Wertheim et al. study states “[t]hese data should …not be used as a predictor of error or an estimate of reliability for an examiner on the witness stand.” [69]


It is important to emphasize, therefore, that, as far as courts engaged in admissibility determination are concerned, the above two sources of accuracy data do not exist.  They have never been proffered by the government as evidence of the reliability of latent print evidence.  Why this is so can only be known for sure by the prosecutors who have handled the admissibility challenges to latent print evidence and those who have advised them, but some speculation is possible.  The explanation cannot be that the government feels that these accuracy rates would result in exclusion of the evidence.  Although Daubert is vague as to precisely how low the error rate of a proffered technique needs to be in order to render it admissible, it seems unlikely that the relatively low false positive error rates found in these studies are above this threshold.  Since the government cannot be concerned about the admissibility of the evidence, it must be concerned about its weight.  Given that LPEs apparently believe and often testify that latent print identification is “100% accurate” and that it has an error rate of “zero,” [70] one can see why the government might be concerned about introducing even these high accuracy rates into evidence in an admissibility hearing.  Once introduced in an admissibility hearing, they would presumably become fodder for cross-examination.  Astonishingly, even data showing very high accuracy would have the effect of downgrading the probative value of the evidence from the current status quo.


The analogy with medicine offers a potential explanation for the absence of accuracy measurements of latent print analysis.  As noted above, there are many medical interventions that cannot practically, ethically, or cost-effectively generate success rate measurements.  In such cases, even proponents of EBM are satisfied to rely on clinical judgments.  Is latent print analysis analogous to one of those areas of medicine?  Should latent print analysis simply be treated as a clinical judgment?


Certainly, as I have argued elsewhere drawing on the work of the historian Carlo Ginzburg, one can conceive of latent print analyses as clinical judgments. [71]  However, I have suggested that they are, in fact, clinical judgments that have been presented to their consumers as something more accurate and precise. [72]  In any case, there is nothing about latent print analysis that makes it like one of those areas of medicine for which it is practically or ethically infeasible to generate success rate measurements.  While it seems reasonable to deem admissible clinical judgments that cannot practically or ethically generate success rate measurements, this exemption would not appear to apply to latent print evidence.


VI. Latent Print Evidence in Trial Court Daubert Inquiries


The upshot of this, of course, is that as far as courts are concerned, there is no accuracy data for latent print source attributions.  In other words, there is no evidence, of the sort that practitioners of EBM would consider “evidence,” as to the accuracy of this form of evidence.  In the absence of conventional accuracy data, what sort of evidence have courts relied on in finding latent print expert testimony admissible?  It is not possible to answer this question comprehensively because such determinations are made at the trial court level.  Many trial courts make such decisions without issuing written rulings, as did the court in the first such challenge in United States v. Mitchell.  Even if the trial court does issue a written ruling it may not be published. [73]  Below, I will discuss the evidence that trial courts have cited in support of the claim that latent print analysis is reliable.  For each evidentiary claim, I will explain why it does not constitute evidence of reliability of latent print analysis.  Although there are some appellate court rulings concerning the admissibility of latent print evidence, I will not discuss them here.  Instead, I restrict my discussion here to direct reports of trial court rulings.  Although some of the appellate court rulings do invoke purported evidence of the reliability of latent print evidence, strictly speaking, the issue before the appellate court is not the reliability of latent print evidence itself, but rather whether the trial court’s decision was an abuse of discretion. [74]  In addition, the appellate court rulings have already been extensively discussed and critiqued in the legal literature. [75]


  1. Evidence of Legal Admission and Use of Latent Print Evidence
  2. Evidence that Latent Print Identification Has Been Used in Court for around a Century
  3. Testimonial Claims That One Laboratory (the FBI Laboratory) Was Not Aware of Having Rendered any Erroneous Conclusions of Individualization
  4. Latent Print Conclusions Can Be Verified by Other Experts


E. Summary


None of this evidence, even if taken at face value, addresses the question of the accuracy of latent print individualization.  In addition, none of the literature defending latent print individualization offers any evidence concerning the accuracy of latent print individualization. [93]  In the absence of any information as to the accuracy of latent print individualization conclusions, an informed, reasonable observer certainly might not “accept” conclusions of individualization.  Indeed, while not all expert knowledge claims necessarily lend themselves to conventional validation through controlled experiments, just as not all medical interventions lend themselves to RCTs, given the nature of the latent print examiners’ claim – that they can correctly identify the source of a latent print to the exclusion of all other possible sources in the universe – any “rationalist” would demand some sort of empirical measurement of their accuracy rate. [94]


It is important to emphasize that this is not a situation in which adversaries dispute the persuasiveness of competing evidence.  Government responses to admissibility challenges to latent print evidence consist of arguments, but they do not produce anything that would be recognized as evidence in any rationalist endeavor, like science, medicine, policy, or journalism.  Latent print evidence is not evidence-based evidence.


F.      Trial Court Rulings Finding an Absence of Evidence Supporting the Reliability of Latent Print Evidence


A minority of trial court admissibility rulings have acknowledged that latent print evidence is not evidence-based evidence.  In United States v. Sullivan, the Eastern District of Kentucky noted “that, while the ACE-V methodology appears to be amenable to testing, such testing has not yet been performed.” [95]  However, the court found “that this concern does not render fingerprint evidence unreliable for the purposes of Daubert,” reasoning that lack of testing went to the weight, not the admissibility, of the evidence. [96]


Another such decision is Rose, the first case mandating a blanket exclusion of latent print evidence. [97]  Press attention has focused on the court’s discussion of the notorious Mayfield case, in which the FBI committed a misidentification. [98]  But the commission of a misidentification, even a high-profile misidentification by the FBI and its ratification by an examiner retained by the defendant, does not logically support exclusion of the evidence.  First, misidentifications have been known to the courts since the 1920’s.  More importantly, no admissibility standard demands an absence of error as a condition of admissibility – such a demand would be absurd.  Instead, admission requires evidence of reliability.


Rather than being undone by the Mayfield case, a closer reading of the trial court’s opinion would seem to suggest that the government simply did not put forward any evidence supporting the reliability of the latent print source attributions.  As the court put it, “the State did not prove in this case that the opinion testimony by experts regarding the ACE-V method of latent print identification rests upon a reliable factual foundation.” [99]  The court noted that, “While the ACE-V methodology appears amenable to testing, such tests have not been performed.  The principles underlying ACE-V, that is the uniqueness and permanence of fingerprints, cannot substitute for testing of ACE-V.  There have been no studies to establish how likely it is that partial prints taken from a crime scene will be a match for only one set of fingerprints in the world.” [100]  In its denial of the State’s motion for reconsideration, the court further noted that “the Defendant demonstrated that there are no studies of the ACE-V method to determine the reliability of the methodology.” [101]


Crucial in this regard is the issue of the burden of proof in an admissibility hearing.  Authorities agree that the burden of proof in an admissibility hearing rests upon the proponent of the evidence. [102]  However, the Rose opinion was among the few opinions in the line of latent print admissibility challenges to acknowledge this.  In one such case, Virgin Islands v. Jacobs, the court excluded latent print evidence where the government put forward no evidence whatsoever concerning the reliability of latent print evidence. [103]  But, in what is probably the best known such case, United States v. Llera Plaza II, the court shifted the burden of proof to the defendant, making the absence of evidence concerning the accuracy of latent print evidence count against the opponent of the evidence. [104]  In United States v. Mitchell, the court unabashedly shifted the burden to the opponent of the evidence. [105]  In Rose, however, the court noted that “the burden is on the proponent of the evidence to prove the reliability” of the evidence. [106]  It concluded that “the State did not meet that burden in this case.” [107]  In its denial of the State’s Motion for Reconsideration, the court admitted that it was “surprising… to this Court that the State was not able to meet its burden of proof in this case,” and stated that “it has been shocking to the community.” [108]


VII. Conclusion


It is indeed shocking that the government appears unable to muster any evidence of reliability for a technique as venerable as latent print identification.  The fact that the government cannot support the claim of reliability does not, of course, necessarily mean that the technique is highly inaccurate.  Perhaps one reason that it is so difficult to muster evidence in support of latent print evidence, however, is that courts have been shielding the government from the demand for evidence of reliability.  In the pre-Daubert era courts allowed latent print evidence to win admissibility based on the ipse dixit of its practitioners. [109]  In the post-Daubert era, they continued to allow admissibility without demanding what any rationalist enterprise would treat as evidence of reliability.  These rulings not only protected the government from generating evidence about the reliability of latent print evidence, but may have actually discouraged the government from generating it.


In the case of forensic evidence, the situation is similar to that which obtained in medicine at the advent of evidence-based medicine (EBM).  That is, there were some “treatments” that had been used for a long period of time on the assumption that they were effective, without any evidence that they were, in fact, effective.  Similarly, there are some forensic techniques that the criminal justice system has been relying on for a long period of time on the assumption that they are reliable, without any actual evidence that they are, in fact, reliable.  As in the case of medical treatments, we should expect that in some cases our assumptions have been well founded, and in other cases they have not.  The legal admissibility problem, however, is easier to solve: Daubert demands evidence of reliability; it does not allow for the assumption of reliability.  This, I would suggest, is part of the explanation for the vexing nature of admissibility challenges to forensic evidence.  Many forensic techniques, however accurate they may actually be, simply lack evidence concerning their accuracy.  In this situation, a strict reading of Daubert demands exclusion even of evidence that may turn out to be highly accurate, until such time as evidence of its accuracy is amassed.


The history of latent print admissibility challenges serves to illustrate the need for conceptualizing Daubert inquiries as demands that evidence used in trials be “evidence-based.”  If courts take seriously the notion that Daubert hearings are trials that demand the production of evidence about the reliability of the evidence that parties propose to use in the enveloping trials, perhaps the American legal system will move a step closer to joining the “evidence-based society.”

KEPT - Keeping Examiners Prepared for Testimony - #35
Accreditation - the Meaning of Accreditation
by Michele Triplett, King County Sheriff's Office


Disclaimer:  The intent of this is to provide thought provoking discussion.  No claims of accuracy exist. 


Question – Benefits of Accreditation:

What does it mean when a lab is accredited?


Possible Answers:

a)      It means that the laboratory has external audits done to ensure that they are meeting certain standards.

b)      It means a lab adheres to requirements set by the organization that accredited it.

c)      It means an agency has quality assurance measures and protocols in place and they follow them.

d)     It means that an agency meets minimum standards set by the accrediting organization, some of which are essential requirements and others are listed as desirable requirements. Some of the requirements include having external audits done, participating in proficiency testing, and having standard operating procedures.



Answers a-d:  All of these answers seem adequate.



Feel free to pass The Detail along to other examiners.  This is a free newsletter FOR latent print examiners, BY latent print examiners. With the exception of weeks such as this week, there are no copyrights on The Detail content.  As always, the website is open for all to visit!

If you have not yet signed up to receive the Weekly Detail in YOUR e-mail inbox, go ahead and join the list now so you don't miss out!  (To join this free e-mail newsletter, enter your name and e-mail address on the following page:  You will be sent a Confirmation e-mail... just click on the link in that e-mail, or paste it into an Internet Explorer address bar, and you are signed up!)  If you have problems receiving the Detail from a work e-mail address, there have been past issues with department e-mail filters considering the Detail as potential unsolicited e-mail.  Try subscribing from a home e-mail address or contact your IT department to allow e-mails from Topica.  Members may unsubscribe at any time.  If you have difficulties with the sign-up process or have been inadvertently removed from the list, e-mail me personally at and I will try to work things out.

Until next Monday morning, don't work too hard or too little.

Have a GREAT week!