
G o o d   M o r n i n g !
Monday, April 27, 2009

The purpose of the Detail is to help keep you informed of the current state of affairs in the latent print community, to provide an avenue to circulate original fingerprint-related articles, and to announce important events as they happen in our field.
Breaking NEWz you can UzE...

by Stephanie Potter

Macon police link two to string of burglaries
Macon Telegraph - Macon,GA,USA
By Amy Leigh Womack - Police lifted fingerprints from a burglarized house ... that led to the arrest of two men who have been linked to 12 ... burglaries committed between Jan. 19 and April 4....

Newly installed print database helps ID suspect in Arlington ...
Fort Worth Star Telegram - Fort Worth,TX,USA
Police linked a palm print found on a window and armoire in their apartment to Sanchez using prints [from] his arrest last year on a misdemeanor public intoxication charge. ...

Bain: Rifle 'completely covered in blood' - New Zealand
Kim Jones, a fingerprint technician with the Christchurch police, said ... the Bain murder rifle had "sharp, definitive and recent'' fingerprints ... deposited by fingers covered in blood. ...

Hutchinson News Online
Reno County District Judge Tim Chambers declared a mistrial after ... a Reno County Sheriff's deputy testifying in the case mentioned ... the fingerprint match to Cullison was made because Cullison's prints were on file from "prior" cases....

Fingerprints, Gun Tie to Craigslist Suspect
ABC News - USA
...his fingerprints were found on plastic restraints at the scene of two of the three suspected Craigslist victims... [and] on the wall of a Warwick, R.I., hotel room where a third woman was robbed...

Recent CLPEX Posting Activity
Last Week's Board topics containing new posts
Moderated by Steve Everist and Charlie Parker

Public CLPEX Message Board
Moderated by Steve Everist


Updated the Detail Archives

Last week

we looked at the first known mention of the NAS Committee Report in a publicly available motion to exclude impression evidence.

This week

Gerald Clough brings us part 1 of a 2-part series that defines what is (and what is NOT) validation in fingerprint identification.


Validation in Fingerprint Identification Part I - The Nature of the Beast


We share a cage with a tiger. It’s quite a large cage and, until now, quite a sleepy tiger. We’ve been stepping around this tiger, treating it more as a tiger skin rug than a real tiger, judging that it could not cause us much harm unless we decided to stick a hand between its toothed jaws. Sometimes, we jump over the beast to avoid waking it. But recently, others who join us in the cage have been pointing out that there is indeed a fully grown tiger in here, and they ask what is to be done about it. Some of them have been poking at it with sticks, hoping it would awaken and force us to take notice. As the adage goes (sort of), you can ride a tiger or wait until the thing gnaws your leg off. We need to think about riding this tiger. It’s not the only tiger in here, but it’s the one we’re least familiar with.


This tiger is a metaphor for the notion and reality of validation. I like metaphor, and I like tigers, so there. This essay came into being because I found myself having to exert some effort to gain and maintain a correctly useful view of validation. If I reiterate and reword at various points, it’s because I found that I had to keep reminding myself of what I was trying to explore, and I expect some others will need the same kind of reorientation from time to time. If you “get it,” please bear with the rest of us.


Validation is a concept deeply embedded in issues addressed in the NAS Report on the state of forensic science disciplines. It is not the only issue, but it is a fundamental one. It addresses the raison d’être for each discipline. It demonstrates why we should care about fingerprints, tool marks, bullet striations, bite marks, DNA, or any other associative evidence, and it explores how much we should care. And it helps validate our claim to scientific accuracy. Arguably, much of what the NAS Report urges is directly aimed at validating forensic sciences, including fingerprint identification.


The scientific and technical community uses the term to denote testing of various theories, procedures, and techniques. Here, I take a narrower view of validation than is often taken: I address validation of the fundamental principles of fingerprint identification. Validation can certainly refer to testing of human methodology to determine such things as error rates, but limiting its meaning in this way facilitates the discussion, and the reader should not allow this narrow use to cause any real confusion.

Validation is often taken in writings on fingerprint identification to refer to testing both the fundamental principle and the technique, the actual method, in combination. Those references to “method,” “technique,” and “procedure” often confuse the subject. The eminent law professor and evidence specialist Edward J. Imwinkelried helps clarify in his comments on “appropriate validation” with reference to Daubert:


“The validity test is designed to ensure to the extent possible that correct test procedures are used; in a validity test, the researchers attempt to eliminate any concern about the use of a proper test procedure because they want to reach the central question of how often the scientific technique itself will produce inaccurate results despite proper test protocol.”


Because our discipline is applied largely by visual examination and interpretation of features, I suggest that validation efforts be isolated from human factors as we seek to answer that “central question.” No study of reliability of any flock of humans who might examine fingerprint impressions can ever apply to an individual examination, except to direct attention to potential error. Validation tests the reliability of the premise of fingerprint identification. (And that is the last time I will use “reliability” in that way. Henceforth, reliability applies to human error or procedural flaws.)

Validation, as the term is used here, answers the critics' claims that latent print examiners prefer to cite the generally unquestioned uniqueness of friction ridge skin, à la Meagher, and leap past the question of what it takes to apply the principle accurately. Validation addresses that question.



What is validation in a practical sense?


It is about the general validity of reaching conclusions from observations.  A validation study is a cold-blooded inquiry that seeks to determine if accurate observations that accumulate to an objective threshold produce accurate conclusions. There are various kinds of conclusion. Some are statistical, DNA conclusions for instance. Other conclusions may be stated as practical uniqueness or as class membership. Validation addresses conclusions of all kinds.


Validation, for our purpose, is an issue of conclusions made at the intersection of science and law and is therefore more within the realm of useful practicality than a matter of stating ultimate truth. That doesn’t mean we can be less than precise in expressing conclusions.


Validation seeks to differentiate conclusions grounded in rational scientific theory in conjunction with appropriate observations relevant to application of that theory, contrasting those with statements made from mere experience. In validation, the theory/observation combination is objectively tested for its ability to produce accurate conclusions.


In medical evidence, the distinction is made between a conclusion relying on a physiological theory and observations relevant to the issue to which the theory applies and, on the other hand, a conclusion based merely on clinical (practice-based) experience that leads the expert to state a belief or best guess. The conclusion from clinical experience cannot be validated, because it cannot be defined, either in specific relationships among observation, physical reality, and physical theory, or in an objective threshold for conclusion. Clinical decision making in medicine is part of the art and is necessary to make practical choices among alternate diagnoses and therapies. Conclusion from theory and data, meeting tested threshold criteria, is science.


Along the meandering path from applied theory to evidentiary weight, we must, for most processes, consider multiple issues. Validation does not specifically address many critical concerns within a discipline. Whenever human judgment must be applied to the creation or interpretation of data, validation presumes observations will be accurately made and interpreted and does not test those presumptions. And it is not quite accurate to refer to validating a discipline. A single validation study cannot address everything that may be concluded within a discipline that is being applied to its full potential.


Validation does not test method. We must avoid being misled into seeking validation by testing method. For example, the term “validation” as used here does not refer to the purpose and result of a study of method. ACE‑V is an articulation of a sequence of activities intended to represent a best practice that tends to prevent some kinds of bias and ultimate misinterpretation. ACE-V can be clearly defined, and a right-minded operator can know and document when they are following that protocol. One may study how well it controls error or how it promotes efficiency, but it is not a subject for validation, for it does not itself provide the nexus of theory and conclusion. It is method.


Validation is disinterested in how an examiner might make observations in cases. In individual evidentiary cases, there may well be disputes among examiners over interpretations. Those disputes call for credibility judgments. They are not issues of validity.


Validation is not a matter of how “hard” an identification an examiner can make. The question in validation is one of simple physical commonality, and does not involve any judgment of interpretations of ambiguous or arguable details. It asks, “What does it take?” The practitioner supplies the answer to, “What do I have?” Degree of difficulty in supplying that answer depends on the subject of the examination and the training, talent, and experience of the examiner.


Validation studies are not reliability studies. Reliability is the degree to which one can depend on individual conclusions, measured by studying many actual or well-modeled analytical operations. Any measure of reliability must consider sources of potential error and must be conditioned by operator variables and the quality of data and its interpretations. Validation is a prerequisite of scientific reliability, but reliability, although a critical forensic science issue, need not be determined to establish validity. A discipline practiced by bunglers may be frankly unreliable but valid. One must gain a solid understanding of the differences among validation, reliability, and credibility to understand the role of each and the nature of validation.


Only a scientific principle of association can be validated. A validation study of astrology is pointless. No coherent theory of the physical world underlies its method, and determining the accuracy of its conclusions is inherently subjective. If a reliability study were to show that astrologers predicted generally agreed upon characteristics of individuals with some stated degree of reliability, it would still not be scientific. The characteristics’ physical relationships to reality could not be established, and there is nothing to validate.


Validation is a statement of potential. It ignores error and bias. Error and bias are factors in reliability. Reliability studies test performance, not the scientific rationale. The results of reliability studies assist those who must judge relative credibility of individual conclusions by orienting them to how often the process fails under stated conditions. A reliability study might test only certified examiners. Or it might test reliability against experience. Reliability might be measured for different procedural methods.


Lack of validation is the source of objections to the “zero error rate” dogma. Zero error rate claims are conditioned by references to examiner performance. But the central issue is validity. In a scientific environment, absent validation, any claim to accuracy is strictly an expression of clinical track record.


Validation is not a test of theory. The theories underlying comparative forensic sciences are accepted for all practical purposes to be correct. Fingerprints are hugely variable to the point of being usefully considered unique and may be assigned to specific classes, and examiners know why, and know which observable characteristics relate to uniqueness and which speak to class. Various manufacturing processes create both unique* and class characteristics observable by their examiners. Conclusive proof of theory is not required for validity. Obviously, a disproved or irrational theory, or a theory whose falsity could never be discovered, cannot support a scientific forensic identification discipline.


* It is arguably erroneous to consider any natural phenomenon to be absolutely unique. In order not to repeatedly qualify every occurrence of the term, I ask that the reader take its meaning to imply an infinitesimally small probability of duplication. Note, though, that in the statement of the theory underlying fingerprint identification, the word may be taken to mean absolute uniqueness without doing harm to the validation concept. 


One of the most revealing aspects of the nature of validation is the realization that validation does not require expertise in the discipline being validated. One need not be a firearms examiner to study the validity of conclusions in firearms examination. Validation does require the student to be able to formulate an acceptable scientific study and apply any statistical or other tools required to produce a generally acceptable validation. Validation does not reexamine cases. Validation does not reach conclusions on identifications. It validates the justification and limits for claimed ability to conclude.


Validation does not test whether or not one methodology is better than another. It does not study sources of error or the frequencies of their occurrences or the conditions under which they are more or less likely to occur. Reexamination and error source determinations require intimate knowledge and experience in the discipline. This is not to say that no one other than adepts may contribute to inquiries into error or even into specific cases, but neither is a direct validation issue.



Validation does not settle all arguments. Validation presents a scientific argument that the applied combination of theory, data, and threshold is a scientifically valid way to reach a conclusion. And science does not step back, dust off, and say, “Whew! All done.” There may be many approaches to studying the validity of the same combinations. As advertisements warn, “Results may vary.”


So, what does validation require? I will present four requirements for what must be used in a validation study and discuss each.



Validation requires a meaningful and relevant theory of physical reality.


The theory must be rational and not contrary to established scientific knowledge. To be a theory, it must be scientifically falsifiable. It must be stated so that it is amenable to being disproved. For fingerprint identification, the theory need only state that:


Human friction ridge skin has characteristic formations of pore structures that persist absent deliberate or accidental physical alteration and that are observably unique to the individual.


The theory is rationally consistent with scientific knowledge and a very large number of opportunities to observe an inconsistent physical reality, and we can easily state an observable falsifying condition. It is appropriately parsimonious in that it is the simplest possible theory-tool for the job. We need not qualify or quibble over the meaning of “unique.” We’re not going to be testing the theory for truth. We will test it as a component of demonstrated validity.


We may observe various fingerprint characteristics in various ways to apply the theory to the problem of discrimination of sources. The theory does not refer to how it is applied. Although our knowledge of how some characteristics form and other physiological knowledge may go toward the statement of theory and its general acceptance, validation does not prove the theory.



Validation requires a clearly reportable statement of the relevant data that is presumed to be observable and reliably interpreted as to character and relationship.


What can the examiner say in generally accepted objective terms about objects he observes and the similarities or differences he uses to form a conclusion? Some potential forensic disciplines have not developed any coherent way to express their observational data. Data definition would be the first task of anyone hoping to validate such things as ear print identification. Bertillon’s anthropometry system began with data definition.


This is the data that will accumulate during an examination and which will, if sufficient, lead to a conclusion. Because validation is neither a test of competency nor a test of reliable interpretation of observations, the data must be idealized. It may, for instance, derive from well-recorded inked impressions or clear scans. Or it may be generated as a set of data statistically similar to the population, the realm within which the conclusions will apply. Validation studies for fingerprint identification must derive data from or test models against records of human fingerprints so that the study may explore the limits of conclusions made from genuine characteristics.


The data must be articulated in an unambiguous and standard manner. If the observations include the characterization of ridge behavior, descriptions must conform to a clear standard, crafted so that every adept will agree on what the terms mean, if not necessarily the relative virtue of those terms. If a relevant observation establishes relationship among regions within which a particular characteristic is defined, the distance, direction, or other relationship must be clearly defined.

The members of the data set must be described in those unambiguous ways so that agreement between the characteristics of the members of the set may be articulated. And that agreement must be articulated in such a way that the products of comparing two members comprise a specific accumulated degree of agreement.
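The requirement above, unambiguous description plus an articulable accumulated degree of agreement, can be sketched in code. This is purely an illustrative assumption: the feature vocabulary, the `Minutia` record, and the counting rule are invented for the sketch, not an established standard.

```python
# Hypothetical sketch: features described in a fixed, unambiguous vocabulary,
# so that comparing two records yields a specific accumulated degree of
# agreement. All names and encodings here are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Minutia:
    feature_type: str   # e.g. "ridge_ending" or "bifurcation", from a fixed vocabulary
    ridge_count: int    # ridges crossed to a reference feature (a relationship, not a metric)
    region: str         # named region of the impression, e.g. "core", "delta"

def degree_of_agreement(a: list[Minutia], b: list[Minutia]) -> int:
    """Count features described identically in both records."""
    return len(set(a) & set(b))
```

The point of the sketch is that agreement is countable only because every field is drawn from a standard, closed description; if the terms were ambiguous, the comparison would be meaningless.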


Note that these descriptions need not include any metric tolerances. Tolerance in describing direction or distance is a matter for interpretations of observations and therefore not a validation issue. For instance, validation is not concerned with reconciling distortion effects, because validation utilizes data from or modeled on reality as ideal records.



Validation requires an objective definition of what products of those observations are proposed to determine that a particular conclusion will be accurately declared.


The definition must refer only to the observations and interpretations amenable to clear and unambiguous articulation. Only then can the student of validation determine if the study is applying the definition of sufficiency that will be applied by examiners, the threshold of conclusion.


The threshold being tested need not be a single score or a single list of minimum observations that must agree. Any number of variously constructed thresholds may be articulated, each representing a product of observation and interpretation that represents valid grounds for conclusion. Of course, each distinctly defined threshold must be tested to determine that, when satisfied, it can accurately associate an impression with its source.


Assuming all terms and characterizations are unambiguously stated, a minimum point count in agreement represents a kind of simple threshold. Realistic thresholds will likely include logical operations combining some number of observations of different aspects of an impression, including pertinent negatives, and might incorporate what is known about relative frequency of occurrence of different types of features. Validation studies may reveal valid thresholds not previously considered. Indeed, the validation process may be used to discover new ways to combine examination products.
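As a purely illustrative sketch of such a compound threshold, one that combines a simple point count with a rarity-weighted score, consider the following. The feature names, weights, and cutoffs are invented assumptions for the sake of the example, not validated values.

```python
# Hypothetical compound threshold: BOTH a minimum point count AND a
# rarity-weighted score must be satisfied. Weights and cutoffs are
# illustrative assumptions, not validated values.
RARITY_WEIGHT = {
    "ridge_ending": 1.0,     # assumed common feature, low weight
    "bifurcation": 1.2,
    "trifurcation": 3.0,     # assumed rare feature, high weight
}

def meets_threshold(agreeing_features: list[str],
                    min_points: int = 12,
                    min_weighted_score: float = 14.0) -> bool:
    """True only if both the point count and the weighted score reach
    their cutoffs -- one example of a logically combined threshold."""
    if len(agreeing_features) < min_points:
        return False
    score = sum(RARITY_WEIGHT.get(f, 1.0) for f in agreeing_features)
    return score >= min_weighted_score
```

The design point is that a threshold need not be a single number: any objectively stated logical combination can be articulated, and each distinct combination is a separate candidate for validation.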


Validation studies apply only to examinations in which the same types of data are obtained both in the examination and the validation. If we consider examinations of multiple impressions in a particular anatomical sequence, a kind of simultaneous impression, we must undertake validation studies in that mode.



Validation requires, for each combination of observation and conclusion, known truth against which to test the accuracy of the conclusion.


This, of course, is the essential question. It is a trivial matter in our essential validation studies, because we are really asking what it takes to associate an object with itself as a unique entity, no other object being mistaken for it.


Crafting a validation study, then, is indeed a challenge. But remember that it is really a study of what we mean by fingerprints being identifiable when we apply the theory to declare an identification. Through validation, we can indeed conclude an identification, because we (1) apply a theory of physical reality that is generally accepted, even if unproven, (2) can precisely define what it means to recognize the unique source, and (3) have validated the threshold of conclusion.


No validation study can purport to have exhaustively explored every possible definable threshold, nor must it purport to have tested the most liberal threshold that will ever be validated. But any lesser or never-tested combination of observational data and threshold must stand as not yet validated until an appropriate study is crafted, conducted, and found acceptable.


It is entirely up to the discipline to propose just how it wishes to represent its conclusions, meaning, of course, which conclusions it wishes to validate. That does not mean it will play out so cleanly in the validation study. Any study that draws on a finite set of subjects must inevitably produce a result subject to probabilistic interpretation. To what degree that result directly addresses the practice of the discipline and dictates how valid conclusions may be stated may be argued. That is the nature of science. But the point is that validation studies put that discussion on firm ground, rather than “I know.” - “No, you don’t.” debates. In a very real way, that is the value of validation. It makes the discipline and its practice amenable to scientific treatment.


We must particularly remember that lack of validation study does not render a mode of practice within a discipline invalid. It merely means that it has not been scientifically validated. No one can correctly attack a discipline as invalid on account of lack of validation studies. An examiner may apply the theory generally accepted as valid. The examiner may accumulate observations and interpret those observations absolutely. The examiner may then apply a threshold that is not precisely defined but that the examiner believes is more conservative than any threshold the examiner would reasonably believe to be validated for the conclusion. In the absence of scientific validation, the examiner can still make a significant argument for validity. The examiner is not home free, for the judgment of threshold is a clinical best guess, and lack of solid scientific validation is grounds for attacking reliability.


It is not true that all admissible testimony by experts must reflect validated discipline. Psychology is pointedly excluded from scientific validation inquiry on just this ground. Such testimony is commonly viewed as clinical, if not mere guesswork, the lay juror or judge relying on their own interpretations of the experts’ observations, often attempting to detect bias, and sometimes simply being swayed by rhetorical talent. Validation tests whether particular physical phenomena can be used to reach accurate conclusions in a rational scientific context. Psychological conclusions can never be validated. Fingerprint identification conclusions can be validated. It just hasn’t been done yet.

I would like every latent print examiner to consider the following excerpt from a six-volume report of a 1963 study titled “Psychology: A Study of a Science,” headed by Sigmund Koch and commissioned by the APA:


"The hope of a psychological science became indistinguishable from the fact of psychological science. The entire subsequent history of psychology can be seen as a ritualistic endeavor to emulate the forms of science in order to sustain the delusion that it already is a science."


(Pause for a moment of silence. While the crickets chirp, consider your response to the same passage with “psychology” replaced by “fingerprint identification.”)


It is well to point out that validation need not be limited to absolute conclusions of identification. Validation speaks to the accuracy of the conclusion, no matter what conclusion may be considered. Dogma is not reason to reject out of hand any rational conclusion. Our general position has been that probabilistic conclusions are not supported. That is true, for there has been no validation. But that does not imply they cannot ever be supported as having been scientifically validated. We may, as participants in the criminal justice system, have concerns about undue probative weight on account of cultural factors, or questions about what weight, if any, such a conclusion should be given, but within our scientific arena, probabilistic conclusions might well be validated. And such conclusions may be preferable to testimony about observations alone, which leaves the finder of fact no expert assistance in giving them more or less weight. One test of whether a practice is scientific is whether research has the potential to alter the practice.


We should stand well back from history and not view the call for validation studies as inimical. Validation should be welcomed. What might we expect from such studies?


Validation studies, properly conducted, published, reviewed, replicated, and generally accepted, can allow a forensic examiner to present an analysis and conclusion that is subject only to specific challenge of observations and interpretations and demands to demonstrate credibility and perhaps membership in a class of examiner shown to produce particularly reliable results. A latent print examiner would, for example, be free from demands to admit that he hasn’t “examined the fingerprints of every person on Earth.” It no longer matters that the uniqueness theory has not been and almost certainly cannot be proven, because validation studies show that when certain observations are made, the theory can be relied upon in producing accurate conclusions. No proper validation study using properly stated data will ever find fingerprint identification generally invalid.


And again, without presupposing that historical conditions will continue unchanged, consider the position of an examiner confronted by opposing expertise contradicting the conclusion. If the examiner has concluded within the bounds of validation, the challenging expert may only (1) dispute the validation to challenge the threshold used or (2) challenge the observations. The former expert, to oppose a generally accepted validation study, must present an alternative study or point out a significant flaw in a reported, reviewed, and replicated study and explain why that “flaw” wasn’t brought up earlier. The latter challenge (2) is subject to demonstrative evidence, rational argument, and assessment of credibility.


We should think about the possibility that validation studies may not validate a sufficiently low threshold to admit some conclusions of identification that have been made from “clinical experience.” But we cannot remain within forensic science while concluding beyond the limits of validation. Nevertheless, the observations that fail to achieve the threshold for absolute conclusion may constitute more or less weighty evidence helpful to a finder of fact. I leave to others (or at least another time) considerations of whether or not the admissible products of examination should or should not be extended to similarities.


Validation or the lack of it is decidedly a two-edged sword in court. It may feel like a victory for a defendant to exclude a conclusion from fingerprint expert testimony, but it is not so simple. Facing such exclusion, a prosecutor limited to discussion of similarities may well devote considerable time at trial to demonstrations. Defendants often wish to move quickly through damning fingerprint testimony. Without conclusions, finders of fact are left largely to their own conclusions, and memes regarding the power of fingerprint evidence simply cannot be swept from their minds. The situation seems ripe for some spirited arguments of probity versus prejudice.


Latent print examiners must think carefully about how they will respond to questions from more knowledgeable attorneys. If an examiner confronted with a challenge cannot cite actual validation studies, the examiner must craft a position and perhaps an exposition on what tends to validate the examiner’s own conclusions. And attorneys who will sponsor fingerprint evidence must be briefed on the issue.


Work on validation studies may well lead to or require development of more precise terminology in a discipline. It may lead to validated conclusions of probability when applied to a limited population. It may reveal new aspects of the discipline that can be used to reach conclusions. And it may provide models and experience that may be applied to new disciplines or speculative fields that have not evolved standard descriptors and theories. I don’t think it fantasy to imagine that one day more generalized “biometric examiners” might conduct conclusive examinations of multiple aspects of human physiology.


Part II – Striping the Tiger will take a closer look at the issues in crafting a validation study for fingerprint identification.




Feel free to pass The Detail along to other examiners for Fair Use.  This is a not-for-profit newsletter FOR latent print examiners, BY latent print examiners. The website is open for all to visit! 

If you have not yet signed up to receive the Weekly Detail in YOUR e-mail inbox, go ahead and join the list now so you don't miss out!  (To join this free e-mail newsletter, enter your name and e-mail address on the following page:  You will be sent a Confirmation e-mail... just click on the link in that e-mail, or paste it into an Internet Explorer address bar, and you are signed up!)  If you have problems receiving the Detail from a work e-mail address, there have been past issues with department e-mail filters considering the Detail as potential unsolicited e-mail.  Try subscribing from a home e-mail address or contact your IT department to allow e-mails from Topica.  Members may unsubscribe at any time.  If you have difficulties with the sign-up process or have been inadvertently removed from the list, e-mail me personally at and I will try to work things out.

Until next Monday morning, don't work too hard or too little.

Have a GREAT week!
