_________________________________________

Breaking NEWz you can UzE...
compiled by Jon Stimac
Fingerprints Match Accused Murderer – ROCKY MTN. NEWS, CO - Aug 31, 2005
...fingerprints found on trash bags covering the body of a woman matched those of a roommate...

Fingerprint Expert Has Sharp Eyes – DAILY REVIEW ONLINE, CA - Aug 30, 2005
...during her last college semester she worked at her father's forensics consultation company and realized that forensics was her niche...

All Sorts of People End Up at Las Vegas Fingerprint Center – BILLINGS GAZETTE, MT - Aug 27, 2005
...it's a bureaucratic oddity that draws the unusual mix...

Fingerprint Challenge Fails; Robber Gets Jail – NEWS JOURNAL, DE - Aug 18, 2005
...a man who challenged the accuracy of fingerprints in federal court was sentenced to 30 years in jail for bank robbery...
__________________________________________

Recent CLPEX.com Message Board Posts
Ridgeology Science Workshop instructed by Glenn Langenburg
Jan LeMay Sun Sep 04, 2005 11:06 pm
Bloody receipt
Harres Sat Sep 03, 2005 11:33 pm
Photoshop Actions
Steve Everist Thu Sep 01, 2005 5:54 pm
Fingerprint Debate
Jason Covey Thu Sep 01, 2005 5:08 pm
Katrina
charlton97 Wed Aug 31, 2005 10:02 pm
RTX Solution
lloydthomas Wed Aug 31, 2005 9:38 pm
Enhancement of bloody prints
Justin Bundy Wed Aug 31, 2005 3:38 pm
Cindy Homer
Use of Cyanoacrylate on Vehicles
Bob, Holly, Barbara (DPD) Tue Aug 30, 2005 2:41 pm
(http://clpex.com/phpBB/viewforum.php?f=2)
UPDATES ON CLPEX.com
Updated the Smiley Files with 4 new Smileys!
_________________________________________
We saw the first of several installments on Adobe Photoshop Actions related to latent print examination, by Steve Everist. We also review an article written from the perspective of a defense attorney looking to attack and discredit the forensic scientist and his or her discipline. Although it deals with more than just fingerprints, it lists some specific questions and addresses the concepts surrounding those issues. The understanding of the defense position provided by this article makes its length seem more bearable. If you are interested in what defense attorneys are saying about the questions below, this article will be an interesting read:
Exactly how and why was peer review performed?

Did the examiner’s case file have all the preliminary crime scene reports and investigative reports? If so, what examination-irrelevant data was contained in these reports?

Did the examiner document any of her supporting data? If documented, in what manner was the data documented? Can the laboratory’s activities, observations, and results be reconstructed exclusively on the basis of the available records?

Is the examiner required to identify a certain number of corresponding points of similarity before an identification can be rendered?

If a minimum number of matching points is not required, how does the examiner go about rendering his conclusion?

Has it been empirically established that the evidence can be individualized?

What was the condition of the evidence?
_________________________________________
Psychological Influences & the State Employed
Forensic Examiner: How to Elicit Evidence Concerning Observer Effect Errors
through Cross-Examination and Discovery
Illinois Association of Criminal Defense Lawyers
Newsletter, Summer 2003
http://www.law-forensic.com/iacdl_newsletter_summer_2003.htm
by Craig M. Cooley
Investigator, Illinois Appellate Defenders Office, Death Penalty Trial
Assistance Unit. J.D. candidate (2004), Northwestern University School of Law;
M.S. (Forensic Science), University of New Haven; B.S. (Psychology), University
of Pittsburgh. Mr. Cooley can be contacted via his website:
www.law-forensic.com
If one were to question state-employed forensic scientists about their objectivity and affiliation with law enforcement or the prosecution, many would contend, as Dr. Irving Stone does, that “[T]he role of the modern forensic scientist has moved from that of a prosecution or defense advocate to an objective reference source for the judicial system.”[2] While this statement
symbolizes the definitive objective of any forensic investigation [i.e.,
objective fact finding], it fails to represent the profession’s true and current
state of affairs. All too often, given the forensic community’s current
organizational configuration, conscious and unconscious biases saturate forensic
examiners’ alleged unprejudiced interpretations. These overt or covert
influences profoundly impact their ultimate conclusions. This Article’s
objective will be to discuss the covert forms of bias and error that are
ubiquitous to the forensic science community—observer effect errors [or
expectation/examiner bias]. Once discussed, attention will be directed at how
criminal defense attorneys can elicit evidence of these seemingly imperceptible
biases and errors through discovery and cross-examination.
A. Forensic Science & the Principles of Psychology:
Observer Effect & Expectation Bias
Forensic science pioneer Paul L. Kirk once commented that, “Physical evidence
cannot be wrong; it cannot be perjured; it cannot be wholly absent. Only in its
interpretation can there be error.”[3] As the defense bar is well aware,
forensic interpretation, regrettably, represents a significant, if not the most
important, issue in many criminal cases. The unfortunate aspect of any endeavor
that necessitates interpretation is that such tasks require a “scientific
observer.” As cognitive psychologists have repeatedly documented, however, “the
scientific observer [is] an imperfectly calibrated instrument.”[4] The
scientific observer’s imperfections stem from the fact that subtle forms of
bias, whether conscious or unconscious, can easily contaminate the observer’s
seemingly objective perspective. Identifying and curtailing such biases are
significant when one considers the forensic community’s affiliation with law
enforcement and the prosecution. This relationship has fashioned an atmosphere
where scores of forensic professionals have become biased for the
prosecution.[5] So strong are these biases that forensic examiners have
deliberately fabricated evidence or testified falsely just so the prosecution
can prove its case.[6]
Outside premeditated falsification, however, forensic practitioners remain susceptible to far more pervasive but normally overlooked inaccuracies. These
errors are by-products of context effects (a.k.a. observer effects) where an
examiner’s perceptions and inferences are affected by domain-irrelevant data
(e.g., an investigator’s desired outcome, the conclusions of other experts on
the same case).[7] In certain respects, these errors are more bothersome than
deliberate fraud and misconduct because they are often undetectable if certain
procedural safeguards are not incorporated into forensic examinations. Given the
indiscernible nature of observer effects, accomplished and well-intentioned
forensic examiners can offer genuine conclusions that are imprecise and
erroneous even when they are employing well validated forensic techniques.
Again, these forms of error may occur in large quantities, entirely without the examiner’s awareness. As Professor Michael J. Saks explains, “Indeed, such
distortions will be more ubiquitous and more insidious precisely because they
are not intended and their presence goes unnoticed.”[8]
As indicated above, observer effects or expectation bias are governed by the
fundamental principle of cognitive psychology that asserts that the needs and
expectations individuals possess shape their perceptions and interpretations of
what they observe, or in the forensic science context, what they examine. To
fall prey to such bias, examiners typically must (a) confront an ambiguous
stimulus capable of producing varying interpretations and (b) be made aware of
an expected or desired outcome.[9] In short, examiner bias is the “tendency to
resolve ambiguous stimuli in a manner consistent with expectations.”[10]
1. Sources of Subjectivity and Uncertainty in Forensic Examinations
With respect to ambiguity, the individualizing forensic sciences (e.g.,
fingerprints, bite marks, hair, toolmarks, lip prints, etc.) are top-heavy with
subjectivity and ambiguity.[11] In the forensic science context, subjective
tests are those where identifications and interpretations rest entirely on the
examiner’s experience or conviction.[12] As we will come to see, many, if not
all, forensic identifications are premised on an examiner’s unyielding belief
that his or her experience is all that is required to render an absolute
identification.
Forensic examinations involve two layers, or phases, of subjectivity. The first,
and most obvious, instance of subjectivity occurs when examiners are determining
whether two pieces of physical evidence are ‘consistent with’ or ‘match’ one
another. Forensic examiners accomplish this by identifying an unspecified number
of corresponding points of similarity. The majority of forensic fields do not
require their examiners to isolate a specific number of matching points before
they can claim an absolute identification.[13] Likewise, examiners regularly
utilize divergent criteria that are typically not published or even
articulated.[14] Interwoven into this subjective decision making is the
probabilistic determination that the match is not a coincidental match.[15]
Given that many of the individualizing forensic sciences are armed with no
serviceable probabilistic models and no base rate data (e.g., no research),
forensic examiners rely entirely on intuition, instincts, impressions, and
subjective probability estimations (termed “experience” or “judgment”) when
determining the likelihood of a coincidental match.[16] Consequently, as
mentioned beforehand, identifications are simply manifestations of an examiner’s
experience rather than empirical research.[17]
For instance, consider how one forensic scientist defined the individualizing
process. According to the forensic scientist, “An item is said to be
individualized when it is matched to only one source and the forensic scientist
is morally certain that another matching item could not occur by chance.”[18]
This definition clearly indicates the subjective nature of the individualization
process because morality is an entirely personalized concept. Morality differs
from person to person and from examiner to examiner. Why can’t the examiner be
scientifically or empirically certain that another matching item could not occur
by chance? The answer, as alluded to above, is that the forensic science
community has conducted an inadequate amount of research over the past century
in order to provide such answers.
In many forensic sectors, a second layer of subjectivity enters into the
equation. For example, once fingerprint examiners intuitively determine the
improbability of a coincidental match, they must then instinctively decide
whether all other fingerprint examiners would reach the same conclusion before
they are permitted to claim an absolute identification. Put simply, “fingerprint
examiners must draw subjective impressions about other people’s subjective
impressions.”[19]
(removed section)
The term subjective, it should be noted, does not signify that a particular
manner of examination is invalid. Rather, subjective evaluations can, in certain
instances, provide reliable evidence. This reliability, nonetheless, must be
objectively and empirically evaluated and established (e.g., through blind
proficiency testing). Subjectivity merely implies that room for disagreement
exists. It’s when like minds are able to disagree, however, that the likelihood
of error increases. Moreover, as the opportunity or room for disagreement swells
so does the chance for error.[23]
2. Expectation Bias & Forensic Science:
Forensic and Law Enforcement Practices that Generate Expectation
In the forensic science sector, examiners are confronted daily with many
opportunities that cultivate certain expectations. The most common expectation
nurtured in the forensic science environment is one concerning the suspect or
defendant’s guilt (i.e., namely that the defendant is guilty). This section
details certain forensic science and law enforcement practices that are capable
of inducing expectation biases on the part of forensic practitioners.
(a) Single Sample Testing
For the most part, when criminal investigators turn over evidence to forensic
examiners the evidence typically falls into two groups—(1) samples taken from
the crime scene; and (2) samples provided by the suspect. Single sample testing
undoubtedly affects whether an examiner’s report will associate the suspect to
the crime scene or the victim. For instance, in one study researchers found that
fewer than 10% of forensic reports failed to associate a suspect to the crime
scene or the victim.[24] Stated conversely, 90% of forensic reports inculpated
the suspect in some fashion. As Professor D. Michael Risinger and his colleagues
explain,
“This high rate of inculpation comes from the fact that each piece of evidence
connected with any suspect has a heightened likelihood of being inculpatory,
since investigators do not select suspects or evidence at random, but only those
they have some reason to think were connected to the crime. Thus, forensic
scientists have a continuing expectation that the evidence before them is
inculpatory…”[25]
(removed section)
(b) Communication Between Investigators and Examiners:
Domain Irrelevant Information
When forensic examiners are presented with a new case they typically receive the
case information in one of two ways: they either meet directly with the lead investigator[s], or the key case information is forwarded to them via mail, facsimile, or electronically. Both circumstances are easily capable of inducing certain expectations on the part of the forensic examiner.
Before discussing how the content of investigative reports and discussions can
bias an examination, the forensic scientist’s proper role must be delineated.
The forensic scientist’s purpose is to provide answers to questions that pertain
solely to his or her forensic discipline. Moreover, when answering these
inquiries the forensic scientist is ethically and legally obligated to employ or
make use of only those applications and domain-relevant bits of information that
are sanctioned by their designated area of expertise. For instance, if the
question presented is outside the forensic scientist’s area of specialized
knowledge he or she theoretically is ethically and legally barred from offering
any answer to the question, even if he or she is capable of providing an honest and accurate response. Likewise, when forensic scientists interweave
domain-irrelevant information into their evaluation and ultimate conclusion[s]
they are abusing the special stature bestowed upon them by the courts. Expert testimony regarding a litany of forensic science issues is permissible not because forensic practitioners are better than detectives or jurors at deducing inferences about the meaning of ordinary pertinent evidentiary information; rather, admissibility is warranted because the law has conceded that forensic experts are better equipped than unaided juries and detectives at producing reliable and justifiable answers to specialized questions.
(removed section)
This brings us to the question of where and how forensic examiners become
cognizant of domain-irrelevant information. Forensic examiners, for the most part, rarely receive the physical evidence alone when assigned a new case.
Rather, the lead investigator[s] frequently supplements his or her forensic
examination requests with detailed crime scene and investigative reports.
Likewise, investigators customarily provide forensic examiners with extraneous
domain-irrelevant information in their transmittal correspondences (e.g., fax
cover letters). For instance,
(removed section)
For instance, suppose a fingerprint examiner is having difficulty identifying an
adequate number of corresponding points between two prints. If the examiner is
already aware of domain-irrelevant information (e.g., that the defendant’s DNA was discovered on the victim’s undergarments), an expectation has been covertly or overtly planted in his or her consciousness: namely, that the defendant must be the culprit because the crime scene DNA corresponds with the defendant’s DNA. Couple this expectation with fingerprinting’s subjective nature
and the stage is set for a subconscious and inadvertent bias to negatively
impact a competent and ethical fingerprint examiner’s identification.
(removed section)
B. Recommendations that Can Minimize Observer Effects in Forensic Science
Practice
To decrease personal bias, conscious or unconscious, the forensic science community must utilize many of the same checks and balances employed in other scientific communities: for example, retesting within the same laboratory or in another laboratory, having two or more examiners re-confirm the original examiner’s interpretation, and coding samples so that the examiner cannot foresee a particular test’s outcome. The ensuing discussion will focus on two of the more important procedural reforms that would curtail observer effects in forensic science—blind testing and evidence line-ups.
1. Evidence Line-Ups
A procedural reform that can minimize or counteract subconscious biasing
influences involves employing ‘evidence line-ups.’[40] In an evidence lineup,
multiple samplings are presented to the forensic examiner. Some samples, though,
are “foils.”[41] Examiners would be blind to which samples constitute the foils
and which samples constitute the true questioned evidence. For instance,
“[A] firearms examiner might be presented with a crime scene bullet and five
questioned bullets labeled merely ‘A’ through ‘E.’ Four of those bullets will
have been prepared for examination by having been fired through the same make
and model of firearm as the crime scene bullet and the suspect’s bullet had
been. The task for the examiner would then be to choose which, if any, of the
questioned bullets was fired through the same weapon as the crime scene bullet
had been.”[42]
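As a concrete sketch of the procedure quoted above, the following code shows how a case manager might assemble a blinded lineup before handing it to an examiner. This is a hypothetical illustration only; the function and sample names are invented for this newsletter, not drawn from the article:

```python
import random
import string

def build_evidence_lineup(questioned_sample, foils, rng=None):
    """Blind a set of samples before presenting them to an examiner.

    Returns (blinded, key): `blinded` maps neutral labels ('A', 'B', ...)
    to samples in shuffled order; `key` records which label hides the true
    questioned sample. The key stays with a case manager, not the examiner.
    """
    rng = rng or random.Random()
    samples = [questioned_sample] + list(foils)
    rng.shuffle(samples)  # examiner cannot infer identity from intake order
    labels = string.ascii_uppercase[:len(samples)]
    blinded = dict(zip(labels, samples))
    key = {label: sample == questioned_sample for label, sample in blinded.items()}
    return blinded, key

# One questioned bullet plus four foils, as in the firearms example above.
blinded, key = build_evidence_lineup(
    "bullet_Q", ["foil_1", "foil_2", "foil_3", "foil_4"],
    rng=random.Random(7),
)
```

The examiner sees only `blinded`; comparing each labeled sample against the crime scene bullet without knowing which one came from the suspect's weapon removes the single-sample cue that a show-up provides.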
As the example illustrates, forensic examiners and eyewitnesses perform
comparable evaluations. Not surprisingly, then, both have the potential to
suffer from similar methodological shortcomings when they are viewing their
respective forms of evidence—be it a fiber or an alleged assailant.[43]
Consequently, evidence lineups would serve many of the same purposes as a
properly structured eyewitness lineup. Currently, forensic examinations are
equivalent to eyewitness identification show-ups.[44] In both situations, the single-suspect or single-sample configuration implies that the assumption of guilt is correct. Evidence line-ups would resolve these methodological deficiencies. The
Justice Department’s recent report concerning eyewitness identification
procedures discusses many guidelines that are similarly relevant to physical
evidence line-ups.[45] In the end, the Justice Department’s objective in
publishing its eyewitness identification procedures—reducing the frequency of
false positive errors without diminishing the occurrence of true positive
identifications—is equally applicable and just as easily achievable for the
forensic science community.
2. Blind Testing
As Barry Scheck points out, “forensic laboratories have historically resisted
external blind proficiency testing and other efforts to assess laboratory error
rates.”[46] This must change to minimize rates of error, especially covertly
caused mistakes such as observer effect errors. All forensic testing, be it
proficiency or case examinations, should be conducted blindly.[47] According to
Professor Saks and his colleagues,
“The simplest, most powerful, and most useful procedure to protect against the
distorting effects of unstated assumptions, collateral information, and improper
expectations and motivations is blind testing. An examiner who has no
domain-irrelevant information cannot be influenced by it. An examiner who does
not know what conclusion is hoped for or expected of her cannot be affected by
those considerations.”[48]
The key, as the passage indicates, is to create an impenetrable wall between
forensic examiners and any examination-irrelevant information.[49] This can be
accomplished by providing examiners with the information they require to carry
out the tests, and only that information.[50] With respect to proficiency
testing, although many contend that the “logistics of full-blind proficiency
tests are formidable,”[51] its practical implementation is by no means
impossible.[52] Likewise, developing a system that filters out all unnecessary
domain-irrelevant data before examiners perform their evaluations is also
achievable.[53]
C. Eliciting Evidence of Potential Observer Effects:
What to Ask and What to Ask For
Whether the abovementioned reforms are implemented is for the forensic science
community to determine. Lawyers, for the most part, especially those in criminal
defense, will only have a peripheral role in determining whether these reforms
are incorporated if they remain uncritical of forensic evidence and crime labs.
On the other hand, if defense attorneys become more judicious and cautious in
how they view crime labs and forensic evidence they can have a more profound
impact on forensic science reform than initially perceived. Why, some may ask?
As the wrongful conviction cases continually demonstrate, faulty science and
fraudulent misfits have typically been identified where critical thinking
defense attorneys have shunned their blind faith belief in science and
questioned the findings of so-called forensic scientists. With an increasing
number of defense attorneys critically evaluating crime lab and examiner reports
there has been a corresponding increase in the number of cases where faulty
forensics played a substantial role in generating a wrongful conviction or
accusation.[54] As the number of identified mishaps increases,[55] so too does the notion that forensic science reform is essential to ensure that victims of crime and criminal defendants are afforded justice and liberty. Accordingly, to
facilitate change and reform the criminal defense bar must intensify its
scrutiny of crime labs and forensic evidence.
As mentioned, procedural safeguards (e.g., blind testing & evidence line-ups)
have not been implemented in the overwhelming majority of crime labs. Likewise,
given the defense bar’s limited, yet increasing, ammunition to bring about these
reforms, defense attorneys must rely on their advocacy skills to ensure that
defendants are not unjustly accused or convicted because of observer effect
errors. Advocacy, as many experienced defense litigators know, is more about knowing what to look for and ask for than anything else. With respect to observer effect
issues, the key is to elicit any evidence that may call into question the
forensic examiner’s alleged objectivity. The fact that the examiner is employed
by a prosecutorial or law enforcement agency is not the principal issue here.
For the most part, thanks to the numerous forensic science T.V. shows, jurors
are already cognizant of this affiliation.[56] Rather, attention should be
directed at obtaining evidence that emphasizes the (1) subjective (or ambiguous)
nature of the examination and (2) the examiner’s predisposed expectations. The
objective is to implant into the jurors’ minds that the forensic examiner,
though highly skilled and well-intentioned, may have been adversely affected by
unnoticed context effects. Consequently, the following sections will briefly
identify various questions and bits of information that defense attorneys should
ask for or obtain when mounting an attack against a forensic identification
examiner.
(1) Subjectivity & Ambiguity Questions and Inquiries
These questions are intended to expose the subjective nature behind many forms
of identifications. For the most part, examiners will answer that many of their
conclusions are based on their [subjective] experience and specialized training.
1. What was the condition of the evidence?
As previously discussed, physical evidence identified and collected at the crime
scene is rarely in pristine condition when forwarded to the examiner. Oftentimes, physical evidence can be exposed to contaminating environments or, worse yet, altered or damaged by post-collection circumstances. These alterations can be a by-product of incompetence (e.g., improperly trained evidence technicians) or of environmental or situational factors (e.g., a bullet may be innocently damaged when it is removed from a wall). During cross-examination, ask the examiner whether he is capable of differentiating between original crime scene markings and post-crime scene artifacts and alterations. Unless the examiner
is privy to evidence collection photographs that were taken immediately after
the evidence was collected and processed, it will be extremely difficult for him
to distinguish between original markings and post-crime artifacts. Likewise, if
the evidence, for instance, is a smudged partial fingerprint, you may want to
consider enlarging the print so the jury can witness firsthand its non-pristine
nature.
2. Can the evidence deteriorate or change over time?
This question is primarily directed at handwriting, bite mark, toolmark,
firearms, and shoe print evidence. As mentioned, two fundamental assumptions
underlie all forensic identifications—individuality and permanency. Here, we are
attacking the permanency assumption because it is possible to alter the
individualistic features created by these forms of evidence. For instance,
“[H]andwriting may change substantially in both systematic and random ways over
both the short and long term, depending on the writer’s health, the speed of
writing, the positioning of the writing surface, maturation, and so on.”[57]
With respect to bite marks, an individual’s bite pattern can easily be altered
if he or she loses a tooth, chips a tooth, has braces removed, has braces
installed, or receives dentures. Likewise, the surfaces of tools and the
inter-workings of firearms will undoubtedly deteriorate and change over time
through repeated usage. Lastly, everyone at one time or another has worn a pair of shoes so frequently that the shoes eventually, and quite literally, deteriorated before their eyes.
The point being emphasized here is that while investigators or examiners may be in possession of a crime scene shoe print [toolmark, bullet, etc.], they typically are not able to immediately compare this print with an actual shoe until a suspect has been identified. Depending on the type of crime, apprehending or identifying a suspect may be a lengthy process. It’s this
interval of time between when a print is collected and when it is ultimately
evaluated against an actual shoe that is of crucial importance here. When this
gap represents a protracted period of time, examiners must intuitively factor
into their evaluation the “wear and tear” factor. As the word intuitively
suggests, the “wear and tear” consideration makes the subjective identification
process even more subjective.
3. Has it been empirically established that the evidence can be individualized?
Forensic scientists have dealt with induction and deduction rather haphazardly.
Induction is a form of inference that advances from a set of specific
observations to a generalization, called a premise. This premise is an operational hypothesis, but its validity is not always apparent. Conversely, a
deduction is a type of conclusion that progresses from a generalization to a
specific case. For the most part, deduction is the preferred type of inference
within the forensic science community. Providing that the generalizations or
premises that buttress the inference are valid, the examiner’s conclusion will
be valid. As John Thornton and Joseph Peterson explain however,
“knowing whether the premise is valid is the name of the game here; it is not
difficult to be fooled into thinking that one’s premises are valid when they are
not.”[58]
Over the past century, forensic examiners have been ignorant of the fact that the counterpart of hypothesis testing is not deduction but induction. Once
scientists have inductively developed a working assumption or premise, they
verify the validity of their premise through testing. Unfortunately, forensic examiners often mistake a hypothesis for a deduction. For a
deduction to be considered scientifically legitimate its supporting premises
must be validated through testing. If, however, the examiner’s supporting
premises have never undergone such testing then all that we have is an assertion
awaiting verification through testing.[59]
Consider, for example, forensic science’s fundamental principle of
individuality. Many forensic examiners claim that they can individualize a
particular form of evidence (e.g., fingerprints, lip prints, shoe prints, bite
marks, etc.). More specifically, when testifying in court they assert that they
were able to deduce that the defendant deposited the crime scene fingerprint.
For this deduction to be correct, its supporting premises must be validated
through testing.
“Individuality’s forensic context is supported by three premises. First,
numerous forms of biological, physical, and psychological entities exist in
unique, one-of-a-kind form. Second, these entities leave equally distinctive
traces of themselves in every environment they encounter. Third, the methods of
observation, measurement, and inference employed by forensic science are
adequate to link these traces back to the one and only object and/or individual
that produced them.”[60]
Unfortunately, numerous forensic science sectors, in particular the
identification fields, have conducted little, if any, systematic research geared
toward validating the basic premises that constitute their continued
existence.[61] Moreover, when research has been performed the findings seriously
weaken one, two or all three of the aforementioned premises.[62] Consequently,
the examiner’s deduction is not in fact a deduction; rather, it is an assertion, supported by ‘specialized training’ and/or ‘experience’, that is pending confirmation through testing. Because one examiner’s experience will differ considerably from another’s, experience is a subjective criterion supporting an identification.
4. Is the examiner required to identify a certain number of corresponding points
of similarity before an identification can be rendered?
This question, more than any other, will expose individuality’s subjective
nature. As mentioned, many forensic professions do not require their examiners to isolate a predetermined number of corresponding points before an identification can be made.[63] If an examiner states that his agency or certifying organization requires a specified number, he should then be asked whether this number was arrived at through testing or whether it was simply agreed upon by the examiners in that agency or organization through informal discussions. For the most part, if a specified number is required, it is because of the latter rather than the former.
If a minimum number of matching points is not required, how does the examiner go
about rendering his conclusion? For the most part, as alluded to earlier,
identifications typically rest exclusively on intuition, instincts, and probabilistic estimations (i.e., the examiner’s experience).
5. Is the identification premised on statistics?
Two issues surface with respect to the statistical determination of
individuality. First, is the examiner appropriately trained in statistics?
Second, what statistical database was relied on?
As Professors Peterson and Thornton note, “Behind every opinion rendered by a forensic scientist there is a statistical basis.”[64] The statistical basis
“provides… an evaluation of the likelihood that his testimony reflects the
truth, rather than his personal belief or bias.”[65] Consequently, besides
understanding science and the scientific method, forensic scientists must be
competent consumers of statistics.[66]
Unfortunately, forensic scientists are rarely required to take even one
statistics course, let alone a series of classes, during their undergraduate or
graduate education.
education. As a result, practicing forensic scientists are poor consumers of
statistics.[67] Many recent examples clearly illustrate this problem.[68]
Forensic practitioners seem to bungle even the most basic statistical rule—the
“multiplication” rule.[69] For instance, even though “[i]t should be common
knowledge to criminalists that properties must be statistically independent
before the probability of a conjunction of these properties can be derived from
the multiplication rule,”[70] forensic scientists routinely fail to consider the
“independence” issue.[71]
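The independence requirement behind the multiplication rule can be made concrete with a short numerical sketch. All of the frequencies below, and the dependence between the two traits, are hypothetical values chosen purely for illustration:

```python
# Hypothetical sketch: why the multiplication rule requires statistical
# independence. All frequencies below are invented for illustration only.

# Suppose an examiner observes two characteristics in a sample:
#   trait A occurs in 10% of a reference population
#   trait B occurs in 20% of that population
p_a = 0.10
p_b = 0.20

# The multiplication rule gives P(A and B) = P(A) * P(B) ONLY if the
# traits are statistically independent.
naive_joint = p_a * p_b          # 0.02, i.e. "1 in 50"

# But if the traits tend to co-occur (say 80% of people with trait A
# also show trait B), the true joint frequency is far higher:
p_b_given_a = 0.80
true_joint = p_a * p_b_given_a   # 0.08, i.e. "1 in 12.5"

# Multiplying marginal frequencies without checking independence
# overstates the rarity of the combined profile fourfold here.
overstatement = true_joint / naive_joint
print(f"naive estimate: 1 in {1 / naive_joint:.0f}")
print(f"true frequency: 1 in {1 / true_joint:.1f}")
print(f"rarity overstated by a factor of {overstatement:.0f}")
```

The sketch shows the danger the quoted commentators identify: when dependent traits are treated as independent, the naive product makes the combined profile appear far rarer than it actually is, inflating the apparent strength of the match.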
Nonetheless, forensic practitioners who are statistically ignorant routinely
testify as if they are knowledgeable consumers of statistics.[72] Such conduct
is inexcusable because,
“Without a firm grasp of the principles involved, the unwary witness can be led
into making statements that he cannot properly uphold, especially in the matter
of claiming inordinately high probability figures.”[73]
Similarly, experts have testified in numerous cases to specific probabilities
based on statistical studies of unexplained origin.[74] Couple this incompetence
with the fact that no functional databases exist for many of the
identification fields. Once more, when push comes to shove, the forensic examiner’s
probabilistic determination is more likely to be a by-product of his subjective
and personalized experience and training rather than his statistical acuity.
6. Was the examination adequately documented?
Three questions must be answered regarding documentation. First, did the
examiner document any of her supporting data? Second, if documented, in what
manner was the data documented? Last and most importantly, can the laboratory’s
activities, observations, and results be reconstructed exclusively on the basis
of the available records? In many cases, it is imperative that you look outside
the examination report to ascertain how the examiner documented her work. The
court-intended forensic report should be corroborated by a case file that
includes all the notes, worksheets, printouts, charts, and other data or records
used by the examiner to support her conclusions.[75] More importantly, as
forensic scientist Janine Arvizu stresses,
“A laboratory case file is the repository for records generated during the
analysis of evidence from a case. It should be an internally consistent,
unbroken chain of records that document all activities, observations,
measurements, and results relating directly to evidence from a given case. It
should provide sufficient detail, so that someone who is versed in the
technique, but not involved in the laboratory’s work, can understand what was
done and the basis for the reported conclusions.”[76]
As these questions make clear, defense attorneys must obtain a copy of the case
file (see infra next §). It is crucial that the entire case file is requested to
determine whether the laboratory’s reported results are technically valid and
whether the quality and uncertainty of the reported results can be supported
based on the laboratory’s records. If the laboratory records and underlying
documentation are noticeably absent from the case file, any results reached by
the examiner are no more reliable, or for that matter verifiable, than
eyewitness testimony. Science is premised on reproducibility, not on the
forensic examiner’s ever-fading memory. Consequently, the analysts and the lab
must be able to produce
any supporting documentation that verifies the quality and validity of its
reported results.[77]
Moreover, if the lab’s results are to be reproducible, the supporting
documentation must be comprehensive and legibly written so that an independent
expert is capable of retesting the original analyst’s hypotheses and conclusions.
Again, as Janine Arvizu explains,
“The required information includes documentation regarding the integrity of the
evidence sample(s), the procedures used during testing, the qualifications of
the responsible analyst(s), the traceability of standards and measurements,
instrument operating conditions and maintenance, results obtained for unknown
samples and known controls, and the assumptions and basis for any statistical
analyses and interpretation of results.”[78]
Quite often, especially with respect to forensic identifications, the supporting
information is lacking given the subjective nature of the identification
process. Customarily, forensic examiners cloak their conclusions in terms of
their experience or specialized knowledge rather than any verifiable
documentation. The fact that an identification is buttressed by no supporting
documentation should make the obvious even more obvious—that forensic
identifications are purely subjective decisions.
(2) Expectation Bias Questions & Inquiries
These questions and inquiries are intended to elicit evidence that calls into
question whether the examiner was blind to certain expectations. Given the
manner in which evidence is typically processed and the high rate of interaction
between investigators, prosecutors and the examiner, it would be a shocking
discovery to find that a state-employed forensic examiner was not made privy to
certain inculpatory expectations—conscious or unconscious.
1. The Case File & Domain-Irrelevant Information
As mentioned previously, in order to mount a legitimate challenge concerning
questionable scientific conclusions or results by a state-employed forensic
examiner, it is imperative that the defense obtain the underlying raw data (a.k.a.
the case file).[79] Obtaining the underlying data will permit an independent
expert to evaluate the legitimacy of the forensic examiner’s conclusions. More
importantly, however, the case file information can prove invaluable in
determining whether the examiner was made privy to domain-irrelevant data or
other examiners’ conclusions—information that is easily capable of cultivating a
negative expectation against your client. For instance, did the examiner’s case
file have all the preliminary crime scene reports and investigative reports? If
so, what examination-irrelevant data was contained in these reports? For
example, a firearm expert’s case file would not require a crime scene report
indicating that twelve eyewitnesses saw a man matching the description of your
client running from the murder scene. This information has absolutely no bearing
on the expert’s examination. Likewise, does the odontologist’s case file
contain the victim’s statement that the defendant bit her right breast? Again,
considering the nature of the odontologist’s examination, this superfluous data
is not only irrelevant but entirely prejudicial given its expectation-inducing
power. Similarly, does the case file possess other experts’ reports?
For example, given the nature of a toolmark expert’s examination, his case file
need not contain the results of inculpatory DNA tests. Correspondingly, a
fingerprint examiner’s case file would not require the odontologist’s report
that inculpates your client. Once more, these bits of information are not
relevant to the examiner’s ultimate responsibility. His duty is to answer one
question—whether the defendant deposited the crime scene print. To answer this
question, the examiner need only focus his attention on the two prints—and
only the two prints. Any superfluous data can only call into question the
examiner’s conclusion[s].
2. Written Correspondence and Notes of Communications Between the Investigating
Officers and the Examiner
Defense counsel must obtain any documented communications [e.g., handwritten,
fax, e-mail] between the forensic examiner and the investigators. Such
documentation can indicate whether the examiner was made aware of
domain-irrelevant data or the investigators’ expectations with respect to the
defendant’s guilt. For example, as previously discussed, transmittal
correspondences that supplement a submission to a crime lab frequently convey
more about the case than is required to carry out the required examination[s].
This information at times advises examiners about other inculpatory evidence and
may include what the submitting investigator expects or hopes the requested
tests will conclude.
George Castelle provides an excellent example of such a letter.[80] The example
concerns a Fred Zain case.[81] Stephan Casey was arrested and charged with
sexually abusing a five-year-old child. Prior to the West Virginia crime lab
obtaining the physical evidence, Zain resigned from the lab and accepted a
position with the Bexar County Medical Examiner’s Office in San Antonio, Texas.
Once the West Virginia crime lab received the physical evidence, technicians who
examined a carpet sample could not identify any semen stains on the carpet.
Investigators had hoped that the carpet sample would contain the offender’s
semen [i.e., Mr. Casey’s semen]. Undisturbed by the crime lab’s failure to
discover inculpatory evidence, investigators sent the carpet sample to Zain in
San Antonio. The carpet sample was accompanied by the following letter:
“Mr. Zain:
This is the carpet that we discussed via Public Service. The W.Va. State Police
Lab was unable to show any evidence of sperm or blood being present on it.
The suspect was arrested for 1st Degree Sexual Abuse on a 5-year-old female. Any
evidence you can find pertaining to this crime will greatly increase our chances
of conviction.
Thank you,
Det. R.R. Byard
Huntington Police Department”[82]
The letter contains both domain-irrelevant information and expectation cues. The
irrelevant data consists of the fact that the West Virginia lab failed to
identify any semen and the fact that the defendant was being charged with
sexually abusing a 5-year-old. Both bits of information are completely
irrelevant to whether Zain can identify a semen stain and whether that semen
stain is consistent with the defendant’s semen. The expectation cue is quite
obvious: “Any evidence you can
find pertaining to this crime will greatly increase our chances of conviction.”
When both of these factors are intertwined and spoon-fed to an individual like
Zain, it is quite clear what the expected outcome must be (i.e., discover semen
and then match this semen to the defendant), regardless of whether he could
actually and truthfully accomplish such a task. As might be expected, Zain found
what his forerunners failed to find.
Likewise, examiners may become deeply involved with investigators as the
evidence in a case develops. This may encourage an increasing number of phone
calls, emails or facsimiles between the examiner and investigators. These
communications, like the initial transmittal correspondences, typically involve
superfluous information that can only bias the examiner’s evaluations and
ultimate conclusions. If the correspondences were via fax or email they should
be easily obtainable and decipherable. Likewise, given the nature of the case,
examiners and/or investigators may document, in handwriting, each conversation
or visit with one another. If this is the case, it is imperative to obtain this
documentation so you can ascertain whether the examiner was subjected to any
irrelevant and potentially biasing information. Again, such information may
include other inculpatory evidence, the defendant’s post-crime behavior, his
jail behavior [if he is incarcerated pending trial], whether the defendant
committed similar crimes or other offenses, etc.
One way to ascertain whether the examiner worked closely with investigators is
to obtain copies of the crime lab’s attendance log. Most labs, especially those
that are accredited, keep a detailed list of who enters and exits the crime lab
and who they are visiting—even if those who are entering the lab are law
enforcement or other examiners from a neighboring crime lab. Once in possession
of the log, identify all instances where investigators met with the examiner.
During cross-examination or a deposition, defense counsel is encouraged to ask
the examiner to explain the contents of each visit. If he documented these
visits, then his documentations should be turned over for inspection. If he
failed to document when, why and what they discussed, scrutinize why a
particular meeting was not worthy of documentation.
3. Peer Review—Exactly How and Why Was it Performed?
The fact that an examiner’s results were peer reviewed by a co-worker
means absolutely nothing in certain contexts. Peer review, like the original
examination it is reviewing, can be affected by context effects. Defense counsel
must make a concerted effort at identifying and distinguishing between a
“formalistic review” and an “independent confirmation.”
“Formalistic reviewing” is the type of “peer review” advocated by ASCLD standard
1.4.2.16.[83] Under this form of review, the peer reviewer merely acts as a
process check on the procedures utilized by the initial examiner. His role is to
make certain that the report satisfactorily documents and justifies its findings
and conclusion[s]. ASCLD asserts that this form of “peer review” is designed “to
ensure that the conclusions of its examiners are reasonable and within the
constraints of scientific knowledge.”[84] Regardless of whether “formalistic”
peer reviewers are exposed to the contaminating data that the initial examiner was
exposed to, the reviewer normally knows the original examiner’s conclusions,
itself a strong form of contamination. Nonetheless, if the reviewer’s role is
simply to ensure that the report is satisfactorily documented and its conclusion
adequately supported, then the fact that the reviewer is aware of the initial
examiner’s outcome is arguably necessary. More importantly, however, if the
reviewer’s sole purpose is to make certain that the initial examiner dotted his
“i’s” and crossed his “t’s,” the question becomes: what is a review like this
supposed to accomplish?
By no means does this form of peer review “independently confirm” the original
conclusions’ correctness. All the peer reviewer did was read the final report
and determine whether the initial examiner’s conclusions were reasonable and
supported by the appropriate documentation. Reasonableness does not mean
correctness. Given the subjective nature of many forensic assessments, examiner
reasonableness may vary according to the nature of the examination. Take two
fingerprint examiners for example. Fingerprint examiner A is able to identify
fifteen corresponding points between two prints. Fingerprint examiner B is able
to identify only eight corresponding points between two other prints. From this
scenario, it’s obvious that the more reasonable identification would be examiner
A’s given that he was able to identify twice as many corresponding points.
However, no matter how reasonable a particular identification or conclusion may
seem, the degree of its reasonableness says nothing about whether the
conclusion is correct. For instance, examiner A’s evaluation skills may not be
as cautiously refined as examiner B’s. Consequently, examiner A may have
incorrectly identified various points on both prints. Again, correctness cannot
be verified
under these circumstances because the results have not been “independently
confirmed” by another expert.
“Independent confirmation” involves having the evidence re-examined by another
examiner who is entirely blind to the facts of the case and the original
examiner’s conclusions. Under these circumstances, the reviewing examiner will not be
persuaded by any extraneous information or expectations. This is a far cry from
the cross-contamination of expectation-inducing information that surfaces in
“formalistic reviewing” (e.g., the original examiner is aware of expectations,
and so is the reviewing examiner). Independent confirmation essentially involves
evaluating all of the underlying documentation to determine whether the
reviewing examiner can recreate the initial examiner’s conclusions. If the
reviewing examiner cannot replicate the initial conclusions then the validity of
the original examiner’s conclusions must be called into question. Again, it
needs to be stressed that independent confirmation is a blind-testing procedure,
in that the reviewing examiner has no active knowledge with respect to any
superfluous data or anticipated outcomes.
In short, if confronted by an examiner who asserts that his conclusions were
peer reviewed, defense counsel must dig deeper to ascertain the specific manner
of peer review. The essential inquiry is whether the reviewer was made privy to
any information that could have potentially biased his review.
D. Conclusion
Crime labs and forensic science have played an increasing role in numerous
wrongful convictions.[85] As one criminal defense attorney recently commented,
“Although the law enforcement community and the courts bear the heaviest
responsibility for convictions driven by police and prosecutorial misconduct, a
substantial number of these cases involved faulty forensic evidence that should
have been exposed and vigorously challenged by the defense.”[86]
Forcefully challenging forensic evidence involves not only attacking the
legitimacy of the science that supports the alleged area of expertise, but also
identifying context cues that could potentially skew the examiner’s evaluation.
As this Article has repeatedly emphasized, “The most obvious danger in forensic
science is that an examiner’s observations and conclusions will be influenced by
extraneous, potentially biasing information.”[87] More importantly, however,
while this Article has identified the more common forms of observer effect
errors within the forensic science context, “there are other potentially
error-producing sources of expectation beyond those induced by intentional or
unintentional suggestion” that this Article has not covered.[88] Accordingly, it
is incumbent upon defense attorneys to become more knowledgeable about these
covert forms of unintentional bias. Once aware that these unobservable biases
permeate crime labs and many forensic examinations, defense attorneys will be
better equipped to elicit evidence and testimony that can identify whether an
examiner’s conclusions may have been contaminated by observer effect errors.
These inquiries will typically involve ascertaining whether the examination is
subjectively top-heavy and whether the examiner was made aware of certain
expected or desired outcomes.
--------------------------------------------------------------------------------
[1] Investigator, Illinois Appellate Defenders Office, Death Penalty Trial
Assistance Unit. J.D. candidate (2004), Northwestern University School of Law;
M.S. (Forensic Science), University of New Haven; B.S. (Psychology), University
of Pittsburgh. Mr. Cooley can be contacted via his website: www.law-forensic.com;
or by email: c-cooley@law.northwestern.edu or Craig.Cooley@osad.state.il.us
[2] Irving Stone, Capabilities of Modern Scientific Laboratories, 25 Wm. & Mary
L. Rev. 659, 659 (1984) (emphasis added).
[3] Paul L. Kirk, Crime Investigation 2 (2d ed., John I. Thornton ed., 1974)
(emphasis added).
[4] Robert Rosenthal, Experimenter Effects in Behavioral Research 3 (1966).
[5] See Paul C. Giannelli, The Abuse of Scientific Evidence in Criminal Cases:
The Need for Independent Crime Laboratories, 4 Va. J. Soc. Pol’y & L. 439 (1997)
(discussing how the forensic community’s structural configuration has created
many pro-prosecution forensic scientists).
[6] See Scott Bales, Turning the Microscope Back on Forensic Scientists, 26
Litigation 51 (2000) (discussing many instances concerning FBI examiners).
[7] See D. Michael Risinger et al., The Daubert/Kumho Implications of Observer
Effects in Forensic Science: Hidden Problems of Expectation and Suggestion, 90
Cal. L. Rev. 1, 12-27 (2002).
[8] Michael J. Saks, Ethics in Forensic Science: Professional Standards for the
Practice of Criminalistics, 43 Jurimetrics J. 359, 363 (2003) (book review).
[9] See Ulric Neisser, Cognition and Reality: Principles and Implications of
Cognitive Psychology 43-45 (1976).
[10] David C. Thompson, DNA Evidence in the O.J. Simpson Trial, 67 U. Colo. L.
Rev. 827, 845 (1996).
[11] See Victoria L. Phillips et al., The Application of Signal Detection Theory
to Decision-Making in Forensic Science, 46 J. Forensic Science 294, 298 (2001)
(“[F]orensic scientists often encounter ambiguous and murky decision-making
situations.”).
[12] See John I. Thornton & Joseph L. Peterson, The General Assumptions and
Rationale of Forensic Identification, in Science in the Law: Forensic Science
Issues § 1-8.0 at 26-27 (David L. Faigman et al. eds., 2d ed. 2002).
[13] Consider the wide-ranging point ‘systems’ in fingerprinting; see Commonwealth
v. Hunter, 338 A.2d 623, 624 (Pa. Super. Ct. 1975) (fourteen points); United
States v. Durant, 545 F.2d 823, 825 (2d Cir. 1976) (fourteen points); People v.
Alexander, 571 N.E.2d 1075, 1078 (Ill. App. Ct. 1991) (eleven and fourteen
points); State v. Starks, 471 So.2d 1029, 1032 (La. Ct. App. 1985) (twelve
points); People v. Garlin, 428 N.E.2d 697, 700 (Ill. App. Ct. 1981) (twelve points);
Garrison v. Smith, 413 F. Supp. 747, 761 (N.D. Miss. 1976) (twelve points);
State v. Murdock, 689 P.2d 814, 819 (Kan. 1984) (twelve points); Magwood v.
State, 494 So.2d 124, 145 (Ala. Crim. App. 1985) (eleven points); State v. Cepec,
1991 WL 57237, at *1 (Ohio Ct. App. 1991) (eleven points); Ramirez v. State, 542
So.2d 352, 353 (Fla. 1989) (ten points); People v. Jones, 344 N.W.2d 46, 46
(Mich. Ct. App. 1983) (ten points); State v. Jones, 368 S.E.2d 844, 846 (N.C.
1988) (ten points); Commonwealth v. Ware, 329 A.2d 258, 276 (Pa. 1974) (nine
points); State v. Awiis, 1999 WL 391372, at *7 (Wash. Ct. App. 1999) (eight
points); Commonwealth v. Walker, 116 A.2d 230, 234 (Pa. Super. Ct. 1955) (four
points). See also Robert Epstein, Fingerprints Meet Daubert: The Myth of
Fingerprint “Science” Is Revealed, 75 S. Cal. L. Rev. 605 (2002).
[14] See Victoria L. Phillips et al., The Application of Signal Detection Theory
to Decision-Making in Forensic Science, 46 J. Forensic Science 294, 299 (2001).
[15] In determining whether the match is a coincidental match, examiners are
essentially asking how probable it is to find a match by pure chance.
[16] See id. (“[W]ith the exception of such areas as biological fluids… the
forensic sciences possess little empirical data to assist examiners in
interpreting the meaning of their test results and affixing a probability or
confidence to their findings.”).
[17] See id. at 299 (“Ordinarily, the examiner does not have access to a
database that assists in quantifying the rarity of the marks, or which even
records them, but must rely on memory of other samples viewed in the past.”).
[18] Barry Gaudette, Basic Principles of Forensic Science, in 1 Encyclopedia of
Forensic Science 300 (Jay A. Siegel et al. eds., 2000) (emphasis added).
[19] Michael J. Saks, Banishing Ipse Dixit: The Impact of Kumho Tire on Forensic
Identification Science, 57 Wash. & Lee L. Rev. 879, 882 (2000).
[20] Joan Griffin & David J. LaMagna, Daubert Challenges to Forensic Evidence:
Ballistics Next on the Firing Line, Champ. (Sept.-Oct. 2002), at 20, 58.
[21] See Lynn C. Hartfield, Daubert/Kumho Challenges to Handwriting Analysis,
Champ. (Nov. 2002), at 24 (discussing various manners to challenge handwriting).
[22] See Joan Griffin & David J. LaMagna, Daubert Challenges to Forensic
Evidence: Ballistics Next on the Firing Line, Champ. (Sept.-Oct. 2002), at 20,
58.
[23] See Paul C. Giannelli, The Twenty-First Annual Kenneth J. Hodson Lecture:
Scientific Evidence in Criminal Prosecutions, 137 Mil. L. Rev. 167, 184-85
(1992).
[24] See Joseph L. Peterson, Steven Mihajlovic & Michael Gilliland, Forensic
Evidence and the Police 117 (National Institute of Justice Research Report,
1984).
[25] See D. Michael Risinger et al., The Daubert/Kumho Implications of Observer
Effects in Forensic Science: Hidden Problems of Expectation and Suggestion, 90
Cal. L. Rev. 1, 47 (2002).
[26] See Technical Working Group for Eyewitness Evidence, United States Dep’t of
Justice, Eyewitness Evidence: A Guide for Law Enforcement (1999) (discussing the
various methods of eyewitness identification).
[27] See Gary L. Wells et al., Eyewitness Identification Procedures:
Recommendations for Lineups and Photospreads, 22 Law & Hum. Behav. 603 (1998).
[28] See D. Michael Risinger et al., The Daubert/Kumho Implications of Observer
Effects in Forensic Science: Hidden Problems of Expectation and Suggestion, 90
Cal. L. Rev. 1, 47-50 (2002).
[29] D. Michael Risinger et al., The Daubert/Kumho Implications of Observer
Effect in Forensic Science: Hidden Problems of Expectation and Suggestion, 90
Cal. L. Rev. 1, 29 (2002). Consider, for example, the following quote by a
forensic scientist advocating the teaching of forensic science in high school.
“Forensic science appeals to the detective in all of us as evidenced by the
growth in popularity of media including TV, best-selling novels, and movies that
portray some aspect of crime solving.”
Editorial, Forensics: The Thrill Is The Detective Work, Wall St. J., Mar. 5,
2002, at A17, available at 2002 WL WSJ3387690 (emphasis added). I would argue
that for aspiring scientists, forensic science would appeal to the scientist in
all of them. It is comments like this that create the incorrect notion that
forensic scientists are supposed to solve the case by acting like investigators.
When forensic practitioners act and think like detectives, wrongful convictions
are bound to surface.
[30] D. Michael Risinger et al., The Daubert/Kumho Implications of Observer
Effect in Forensic Science: Hidden Problems of Expectation and Suggestion, 90
Cal. L. Rev. 1, 28 (2002).
[31] Clive A. Stafford Smith & Patrick D. Goodman, Forensic Hair Comparison
Analysis: Nineteenth Century Science or Twentieth Century Snake Oil, 27 Colum.
Hum. Rts. L. Rev. 227, 259 (1996) (emphasis added).
[32] Larry S. Miller, Procedural Bias in Forensic Science Examinations of Human
Hair, 11 Law & Hum. Behav. 157, 158 (1987) (emphasis added).
[33] Michael J. Saks, Banishing Ipse Dixit: The Impact of Kumho Tire on Forensic
Identification Science, 57 Wash. & Lee L. Rev. 879, 886 (2000).
[34] See Office of Inspector General, U.S. Dep’t of Justice, The FBI Laboratory:
Investigation Into Laboratory Practices and Alleged Misconduct in
Explosive-Related and Other Cases (April 1997).
[35] Scott Bales, Turning the Microscope Back on Forensic Scientists, 26
Litigation 51 (2000) (discussing the OIG’s findings and recommendations).
[36] See John I. Thornton & Joseph L. Peterson, The General Assumptions and
Rationale of Forensic Identification, in Science in the Law: Forensic Science
Issues § 1-1.1 at 2 (David L. Faigman et al. eds., 2d ed. 2002) (“Most forensic
examinations are conducted in government-funded laboratories, usually located
within law enforcement agencies, and typically for the purpose of building a
case for the prosecution.”).
[37] See Maurice Possley, New tests requested on victim’s bite marks, Chi. Trib.,
July 25, 2003, at 4.
[38] See id. See also Chase Squires, Dentists cleared in wrongful arrest suit,
St. Petersburg Times, Dec. 22, 2000, at 1 (discussing how Dale Morris was jailed
for four months for murdering a nine-year-old neighbor because a forensic dentist
mistakenly concluded that Morris’ bite pattern matched those on the victim;
Morris was exonerated through DNA evidence); Katherine Ramsland, Whose Bite Mark
is it, Anyway?, at http://www.crimelibrary.com/criminal_mind/forensics/bitemarks/5.html?sect=21
(discussing Ricky Amolsch’s case); Ellen O’Brien, From DNA to Police Dogs,
Evidence Theories Abound, Boston Globe, Jan. 22, 1999, at B1 (discussing Edmund
Burke’s case).
[39] See generally David W. Peterson & John M. Conley, Of Cherries, Fudge and
Onions: Science and its Courtroom Perversion, 64 Law & Contemp. Probs. 213,
227-32 (2001) (discussing ‘cherry-picking’ in the legal and scientific context).
[40] D. Michael Risinger et al., The Daubert/Kumho Implications of Observer
Effect in Forensic Science: Hidden Problems of Expectation and Suggestion, 90
Cal. L. Rev. 1, 47-50 (2002) (advocating the use of evidence lineups); see also
Larry S. Miller, Procedural Bias in Forensic Science Examinations of Human Hair,
11 Law & Hum. Behav. 157, 159 (1987) (same).
[41] Professor Saks and his colleagues employ this word, see D. Michael Risinger
et al., The Daubert/Kumho Implications of Observer Effect in Forensic Science:
Hidden Problems of Expectation and Suggestion, 90 Cal. L. Rev. 1, 48 (2002).
[42] Id.
[43] See Anne Constable, Eyewitness Identification Has Shortcomings, Some Say,
Santa Fe New Mexican, Dec. 8, 2002, at A-1 (discussing the various shortcomings
concerning ‘old-school’ eyewitness identification methods); Amy L. Bradfield et
al., The Damaging Effect of Confirming Feedback on the Relation Between
Eyewitness Certainty and Identification Accuracy, 87 J. Applied Psych. 112
(2002) (highlighting more weaknesses regarding certain methods of eyewitness
identification).
[44] A ‘show-up’ is an identification procedure where the witness is presented
with a single suspect for identification.
[45] See Technical Working Group for Eyewitness Evidence, United States Dep’t of
Justice, Eyewitness Evidence: A Guide for Law Enforcement (1999).
[46] Barry C. Scheck, DNA and Daubert, 15 Cardozo L. Rev. 1959, 1997 (1994).
[47] See Peter Stratton & Nicky Hayes, A Student’s Dictionary of Psychology 82
(3d ed. 1999), defining double-blind control as:
“An experimental control in which neither the person conducting the experiment
nor the research participants in the study are aware of the experimental
hypothesis or conditions. Double blind controls are necessary as precautions
against experimenter effects, and are considered essential in tests on new drugs
or assessments of therapeutic procedures.”
[48] D. Michael Risinger et al., The Daubert/Kumho Implications of Observer
Effect in Forensic Science: Hidden Problems of Expectation and Suggestion, 90
Cal. L. Rev. 1, 45 (2002) (emphasis added).
[49] See id. at 31 (“forensic examiners should be insulated from all information
about an inquiry except necessary domain-specific information…”); Steve Selvin &
B.W. Grunbaum, Genetic Marker Determination in Evidence Bloodstains: The Effect
of Classification Errors on Probability of Non-discrimination and Probability of
Concordance, 27 J. Forensic Sci. Soc’y 57, 61 (1986) (“The possibility of bias
is minimized for any testing procedure if all analysts are unacquainted with
circumstances of the alleged crime and unaware of any previous results.”); Janet
C. Hoeffel, The Dark Side of DNA Profiling: Unreliable Scientific Evidence Meets
the Criminal Defendant, 42 Stan. L. Rev. 465, 486 (1990) (“[E]xaminers should be
told neither the origin of the samples nor the prosecution’s theory of the
case.”).
[50] This notion is by no means novel. Rather, in 1894 Hagan expressed the fact
that,
“The [document] examiner must depend wholly upon what is seen, leaving out of
consideration all suggestions or hints from interested parties… Where the expert
has no knowledge of the moral evidence or aspects of the case… there is nothing
to mislead him…”
William E. Hagan, Disputed Handwriting 82 (1894).
[51] National Research Council, The Evaluation of Forensic DNA Evidence 79
(1996).
[52] Considering that the forensic community “deals mostly with inanimate
objects, the blinding procedure will be simpler than in fields that work with
humans and animals, such as biomedical research and psychology. Those fields
must construct double-blind studies, while forensic science needs only
single-blind procedures.” D. Michael Risinger et al., The Daubert/Kumho
Implications of Observer Effect in Forensic Science: Hidden Problems of
Expectation and Suggestion, 90 Cal. L. Rev. 1, 45 n. 205 (2002).
[53] See R. Cook, et al., A Model for Case Assessment and Interpretation, 38 Sci.
& Just. 151 (1998) (discussing the ‘filtering’ process system that has been
developed in the United Kingdom’s Forensic Science Service).
[54] For a list of cases see http://www.law-forensic.com/cfr_science_myth.htm
(website maintained by author).
[55] See Associated Press, A Year of Scandals With Forensic Evidence, Wash.
Post, July 27, 2003, at A05 (discussing the various forensic mishaps over the
past year; many would not have been identified without the assistance of
skeptical defense attorneys).
[56] See Dan Cray et al., How Science Solves Crimes: From ballistics to DNA,
forensic scientists are revolutionizing police work—on TV and in reality. And
just in time, Time Mag., Oct. 21, 2002, at 36.
“TV viewers can tune into a forensics drama almost every night of the week,
starting with the trend setting CSI on CBS; its first-season spawn, CSI: Miami,
also on CBS; and Crossing Jordan on NBC. On cable, The Forensics Files is Court
TV’s biggest prime-time show ever, while Autopsy is wooing—and spooking—viewers
on HBO.”
[57] See Victoria L. Phillips et al., The Application of Signal Detection Theory
to Decision-Making in Forensic Science, 46 J. Forensic Sci. 294, 299 (2001).
[58] John I. Thornton & Joseph L. Peterson, The General Assumptions and
Rationale of Forensic Identification, in Science in the Law: Forensic Science
Issues § 1-5.3 at 14 (David L. Faigman et al. 2002).
[59] See id.
[60] Craig M. Cooley, Forensic Individualization Sciences and the Capital Jury:
Are Witherspoon Jurors More Deferential to Suspect Science than Non-Witherspoon
Jurors?, 27 S. Ill. U. L. J. 221, 250 (2003) (citing to Michael J. Saks,
Implications of the Daubert Test for Forensic Identification Science, 1
Shepard’s Expert & Sci. Evidence 427 (1997)).
[61] By no means is this a recent realization. Various commentators have noted
the community’s lack of research for nearly half a century, see James Osterburg,
A Commentary on Issues of Importance in the Study of Investigation and
Criminalistics, 11 J. Forensic Sci. 261, 261 (1966) (“In criminalistics,
however… [published data] is almost nonexistent. Testimony reported in the
hearings emphasizes unintentionally the scarcity of published data through
failure to mention any journals in which such vital information is available.”);
Paul Kirk, The Ontogeny of Criminalistics, 54 J. Crim. L. & Criminology 235, 238
(1963) (“It’s unfortunate that the great body of knowledge which exists in this
field is largely uncoordinated and has not yet been codified in clear and simple
terms.”); Alfred Biasotti, The Principles of Evidence Evaluation as Applied to
Firearms and Tool Mark Identification, 9 J. Forensic Sci. 428, 428 (1964)
(“[Forensic] authors have given many theoretical and a few practical
applications of statistical probability to criminalistics problems and have
pointed out the serious lack of fundamental data which would allow broader
practical applications.”).
[62] See, e.g., John J. Harris, How Much Do People Write Alike?, 48 J. Crim. L. &
Criminology 637 (1958) (finding that, contrary to the apparent belief of
handwriting experts, some people do write indistinguishably alike).
[63] See e.g., Simon Cole, Suspect Identities: A History of Fingerprinting and
Criminal Identification (2001) (discussing how the American fingerprint
examiners are not required to identify a certain number of matching points).
[64] John I. Thornton & Joseph L. Peterson, The General Assumptions and
Rationale of Forensic Identification, in Science in the Law: Forensic Science
Issues § 1-7.2 at 24 (David L. Faigman et al. 2002); Christophe Champod & Ian W.
Evett, A Probabilistic Approach to Fingerprinting Evidence, 51 J. Forensic
Identification 101, 103 (2001) (“[T]he process of identification… is essentially
inductive and… probabilistic.”).
[65] Paul L. Kirk & Charles R. Kingston, Evidence Evaluation and Problems in
General Criminalistics, 9 J. Forensic Sci. 434, 437 (1964) (emphasis added).
[66] See Norah Rudin & Keith Inman, Principles and Practice of Criminalistics:
The Profession of Forensic Science 302 (2001) (“Now, more than ever, the
onslaught of technology obligates the criminalist to draw on a strong background
in the physical sciences, including an understanding of statistics and logic.”).
[67] See Paul L. Kirk & Charles R. Kingston, Evidence Evaluation and Problems in
General Criminalistics, 9 J. Forensic Sci. 434, 435 (1964) (noting that
“criminalists… do not understand statistics, and do not know how to use them
constructively.”).
[68] David Derbyshire, Misleading statistics were presented as facts in Sally
Clark trial, Daily Telegraph (London), June 12, 2003, at 04 (discussing how
erroneous statistics led to Sally Clark’s wrongful conviction); Carlos Miller,
Phoenix police lab errs on DNA, Arizona Republic, May 6, 2003, at 1B (discussing
how Phoenix crime lab technicians miscalculated the likelihood that a person’s
DNA was present on evidence in nine cases; the cases ranged from aggravated
assaults to rapes and murders); Melody McDonald, DNA tests sways prosecutor,
Star-Telegram (Ft. Worth, TX), Oct. 10, 2002, at 1 (discussing how a Ft. Worth
DNA analyst’s miscalculations forced Ft. Worth prosecutors to drop the death
penalty against Jamien Demon Nickerson). Probably the most glaring example of
statistical incompetence is Arnold Melnikoff. Melnikoff’s statistical errors
helped erroneously convict Jimmy Ray Bromgard. See Wrong conviction brings
scientist’s work into question, San Diego Union-Trib., Jan. 5, 2003, at A4
(discussing Bromgard’s wrongful conviction). According to Walter Rowe, a
forensic chemistry professor at George Washington University,
“Everything about the estimates in Melnikoff’s [statistical] testimony is just
bullshit… It is nonsense on stilts. The most glaring is this idea that
microscopic features of head hair and pubic hair are not correlated, that those
are independent probabilities… This kind of theorizing flies in the face of
every adult’s common knowledge. Anyone can look at that and rightly ask what in
the world he was talking about.”
Charlie Gillis, Scandal in the forensic labs: Hundreds of cases undergoing
review in Montana, Nat’l Post, Feb. 1, 2003, at B01. Josiah Sutton’s wrongful
conviction can be attributed to a Houston crime lab technician’s statistical
miscalculation concerning DNA evidence. See Roma Khanna, DNA from conviction of
teen will be retested, Hous. Chron., Feb. 8, 2003, at 33. Likewise, in Gary
Dotson’s 1979 trial for the rape of Cathleen Crowell Webb,
“a state forensic scientist testified that according to a genetic marker test he
performed, Dotson was one of only 10% of Caucasian men who could have been the
source of the semen found on her panties. Years after Dotson had gone to prison
for the rape, Webb recanted her story. To bolster their claim that Dotson was
guilty, authorities retested the semen, using the same test, but a different
operator. The retest showed that two-thirds of the white male population could
have been the source of the semen.”
Anthony Pearsall, DNA Printing: The Unexamined ‘Witness’ In Criminal Trials, 77
Cal. L. Rev. 665, 674 (1989).
[69] The multiplication rule states that the “chance that two things will both
happen equals the chance that the first will happen, multiplied by the chance
that the second will happen given that the first has happened.” David Freedman
et al., Statistics 229 (3d ed. 1998); see also Michael J. Saks, Merlin and
Solomon: Lessons From the Law’s Formative Encounters With Forensic
Identification Science, 49 Hastings L.J. 1069, 1086 (1998) (“The essential idea
of this concept is that if objects vary on a number of independent (i.e.,
uncorrelated) dimensions, the probability of occurrence of any one combination
of characteristics is found by multiplying together the probabilities associated
with each dimension.”).
[70] Charles R. Kingston & Paul L. Kirk, The Use of Statistics in Criminalistics,
55 J. Crim. L. & Criminology 514, 516 (1964).
[71] See Charlie Gillis, Scandal in the forensic labs: Hundreds of cases
undergoing review in Montana, Nat’l Post, Feb. 1, 2003, at B01 (discussing how
Arnold Melnikoff misapplied the multiplication rule by failing to correctly
consider the independence of variables). See also David Freedman et al.,
Statistics 229 (3d ed. 1998) (“Two things are independent if the chances for the
second given the first are the same, no matter how the first one turns out.
Otherwise, the two things are dependent.”) (emphasis in original).
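Because footnote [69] states the multiplication rule and footnote [71] turns on the independence of variables, a brief numerical sketch may make the distinction concrete. The probabilities below are hypothetical illustrations, not figures drawn from any cited case:

```python
# Illustrative sketch (hypothetical numbers): the multiplication rule says
# P(A and B) = P(A) * P(B given A).  Only when A and B are independent does
# this reduce to P(A) * P(B) -- the step Melnikoff's testimony skipped by
# treating correlated hair characteristics as if they were independent.

p_a = 0.10            # assumed chance trait A appears in the population
p_b_given_a = 0.80    # assumed chance of trait B among people with trait A
p_b = 0.10            # assumed overall chance of trait B

# Correct application of the multiplication rule (traits correlated):
joint_correct = p_a * p_b_given_a     # 0.10 * 0.80 = 0.08 (8%)

# Erroneous shortcut that assumes independence:
joint_naive = p_a * p_b               # 0.10 * 0.10 = 0.01 (1%)

print(joint_correct, joint_naive)
```

On these assumed figures, the independence shortcut understates the joint frequency eightfold, making the evidence appear far rarer (and thus far more incriminating) than it is.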
[72] According to Professor Andrea A. Moenssens,
“Experts use statistics compiled by other experts without any appreciation of
whether the data base upon which the statistics were formulated fits their own
local experience, or how the statistics were compiled. Sometimes these experts,
trained in one forensic discipline, have little or no knowledge of the study of
probabilities, and never even had a college level course in statistics.”
Andrea A. Moenssens, Novel Scientific Evidence in Criminal Cases: Some Words of
Caution, 85 J. Crim. L. & Criminology 1, 18 (1993).
[73] Paul L. Kirk & Charles R. Kingston, Evidence Evaluation and Problems in
General Criminalistics, 9 J. Forensic Sci. 434, 437 (1964).
[74] See Clive A. Stafford Smith & Patrick D. Goodman, Forensic Hair Comparison
Analysis: Nineteenth Century Science or Twentieth Century Snake Oil, 27 Colum.
Hum. Rts. L. Rev. 227, 257-58 (1996) (discussing various instances).
[75] See Scott Bales, Turning the Microscope Back on Forensic Scientists, 26
Litigation 51 (2000).
[76] Janine Arvizu, Shattering the Myth: Forensic Laboratories, Champ., May
2000, at 18, 23.
[77] See id.
[78] Id.
[79] See George Castelle, Lab Fraud: Lessons Learned from the ‘Fred Zain
Affair’, Champ., May 1999, at 12; Janine Arvizu, Shattering the Myth: Forensic
Laboratories, Champ., May 2000, at 18.
[80] See George Castelle, Lab Fraud: Lessons Learned from the ‘Fred Zain
Affair’, Champ., May 1999, at 12.
[81] See Matter of Investigation of West Virginia State Police Crime Laboratory,
Serology Div., 438 S.E.2d 501 (W.Va.1993) (discussing, at length, Zain’s
fraudulent conduct and incompetence).
[82] George Castelle, Lab Fraud: Lessons Learned from the ‘Fred Zain Affair’,
Champ., May 1999, at 12.
[83] See American Society of Crime Laboratory Directors, Laboratory
Accreditation Board Manual (2000).
[84] Id. at § 1.4.2.16.
[85] See Robert Tanner, Crime Labs Placed Under a Microscope: Miscues Lead to
Calls for Changes in Forensic Labs, Wash. Post, July 27, 2003, at A05
(discussing recent forensic mishaps and wrongful convictions attributable to
crime labs or suspect forensic science).
[86] Michele Nethercott, Indigent Defense, Champ., June 2003, at 61 (emphasis
added).
[87] D. Michael Risinger et al., The Daubert/Kumho Implications of Observer
Effect in Forensic Science: Hidden Problems of Expectation and Suggestion, 90
Cal. L. Rev. 1, 9 (2002).
[88] Id. at 9-10.
______________________________________________________________________
Feel free to pass The Detail along to other
examiners. This is a free newsletter FOR
latent print examiners, BY latent print examiners. There are no copyrights on
The Detail (except in unique cases such as this week's article), and the website
is open for all to visit.
If you have not yet signed up to receive the Weekly Detail in YOUR e-mail inbox,
go ahead and join the list now
so you don't miss out! (To join this free e-mail newsletter, send a blank
e-mail from the e-mail address you wish to subscribe, to:
theweeklydetail-subscribe@topica.email-publisher.com) If you have
problems receiving the Detail from a work e-mail address, there have been past
issues with department e-mail filters considering the Detail as potential
unsolicited e-mail. Try subscribing from a home e-mail address or contact
your IT department to allow e-mails from Topica. Members may
unsubscribe at any time. If you have difficulties with the sign-up process
or have been inadvertently removed from the list, e-mail me personally at
kaseywertheim@aol.com and I will try
to work things out.
Until next Monday morning, don't work too hard or too little.
Have a GREAT week!