UPDATES ON CLPEX.com
Updated the Training page to include
Ivan Futrell's new Fingerprint Comparison and Identification Course.
Four classes have been scheduled in Texas, California, and Missouri.
Check out his course page for details.
Last week we looked at a report on the McKie settlement and saw indications of
additional controversy as events continue to unfold. Several weeks ago, we
looked at a critical view on the accuracy of latent print examination. This
week, we look at a study that addresses many of the concerns in that critical
viewpoint in a positive way for our community.
A Report of Latent Print Examiner Accuracy During
Comparison Training Exercises
Journal of Forensic Identification, 2006, 56
Kasey Wertheim 
Glenn Langenburg 
Andre Moenssens 
Abstract: During comparison training exercises, data from 108 participants
were collected. For each participant, the following were recorded: the
number of comparisons performed, the number of correct individualizations
made, the number of erroneous individualizations made, the number of
clerical errors made, and the assessments of the quantity and quality of
information present in the latent prints in the exercises. Additional
information regarding the training and experience of the participant was also
gathered, in a manner that preserved the anonymity of the participant.
Because the training courses were open to participants of any skill level,
including participants with no training and experience, the authors
separated the data of participants with more than one year of experience
from the data of participants with one year of experience or less. The 92
participants with more than one year of experience made 5861
individualizations (identifications) at the highest level of confidence.
Fifty-eight hundred of these individualizations were correct, and 61 were
errors of one of two types: 59 were clerical in nature and 2 were erroneous
individualizations. This resulted in an
erroneous individualization rate of 0.034% and a clerical error rate of
1.01% for the participants with more than one year of experience during
these training exercises.
A follow-up experiment was performed involving verification of the errors
reported by previous participants. Sixteen participants with more than one
year of experience acted as verifiers to previous participants’ results.
Each verifier was given a packet to verify containing the results of eight
correct individualizations and two errors. These 16 independent reviewers
did not verify any of the errors given to them in the verification packets.
Prior to the Daubert decision, the standard for the admission of novel
scientific evidence was the one first articulated in Frye v United States. In
the Frye case, the court stated:
Just when a scientific principle or discovery crosses the line between the
experimental and demonstrable stages is difficult to define. Somewhere in
this twilight zone the evidential force of the principle must be recognized,
and while courts will go a long way in admitting expert testimony deduced
from a well-recognized scientific principle or discovery, the thing from
which the deduction is made must be sufficiently established to have gained
general acceptance in the particular field in which it belongs. [Emphasis added.]
Thus was born the “general acceptance” test for the admission of novel
scientific evidence. Over the years, the new rule was applied to a wide
variety of forensic techniques in local and state as well as in federal
courts. Even after the Federal Rules of Evidence were promulgated in 1975, the
Frye principle of general acceptance remained the law on novel
expert evidence in criminal cases. Most courts adopted the principle, even
those which by then had formulated evidence codes based on the federal rules
of evidence. The Frye concept was also followed in the majority of federal
circuits, though it was modified here and there.
The United States Supreme Court changed that concept when the Daubert
plaintiffs appealed a decision of the 9th Circuit Court of Appeals, which
had nonsuited them because the testimony of the experts who sought to
testify on behalf of the plaintiffs was deemed inadmissible; their proposed
testimony was held not to meet the Frye test! The case went to the Supreme
Court on the issue of whether the passage of the Federal Rules of Evidence,
and particularly Federal Rule 702 which governs the admissibility of expert
testimony, had done away with the Frye rule. At the time of the Daubert
appeal, Rule 702 stated:
If scientific, technical, or other specialized knowledge will assist the
trier of fact to understand the evidence or to determine a fact in issue, a
witness qualified as an expert by knowledge, skill, experience, training, or
education, may testify thereto in the form of an opinion or otherwise. 
The federal principle that a proffered expert’s opinion testimony had to
assist the trier of fact in order to be admissible under Rule 702 – a
concept almost exclusively applied in civil cases – was generally believed
to favor the admissibility of all kinds of expert testimony. The threshold
was said to be very low. In that regard, Rule 702 was said to represent a
much more liberal standard of admissibility than the reputedly conservative
general acceptance requirement. Indeed, the liberal attitudes toward
admissibility of expert opinions generated somewhat of a hue and cry that
“junk science” was flooding the courtrooms. In criminal cases, on the other
hand, most federal cases had continued to apply the Frye test even after the
passage of Rule 702.
The Supreme Court, when it decided Daubert in 1993, agreed with the
plaintiffs’ argument that the passage of the federal rules of evidence had
superseded the Frye principle as a rule of admissibility. In order to guide
the United States District Courts, whom it made the gatekeepers charged with
keeping junk science out of the courtroom, the Supreme Court suggested that
trial judges facing a decision of whether an expert’s opinion would be
admitted examine the following factors when considering the challenged expert
testimony. The factors the Court mentioned were (1) whether the type of evidence can be
and has been tested by a scientific methodology, (2) whether the underlying
theory or technique has been subjected to peer review and has been published
in the professional literature, (3) how reliable are the results obtained in
terms of a potential error rate, and finally (4) that general acceptance can
yet have a bearing on the inquiry.
The Daubert Court was at pains to suggest that the above list of factors was
intended to facilitate a “flexible” inquiry into the reliability evaluation
and that not all of these factors were absolute requirements. It also
suggested that Rule 702 offered a more flexible approach to admitting expert
opinions than the more stringent principle of the Frye case. In a later case
(Kumho Tire v Carmichael), which answered in the affirmative the question of
whether the same Daubert reliability factors would also apply to
nonscientific expert testimony, the Court again stressed
the flexible nature of the factors to be applied and recognized that not all
of these might apply in a given case. Indeed, the Court recognized that for
some disciplines, different factors might be more appropriate.
Friction ridge impression evidence – historically referred to as fingerprint
evidence – has by now a record of nearly one hundred years of court
acceptance as a reliable means by which a person’s identity can be
established. Despite this success record, it has not been immune from recent
attacks based on Daubert. And when such a Daubert challenge is made, one of
the factors on which litigants focus is the error rate.
The Error Rate Factor
Litigants seeking to attack the admissibility of expert opinion testimony
that is based on an individualization of a crime scene latent print have
argued in some cases that no error rate has ever been established for
friction ridge impression evidence.
Little or nothing can be found in the friction ridge impression literature
on what the Daubert court suggested trial judges should do when considering
“the known or potential rate of error” of the science of friction skin
individuality. When lower courts have sought to apply Daubert’s concept
of error rates to friction ridge impression individualizations, they have
divided this concept into two parts: methodological error and practitioner error.
In the most authoritative appellate decision, which is also the most recent
one, United States v Byron Mitchell, the court seemed to accept the
premise that the methodological error rate, while impossible to calculate
exactly, might be close to zero. But the Mitchell court was initially
bothered by the lack of definition in what constitutes an error when dealing
with fingerprint identifications. It saw at least two different aspects to
error: false positives (incorrect affirmative identifications) and false
negatives (incorrect findings of dissimilarity). For the purpose of cases
wherein fingerprints were used to tie defendants to crimes or crime scenes,
the court said that only false positives ought to be considered. Seen in
that light, it found that the error rate, “though not precisely quantified”,
was indeed zero or close to zero.
With regard to practitioner error, on the other hand, this issue was seen as
one relating to the competence of the expert witness and therefore not
involving friction ridge comparisons as a discipline. Practitioner error,
then, falls outside the purview of a Daubert inquiry. Despite this
recognition, the Mitchell court suggested that “prosecutors would be
well-advised to elicit testimony about their experts’ personal proficiency,
rather than relying on the discipline’s good general reputation among lay
jurors.” The court remarked that if prosecutors do not follow that
advice, cross-examiners are likely to seek to expose incompetent examiners
on cross-examination by inquiring about that very issue.
With no research regarding practitioner error rates in the scientific
literature, litigants attacking fingerprint evidence and critics of the
profession have filled this void using inappropriate measures of examiner
reliability. Proficiency tests administered by Collaborative Testing
Services (CTS), failure rates of the International Association for
Identification (IAI) Latent Print Examiner Certification Examination, and
high-profile erroneous individualizations have all been cited as measures of
practitioner error rates.*
* The 1995 CTS latent print proficiency test is a common citation for
critics [6, 7]. This particular test had the highest error rate for
participants (22%) of any CTS latent print test. Contributing to this high
error rate was one impression in blood. This impression
represented the most similar area of a very close pattern shared by an
identical twin brother. The known exemplars of the donor of this impression
were not provided, but the known exemplars of the twin brother who did not
create the impression were provided.
The authors believe that CTS latent print proficiency tests can meet minimum
external proficiency testing standards when used appropriately by agencies
to annually monitor the performance of latent print examiners. However, the
authors do not support the use of the results of these CTS latent print
proficiency tests to estimate practitioner error rate for several reasons:
1) Errors in CTS tests are reported as the number of responses with results
that differ from the “manufacturer’s expected results”. There is no
distinction in the results between the types of errors (e.g., clerical
errors, erroneous individualizations and exclusions, and missed
individualizations and exclusions) that were committed by the participants.
(Examples from one CTS Latent Fingerprint Examination are shown in Appendix A
[of the article in the JFI].)
2) Individuals who are not trained to competency may participate in the CTS
latent print proficiency tests, and no distinction is made between the
results of these individuals and the results of those who are trained to
competency.
3) Participants in the CTS tests are not limited to participants in North
America and do include participants from European countries, where a minimum
number of minutiae (“point threshold”) is maintained as operational
procedure or required as a legal standard for courtroom admissibility.
Thus, such participants may not declare a match on the test, and this will
be scored by CTS as “not identified”. “Not identified” may be construed by
some critics as an error [7, 9].
In 1996, the National Research Council (NRC) released its second report on
DNA, entitled The Evaluation of Forensic DNA Evidence. The NRC
addressed DNA error rates within the report. It stated that proficiency
testing and audits are both essential components of quality assurance
programs; however, neither is designed to measure error rates. (For further
information regarding the NRC report, please see the discussion later in this article.)
A second measure of examiner accuracy that has been cited is the Latent
Print Examiner Certification Examination of the International Association
for Identification. The authors firmly believe that this is a highly
inappropriate measure of accuracy. Approximately half (48%) of the examiners
who meet the minimum requirements to take the certification examination fail
the test. What is not distinguished is the reason for each failure.
Any of the following events would result in failing the four-part Latent
Print Examiner Certification Examination :
1) Receiving a less-than-passing percentage on any of the four parts of the
examination: pattern classification (90%); general knowledge, history, and
processing (85%); comparisons (80%); courtroom testimony (pass/fail)
2) A single erroneous individualization made in the comparison portion of the
examination
3) A single clerical error in the comparison portion of the examination (those
administering the examination cannot distinguish between clerical errors and
erroneous individualizations)
4) Failure to complete the first three portions of the examination within
the required time limit (6 ½ hours)*
* Modified examination requirements were established by the IAI Board of
Directors during the 2005 conference.
A third inappropriate measure of examiner accuracy is the citation of
anecdotal or high-profile erroneous individualizations [15-18]. The
existence of such cases merely confirms that erroneous individualizations
can and do occur. But without knowing the number of correct
individualizations that have been made, these cases of erroneous
individualization are simply anecdotal, misleading, and inappropriate for
measurement of examiner accuracy.
This study does attempt to compensate for the limitations of the
aforementioned inappropriate measures of examiner accuracy. Because this is
experimental research, caution must also be used when examining the data
presented here to draw conclusions regarding examiner accuracy in actual
case work. The limitations of this study are discussed later in this report.
Additional studies by the authors are planned to address some of the
limitations of this study.
Methods and Materials
Data utilized in this study originated from examiners during latent print
comparison training courses. During the courses, participants were given
comparison exercise packets. Each packet contained ten latent prints and
eight sets of inked exemplars. The difficulty of the packets varied in terms
of quality and quantity of friction ridge detail present in the latent
prints, the lack of focal points to aid the examiner when searching, the
source area of the latent prints (e.g., palm prints, sole prints, etc.), and
so forth. The packets ranged in levels of difficulty from 1 (easiest) to 16
(most difficult) and were drawn from a pool of approximately 4600 different
latent prints. The rating of difficulty for these packets was predetermined
by the course instructors (three certified examiners with approximately 45
years of combined experience). This decision was subjective and based
primarily on the complexity of the latent prints in the packets and the
challenge of the individualizations [e.g., difficulty to locate; degree of
distortion; lack of focal points (core, delta, scars, etc.); lack of
“helpful searching clues”; and quantity of available minutiae].
The difficulty of the first packet that was received by each participant was
based upon the participant’s declared training and experience. The
difficulty of subsequent packets that were given to the participant was
based on the participant’s performance with the first packet, as monitored
by the course instructor.
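A minimal sketch of this adaptive assignment, in Python, with an invented
experience-to-difficulty mapping and invented step rules (in the study, both
decisions were subjective instructor judgments, not fixed formulas):

    # Hypothetical sketch of the adaptive packet assignment described above.
    # The mapping and thresholds here are invented for illustration only.
    def first_packet_difficulty(years_experience):
        # More declared experience -> harder first packet (scale 1 to 16).
        return min(16, max(1, 1 + int(years_experience)))

    def next_packet_difficulty(current_difficulty, errors_in_packet):
        # Step up after a clean packet; step down when errors appear.
        if errors_in_packet == 0:
            return min(16, current_difficulty + 1)
        return max(1, current_difficulty - 1)

    print(first_packet_difficulty(2))    # e.g., a 2-year examiner starts near 3
    print(next_packet_difficulty(3, 0))  # a clean packet steps up to 4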
Because these were training courses whose aim was to improve the comparison
ability of the participants, the students were encouraged to challenge and
improve their skills by tackling increasingly difficult exercises.
Additionally, all of the exercise packets contained latent prints that had
been deemed “sufficient for individualization” by course instructors, and
all latent prints had been individualized to the exemplars provided (i.e.,
there were no nonmatches).
When the participant was given an exercise packet, the participant was asked
to analyze the ten latent prints in the packet before searching and
comparing to the exemplars. He or she was asked to rate each latent print in
seven* categories: quantity of details, quality of details, presence of
focal points, level of contrast, amount of lateral distortion, amount of
deposition pressure, and level of background interference.
* The participants from one of the training classes were asked to rate the
latent prints in just six categories (lateral distortion was added to
subsequent classes). Furthermore, the rating scheme for this training class
was a scale of 1 through 3 (1 the lowest in all categories). The rating
scheme in the subsequent training classes was a scale of 1 through 5 (1 the
lowest in all categories). The decision to include these data nonetheless
was made primarily because these data included an erroneous
individualization, and the authors did not wish to discard the data merely
because the analysis scheme differed slightly.
After the analysis was performed, the participant performed comparisons to
the sets of exemplars in the packet (standard inked fingerprints, palm prints, or sole
and toe prints). Participants performed comparisons using standard
magnification devices for latent print examiners (typically 4× to 6×
magnification) and ridge counters. If the participant effected an
individualization, he or she was asked to record the corresponding name from
the exemplar and source (e.g., finger number, right or left palm, etc.). The
participant was also asked to record the time it took to complete the
procedure (including time for analysis of the latent print). Lastly, the
participant was asked to record the level of confidence for the
individualization.
Recording the level of confidence of the individualization provided a
mechanism that served two purposes:
1) Participants could push their comparison skills in a training environment
and attempt exercise packets that may have been beyond their skill level.
2) The authors could differentiate between individualizations made at the
highest confidence level and individualizations that the participant may not
have felt entirely comfortable making and would not have reported in actual
casework.
When the participant completed (or attempted to the best of his or her
ability to complete) an exercise packet, a new exercise packet was given to
the participant, and the results of the completed exercise packet were
examined by the course instructor.
When the course instructor examined the answer sheets of participants,
results that were not in agreement with the known answers for the exercise
packet were noted. On the basis of past experience and common trends, and in
some instances, a discussion with the participant, the instructor made a
determination as to whether the error appeared to be an erroneous
individualization or a clerical error. If it could not be clearly
determined, then the error was classified as “indeterminate”.
Participants were also asked to provide additional information regarding
their training and experience. Any information that revealed the identity of
a participant was removed from all records.
A unique alpha-numeric identifier was associated with each participant’s
data, thereby rendering the source of the data anonymous. All data were then
pooled together and assessed.
Results and Discussion
Data were collected from 108 participants. The mean number of years of
experience for the participants was 7.9 years (range = 0 to 30+ years, n =
107*) (Figure 1 [in the JFI article]).
* One participant did not report this information.
Of the 108 participants, 16 possessed one year of experience or less.
Because the training courses were open to participants of any skill level,
including participants with no training and experience, the authors
arbitrarily chose to separate the participants with more than one year of
experience. (Rather than separating participants on the basis of experience, a
more appropriate way to separate participants is to determine whether the
participant is trained to competency and performing unsupervised casework.
This modification will be added to future studies.)
Summary of Results
Table 1 [in the JFI article] shows the total number of correct
individualizations and errors that were made by the participants. The data
in the table are separated into three categories: data from participants
with one year of experience or less, data from participants with more than
one year of experience, and combined data from both groups. A comparison of
the two groups in Table 1 shows that a higher percentage of errors was committed
by participants with one year of experience or less. Thirty-seven errors
were committed by 16 less-experienced participants. This equates to 2.3
errors per inexperienced participant. (It must be noted, as will be shown in
Figure 3, that some participants did not make any errors at all). In
contrast, 81 errors were committed by 92 participants with more than a year
of experience. This equates to 0.88 errors per participant with more than a
year of experience. Combining these data, the average errors per participant
in the study was 1.1. Thus, the inclusion of data from inexperienced
individuals penalized the more experienced individuals.
A further examination of the 37 errors made by participants with one year of
experience or less shows that 21 of the 37 errors (57%) were erroneous
individualizations—four of which were made at the highest level of
confidence. This is a sharp contrast to the participants with more than a
year of experience. The more experienced group committed 15 erroneous
individualizations—two of which were made at the highest level of
confidence—out of 81 total errors (19%). These data support the proposition
that CTS results, which include data from relatively inexperienced examiners
or trainees, may have inflated error rates and should not be applied to
latent print examiners who are trained to competency.
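As a quick check, the per-participant figures and error-type proportions
quoted above follow directly from the raw counts already given in the text (a
small Python snippet; no new data):

    # Recomputing the figures quoted above from the reported counts.
    errors_low_exp, n_low_exp = 37, 16      # one year of experience or less
    errors_high_exp, n_high_exp = 81, 92    # more than one year of experience
    print(errors_low_exp / n_low_exp)       # ~2.3 errors per participant
    print(errors_high_exp / n_high_exp)     # ~0.88 errors per participant
    print((errors_low_exp + errors_high_exp) / (n_low_exp + n_high_exp))  # ~1.1 combined
    # Share of each group's errors that were erroneous individualizations:
    print(21 / 37)   # ~57% (less-experienced group)
    print(15 / 81)   # ~19% (more-experienced group)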
In Table 2 [in the JFI article], the results for participants with more than one
experience are separated into categories of confidence. Participants were
instructed to use a designation of confidence when reporting every
individualization. In 93 instances, a participant neglected to report a
confidence. The scale of confidence (1 through 3) is explained below:
3 = Highest level of confidence. The participant recorded this level of
confidence if the participant would report this individualization in actual
casework.
2 = Moderate level of confidence. The participant recorded this level of
confidence if the individualization was beyond his or her ability and
comfort level. Scenarios used to describe the appropriate use of this level
of confidence included “not absolutely certain about the individualization”,
“you would consult another colleague before reporting”, or “you would spend
more time before reporting”.
1 = Lowest level of confidence. The individualization is far beyond the
participant’s ability and comfort level. Scenarios used to describe the
appropriate use of this level of confidence included “a strong guess” or
“indications of the source of the latent print”.
The above confidence rating scale allowed participants to push their
comparison skills beyond their comfort and skill level in a training
environment. Participants were encouraged to complete increasingly difficult
exercise packets that exceeded their skill level.
On the basis of this designation of confidence, the data of greatest
interest are the individualizations made at a confidence rating of 3. Of
5861 individualizations made by examiners with more than a year of
experience at a confidence rating of 3, two were deemed erroneous
individualizations and 59 were deemed clerical errors. This equated to an
erroneous individualization rate of 0.034% and a clerical error rate of 1.01%.
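Both rates are simple quotients over the 5861 highest-confidence
individualizations; recomputing them from the counts reported above:

    # Recomputing the two rates quoted above from the reported counts.
    attempts = 5861   # confidence-3 individualizations, >1 year of experience
    erroneous = 2
    clerical = 59
    print(100 * erroneous / attempts)   # ~0.034% erroneous individualization rate
    print(100 * clerical / attempts)    # ~1.01% clerical error rate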
The determination of the error type (erroneous individualization, clerical,
or indeterminate) was a decision made on-site by the course instructor. When
classifying errors as clerical, the instructor relied on obvious trends to
support the decision. If the instructor could not determine the intent of the student,
the error was classified as indeterminate.
Clerical errors in this training course usually represented the incorrect
recording of the source of the latent print. For example, the source of the
latent print was recorded as the left ring finger (#9 finger) when the
participant meant the right ring finger (#4 finger). This clerical error
will be referred to as a transposition transcription error. A second type of
clerical error was for the participant to record the correct finger number
but incorrectly record the name on the fingerprint card. Both of these types
of errors are easily identified by the instructor by merely examining the
latent print and recorded exemplar. For instance, if the latent print is a
left-slant loop pattern and the exemplar bears a right-slant loop pattern,
it is highly unlikely that a participant with even minimal training and
experience would effect such an individualization. See Figures 2A and 2B [in
the JFI article] for actual examples.
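For readers unfamiliar with the finger numbers used above, the following
sketch encodes the standard tenprint numbering convention (1 through 5 for the
right hand, thumb to little finger; 6 through 10 for the left hand) and the
hand transposition that underlies a transposition transcription error:

    # Sketch of the standard tenprint finger-numbering convention used above
    # (1-5 = right hand, thumb to little finger; 6-10 = left hand) and the
    # hand transposition behind a "transposition transcription error".
    FINGERS = {
        1: "right thumb", 2: "right index", 3: "right middle",
        4: "right ring", 5: "right little",
        6: "left thumb", 7: "left index", 8: "left middle",
        9: "left ring", 10: "left little",
    }

    def transpose_hand(finger_number):
        # The same finger on the opposite hand differs by exactly 5.
        return finger_number - 5 if finger_number > 5 else finger_number + 5

    # The example from the text: recording the left ring finger (#9) when the
    # participant meant the right ring finger (#4).
    print(FINGERS[9], "<->", FINGERS[transpose_hand(9)])  # left ring <-> right ring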
Table 3 [in the JFI article] provides further analysis of the clerical errors.
Approximately twice as many clerical errors were made on latent prints
originating from a left hand. However, the number of latent prints
originating from right hands in the exercise packets was approximately equal
to the number of latent prints originating from left hands in the exercise
packets. Participants were more likely to err when recording the source of a
left-hand latent print. Subconscious biases or expectations may contribute
to this effect.
There were 15 clerical errors at the highest level of confidence that were
not transposition transcriptions, but were identified to the correct
individual. Of these, 10 were the next sequential finger (e.g., #7 left
index finger was recorded, but the source was #8 left middle finger). This
could possibly be from an error when translating the plain impressions at
the bottom of the fingerprint card. It would also be of interest to see
whether those participants with significant experience or some tenprint
experience would be less likely to commit such clerical errors.
Lastly, there were 2 clerical errors at the highest level of confidence in
which the participant recorded the correct finger or palm but recorded the
incorrect name from the exemplar. On the basis of previous trends noted
above, it is highly unlikely that the participant was effecting the
individualization to the pattern type or print present in the exemplar. This
type of error, though deemed clerical, could have a serious impact on a
case, as it incorrectly associates with the case an individual who otherwise
could have been excluded. Although clerical errors are often not deemed as
serious as erroneous individualizations, it is important to recognize that
the consequences of an undiscovered clerical error could potentially be
quite serious. The obvious difference is that a clerical error will, under
most circumstances, be readily apparent when it is reexamined.
Table 4 shows the percentage of each type of error for the total
individualizations attempted at each level of confidence rating. An
examination of Table 4 shows that clerical error is independent of
confidence, although the number of erroneous individualizations increased by
two orders of magnitude as confidence decreased.
Some participants made multiple errors. The total number of errors made by
each participant with more than one year of experience ranged from 0 to 5.
Figure 3 shows the number of participants and the distribution of the total
number of errors committed by each participant. Over half (48 out of 92) of
the participants did not make a single error. Additionally, each column in
Figure 3 is separated into the relative percentages of total errors
committed attributed to erroneous individualizations and clerical errors.
For example, one column in Figure 3 shows that eight participants committed
a total of three errors each and 90% of the errors in this column were
clerical errors. Figure 3 includes all errors made at all levels of confidence.
Figure 3 illustrates that a progressively smaller fraction of the
participants in the course were responsible for multiple errors. As the
number of multiple clerical errors increased for an individual, so did the
number of erroneous individualizations (for all levels of confidence). This
trend could be an indicator of an examiner’s sloppy work habits, an absence
of double-checking work product, or simply rushing through the exercises,
keeping in mind that this was not a case-work environment. These data
support the argument that the courts should look to each expert’s individual
practitioner error rate rather than to wholesale exclusion of fingerprint
evidence from the courtroom or to commentary on an industry-wide error rate.
Anatomy of Errors
Of more interest and value to the latent print examiner community than the
mere reporting of errors is an understanding of why the errors occurred. Some insight
can be provided by an examination of the two erroneous individualizations
reported at the highest level of confidence by participants with more than
one year of experience.
Erroneous Individualization #1
The participant (Participant No. 125A) who effected this erroneous
individualization reported the following information with respect to
training and experience:
2 years of experience
30% of duties are analyzing and comparing latent prints
0 times testifying in court to latent print evidence
Seventy-nine (79) total individualizations were attempted by this individual
during the course:
75 at confidence = 3
3 at confidence = 2
1 confidence not reported
1 erroneous individualization was made (confidence = 3)
3 clerical errors were made (confidence = 3)
The latent print that was erroneously individualized is shown in Figure 4 [in
the JFI article]. The latent print is from an exercise packet of difficulty
rated at 5 (on a predetermined scale ranging from 1 to 16). This participant
worked with eight exercise packets ranging in difficulty from 1 to 7. The
participant reported having only two years of experience. An important
question arises when analyzing the background of the participant who made
this error: Was the latent print (PP5-079) beyond the ability and experience
of this particular examiner? The average rating for all 79 latent prints
examined by this participant was 2.30 +/- 0.57 (SD). The average rating for
latent PP5-079 was 1.50. This latent print was 1.3 standard deviations below
the average difficulty attempted by this participant. The combination of a
latent print beyond the ability level of the participant and relative
inexperience may have contributed to the erroneous individualization. This
trend will be examined in future studies.
Erroneous Individualization #2
The participant (Participant 2100B) who effected this erroneous
individualization reported the following information with respect to
training and experience:
0 years of training (the question “Years of fingerprint training” had been
added to the background survey by the time this participant took the course)
6.5 years of experience
Percentage of duties analyzing and comparing latent prints: not answered
12 times testifying in court to latent print evidence
Sixty (60) total individualizations were attempted by this individual during
the course:
60 at confidence = 3
No other confidence level was reported by this individual
1 erroneous individualization was made (confidence = 3)
2 clerical errors were made (confidence = 3)
The latent print that was erroneously individualized is shown in Figure 5
[in the JFI article]. The latent print is from a packet of difficulty rated
at 8. This participant worked with seven packets of latent prints ranging in
difficulty from 5 to 12. Was the latent print (F28-127) beyond the ability
of this particular examiner? The participant rated the latent print F28-127
as shown in Table 6 [in the JFI article]. The average rating for all 60 latent
prints examined by
this participant was 3.75 +/- 0.88 (SD). The average rating for latent
F28-127 was 4.00. This latent print was within the average range of
difficulty for this examiner. It is also important to note that this
examiner incorrectly individualized an exemplar that is very similar in
appearance to the correct exemplar for this comparison (i.e., a close
nonmatch).
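As a side note, the "standard deviations from average difficulty" figures
quoted in these two subsections are ordinary z-scores over each participant's
own difficulty ratings. A quick check, using the rounded means and SDs
reported above (which is presumably why the first quotient comes out near 1.4
rather than the reported 1.3):

    # Recomputing the "standard deviations from the participant's average
    # difficulty" figures from the rounded values reported in the text.
    def z(mean_rating, sd, latent_rating):
        return (mean_rating - latent_rating) / sd

    print(z(2.30, 0.57, 1.50))  # PP5-079: ~1.4 SD below the mean (reported as 1.3)
    print(z(3.75, 0.88, 4.00))  # F28-127: ~-0.3, i.e., within the average range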
Comparison of the Two Erroneous Individualizations
Clearly, there is a
greater cause for concern with Erroneous Individualization #2 than with
Erroneous Individualization #1. The participant who made Erroneous
Individualization #2 was experienced and has testified in court. The latent
print was not beyond the average difficulty for this examiner in this
course. The participant who made Erroneous Individualization #1 was
relatively inexperienced, has not testified in court as an expert, and the
print was above the average difficulty for this examiner in this course. It
is also possible that the less experienced of these two examiners was still
in training and was not performing casework at the time. The demographic
questions that were initially asked of participants in this study did not
adequately determine whether an individual was still in training or
performing unsupervised casework. As addressed previously, this is a
limitation of the study, and the appropriate questions will be asked in
future studies.
More information is needed regarding work habits, training, and examiner
performance before any meaningful conclusions can be drawn. This is an
important area for future study, because the answers to these questions
could provide additional guidelines for training, application of
methodology, and standards for expert qualifications.
There are a number of limitations that must be addressed. Some of these
limitations will be minimized or eliminated in future studies proposed by
the authors. Each of these limitations is discussed below.
The known limitations of this study were:
• Training environment versus casework
• Limited equipment and facilities
• Pushing difficulty during a training environment
• No nonmatches
• No verification
• Population of participants may not be wholly representative
• In limited instances, classification of error type was subjective
• Background information of participants was not known
Training Environment Versus Casework
An important limitation of this study, and one explanation for why the
erroneous individualizations occurred, is that all of these comparisons were
performed in a training environment. It is possible that a participant may
place less seriousness or value on training product than on casework
product. Some participants may have placed undue emphasis on competing with
other students in the training courses to make more individualizations.
Competitive behavior would not be expected in normal casework.
On the other hand, casework examinations may be affected by a psychological
mindset not present in a training environment. Context bias* was recently
shown to affect four latent print examiners during one study. Context
bias and the pressure of high-profile cases were not present in our study.
* Context bias or “observer effects” is the phenomenon where context
information or expectations of the observer can influence the observer’s
conclusions, judgment, perceptions, and decision-making processes.
Limited Equipment and Facilities
Some examiners may not have performed as they typically would have had they
been in their normal environments. In the training environment, examiners
were limited to the tables, chairs, and lighting devices present at the
facilities, and the examination equipment consisted of only magnifiers and
ridge counters. Digital enhancement software, scanners, and computers were
not available to the student. The training course was held during working
hours (0800 to 1700), and participants who normally perform comparisons
during later hours or on shift work may have had difficulty adjusting to the
schedule.
Additionally, because this was a training course with specific dates and
times for the course, daily factors that can affect performance could have
an effect on training product that may not necessarily be present in
casework. There is generally no set time limit with casework, and the length
of time an examiner spends on a case is flexible and discretionary. For
example, if an examiner is ill, casework can be postponed until the examiner
recovers. Some examiners may be accustomed to minimal comparison activity in
one day, unlike the intensive exercise periods required in a training
environment. This flexibility was not present in the training environment
where the participant was required to be present and to work during the
hours of the course. In future studies, additional questions can be added to
the survey to identify some of these daily variables that could affect
performance.
Pushing Difficulty During a Training Environment
Participants were encouraged to push their comparison abilities by
attempting increasingly difficult exercises as they progressed through
the course. If a participant expressed to the instructor that he or she had
progressed “too far”, then exercises for the participant’s appropriate skill
level were administered. The participant always had the option of reporting
an individualization with lower confidence (e.g., less than 3); however, it
is not known by the researchers whether this option was always utilized
appropriately by the participant.
No Nonmatches
Another limitation of this study was the absence of nonmatches. Every latent
print had a match in the packets and the participants were aware of this.
This design reflected the intent of the course, because participants
were taking the course to become more efficient at finding and making
individualizations. This important issue will be addressed in future studies
with the inclusion of exercise packets with nonmatches.
A participant’s knowledge of the absence of nonmatches may have affected the
comparison results. One possible result was an increase in the number of
erroneous individualizations, because participants were aware that there
must be a match in the exemplars. The participant may have attempted to
“force” the individualization and found one that was quite similar, yet
still a nonmatch. This effect may explain the sharp increase in erroneous
individualizations at lower levels of confidence (confidence less than 3) in
Table 4, but, notably, the rate of clerical errors did not change.
Conversely, knowledge that nonmatches did not exist may have encouraged the
examiner to continue searching for an individualization long after that
participant may have given up (and potentially erroneously excluded an
individual as having made an impression) in actual casework.
No Verification
Individualizations were not verified by other examiners in this experiment.
In casework, an examiner following the ACE-V methodology will have all
individualizations verified by another examiner who is trained to
competency. This did not occur in this course. With a second examiner
performing an independent examination of the declared individualization, it
seems likely that the number of errors reported (clerical and erroneous
individualizations) will be reduced. A follow-up experiment was performed to
test this very notion, and the results of that experiment are included
following the limitations listed here.
Population of Participants May Not Be Wholly Representative
Because this is a training course to increase the efficiency of the
examiner, there may be several subsets of the latent print examiner
population that are not represented by this sample of examiners. In
particular, examiners certified by the International Association for
Identification are unlikely to take this course, because this course is
promoted as a helpful tool to pass the certification examination. It is also
possible that examiners who need improvements to their skills or training
may be more likely than highly skilled examiners to attend this course.
Therefore, highly skilled examiners may be inadvertently excluded from this
study. On the other hand, examiners working for agencies that minimize or
ignore training needs may not receive the opportunity to attend this course.
Therefore, poorly skilled examiners may be inadvertently excluded from this
study. Both of these scenarios could lead to a sample population that may
not be entirely representative of the latent print community at large.
In Limited Instances, Classification of Error Type Was Subjective
Clearly, the choice to classify an error as erroneous individualization,
clerical, or indeterminate was an important decision. As previously
addressed, specific trends were observed in many clerical errors
(transposition transcription, sequential fingers, etc.). In some instances,
student behavior was a strong indicator (typically, students who effected an
erroneous individualization in the course appeared irritated and upset with
their performance, and immediately became very conservative and less
confident). Beyond identifying
obvious trends, ultimately, the decision to classify an error is a
subjective decision by the course instructor that is based on the observed
trends, student behavior, student’s performance, and honesty of the student.
Though the authors are confident in our classification of clerical versus
erroneous individualization errors, the worst case scenario is that two
errors, which were deemed clerical in nature, erroneously associated another
individual with “the evidence”. Thus, of the 61 total errors reported at the
highest level of confidence by participants with more than one year of
experience, 57 of these errors associated the correct individual to the
evidence (but listed the wrong finger, palm, or foot), and 4 of these errors
associated the incorrect individual to the evidence. However, our contention
is that the clerical errors are likely to be easily spotted during
verification by a second examiner.
Background Information of Participants Was Not Known
Initially when the data were gathered, participants in the course were not
informed that the data might be used in this or future studies. Therefore, to
preserve anonymity, minimal questions were asked regarding the individual’s
background. The data were then pooled for calculations. Thus, in an effort to
preserve the anonymity of the source of the data, potentially critical
conclusions regarding training and experience could not be drawn.
In spite of the anonymity, other colleagues raised concerns that the
students were unaware of how the data might be used. The authors wish to
assure the readers that these concerns were taken very seriously. In an
effort to address these concerns, an attempt was made to contact all past
students whose data may or may not have been included in the study. Because
of the anonymity of the alpha-numeric identifier given to each student’s
data, even the authors can no longer determine the source of the data.
Therefore, mass mailings of notification forms were distributed to potential
contributors. Included in the notification forms was an advance draft of
this article, in an effort to show how the data were utilized.
Future studies by the authors will include notification and consent forms
prior to data collection. It is not known how prior knowledge of testing
before participation may affect future results. Also, future studies will
include questions that will address training, experience, and background,
while still preserving participant anonymity.
It is difficult to quantify an error rate for the human expert. It is a
moving target. The error rate for a particularly easy latent print could be
significantly different from the error rate for a very difficult, highly
distorted latent print. The problem is further compounded by the fact that
the error rate will be directly tied to the ability of the examiner.
When examining the errors and assessing the reliability of examiners in this
study, it is imperative to note that these are experimental data. This is
one experiment, under a specific set of conditions, for a limited sample of
experts looking at a closed set of latent prints.
The data in this study could be of value if the population of examiners and
the population of latent prints in the course begin to approach an accurate
sampling of examiners and latent prints in case work. With more than 4600
different latent prints of varying quantity and quality of ridge detail in
the exercises, it is reasonable to assume that the prints in this study
would contain an adequate sampling of latent prints encountered in actual
casework. However, as previously addressed, because this is a training
course, segments of the latent print expert community may continue to be
excluded (e.g., certified examiners, experts from underfunded agencies).
This is an important limitation of the study. A training environment can
provide massive amounts of data and comparisons that are essential for this
type of experiment; however, one drawback is that the population of
participants will most likely be limited.
Another caution is how one should use these data. The data in this study are
descriptive, not predictive. The errors that were identified were specific
to the participants, latent prints, and conditions of the study. These data
should therefore not be used as a predictor of error or an estimate of
reliability for an examiner on the witness stand. At best, attorneys
ignorant of science, or at worst, unscrupulous attorneys, may attempt to apply
the data from this study improperly. Questions of the witness, such as,
“Isn’t it true that there is a .034% chance that the identification before
the court is erroneous or at least a 1% chance that it was a clerical error
and therefore not my client?” should be summarily dismissed and would be a
gross misuse of these data. The expert witness should be prepared to discuss
the limitations of this study and error rates in general.
Historically, concerns about error rates in forensic science have also been
addressed by other disciplines. In the 1996 Report from the National
Research Council for the forensic use of DNA, the NRC committee stated the
following points with respect to error rates and forensic DNA examinations:
• At issue is not the general error rate for a laboratory or laboratories
over time but rather whether the laboratory doing the testing in this
particular case made a critical error.
• Accurately estimating error rates from proficiency tests would require
laboratories to undergo an unrealistically large number of
proficiency trials. In effect, laboratories would be performing more
proficiencies than actual case work.
• The pooling of proficiency test results across laboratories has been
suggested as a means of estimating an “industry-wide” error rate. This would
penalize the better laboratories; multiple errors on a single test by one
laboratory could substantially affect the overall estimated false match
error rate. Initial studies have shown that the preponderance of errors
originated in a small population of laboratories.
• Using descriptive error rates from a previous set of tests to predict or
estimate error rates in future tests (for false matches) is almost certain
to be incorrect. When errors are discovered, they are investigated
thoroughly so that corrections can be made. A laboratory is not likely to
make the same error again, so the error probability is correspondingly
reduced.
The NRC further summarized by stating:
…for all those reasons, we believe that a calculation that combines error
rates with match probabilities is inappropriate. The risk of error is
properly considered case by case, taking into account the record of the
laboratory performing the tests, the extent of redundancy, and the overall
quality of the results. However, there is no need to debate differing
estimates of false-match error rates when the question of a possible false
match can be put to direct test….
Their final recommendation is that the best insurance against false
incrimination is the opportunity for retesting and minimization of errors
through proper quality programs as recommended by the guidelines of
appropriate Scientific Working Groups and accrediting bodies (e.g., ASCLD/LAB).
Follow-up Experiment – Verification
A significant limitation of the initial experiment was the lack of
verification. In casework, an examiner who is following the ACE-V
methodology will have all individualizations verified by another examiner
who is trained to competency. This did not occur in this course. With a
second examiner performing an independent examination of the declared
individualization, it seemed likely that the number of errors reported
(clerical and erroneous individualizations) would be reduced.
As a follow-up experiment, verification packets were prepared for a new
group of 25 participants. Two verification packets were created. The first
packet (marked PP) contained the erroneous individualization #1 (PP5-079)
and a second error (an erroneous individualization that was previously
reported by a participant at confidence level of 1). The second packet
(marked F2) contained the erroneous individualization #2 (F28-127) and a
clerical error (a transposition transcription error). Both packets also
contained eight correct individualizations reported by previous participants.
In this experiment, participants were given a worksheet previously completed
by students, but with no indications of the errors that were present (an
example is provided in Figure 6 [in the JFI article]). Participants were
asked to indicate whether they “AGREE” or “DISAGREE” with the conclusions of
the previous participant. Participants were given the option to make comments
or notes regarding their verification conclusions. Some participants chose
to make notations; some participants did not.
Table 7 shows the results of the verification follow-up experiment. These
data are reported similarly to the initial experiment, dividing the data
into three groups: an inexperienced group, an experienced group, and
combined data for both groups.
Of the 50 possible errors (25 participants, each receiving two errors in his
or her packet), 49 errors were detected by the verifiers. Only one error
(the erroneous individualization F28-127) was not detected by a verifier,
and this verifier was a trainee, had less than one year of experience, and
was not performing unsupervised case work.*
* A modified questionnaire and the addition of a consent form were
introduced in the follow-up experiment. From the modified questionnaire, it
was determined exactly how much latent print comparison experience was
possessed by a participant and whether a participant was trained to
competency and reporting out unsupervised casework results. This information
was not previously available in the initial experiment. For proper
comparison of the data between the two experiments, participants were still
separated into two groups: those participants with one year or less
experience and those with more than one year of experience. Only two
participants in the follow-up experiment possessed more than one year of
experience, but were not declared “trained to competency” by their agency,
nor were they generating their own casework reports.
Table 7 displays only the results for the errors that were present in the
verification packets. Interestingly, not all of the correct
individualizations were agreed upon by verifiers. Of the eight verifiers
with more than one year of experience who examined verification packet #1
(PP), four verifiers wrote “DISAGREE” for at least one of the correct
individualizations. Of the eight verifiers with more than one year of
experience who examined verification packet #2 (F2), five verifiers wrote
“DISAGREE” for at least one of the correct individualizations.
Verifiers were given several reasons why they could disagree with a previous
participant’s conclusion. Examples included, but were not limited to: an
error was committed, insufficient agreement to call, inconclusive, would
need to spend more time examining with different equipment in the
laboratory, beyond my expertise, and so forth. If students possessed any of
these concerns, they were instructed to state “disagree”. They were to state
“agree” only if they were certain of the conclusion as it was stated on the
worksheet, would sign off on it, and would send it out the door, willing to
testify to their verification in court.
The results suggest that a more conservative attitude was taken by the
verifier. Not only did the experienced verifiers catch all the errors, but
they were also hesitant to agree on some of the correct individualizations.
One possible explanation is that some of the individualizations (although
correct) were above the level of difficulty at which the examiner was
comfortable. Unlike the standard exercises in this course, the verification
packets were fixed at specific levels of difficulty [levels 5 and 8 (rated
out of 16) for packet PP and packet F2, respectively].
Another explanation for the unverified correct individualizations is that
verifiers assume a more critical and conservative opinion knowing that they
are in a position of quality control. By inference, this may suggest that
the original examiner in case work may be less conservative, knowing that
someone else is going to review his or her results. Further study of this
dynamic within the ACE-V methodology is recommended by the authors.
It should also be noted that the verification worksheet had the results
listed by the previous participant. This is similar to the procedures used
by many U.S. agencies when performing verification (i.e., not blind
verification). Some critics and researchers are calling for blind
verification of all results [20-23], and, in fact, one researcher stated
that verification is a misnomer; that verification with respect to the ACE-V
methodology is “ratification, not verification”. The fact that not all
examiners agreed with all of the correct individualizations placed before
them, and all examiners with more than one year of experience disagreed with
all of the individualizations in error, suggests that the verification
process is not “ratification”.
Furthermore, the results of the follow-up verification experiment draw
attention to two issues. The first issue is that one of the errors (notably,
the close nonmatch, Erroneous Individualization #2) was not caught by a
trainee; this instance illustrates a potential danger when
inexperienced individuals are asked to verify difficult individualizations
above their skill, ability, or experience level. The second issue is that in
our study, none of the experienced examiners verified any of the errors, even
though they were told the results of the initial examiner. The verifying
examiners in our study, however, were not given any context information
about the initial examiner (e.g., identity, experience, etc.); they were
simply given the conclusions of the initial examiner. These results are in
contrast to the results of the Dror study, in which four out of five
examiners were influenced by the context information provided and did
provide an erroneous result. Clearly, there is a need to further investigate
the potential for bias effects in latent print examinations, and this
potential for bias must be checked and balanced with an appropriate blind
verification scheme (perhaps under limited circumstances as the Stacey
Report [15, 25] suggests).
Finally, it should be noted that the design of this follow-up experiment was
actually biased against the participants catching all of the errors. The
participants, like all participants of the course, are told at the beginning
of the course that they will be making only individualizations in their
exercise packets and that there are no nonmatches. The participants spent
several days prior to receiving the verification packets making dozens of
individualizations from previous exercise packets. Toward the end of the
course, the participants were given the verification packets and they were
not told that these packets would contain errors. They were simply asked
whether they agreed or disagreed with the previous participant’s
conclusions. Additionally, in actual casework, it is common practice for an
agency to have only one examiner, perhaps two in some agencies, verify an
initial examiner’s conclusions. In this follow-up study, each error was
subjected to eight separate, independent verifications by examiners with
more than one year of experience; in every instance, the verifier correctly
declined to verify the error.
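To illustrate why the number of independent verifications matters, consider
a simple probability sketch. This is our illustration, not part of the
study: the per-verifier catch probability p below is hypothetical, and the
calculation assumes verifiers err independently.

    # Illustrative sketch in Python; p is a hypothetical probability that a
    # single independent verifier catches a given error, not a value
    # measured in this study.
    def chance_caught(p: float, n: int) -> float:
        # Probability that at least one of n independent verifiers
        # catches the error.
        return 1.0 - (1.0 - p) ** n

    # One or two verifiers (typical casework) vs. eight (this study).
    for n in (1, 2, 8):
        print(f"n={n}: {chance_caught(0.90, n):.6%}")

Even under these toy assumptions, the chance that an error survives eight
independent reviews is orders of magnitude smaller than the chance that it
survives one.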
In addition, the verification worksheets had been altered by the authors.
Initially, some of the conclusions of the previous experts had been made at
lower levels of confidence. All of the results on the verification
worksheets in the experiment were altered to a 3 (the highest level of
confidence). Therefore, participants were given worksheets that not only
presented errors marked as individualizations, but also presented
individualizations made by other experts at the highest level of confidence.
In spite of these biases, participants with more than one year of experience
still did not verify any of the errors.
Future Research and Direction
Future studies will address some of the limitations previously discussed. In
particular, the inclusion of nonmatches in the packets, the collection of
more complete background and demographic data (while still preserving
anonymity), and the continued inclusion of verification exercises will be
valuable components. Additionally, collecting data from participants
from countries with national standards for training and registration,
numeric threshold standards (a minimum number of points), or different
methodologies for comparison will be a goal of future studies. These data
could then be compared to data from U.S. examiners to determine whether
there are significant differences. Such comparisons could support or
identify weaknesses in current standards and practices.
The following conclusions were drawn from the data in this experiment:
• 5861 individualizations were attempted at the highest level of confidence
by 92 participants with more than one year of experience (the rates implied
by these counts are worked through in the sketch after this list).
• 5800 correct individualizations were made at the highest level of
confidence by participants with more than one year of experience.
• Two erroneous individualizations were made at the highest level of
confidence by two participants with more than one year of experience.
• Fifty-nine clerical errors were made at the highest level of confidence by
participants with more than one year of experience.
• The most common type of clerical error was a transposition transcription
error (74% of the clerical errors).
• Errors occurred twice as often on latent prints originating from the left
hand as on latent prints originating from the right hand.
• More than half the examiners made no errors (clerical or erroneous
individualizations); however, some examiners made multiple errors. As the
number of errors increased for a single examiner, so did the number of that
examiner’s erroneous individualizations (these data included all errors at
all levels of confidence 1 through 3).
• Participants with one year of experience or less made a significantly
higher ratio of errors (specifically erroneous individualizations) when
compared to participants with more than one year of experience.
• When the confidence reported by a participant decreased, the rate of
erroneous individualizations increased by two orders of magnitude. The
number of clerical errors, however, remained relatively constant: clerical
errors are independent of latent print difficulty or examiner ability.
• A verification follow-up experiment was conducted. None of the 16
participants with more than one year of experience verified the two errors
given to each of them amongst eight correct individualizations. Only one
erroneous individualization in the follow-up experiment was not caught by a
verifier; that verifier was a trainee with less than one year of experience.
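As a quick arithmetic check of the counts in the list above (a minimal
sketch; the totals are taken from the conclusions, and the variable names
are ours):

    # Counts from the conclusions above, highest level of confidence only.
    total_individualizations = 5861
    correct = 5800
    erroneous_individualizations = 2
    clerical_errors = 59

    # The three categories account for every individualization attempted.
    assert (correct + erroneous_individualizations + clerical_errors
            == total_individualizations)

    rate_erroneous = erroneous_individualizations / total_individualizations
    rate_clerical = clerical_errors / total_individualizations
    print(f"erroneous individualization rate: {rate_erroneous:.3%}")  # -> 0.034%
    print(f"clerical error rate: {rate_clerical:.3%}")                # -> 1.007%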
The previous conclusions and rates of error were meant to determine the
reliability of examiners under a specific set of conditions. These
statistics are not intended to be a predictor of reliability for other
examiners in casework; rather, they are merely a hint at a nebulous,
industry-wide error rate. Most importantly, the results show that errors can
and do happen. Latent print examinations are currently performed by human
examiners, and human examiners are fallible. Therefore, the examinations are
only as valid as the examiner performing the tests and the conditions under
which the tests are conducted. Only through rigorous use of ACE-V,
sufficient training, precautions to limit bias and other potentially
influential factors, and the appropriate safeguards of quality assurance
programs will errors be reduced and the chances of falsely incriminating an
individual based on an erroneous individualization be minimized.
The authors would like to thank the following people for their various contributions:
• BCA student workers Kristin Tebow and Trisha A. Evans for their tireless efforts.
• SWGFAST members, particularly Alice Maceo, Debbie Benningfield, Bridget
Lewis, Mike Campbell, Lyla Thompson, Stephen Meagher, Alan McRoberts, Pat
Wertheim, David Grieve, and Christophe Champod, for their comments,
concerns, and valuable contributions.
• Drs. Ralph and Lyn Haber for their suggestion of a trilevel indicator of
confidence. This was an important tool for the project and an improvement
upon an earlier design. Their comments and guidance with respect to the
study design were extremely valuable.
• Lastly, and most importantly, the students of these courses for their
contribution to the study and the profession. Without their continued
support and participation, future studies would not be possible.
Questions regarding this article or these data should be directed to:
Minnesota Bureau of Criminal Apprehension
1430 Maryland Avenue East
St. Paul, MN 55106
1. Frye v US, 293 F. 1013, 1014 (D.C.Cir. 1923).
2. Federal Rules of Evidence; House of Representatives - The Committee on
the Judiciary, US Government Printing Office: Washington, DC, 1975.
3. Federal Rules of Evidence; House of Representatives - The Committee on
the Judiciary, US Government Printing Office: Washington, DC, amended April
4. Daubert v Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, at 593-595 (1993).
5. Kumho Tire v Carmichael, 526 U.S. 137 (1999).
6. US v Byron Mitchell, 365 F.3d 217 (3rd Cir. 2004).
7. US v Plaza, et al., 188 F. Supp. 2d, Daubert Hearing, 2002.
8. Grieve, D. Possession of Truth. J. For. Ident. 1996, 46 (5), 521-528.
9. Smith, K., Member, Latent Print Certification Board, Dulles, VA. Personal communication.
10. The Evaluation of Forensic DNA Evidence; National Research Council,
National Academy Press: Washington, DC, 1996, 75-88.
11. CBS News “Fingerprints: Infallible Evidence?” June 4, 2004.
www.cbsnews.com. (accessed January 13, 2005).
12. Thompson, L. Secretary, Latent Print Certification Board, Shawnee
Mission, KS. Personal communication, 2004.
13. International Association for Identification. Certified Latent Print
Examiner Requirements, Section 3F (2005), www.theiai.org (accessed January 13, 2005).
14. Message from the Latent Print Certification Board. J. For. Ident. 2005,
55 (6), 858.
15. Stacey, R. A Report on the Erroneous Fingerprint Individualization in
the Madrid Train Bombing Case. J. For. Ident. 2004, 54 (6), 706-718.
16. ShirleyMcKie.com, Shirley McKie and David Asbury Official Reports,
www.shirleymckie.com, (accessed January 13, 2005).
17. The Innocence Project. Case #141-Stephan Cowans, (2004),
www.innocenceproject.org, (accessed January 13, 2005).
18. Cole, S. More Than Zero: Accounting for Error in Latent Print
Identification. J. Crim. Law & Crim. 2005, 95 (3), 985-1078.
19. Dror, I. E.; Charlton, D.; Péron, A. E. Contextual Information Renders
Experts Vulnerable to Making Erroneous Identifications. For. Sci. Int., in press.
20. Haber, R.; Haber, L. Mindset in the Latent Print Comparison Process.
Presented at IAI Educational Conference, Dallas, TX, August 9, 2005.
21. Saks, M. J.; Risinger, D. M.; Rosenthal, R.; Thompson W. C. Context
effects in forensic science: A review and application of the science of
science to crime laboratory practice in the United States. Science & Justice
2003, 43 (2), 77-90.
22. Saks, M.; Koehler, J. The Coming Paradigm Shift in Forensic
Identification Science. Science August 2005, 309 (5736), 892-895.
23. Steele, L. The Defense Challenge to Fingerprints. Crim. Law Bulletin
2004, 40 (3), 213-240.
24. US v Plaza et al. 188 F. Supp. 2d, Daubert Hearing. Testimony of Ralph
25. Stacey, R. A Report on the Erroneous Fingerprint Individualization in
the Madrid Train Bombing Case. Presented at IAI Educational Conference,
Dallas, TX, August 9, 2005.
An analysis of 13 respondents’ errors from the 2002 CTS Latent Prints
Examination (Test 02-516).
Three hundred and three (303) responding participants attempted to identify
10 latent prints (Item 5) to four sets of known exemplars (Items 1 through
4). In this examination, there were a total of 15 errors, reported by 13
participants. This equates to 4.3% of the participants making one or more
errors. It should also be noted that Errors 2, 4, 7, and 11 all contain
responses that are not possible; these errors are most likely clerical in nature.
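The participant-level figure quoted above can be checked with a line of
arithmetic (a minimal sketch; the counts are from the paragraph above, and
the variable names are ours):

    # 13 of the 303 responding participants reported at least one error.
    participants_with_errors = 13
    responding_participants = 303
    print(f"{participants_with_errors / responding_participants:.1%}")  # -> 4.3%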
Feel free to pass The Detail along to other
examiners. This is a free newsletter FOR latent print examiners, BY latent
print examiners. There are no copyrights on The Detail, and the website is open
for all to visit.
Until next Monday morning, don't work too hard or too little.
Have a GREAT week!