AFFIDAVIT OF

CHARLES R. HONTS, Ph.D.


I, CHARLES R. HONTS, depose and state as follows:

1. I am an Associate Professor of Psychology at Boise State University, 1910 University Drive, Boise, Idaho 83725. I hold a Ph.D. degree in Experimental Psychology with a concentration on human psychophysiology. My central area of study and research has been in credibility assessment and in polygraph testing in particular. I have attached a copy of my curriculum vitae to this document as Exhibit A. That curriculum vitae accurately lists my education, publications, professional presentations, expert witness testimony, and other professional activities.

2. I am prepared to offer expert testimony as follows, if I am allowed to testify in the above-referenced case.

3. Polygraph tests have gained general acceptance in the scientific fields of psychology and psychophysiology and in the areas of those disciplines devoted to credibility assessment. Polygraphy has long passed the experimental stage.

4. In practice, virtually all polygraph instruments used for psychophysiological credibility assessment record measures from at least three physiological systems that are controlled by the autonomic nervous system. Recordings are usually made of palmar sweating (commonly known as the galvanic skin response), relative blood pressure (obtained from an inflated cuff on the upper arm), and respiration (obtained from volumetric sensors placed around the chest and/or abdomen). Many instruments will also provide a measure of peripheral blood flow (usually obtained from a photoelectric plethysmograph placed on one of the fingers).

5. Polygraphy, or the psychophysiological detection of deception, is based upon a scientific theory that can be tested with the methods of science (i.e., falsified). Any conscious effort at deception by a rational individual causes involuntary and uncontrollable physiological responses, which include measurable reactions in blood pressure, peripheral pulse amplitude, breathing, and electrodermal response. The various techniques used in polygraphy for the detection of deception are also capable of being tested through the methods of science. The most commonly used techniques for the psychophysiological detection of deception are comparison question tests (CQT). The theory of these comparison question tests is as follows: The CQT assesses a person's credibility by looking for a differential reaction between two types of questions. The first type of question is known as a relevant question. Relevant questions are direct accusatory questions that address the issue under investigation (e.g., Did you shoot John Doe?). The second type is known as a comparison question. Comparison questions are ambiguous questions that the subject is maneuvered into answering "No" (e.g., Before 1994, did you ever do anything that was dishonest, illegal or immoral?). The rationale of the comparison question test predicts that guilty subjects will produce larger physiological responses to the relevant questions, to which they know they are answering deceptively, than to the relatively unimportant comparison questions. Innocent subjects are expected to produce larger responses to the comparison questions, to which they are assumed to be either deceptive, or at least uncertain of the veracity of their answer, than to the truthfully answered relevant questions. This type of comparison question is known as a probable lie comparison question and is the most commonly used comparison question in the field. Other types of comparison questions are also used. The second most commonly applied comparison question is the "directed lie" question. The directed lie is a question to which the subject is instructed to lie. The subject is told that it is important that he or she respond appropriately when he or she lies. The predicted differential reactions and rationale of the directed lie are the same as for the probable lie.
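For illustration only, the differential-reaction rationale described above can be sketched in a few lines of code. The function name, the numeric values, and the simple greater-than rule are illustrative assumptions introduced here; they are not any validated field scoring method:

```python
# Toy sketch of the CQT differential-reaction rationale described above.
# A larger response to the comparison question points toward truthfulness;
# a larger response to the relevant question points toward deception.
# Response magnitudes are hypothetical, in arbitrary units.

def differential_reaction(relevant_response, comparison_response):
    """Classify a single relevant/comparison question pair by which
    question produced the larger physiological response."""
    if comparison_response > relevant_response:
        return "consistent with truthfulness"
    elif relevant_response > comparison_response:
        return "consistent with deception"
    return "inconclusive"

# Predicted pattern for an innocent subject (comparison > relevant):
print(differential_reaction(relevant_response=0.4, comparison_response=0.9))
# Predicted pattern for a guilty subject (relevant > comparison):
print(differential_reaction(relevant_response=0.9, comparison_response=0.4))
```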

6. The directed lie comparison question has been in continuous use by the U. S. Federal government for at least 20 years and is currently used by at least 10 Federal law enforcement agencies of the U. S. Federal Government, including: Air Force Office of Special Investigations, Office of the Secretary of the Air Force, U. S. Army Intelligence and Security Command, Defense Investigative Service, Defense Intelligence Agency, Naval Criminal Investigative Service, Central Intelligence Agency, Internal Revenue Service, Department of Energy, and the Drug Enforcement Administration. The directed lie comparison test is taught as a detection of deception technique at the Department of Defense Polygraph Institute, the Polygraph Training Unit of the Canadian Police College, and at the Arizona School of Polygraph Science.

7. The basic theory of the psychophysiological detection of deception and the various techniques used for the detection of deception have been put to numerous scientific tests over the past 25 years.

8. There are numerous studies published in peer-reviewed scientific journals that test the theory of the psychophysiological detection of deception and provide estimates of the error rates for comparison question tests. Science has approached the problem of assessing the accuracy of comparison question tests in two venues, laboratory studies and field studies.

9. Laboratory research has traditionally been an attractive alternative because the scientist can control the environment. Moreover, with regard to credibility assessment studies, the scientist can know with certainty who is telling the truth and who is lying by randomly assigning subjects to conditions. Laboratory research on credibility assessment has typically made subjects deceivers by having them commit a mock crime (e.g., "steal" a watch from an office), and then instructing them to lie about it during a subsequent test. From a scientific viewpoint, random assignment to conditions is highly desirable because it controls for the influence of extraneous variables that might confound the results of the experiment.[See note 1] However, laboratory research in general, and credibility assessment research in particular, can be criticized for a lack of realism. This lack of realism may limit the ability of the scientist to apply the results of the laboratory to real-world settings.[See note 2] Some scientists who conduct research on psychophysiological credibility assessment have attempted to overcome this limitation by trying to make the laboratory simulations as realistic as possible.[See note 3] The goal of making laboratory simulations as realistic as possible would seem to be reasonable and should provide results that have applicability to field situations.

10. The alternative approach to studying psychophysiological credibility assessment is to conduct field studies. In this approach, polygraph tests conducted in actual cases are examined. Although field studies are plagued by numerous problems,[See note 4] the chief problem lies in unambiguously determining ground truth. That is, some method that is independent of the outcome of the test is needed for determining who is in fact telling the truth. Although a number of approaches have been taken, it is generally agreed that confessions are the best available criterion for ground truth in these studies.[See note 5] It now seems to be generally agreed by persons doing field research in this area that useful field studies of the psychophysiological credibility assessment tests should have all of the following characteristics:[See note 6]

Subjects should be sampled from the actual population of subjects in which the researcher is interested. If the researcher wants to make inferences about tests conducted on criminal suspects, then criminal suspects should be the subjects who are studied.

Subjects should be sampled by some random process. Cases must be accepted into the study without reference to either the accuracy of the original outcome or to the quality of the physiological recordings.

The resulting physiological data must be evaluated by persons trained and experienced in the field scoring techniques about which inferential statements are to be made. Independent evaluations by persons who have access to only the physiological data are useful for evaluating the information content of those data. However, the decisions rendered by the original examiners probably provide a better estimate of the accuracy of polygraph techniques as they are actually employed in the field.

The credibility of the subject must be determined by information that is independent of the specific test. Confessions substantiated by physical evidence are presently the best criterion available.

11. David C. Raskin, Charles R. Honts, and John C. Kircher have recently reviewed the scientific literature addressing psychophysiological credibility assessment. [See note 7] They found eight high quality laboratory studies of the CQT. [See note 8] The results of those laboratory studies are illustrated in Table 1. The high quality laboratory studies indicate that the CQT is a very accurate discriminator of truth tellers and deceivers. Over all of the studies, the CQT correctly classified about 90 percent [See note 9] of the subjects and produced approximately equal numbers of false positive and false negative errors.

12. In their recent review, Raskin and his colleagues [See note 10] also examined the available field studies of the CQT. They were able to find four field studies [See note 11] that met the criteria for meaningful field studies of psychophysiological credibility assessment tests. The results of the independent evaluations for those studies are illustrated in Table 2. Overall, the independent evaluations of the field studies produce results that are quite similar to the results of the high quality laboratory studies. The average accuracy of field decisions for the CQT was 90.5 percent.[See note 12] However, with the field studies nearly all of the errors made by the CQT were false positive errors.[See note 13]

13. Although the high quality field studies indicate a high accuracy rate for the CQT, all of the data represented in Table 2 were derived from independent evaluations of the physiological data. This is a desirable practice from a scientific viewpoint, because it eliminates possible contamination (e.g. knowledge of the case facts, and the overt behaviors of the subject during the examination) in the decisions of the original examiners. However, independent evaluators rarely offer testimony in legal proceedings. It is usually the original examiner who gives testimony. Thus, accuracy rates based on the decisions of independent evaluators may not be the true figure of merit for legal proceedings. Raskin and his colleagues have summarized the data from the original examiners in the studies reported in Table 2, and for two additional studies that are often cited by critics of the CQT [See note 14]. The data for the original examiners are presented in Table 3. These data clearly indicate that the original examiners are even more accurate than the independent evaluators.

14. I know of no evidence, published or otherwise, that supports the notion that polygraph examinations conducted in confidence for a defense attorney are less valid than polygraph examinations conducted for law enforcement. [See note 15]

15. The scientific data concerning the validity of the polygraph can be summarized as follows: High quality scientific research from the laboratory and the field converges on the conclusion that the CQT is a highly accurate discriminator of truth tellers and deceivers. The research results converge on an accuracy estimate that exceeds 90 percent. Moreover, original examiners, who are most likely to offer testimony, produce even higher estimates of accuracy. There may be a tendency for the CQT to produce more false positive than false negative errors, but this trend in the current literature is not particularly strong. [See note 16] Moreover, no tendency toward false positive errors is seen in the decisions of the original examiners. The scientific validity of a properly administered polygraph examination in a real life case compares favorably with such other forms of scientific evidence as x-ray films, electrocardiograms, fiber analysis, ballistics comparison tests, and blood analysis, and is far more reliable than other forms of expert testimony (e.g., psychiatric and psychological opinions as to sanity, diminished capacity, dangerousness and many of the post traumatic stress/recovered memory syndromes) [See note 17].

16. Countermeasures are anything that a subject might do in order to distort or defeat a psychophysiological credibility assessment test. Detailed reviews of the scientific literature on countermeasures are available in a number of locations. [See note 18] This research leads to several conclusions. First, there is no credible scientific evidence that drugs or other countermeasures designed to affect the general state of the subject are effective against the CQT. [See note 19] However, studies have indicated that training in specific point countermeasures designed to increase responding to comparison questions is effective in producing a substantial number of false negative outcomes when used against both the comparison question and the concealed knowledge tests. [See note 20] Nevertheless, it is also important to note that training in the countermeasures appears critical to their effectiveness. Subjects who are only given the information are unable to achieve effects, [See note 21] and the required training is hopefully difficult to obtain.[See note 22] Honts and Perry note that while there are no easy answers to the problem of countermeasures, it appears that computerized analysis of the physiological records substantially reduces the false negative rate attributable to countermeasure use.[See note 23]

17. The popular notion that a "pathological," "psychopathic," or "criminally hardened" liar cannot be tested successfully with the polygraph has no basis in scientific fact. "Psychopathic" or "criminally hardened" liars, including those clinically diagnosed with Antisocial Personality Disorder, respond quite satisfactorily when attempting deception and are as easily detected in their deception as normal subjects. [See note 24]

18. Psychotic persons may not be suitable subjects for polygraphic testing, but only when they experience psychotic episodes, delusions or hallucinations during the examination. Then, the subject might sincerely believe such delusions to be fact. Persons psychotic to this degree would be recognized as such by any reasonable person.

19. There are no known traits of personality or personality disorders that would allow or predispose a deceptive person to pass a properly conducted polygraph examination.[See note 25]

20. The notion that the polygraph is generally accepted in the relevant scientific community as a valid test is supported by several sources of evidence. There have been two surveys of the Society for Psychophysiological Research that have directly attempted to address the general acceptance issue. [See note 26] The Society for Psychophysiological Research is a professional society of scientists (Ph.D. and M.D.) who study how the mind and body interact. Thus, the Society for Psychophysiological Research would seem to be the appropriate scientific community for assessing general acceptance. An initial survey was undertaken by the Gallup Organization in 1982. That survey was replicated and extended in 1994 in Susan Amato's Master's Thesis at the University of North Dakota. The results of those surveys were very consistent. Roughly two thirds of the Ph.D. and M.D. members of the Society for Psychophysiological Research who were surveyed stated that they felt that polygraph tests were a valuable diagnostic tool when considered with other available information, or that they were sufficiently reliable to be the sole determinant. [See note 27] When only those respondents who reported they were highly informed about the polygraph literature are considered, the percentage who report that polygraph tests are a useful diagnostic tool rises to 83%. Of those individuals who rated themselves as highly informed, fewer than 10% report being involved in conducting polygraph examinations professionally. Therefore, these results are not suspect on the grounds that the responses were skewed by the financial self-interest of the respondents. These results would seem to indicate that there is a great deal of acceptance of these techniques in the relevant scientific community. [See note 28]

21. A second and equally important indicator of the acceptance of the psychophysiological detection of deception in the scientific community is provided by the large number of original scientific studies published in peer-reviewed scientific journals. Studies reporting positive results for the validity of the polygraph have appeared in journals such as: The Journal of Applied Psychology, The Journal of General Psychology, Psychophysiology, The Journal of Police Science and Administration, Current Directions in Psychological Science, Psychological Bulletin, The Journal of Research in Personality, and Law and Human Behavior, to name but a few. To be published in any of these journals, the editor first sends an article out for review by two or three independent scientists who know the area but are not personally involved with the article under consideration. Those peer-reviewers comment on the quality of the literature review, the research design, the statistical analysis, the reasonableness of the conclusions drawn, and the appropriateness of the article for the respective journal. The Editor of the journal also reviews the article and, based on her or his evaluation and on the comments and recommendations of the reviewers, makes a decision about publication. Often revisions are required before publication. Articles with unacceptable scientific methods, statistics, or insupportable conclusions are not published. Articles which are not acceptable within the scientific discipline covered by the journal are not usually published. For example, the Journal of Applied Psychology rejects 85% of the manuscripts submitted to it for publication. Articles which report matters that are not acceptable psychological science do not usually make it through the peer review process and are not published in the Journal of Applied Psychology. The Journal of Applied Psychology has published numerous articles on the psychophysiological detection of deception. 
[See note 29] The publication of numerous articles in mainstream journals of scientific psychology gives a clear indication that the psychophysiological detection of deception is generally accepted by the community of scientific psychologists.

22. The increasing acceptance of the psychophysiological detection of deception is evidenced by the increasing number of scientific publications on the topic and the involvement of a larger number of psychological laboratories. In addition, a new peer-reviewed archival scientific journal devoted to the topic of credibility assessment began publication in early 1997. [See note 30]

23. There is an area of the science of Psychology and the Law that has addressed the impact of testimony concerning the outcome of polygraph examinations on juries.

24. I am familiar with the scientific literature concerning the impact of polygraph testimony on juries. I have published a scholarly peer-reviewed work[See note 31] that includes a review of this literature, and I have conducted original scientific research on the topic. The results of my research have been published in a peer-reviewed journal and have been presented at scientific meetings.

25. A number of studies have been conducted on this topic. [See note 32] This research has been conducted both as experimental work with mock juries and by conducting post-trial interviews with jury members who had been presented with polygraph testimony. This literature is consistent in showing that juries are not inclined to give extraordinary weight to polygraph evidence. The research provides strong evidence that juries are capable of weighing and evaluating all evidence. Moreover, they are also capable of rendering verdicts that may be inconsistent with polygraph results. In no case did research suggest that polygraph testimony strongly or overwhelmingly affected the jury decision making process.

26. Typical of this research is the study done by Cavoukian and Heslegrave. [See note 33] They report two experiments where cases were presented to mock juries either with or without polygraph evidence. Their mock jurors were asked to give ratings of their perceptions of the likelihood of the defendant's guilt and they were asked to render verdicts. In both experiments, in the absence of polygraph evidence, subjects tended to rate the defendant near the middle (uncertain) portion of the rating scale. This indicates that the evidence was relatively equivocal, the very type of case where polygraph evidence is likely to be offered. The addition of evidence that the defendant had passed a polygraph did shift subjects' ratings in the not guilty direction, but the effect was relatively small, shifting from a mean rating of about 3 to a mean rating of about 4 (7-point scale) in one experiment and from a mean rating of about 5 to a mean rating of about 6 (9-point scale) in the other experiment. Polygraph evidence had a significant effect on verdicts in one experiment, but polygraph testimony did not have a significant effect on verdicts in a second study. All effects of polygraph testimony were eliminated by the introduction of negative testimony by an opposing witness who testified that polygraph tests were only 80% accurate and that the results of polygraph tests should be viewed with skepticism. Cavoukian and Heslegrave concluded that concerns about blind acceptance and overwhelming impact of polygraph tests are unjustified. I concur.

27. Research conducted at the University of North Dakota by my graduate students and myself [See note 34] has replicated the findings of the research described in paragraphs 25 and 26, supra. In the context of a mock trial, we contrasted polygraph testimony with testimony concerning identification based on a blood test. Consistently, we found that jurors were more skeptical of polygraph testimony than they were of blood test testimony, even when the experts reported them to be of the same level of accuracy. There were no indications in any of the studies that polygraph evidence overwhelmed jurors or that they were unable to use and value evidence that ran contrary to the polygraph outcome.

28. My personal experience with giving testimony before juries has also indicated that juries are quite willing to discount polygraph testimony in favor of other testimony, and thus render verdicts contrary to the polygraph outcome.

29. I know of no data, published or unpublished, that support the notion that juries give undue weight to polygraph evidence, or that they are unable to evaluate and weigh polygraph evidence in the context of other testimony given at trial.

30. With regard to the specific case of Commonwealth v. Louise Woodward:

I have reviewed the questions and the physiological recordings produced during Dr. David C. Raskin's physiological detection of deception examination of the defendant, Louise Woodward. I have also examined Dr. Raskin's report of that examination.

During his examination, Dr. Raskin asked Ms. Woodward the following four relevant questions: On February 4, 1997, did you hit or strike Matthew on the head? On February 4, 1997, did you deliberately hit Matthew's head against a hard object or surface? Other than trying to arouse him on February 4, 1997, did you injure Matthew by forcibly shaking him? Did you deliberately inflict the injuries that resulted in Matthew's death? The report indicates that Ms. Woodward answered all four relevant questions in the negative.

Dr. Raskin used directed-lie comparison questions in his examination. I believe that this is the most valid technique currently available for conducting such tests.[See note 35]

Dr. Raskin used a state of the art digital instrument to record Ms. Woodward's physiological responses during the presentation of the questions. This instrument appeared to be working properly and should have produced highly accurate recordings of Ms. Woodward's physiological responses.

The techniques and scoring used by Dr. Raskin are representative of those used in a number of laboratory and field studies discussed elsewhere in this affidavit. As such, the validity data described in Table 3 of this affidavit should be the correct scientific reference for evaluating the validity of Dr. Raskin's examination of Ms. Woodward.

I numerically evaluated the physiological recordings produced in Dr. Raskin's examination of Ms. Woodward using the scoring methods developed and validated at the University of Utah. These are the same scoring methods taught at the Canadian Police College, and they are considered the standard for polygraph evaluation by law enforcement in Canada. This scoring system is very similar to the one used by most Federal law enforcement agencies in the United States.

My independent evaluation of Dr. Raskin's charts resulted in a total numerical score of +15. In this scoring system, total numerical scores of +6 or greater are considered to be sufficient for a decision of truth-telling to the relevant questions of the examination. My analysis produced positive scores of +3 or greater to each of the relevant questions. This pattern of results is consistent with a conclusion of truth telling to all of the relevant questions. Based on scientific research, the confidence in this independent evaluation is better than 90%.
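The decision rule just described can be summarized in a short sketch. The cutoffs of +6 (total) and +3 (per relevant question) are taken from the paragraph above; everything else, including the function name and the particular score values, is an illustrative assumption and not a statement of the official Utah scoring procedure:

```python
# Sketch of the numerical decision rule described above.
# A total score of +6 or greater supports a decision of truth-telling,
# and each relevant question is also expected to score +3 or greater.
# (Cutoffs for deception decisions are not described in this affidavit
# and are therefore omitted here.)

def truthful_decision(question_scores, total_cutoff=6, per_question_cutoff=3):
    """Return True when the per-question scores support a decision of
    truth-telling under the cutoffs described in the text."""
    total = sum(question_scores)
    return (total >= total_cutoff and
            all(s >= per_question_cutoff for s in question_scores))

# Hypothetical per-question scores consistent with the evaluation in the
# text: four relevant questions, total +15, each question +3 or greater.
scores = [4, 4, 4, 3]
print(sum(scores), truthful_decision(scores))  # 15 True
```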

It is my professional scientific opinion that Ms. Woodward was telling the truth when she answered the relevant questions in Dr. Raskin's polygraph examination of 7 May 1997. I hold this opinion to a reasonable degree of scientific certainty.

31. It is my belief that the above statements represent the state of the science on polygraph testing and I would so testify. I declare under the penalties of perjury that the foregoing is true and correct.

______________________________
Charles R. Honts


Endnotes

[Note 1]  

See the extensive discussion of the advantages of random assignment to conditions in T. D. Cook and D. T. Campbell, QUASI-EXPERIMENTATION: DESIGN AND ANALYSIS ISSUES FOR FIELD SETTINGS (1979).


[Note 2]  

Id.


[Note 3]  

See John C. Kircher, Steven W. Horowitz, & David C. Raskin, Meta-analysis of Mock Crime Studies of the Control Question Polygraph Technique, 12 LAW AND HUMAN BEHAVIOR 79 (1988). Three factors have been identified as contributing to the realism of laboratory research on the CQT. 1) Use of realistic subject populations. College student subjects have been associated with low accuracy rates, while more representative subject samples from prison populations and the community have been associated with higher accuracy rates. 2) Use of representative field examiners, techniques, and scoring methods. Those laboratory studies that have used field polygraph examiners, and field techniques for administering and scoring the examinations, have produced higher accuracy rates. 3) The use of incentives associated with the outcome of the examinations. Usually, subjects are paid money if they pass the examination, although other studies have used negative events associated with failing the test. Studies with explicit motivations associated with the outcome of the test have produced higher accuracy rates.


[Note 4]  

Supra Note 1 (Cook and Campbell).


[Note 5]  

The problems associated with field research in this area are discussed in detail by David C. Raskin, Polygraph Techniques for the Detection of Deception, in David C. Raskin (Ed.) PSYCHOLOGICAL METHODS IN CRIMINAL INVESTIGATION AND EVIDENCE, 276 (1989) at 264.


[Note 6]  

See the recent review by David C. Raskin, Charles R. Honts, and John C. Kircher, The Scientific Status of Research on Polygraph Techniques: The Case For Polygraph Tests, in MODERN SCIENTIFIC EVIDENCE: THE LAW AND SCIENCE OF EXPERT TESTIMONY, D. L. Faigman, D. Kaye, M. J. Saks, & J. Sanders (Eds.) (in press).


[Note 7]  

Id.


[Note 8]  

Avital Ginton et al., A Method for Evaluating the Use of the Polygraph in a Real-Life Situation, 67 J. APPLIED PSYCHOL. 131 (1982); Charles R. Honts et al., Mental and Physical Countermeasures Reduce the Accuracy of Polygraph Tests, 79 J. APPLIED PSYCHOL. 252 (1994); Horowitz et al., The Role of Comparison Questions in Physiological Detection of Deception, manuscript in press with PSYCHOPHYSIOLOGY (1996); John C. Kircher and David C. Raskin, Human Versus Computerized Evaluations of Polygraph Data in a Laboratory Setting, 73 J. APPLIED PSYCHOL. 291 (1988); John A. Podlesny & David C. Raskin, Effectiveness of Techniques and Physiological Measures in the Detection of Deception, 15 PSYCHOPHYSIOLOGY 344 (1978); John A. Podlesny & Connie M. Truslow, Validity of an Expanded-Issue (Modified General Question) Polygraph Technique in a Simulated Distributed-Crime-Roles Context, 78 J. APPLIED PSYCHOL. 788 (1993); David C. Raskin & Robert D. Hare, Psychopathy and Detection of Deception in a Prison Population, 15 PSYCHOPHYSIOLOGY 126 (1978); Louis I. Rovner, The Accuracy of Physiological Detection of Deception for Subjects with Prior Knowledge, 15 POLYGRAPH 1 (1986).


[Note 9]  

The results excluded the inconclusive outcomes as they are not decisions.


[Note 10]  

Supra note 6 (Raskin, Honts, & Kircher)


[Note 11]  

Charles R. Honts, Criterion Development and Validity of the Control Question Test in Field Application, 123 THE JOURNAL OF GENERAL PSYCHOLOGY 509 (1996); Charles R. Honts & David C. Raskin, A Field Study of the Directed Lie Control Question, 16 J. POLICE SCI. ADMIN. 56 (1988); Christopher J. Patrick & William G. Iacono, Validity of the Control Question Polygraph Test: The Problem of Sampling Bias, 76 J. APPLIED PSYCHOL. 229 (1991); David C. Raskin et al., A STUDY OF THE VALIDITY OF POLYGRAPH EXAMINATIONS IN CRIMINAL INVESTIGATIONS, Final Report to the National Institute of Justice, Grant Number 85-IJ-CX-0400, Department of Psychology, University of Utah, Salt Lake City (1988).


[Note 12]  

The results excluded inconclusive outcomes as they are not decisions.


[Note 13]  

See the discussion in Raskin et al., supra note 6, and in Honts, supra note 11, concerning the performance of original examiners in these studies. They note that the original examiners in the Patrick and Iacono study perform at a much higher level than the independent evaluators. This finding was not representative of the other three field studies. The original examiners in the Patrick and Iacono study, supra note 11, correctly classified 100% of the guilty and 90% of the innocent subjects. This performance is quite similar to the original examiners in the Honts (1996) field study, supra note 11, who were from the same law enforcement agency. Raskin et al., supra note 6, and Honts, supra note 11, have argued that the independent evaluator data from the Patrick and Iacono study should be viewed as an anomaly. If the Patrick and Iacono data are excluded, the field estimate of the accuracy of CQT decisions is 95.5%, Raskin et al., supra note 6.


[Note 14]  

Those two studies are: Benjamin Kleinmuntz and Julian J. Szucko, A Field Study of the Fallibility of Polygraphic Lie Detection, 308 NATURE 449 (1984), and Frank Horvath, The Effects of Selected Variables on Interpretation of Polygraph Records, 62 JOURNAL OF APPLIED PSYCHOLOGY 127 (1977). Neither of these studies meets the generally accepted requirements for useful field studies, but they are nevertheless frequently cited by critics of the CQT as evidence that the CQT is not accurate. The Kleinmuntz and Szucko study fails to meet the criteria for a useful field study because: The subjects were employees who were forced to take tests as part of their employment, not criminal suspects. The case selection method was not specified. The data were evaluated by students at a polygraph school that does not teach blind chart evaluation. Moreover, those students were given only one ninth of the usual amount of data collected in a polygraph examination and were forced to use a rating scale with which they were not familiar. The Horvath study also fails to meet the criteria for a useful study because: About half of the innocent subjects were victims of violent crime, not suspects. Virtually all of the false positive errors in that study were with innocent victims, not innocent suspects. In addition, the persons doing the blind evaluations were all trained at a polygraph school that does not teach blind chart evaluation. Finally, cases were not selected at random. Some cases were excluded from the study because of the nature of the charts. An interesting fact that critics almost never mention is that the decisions by the original examiners in the Horvath study were 100% correct. Also see the discussion in David C. Raskin, Methodological Issues in Estimating Polygraph Accuracy in Field Applications, 19 CANADIAN JOURNAL OF BEHAVIOURAL SCIENCE 389 (1987).


[Note 15]  

This notion, known as the Friendly Polygraph Examiner Hypothesis (FPEH), was discussed at length by Charles R. Honts & Mary V. Perry, Polygraph Admissibility: Changes and Challenges, 16, L. & HUM. BEHAV. 357 (1992) and was found to be without validity. The issue was recently revisited by Charles R. Honts, Is it time to reject the friendly polygraph examiner hypothesis (FPEH)?, a paper presented at the annual meeting of the American Psychological Society, Washington, D.C. (May 1997). The Honts analysis of the FPEH is as follows: The FPEH suggests that polygraph examinations conducted for the defense on a privileged and confidential basis are more likely to produce false negative outcomes than examinations in which subjects know that the examiner will report adverse outcomes. The FPEH assumes that if the subject expects that only a favorable outcome will be reported, the subject will have little at stake and will have no fear of the detection of deception. It is surmised that this lack of fear of the detection of deception will reduce the threat posed by the crime-relevant questions in the polygraph examination and that the guilty subject will therefore be more likely to pass. Two basic assumptions underpin the FPEH: (1) Fear of the detection of deception is necessary for the CQT to function. (2) There is no fear of the detection of deception (or other motivation) in a confidential polygraph examination.

First, there is no basis for assuming that fear of the detection of deception is necessary for the CQT to function. Physiological detection of deception has been demonstrated in numerous laboratory studies under no motivation, reward motivation, and punishment motivation, and even when the subjects did not know they were in a detection of deception situation. No differences between these motivational conditions have been reliably observed. Although fear may be sufficient for the detection of deception, it clearly is not necessary. Fear is not an important part of any modern theory of CQTs.

Even if fear were necessary for detection, it does not follow that a reduction in fear would allow a deceptive person to pass the test. The CQT requires differential reactivity between relevant and comparison questions. A reduction in fear would reduce the fear associated with both question types, thus maintaining the differential reactivity between the two. Since these tests are evaluated within subjects, and not against a normative standard, the effect of reducing the motivation level (fear) would be nil.

Finally, the FPEH's assumption that there is no fear (or any other motivation) in a confidential polygraph examination is unrealistic. The subject of a confidential polygraph in a criminal case has a clear motivation: the gain she or he will receive from passing the test. Clearly this is a more powerful motivation than the small monetary rewards used in most laboratory studies. Additionally, Honts presented data from both the laboratory and the field that refute the FPEH. See the Honts paper presented to the APS, Washington, D.C. (May 1997), supra.


[Note 16]  

This is especially true if the outlying data produced by the Patrick and Iacono study, supra note 11, are discounted.


[Note 17]  

See the discussion in Charles R. Honts & Mary V. Perry, Polygraph Admissibility: Changes and Challenges, 16, L. & HUM. BEHAV. 357 (1992), and Charles R. Honts & Bruce D. Quick, The polygraph in 1995: Progress in science and law, 71, NORTH DAKOTA LAW REVIEW (1995).


[Note 18]  

e.g., Charles R. Honts & Mary V. Perry, Polygraph Admissibility: Changes and Challenges, 16, L. & HUM. BEHAV. 357 (1992) at 373; Charles R. Honts, Interpreting research on polygraph countermeasures, 15, J. POLICE SCIENCE AND ADMINISTRATION 204 (1987); Charles R. Honts et al., Mental and physical countermeasures reduce the accuracy of polygraph tests, 79, JOURNAL OF APPLIED PSYCHOLOGY, 252 (1994); Raskin et al., supra note 6.


[Note 19]  

Id., Honts (1987); Id., Raskin et al.


[Note 20]  

See, e.g., Charles R. Honts, David C. Raskin, & John C. Kircher, Mental and physical countermeasures reduce the accuracy of polygraph tests, 79, JOURNAL OF APPLIED PSYCHOLOGY, 252 (1994).


[Note 21]  

Rovner (1986), supra note 8; also see Charles R. Honts, David C. Raskin, John C. Kircher, & Robert L. Hodes, Effects of spontaneous countermeasures on the physiological detection of deception, 16, JOURNAL OF POLICE SCIENCE AND ADMINISTRATION, 91 (1988).


[Note 22]  

Honts and Perry, supra note 17 at 376.


[Note 23]  

Id. at 374; also see Honts et al. (1994), supra note 18.


[Note 24]  

Numerous studies have addressed the question of whether psychopaths can beat the polygraph, e.g., Raskin and Hare, supra note 3; also see the analysis and review by Charles R. Honts, David C. Raskin, & John C. Kircher, Effects of socialization on the physiological detection of deception, 19, JOURNAL OF RESEARCH IN PERSONALITY, 373 (1985).


[Note 25]  

Id., Honts et al.; also see Charles R. Honts, David C. Raskin, & John C. Kircher, Individual differences and the physiological detection of deception, paper presented at the annual meeting of the Society for Psychophysiological Research, Montreal, Canada (October 1986).


[Note 26]  

The Gallup Organization, Survey of the members of the Society for Psychophysiological Research concerning their opinions of polygraph test interpretations, 13, POLYGRAPH, 153 (1984); Susan L. Amato, A SURVEY OF THE MEMBERS OF THE SOCIETY FOR PSYCHOPHYSIOLOGICAL RESEARCH REGARDING THE POLYGRAPH: OPINIONS AND IMPLICATIONS. Unpublished Master's Thesis, the University of North Dakota, Grand Forks (1993).


[Note 27]  

Respondents in both surveys answered the following question: Which one of these four statements best describes your own opinion of polygraph test interpretations by those who have received systematic training in the technique, when they are called upon to interpret whether a subject is or is not telling the truth? A) It is a sufficiently reliable method to be the sole determinant, B) It is a useful diagnostic tool when considered with other available information, C) It is of questionable usefulness, entitled to little weight against other available information, D) It is of no usefulness.


[Note 28]  

There has recently been a third survey of the members of the SPR. That survey was conducted by William Iacono and David Lykken of the University of Minnesota, The Scientific Status of Research on Polygraph Techniques: The Case For Polygraph Tests, in MODERN SCIENTIFIC EVIDENCE: THE LAW AND SCIENCE OF EXPERT TESTIMONY, D. L. Faigman, D. Kaye, M. J. Saks, & J. Sanders (Eds.) (in press). Drs. Iacono and Lykken are two of the most outspoken critics of polygraph testing. However, the present affiant believes that the Iacono and Lykken survey is so flawed and at this time so controversial that it cannot be used for any substantive purpose. Problems with the Iacono and Lykken study include: 1) The cover letter for the Iacono and Lykken survey sets the survey in the context of the legal admissibility of the polygraph in court, rather than the scientific validity of the technique. In effect, this asks the respondents to make a political and legal judgment rather than a scientific one. This is in clear contrast to the Amato and Honts survey, which was set in the context of whether or not the SPR should have a formal scientific policy regarding the validity of polygraph testing. The context of the Iacono and Lykken survey is clearly inappropriate since few, if any, of the members of the SPR have the legal background to make an admissibility assessment. 2) The respondents to the Iacono and Lykken survey describe themselves as very uninformed about the topic of polygraph examinations. When asked, "About how many empirical studies, literature reviews, commentaries, or presentations at scientific meetings dealing with the validity of the CQT have you read or attended?" the average respondent replied 2.6, with a standard deviation of 1.5. This means that 83% of the respondents had read or attended fewer than 4.1 papers or presentations on polygraph. Moreover, fewer than 2% of the respondents had read more than 5.6 articles.
Given the large number of scientific articles and presentations on this topic (the present affiant has either authored or co-authored over 100 such papers and presentations himself), these data provide a strong indication that the Iacono and Lykken sample was, as a whole, highly uninformed about the polygraph, and thus has little to offer in terms of informed opinion about its scientific validity. 3) There is one known anomaly in the Iacono and Lykken data analysis that makes it impossible to compare some of their results to the other surveys in any meaningful way. In determining their highly informed group, Iacono and Lykken cut the distribution at 4 and above on their 7-point scale. In forming their highly informed group, Amato and Honts cut the distribution at 5 and above. This difference in cutting scores makes it impossible to compare these results across the two surveys. Iacono and Lykken's choice of a cutting point almost certainly reduced the confidence estimate by their highly informed subjects. 4) In their chapter in the Faigman et al. book, id., Iacono and Lykken represent their survey as a random survey. However, it is my understanding that in a federal case involving a military court martial, Iacono admitted under cross-examination that the Iacono and Lykken survey was in fact not based on a random sample. Drs. Raskin, Honts, and Kircher were deliberately left out of the sampling frame and thus did not have an opportunity to review, respond, or be represented in the survey. 5) Because of the serious anomaly in the data analysis and the self-admitted misrepresentation of the survey in a publication intended for the legal profession, Dr. Amato and I became very concerned that there might be other undisclosed problems with the Iacono and Lykken survey. Under the ethical standards of the American Psychological Association, scientists are required to make their data available for reanalysis by qualified scientists.
On March 10, 1997, and again on April 29, 1997, Dr. Amato and I wrote to, first Dr. Iacono, and then to Dr. Lykken requesting the data from their survey for the purpose of reanalysis. To this date, they have not responded. At this time Dr. Amato and I are preparing to take formal action to obtain those data for reanalysis. Until such a reanalysis can be performed, it is my opinion that the Iacono and Lykken survey data cannot be relied upon for any substantive purpose.
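The percentile figures cited in point 2 above rest on simple one- and two-standard-deviation bounds around the reported mean (2.6 + 1.5 = 4.1; 2.6 + 2 × 1.5 = 5.6). A brief sketch of that arithmetic, under a normal approximation added here purely for illustration (the survey itself reports only the mean and standard deviation, not the shape of the distribution):

```python
from statistics import NormalDist

# Self-reported familiarity in the Iacono and Lykken survey:
# mean of 2.6 papers/presentations, standard deviation of 1.5.
# Treating the scores as approximately normal (an illustrative
# assumption, not stated in the survey):
scores = NormalDist(mu=2.6, sigma=1.5)

# Fraction of respondents below one SD above the mean (2.6 + 1.5 = 4.1)
below_one_sd = scores.cdf(4.1)

# Fraction of respondents above two SDs above the mean (2.6 + 3.0 = 5.6)
above_two_sd = 1 - scores.cdf(5.6)

print(f"read fewer than 4.1 papers: {below_one_sd:.1%}")  # ~84.1%
print(f"read more than 5.6 papers:  {above_two_sd:.1%}")  # ~2.3%
```

Under this approximation roughly 84% of respondents fall below 4.1 papers and roughly 2% fall above 5.6, consistent with the figures cited above.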


[Note 29]  

Some of the articles on the polygraph published in the Journal of Applied Psychology are as follows: P. J. Bersh, A validation study of polygraph examiner judgments, Journal of Applied Psychology, 53, 399 (1969); P. O. Davidson, Validity of the guilty knowledge technique: The effects of motivation, Journal of Applied Psychology, 52, 62-65 (1968); E. Elaad, Detection of guilty knowledge in real-life criminal investigations, Journal of Applied Psychology, 75, 521-529 (1990); E. Elaad, A. Ginton & N. Jungman, Detection measures in real-life criminal guilty knowledge tests, Journal of Applied Psychology, 77, 757-767 (1992); A. Ginton, D. Netzer, E. Elaad & G. Ben-Shakhar, A method for evaluating the use of the polygraph in a real-life situation, Journal of Applied Psychology, 67, 131-137 (1982); C. R. Honts, R. L. Hodes, & D. C. Raskin, Effects of physical countermeasures on the physiological detection of deception, Journal of Applied Psychology, 70, 177-187 (1985); C. R. Honts, D. C. Raskin, & J. C. Kircher, Mental and physical countermeasures reduce the accuracy of polygraph tests, Journal of Applied Psychology, 79, 252-259 (1994); F. S. Horvath, The effect of selected variables on interpretation of polygraph records, Journal of Applied Psychology, 62, 127-136 (1977); J. C. Kircher & D. C. Raskin, Human versus computerized evaluations of polygraph data in a laboratory setting, Journal of Applied Psychology, 73, 291-302 (1988); C. J. Patrick & W. G. Iacono, Validity of the control question polygraph test: The problem of sampling bias, Journal of Applied Psychology, 76, 229-238 (1991); J. A. Podlesny & C. Truslow, Validity of an expanded-issue (Modified General Question) polygraph technique in a simulated distributed-crime-roles context, Journal of Applied Psychology, 78, 788-797 (1993).


[Note 30]  

The Journal of Credibility Assessment and Witness Psychology published its first issue on 7 February 1997. One of the main topics identified in this journal's charter was the psychophysiological detection of deception.


[Note 31]  

C. R. Honts, & M. V. Perry, Polygraph Admissibility: Changes and Challenges, 16 L. & Hum. Behav. 357 (1992).


[Note 32]  

N. J. Brekke, P. J. Enko, G. Clavet, & E. Seelau, The Impact of Nonadversarial Versus Adversarial Expert Testimony, 15, L. & Hum. Behav. 451 (1991); S. C. Carlson, M. S. Pasano & J. A. Januzzo, The Effect of Lie Detector Evidence on Jury Deliberations: An Empirical Study, 5, J. Police Sci. & Admin. 148 (1977); A. Cavoukian & R. J. Heslegrave, The admissibility of polygraph evidence in court: Some empirical findings, 4, L. & Hum. Behav. 117 (1979); A. Markwart & B. E. Lynch, The Effect of Polygraph Evidence on Mock Jury Decision-Making, 7, J. Police Sci. & Admin. 324 (1979).


[Note 33]  

Id.


[Note 34]  

L. Vondergeest, C. R. Honts, & M. K. Devitt, Effects of Juror and Expert Witness Gender on Jurors' Perceptions of an Expert Witness, MODERN PSYCHOLOGICAL STUDIES, 1 (1993); M. K. Devitt, C. R. Honts, & B. Gillund, Stealing thunder does not ameliorate the effects of the hired gun cross-examination tactic, paper presented at the annual meeting of the American Association of Applied and Preventive Psychology, Chicago (1993); C. R. Honts, M. K. Devitt, & S. Amato, Explanatory style predicts perceptions of expert witness believability, paper presented at the annual meeting of the American Association of Applied and Preventive Psychology, Chicago (1993); C. R. Honts & M. K. Devitt, The hired gun cross-examination tactic reduced mock jurors' perception of expert witness' credibility, paper presented at the biennial meeting of the American Psychology-Law Society/Division 41, San Diego, CA (1992).


[Note 35]  

See the review in: Charles R. Honts, Psychophysiological detection of deception, 3, CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE, 77 (1994).

