The Journal of Credibility Assessment and Witness Psychology
1997, Vol. 1, No. 1, 9-32
Published by the Department of Psychology of Boise State University
Truth or Just Bias: The Treatment of the Psychophysiological Detection of Deception in Introductory Psychology Textbooks
Mary K. Devitt, Oklahoma State University
Charles R. Honts, Boise State University
and Lynelle Vondergeest, University of North Dakota
This article was edited by J. Peter Rosenfeld of Northwestern University.
Copyright 1997 by the Department of Psychology of Boise State University and the Author. Permission for non-profit electronic dissemination of this article is granted. Reproduction in hardcopy/print format for educational purposes or by non-profit organizations such as libraries and schools is permitted. Any modification of this document is expressly forbidden. For all other uses of this article, advance written notice is required. Send inquiries by hardcopy to: Charles R. Honts, Ph. D., Editor, The Journal of Credibility Assessment and Witness Psychology, Department of Psychology, Boise State University, 1910 University Drive, Boise, Idaho 83725, USA.
Abstract. This study examined the presentation of psychophysiological detection of deception (PDD; polygraph) testing in introductory psychology textbooks. We examined a sample of 37 introductory psychology textbooks published between 1987 and 1994 for content that discussed PDD testing. Excerpts concerning PDD were then checked for misdescriptions or inaccuracies and rated by two psychophysiologists and a social psychologist. The results showed that PDD received strongly negative treatment in the texts. Moreover, the treatments were often fraught with misdescriptions and inaccuracies. In addition, there was an over-reliance on reviews as opposed to empirical studies. We discuss the significance of the problems of bias, reliance on secondary sources, and inaccuracies, and elaborate on the importance of balanced and error-free presentations in this medium, which serves as a first introduction to the science of psychology for so many people.
Previous content analyses of Introductory Psychology textbooks have been conducted in areas such as the treatment of counseling versus clinical psychology (Leong & Poynter, 1991), transactional analysis (Douglass, 1990), humanistic psychology (Churchill, 1988), sensory deprivation research (Suedfeld & Coren, 1989), religion (Lehr & Spilka, 1989), parapsychology (Roig, Icochea, & Cuzzucoli, 1991), the number of neurons in the brain (Soper & Rosenthal, 1988), the Little Albert legend (Paul & Blumenthal, 1989), the Yerkes-Dodson Law (Winton, 1987), the utility of idealized figures (Shepard, 1983), and racial diversity (Gay, 1988). Those studies have illustrated that misdescriptions, inaccuracies, theoretical biases, ambiguity, lack of objectivity, or lack of assimilation may be present in Introductory Psychology material. As a result, it appears that college students are not being well served when controversial material is inadequately and incompletely presented.
In the present study we address another controversial area that is frequently covered in introductory textbooks, that is, the psychophysiological detection of deception (PDD). Psychophysiological detection of deception tests (also known as polygraph or lie detector tests) are psychological tests that are an important application of psychology in the real world (Honts, 1994a). In the United States and Canada, virtually all federal and local law enforcement agencies employ polygraph examiners who conduct investigative examinations with criminal suspects. The results of such tests often remove individuals from suspicion or result in confessions of wrongdoing following interrogations (Honts & Perry, 1992; Lykken, 1981; Raskin, 1986). Polygraph testing also finds application in the workplace (Honts, 1991). Although many screening uses of polygraph testing in the private sector were prohibited in 1988 (Employee Polygraph Protection Act), employers may still use polygraph tests to investigate specific losses, and several industries were exempted from the screening ban. Moreover, polygraph tests for pre-employment screening are widely used by federal, state, and local governments. Polygraph pre-employment screening of police officer applicants is particularly pervasive. Finally, polygraph testing plays a critical role in personnel selection and the security clearance process in the national security agencies (Department of Defense, 1991; Honts, 1991; 1994a). All employees of the National Security Agency and the Central Intelligence Agency must take and pass polygraph tests to obtain and retain their security clearances. There are proposals to expand greatly the numbers of individuals subject to such clearance testing (Department of Defense, 1991). Although the numbers of tests conducted in the national security system may be relatively small in absolute terms (i.e., in the tens of thousands), in terms of the special trust and power placed in the hands of those who conduct PDD examinations, the importance of such tests can hardly be overstated (Honts, 1994a). It thus seems important that Introductory Psychology textbooks present a fair and unbiased picture of this important area of applied psychology.
Materials. The database for this study consisted of an exhaustive sample of the 37 Introductory Psychology textbooks offered to the psychology faculty of a medium-sized midwestern university during the academic year 1993/94. When multiple editions of a textbook were available, only the most recent edition was used in this analysis.
Procedure. Each of the textbooks was searched for references to lie detection, polygraph, or detection of deception testing. If a textbook contained references to PDD testing, the words and number of pages devoted to the topic were counted. The textbook sections were then rated on a 7-point scale regarding their orientation toward PDD testing (1 = negative, 4 = neutral, 7 = positive). Three different individuals rated the orientation of all of the textbook excerpts. The first rater was a psychophysiologist (the second author of the present manuscript) who was highly familiar with the polygraph testing literature and who has testified as an expert on polygraph examinations in a number of courts of law in the United States and Canada. The second was an assistant professor of psychology (a colleague of the first author) who was trained as a psychophysiologist and who was not involved in polygraph research or in the polygraph controversy in any way. The third evaluator was a social psychologist (an associate professor and a colleague of the second author) who has not been involved in the polygraph controversy in any way, but who frequently teaches large Introductory Psychology classes. The three evaluations were conducted independently of one another.
In addition, reference citations were recorded. The reference citations present in each textbook were counted, examined, and classified as to their orientation (either positive or negative) toward polygraph testing. The reference citations were also classified as either laboratory or field studies, or reviews. When research or reviews were cited, the descriptions of empirical research and reviews were examined for factual errors or misdescriptions. Also recorded were the types of polygraph usage (forensic testing, investigative testing, on-the-job screening, pre-employment screening, or national security screening) discussed in each textbook. The types of polygraph tests (Control Question Test, Concealed Knowledge Test, or Relevant-Irrelevant Test) mentioned were also recorded.
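For concreteness, the sketch below shows one way a per-textbook coding record of the kind described above could be represented. The field names and example values are ours and are purely illustrative; they are not taken from the coding materials actually used in this study.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Citation:
    source: str       # e.g., "Lykken (1981)"
    orientation: str  # "positive" or "negative" toward PDD
    kind: str         # "review", "laboratory", or "field"

@dataclass
class TextbookRecord:
    title: str
    words_on_pdd: int      # words devoted to PDD
    pages_on_pdd: float    # pages devoted to PDD
    ratings: List[int] = field(default_factory=list)         # 1-7 scale, one per rater
    citations: List[Citation] = field(default_factory=list)
    uses_discussed: List[str] = field(default_factory=list)  # e.g., "pre-employment screening"
    test_types: List[str] = field(default_factory=list)      # "CQT", "CKT", "RI"

# A hypothetical record (all values invented for illustration):
example = TextbookRecord(
    title="Hypothetical Introductory Text",
    words_on_pdd=600,
    pages_on_pdd=1.5,
    ratings=[2, 3, 4],
    citations=[Citation("Lykken (1981)", "negative", "review")],
    uses_discussed=["forensic testing", "pre-employment screening"],
    test_types=["CQT"],
)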
General Statistics. The data collected in this study are presented and summarized in Table 1. The mean textbook length was 656.31 pages; for only those books that discussed polygraph testing, the mean length was 655.9 pages. The mean number of pages devoted to a discussion of polygraph testing was 1.5. Twenty-nine of the textbooks (78.4%) included some discussion of polygraph testing, and only 11 (29.7% of the full sample) described empirical research.
Table 1. Analysis of the Presentation of Polygraph Testing in 37 Introductory Psychology Texts

Texts citing both empirical studies and reviews: Atkinson, Doyle, Dworetzky, Feldman, Huffman, Kalat, Lefton, Santrock, Wade, Wood, Worchel

Texts citing only reviews: Baron, Bernstein, Bootzin, Carlson, Gleitman, Laird, Meyers, Peterson, Pettijohn, Rubin, Smith, Weiten, Weiten (briefer version), Wortman

Texts with no citations: Crooks, Darley, Roediger, Shaver

Texts in which PDD was not discussed: Benjamin, Bourne, Gerow, Goldstein, Gray, McConnell, Ornstein, Zimbardo
Ratings. The mean ratings of the textbook excerpts were as follows: Polygraph-Expert/Psychophysiologist, M = 2.24, sd = 0.87; Independent Psychophysiologist, M = 2.55, sd = 1.01; Social Psychologist, M = 3.79, sd = 0.86. A repeated measures Analysis of Variance (ANOVA) was used to test for differences among the raters. This analysis revealed a significant difference between the means, F(2, 27) = 40.51, p < .001. This analysis was followed up with single degree of freedom tests. The Bonferroni-corrected alpha was calculated by dividing alpha (.05) by the number of post-hoc comparisons (3), for a corrected alpha of .017. The univariate tests indicated that the ratings by the two psychophysiologists were not significantly different, F(1, 28) = 1.95, p > .1, but that the ratings of the Polygraph-Expert Psychophysiologist and the Social Psychologist were significantly different, F(1, 28) = 71.95, p < .001, as were the ratings of the Independent Psychophysiologist and the Social Psychologist, F(1, 28) = 37.57, p < .001. Interestingly, neither of the psychophysiologists rated a single excerpt as positive. The average rating for the three evaluators is shown in Table 1.
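As a rough illustration of the follow-up logic (not the study's data or code), the sketch below applies the Bonferroni-corrected alpha of .05/3 to the three pairwise comparisons using paired t-tests; with 29 excerpts, a paired t-test is equivalent to the single degree of freedom F test reported above, since F(1, 28) = t(28) squared. The ratings generated here are random placeholders, not the ratings collected in this study.

import numpy as np
from scipy import stats

# Placeholder ratings for illustration only: 29 excerpts rated by three
# raters on the 1-7 scale. These are random values, not the study's data.
rng = np.random.default_rng(0)
expert = np.clip(np.round(rng.normal(2.2, 0.9, 29)), 1, 7)
independent = np.clip(np.round(rng.normal(2.6, 1.0, 29)), 1, 7)
social = np.clip(np.round(rng.normal(3.8, 0.9, 29)), 1, 7)

# Bonferroni-corrected alpha for the three post-hoc comparisons.
alpha = 0.05 / 3  # = .017, as described in the text

pairs = {
    "expert vs. independent psychophysiologist": (expert, independent),
    "expert vs. social psychologist": (expert, social),
    "independent psychophysiologist vs. social psychologist": (independent, social),
}
for label, (a, b) in pairs.items():
    t, p = stats.ttest_rel(a, b)  # paired comparison; F(1, 28) = t(28)**2
    print(f"{label}: t(28) = {t:.2f}, p = {p:.4f}, "
          f"significant at Bonferroni-corrected alpha: {p < alpha}")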
Citations. Only four (16%) of the textbooks provided any positive citations, and those textbooks cited only review articles. The ratio of negative citations to positive citations was over 15 to 1 (4.28/.28). To determine if differences in orientation, number, and type of citations existed between the textbooks that discussed both empirical studies and reviews (Mixed group) and those textbooks that discussed reviews only (Review group), t-tests for independent samples were conducted. There was a significant difference in orientation for the discussion type, t(23) = 3.23, p = .003, with the Mixed group providing a more negative discussion (M = 1.64) than the Review group (M = 2.64). Also noted was a significant difference in total number of citations provided by each group, t(23) = 3.28, p = .003. The Mixed group provided more citations (M = 6.46) than the Review group (M = 3.0). Finally, there was a significant difference in the number of negative citations t(23) = 3.96, p = .001, with the Mixed group presenting more negative citations (M = 6.46) than the Review group (M = 2.57).
The frequency of various citations was also examined. The most frequently cited (14 times) review was the popular book by Lykken (1981). The most commonly mentioned (6 times) empirical field study was one by Kleinmuntz and Szucko (1984). Finally, the most commonly cited (2 times each) laboratory validity studies were the studies by Honts, Hodes, and Raskin (1985; concerning countermeasures) and by Szucko and Kleinmuntz (1981; concerning validity). Overall, 64 different citations were noted. Fifty (78.1%) of those citations were for reviews, nine (14.1%) were empirical laboratory studies, and five (7.8%) were empirical field studies. Fifteen reviews, two laboratory studies, and three field studies were cited more than one time each. Furthermore, over all of the textbook excerpts there were 113 citations (i.e., some of the 64 separate citations were cited in more than one textbook). The most frequently cited author was David Lykken with a total of 29 citations for eight different publications. At least one of Lykken's works was discussed in 19 of the textbooks.
The types and uses of polygraph testing discussed in the excerpts were also assessed. Those results are presented in Table 2. Overall, 23 of the textbooks discussed some specific use or type of polygraph tests. In those texts that discussed types of polygraph testing (Control Question Test [CQT], Concealed Knowledge Test [CKT], and Relevant/Irrelevant [RI]), 17 (74%) mentioned only one test type. The other six textbooks mentioned two types of polygraph tests. No textbooks discussed more than two types of polygraph tests. Ten textbooks provided a discussion of the RI test. The CQT and the CKT were each discussed in nine textbooks. The uses of polygraph tests that were assessed included forensic testing, investigative testing, on-the-job screening, pre-employment screening, and national security screening. Overall, 23 (62.1%) of the textbooks included some mention of at least one of the uses of polygraph tests, although only the textbooks with citations (reviews and/or empirical research) discussed those uses. Pre-employment screening was discussed most often (17 times), followed by forensic and investigative testing (14 times each). On-the-job screening was mentioned 13 times, and national security screening was discussed in eight textbooks. Only one textbook discussed all of the polygraph uses. Thirteen textbooks discussed three or four of the uses, while nine textbooks discussed either one or two of the possible uses.
Table 2. Percent of Textbooks That Provided a Discussion of the Uses and Types of Polygraph Tests

Topics (Uses/Types) assessed: Forensic, Investigative, On-The-Job Screening, Pre-employment Screening, National Security Screening, Control Question Test, Concealed Information Test, Relevant-Irrelevant Test
Finally, the discussions of polygraph testing were examined for factual errors in the reported research. Overall, 25 textbooks provided discussions with research or review citations. Factual errors or misdescriptions were noted in 18 (72%) of those textbooks (e.g., Feldman [1993], in describing a countermeasure study by Honts, Hodes, and Raskin [1985], stated that subjects in that study had used a tack in the shoe as a countermeasure. No such manipulation was included in that study.). Details of the errors and misdescriptions in the excerpts are provided in Appendix A at the end of this article.
Our analysis of the treatment of PDD in Introductory Psychology textbooks indicates that most textbooks present a negative view of the area. If the majority of research concerning PDD indicated poor validity, this view would clearly be justified. The question thus becomes what does the empirical literature have to say about the validity of PDD tests?
Despite their widespread application, polygraph tests have been, and continue to be, the source of great controversy in the scientific literature. Of the three techniques discussed in this paper, there seems to be general agreement in the scientific literature that the Relevant-Irrelevant Test lacks validity (Ben-Shakhar & Furedy, 1990; Honts, 1991; Iacono & Patrick, 1988; Kleinmuntz & Szucko, 1982; Lykken 1981; Raskin, 1986; Saxe, Dougherty, & Cross, 1985). However, this may be a limited finding as the RI is used very infrequently in forensic settings and its applied uses seem to be limited to employment settings (Honts, 1991). If authors intend that their comments be directed to the use of the RI in employment settings they should state this clearly, as such incontrovertible agreement is noticeably lacking for the other two techniques.
The most commonly used test in the field is the Control Question Test, and we will focus most of our analysis on validity studies of the CQT. The third technique, the Concealed Knowledge Test, has been studied extensively in the laboratory but has not achieved much application in the field. In the following section, we also review the empirical literature on the CKT.
The subsequent review also focuses on forensic applications of the polygraph. There is virtually no empirical scientific literature on the validity of PDD tests in employment settings, and thus there is nothing to review (Honts, 1991). Similarly, there is little empirical literature on the national security uses of the polygraph. However, what literature there is on the national security uses consistently produces near-chance estimates of validity (Barland, Honts, & Barger, 1989; Honts, 1991; 1992; 1994a). We found no references to any of these sources in the Introductory Psychology textbooks.
Laboratory Studies Concerning Forensic Settings. A recent meta-analysis of 15 laboratory studies (Kircher, Horowitz, & Raskin, 1988) of the Control Question Test indicated a wide range of validity estimates. One study found near-chance results, six of the studies produced moderate validity estimates, and eight of the studies reported validity coefficients of 0.7 or better. In four of the studies, the validity coefficients exceeded 0.8. The Kircher et al. meta-analysis noted that these laboratory studies differed widely in their ecological validity. Some studies used mock crimes and procedures that closely modeled field conditions, while other studies were very artificial and used unrealistic procedures. Moreover, the Kircher et al. meta-analysis indicated that those laboratory studies that most closely modeled field conditions produced the highest accuracy rates. A similar state of affairs appears to exist in the Concealed Knowledge Test literature. A more recent review (Honts & Quick, 1995) of the most ecologically valid laboratory studies of both the CQT and the CKT produced overall estimates of accuracy of about 90% and approximately equal false positive and false negative error rates.
Regardless of their methodology, some (e.g., Ben-Shakhar & Furedy, 1990; Lykken, 1981) have criticized all laboratory studies on the grounds that they lack ecological validity. These critics contend that it is not possible in the laboratory to mimic adequately the motivational and emotional context of being given a polygraph test when you are accused of a crime. Others have argued that if sufficient care is taken in creating a deceptive context in the laboratory, then laboratory studies can be useful in estimating the accuracy of the technique in the field (e.g., Podlesny & Raskin, 1978; Kircher et al., 1988).
The Kircher et al. (1988) review and meta-analysis should have been easily available to all of the authors of the Introductory Psychology textbooks considered in this analysis. It appeared in a first tier psychology journal (Law and Human Behavior, published by APA Division 41) and is abstracted in all of the popular reference sources. We believe that it is telling that the laboratory study cited most frequently for estimates of validity is the Szucko and Kleinmuntz (1981; American Psychologist) study, which produced the lowest estimate of accuracy (detection efficiency r = 0.21; the next lowest study in the Kircher et al. meta-analysis produced an r of .51, accounting for six times the criterion variance). Conspicuously absent from the textbook excerpts were references to equally available publications in first tier journals that produce high estimates for the validity of the Control Question Test (e.g., Podlesny & Raskin, 1978, Psychophysiology; Ginton, Netzer, Elaad, & Ben-Shakhar, 1982, Journal of Applied Psychology; Kircher & Raskin, 1988, Journal of Applied Psychology; Dawson, 1981, Psychophysiology; Raskin & Hare, 1978, Psychophysiology). At a minimum, each of the studies cited above accounted for roughly 10 times the criterion variance of Szucko and Kleinmuntz (the validity coefficient for Szucko and Kleinmuntz was .21, while the validity coefficients for the cited studies ranged from .65 for Ginton et al. to .87 for Raskin and Hare). One is left with the inescapable conclusion that either the introductory psychology textbook authors gave only a cursory review to the laboratory data on the polygraph or they were biased in their choice of studies to cite.
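The criterion variance comparisons above follow directly from squaring the validity coefficients (the proportion of criterion variance accounted for is r squared). The short illustrative computation below, written by us using only the coefficients quoted in the text, approximately reproduces the stated ratios; it is not part of the original analysis.

# Proportion of criterion variance accounted for is the squared validity
# coefficient (r**2). The coefficients are those quoted in the text.
r_szucko = 0.21        # Szucko & Kleinmuntz (1981)
comparisons = {
    "next lowest study in Kircher et al. (1988)": 0.51,
    "Ginton et al. (1982)": 0.65,
    "Raskin & Hare (1978)": 0.87,
}
baseline = r_szucko ** 2    # about 0.044
for label, r in comparisons.items():
    ratio = (r ** 2) / baseline
    print(f"{label}: r^2 = {r**2:.3f}, about {ratio:.1f} times the "
          f"criterion variance of r = .21")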
Ben-Shakhar and Furedy (1990) provide a review of the laboratory studies of the Concealed Knowledge Test. At that time they found ten laboratory studies of the CKT that they felt were scientifically sound enough to include in their review (Balloun & Holmes, 1979; Bradley & Ainsworth, 1984; Bradley & Warfield, 1984; Davidson, 1968; Giesen & Rollison, 1980; Lykken, 1959; Podlesny & Raskin, 1978; Steller, Haenert, & Eiselt, 1987; Stern, Breen, Watanabe, & Perry, 1981; Waid, Orne, Cook, & Orne, 1978). However, no meta-analysis or quantitative analysis of the quality of these studies was reported. Over all ten studies, the accuracy with guilty subjects ranged from 61.1% (Balloun & Holmes, 1979) to 100% (Bradley & Ainsworth, 1984; Bradley & Warfield, 1984). Accuracy with innocent subjects ranged from 80.6% (Waid et al., 1978) to 100% in seven of the studies (Bradley & Ainsworth, 1984; Bradley & Warfield, 1984; Davidson, 1968; Giesen & Rollison, 1980; Lykken, 1959; Podlesny & Raskin, 1978; Steller et al., 1987). Only one of these studies was cited in a single textbook: Bradley and Ainsworth (1984), one of the two studies indicating 100% accuracy with both innocent and guilty subjects.
Field Studies Concerning Forensic Settings. In any event, laboratory studies cannot tell the complete story. Data from real world settings are necessary to complement and extend the results from the laboratory. Unfortunately, validity estimates based on field studies are also mixed and highly debated. Much of the debate regarding field studies concerns the issue of what constitutes adequate methodology. There seems to be an emerging consensus among both proponents (e.g., Honts & Perry, 1992) and critics (e.g., Patrick & Iacono, 1991) that the following are the necessary minimum requirements for field studies of PDD: First, the subjects must represent the population for generalization. If one is interested in studying criminal suspects, then the subjects should be criminal suspects. Second, the cases used in the study should be selected by some random process without reference to the accuracy of the original examiner's decision or to the quality of the physiological data. Third, the decisions used for the data analysis should be based on independent evaluations of only the physiological data. Information about the case facts and the overt behavior of the subjects should be withheld from the evaluators. (This criterion holds only if the goal of the study is to determine the ability of the physiological data to discriminate the innocent and guilty. If the goal of the study is to determine the utility of the procedure for some applied goal, admissibility in court for example, the data from the original examiners may be more valuable; see Honts & Quick, 1995.) Fourth, the independent evaluators should be experienced in the independent evaluation of PDD data and they should use techniques that are representative of those actually used in the field. Finally, the truthfulness of the subjects must be confirmed by some criterion that is independent of the outcome of the polygraph examination. Confessions, although problematic, are generally considered to be the best criterion, especially if they are supported by corroborating evidence.
A recent review (Honts & Quick, 1995) found four field studies of the CQT (Honts, 1994b, now in press; Honts & Raskin, 1988; Patrick & Iacono, 1991; and Raskin, Kircher, Honts, & Horowitz, 1988) and two of the CKT (Elaad, 1990; Elaad, Ginton, & Jungman, 1992) that were able to meet the stringent requirements for a useful field study described above. Three of the CQT field studies (Honts, 1994b; Honts & Raskin, 1988; Raskin et al., 1988) produced accuracy rates above 90%. The independent evaluators in the remaining study (Patrick & Iacono, 1991) produced a high false positive rate, although the accuracy rate of the original examiners exceeded 90%.
Recently, Patrick and Iacono (1991) suggested that retrospective field studies may not be useful for estimating the accuracy of polygraph tests because of sampling biases built into the design of such studies. Their position is based on a theoretical analysis and an earlier thought experiment (Iacono, 1991). Fortunately, there are no compelling data to support their analysis, and many of the assumptions of that analysis are insupportable (e.g., that if a guilty person passes a polygraph test there will be no further investigation of that suspect, and that confessions are obtained only following failed polygraph tests). If these assumptions are altered or are invalid, then very different conclusions can be suggested (Raskin, Honts, & Kircher, in press). Moreover, recent work contradicts their position (Honts, in press) and indicates that confession results are very comparable with results based on other criteria.
Unfortunately, of those field studies only Patrick and Iacono (1991) would have been readily available to the authors of the Introductory Psychology textbooks considered here, and it would have appeared in print only as most of these texts were nearing completion. It is not fair to expect that the authors of Introductory Psychology textbooks should know about unpublished reports in an applied area. However, there were a number of other field studies that were available to these authors at the time these books were written. All of those studies were reviewed in a study commissioned by the United States Congress and conducted by the Office of Technology Assessment (OTA, 1983). The OTA report was subsequently summarized in the American Psychologist (Saxe, Dougherty, & Cross, 1985). OTA concluded that there were ten field studies of the Control Question Test that met minimal scientific standards (although none would unambiguously meet all of the criteria described above [Barland & Raskin, 1976; Bersh, 1969; Davidson, 1979; Horvath, 1977; Horvath & Reid, 1971; Hunter & Ash, 1973; Kleinmuntz & Szucko, 1982; Raskin, 1976; Slowik & Buckley, 1975; Wicklander & Hunter, 1975]). Over these ten studies, the average accuracy with guilty subjects was 90% and the average accuracy with innocent subjects was 80%. In those eight studies that used a confession criterion, the accuracy of decisions with guilty subjects ranged from 98.6% (Wicklander & Hunter, 1975) to 75% (Kleinmuntz & Szucko, 1982). With innocent subjects the accuracy rates ranged from 100% (Davidson, 1979) to 51.1% (Horvath, 1977).
At present there are only two published field studies of the CKT. Both of those studies would meet the criteria described above for a useful field study of the detection of deception. The two studies were reported by Elaad and his colleagues (Elaad, 1990; Elaad, Ginton, & Jungman, 1992). The average accuracy rate for guilty subjects in those studies was 47% while the average accuracy with innocent subjects was 98%. These results suggest that in the field the CKT produces extremely high numbers of false negative errors. This finding has been discussed in the light of what we know about eyewitness memory, and may not be surprising (see the discussion in Raskin et al., in press).
Thus, like the laboratory studies, the high quality field studies also seem to paint a relatively positive picture of the accuracy of the CQT, although one could argue that the literature is mixed in both venues. The picture for the CKT is clearer: both the laboratory and the field studies indicate that the CKT is prone to false negative errors, and in field settings the false negative rate may be extreme.
Attitudes of the Scientific Community Toward PDD. Another index of the scientific community's view of PDD testing can be found in surveys. The members of the Society for Psychophysiological Research (SPR) were polled on this topic by The Gallup Organization (1984). At that time, 63% of the respondents said that they believed polygraph tests were useful diagnostic tools when used with other available information, while only 1% of the respondents stated a belief that polygraph tests were without value. More recently, the members of SPR were again surveyed about their attitudes toward polygraph testing (Amato & Honts, 1994). The results of the Amato and Honts study showed that 60.2% of the respondents believed PDD tests were useful diagnostic tools when used with other available information. Moreover, 80.5% of the respondents who claimed to be familiar with the PDD literature believed that polygraph tests were useful diagnostic tools. Only 1.7% of the respondents stated that polygraph tests were without merit.
Although there is controversy, the empirical and review literature concerning PDD suggests the following conclusions: There is little support for the Relevant-Irrelevant Test, but this test is in frequent use only in employment settings. The laboratory and field data concerning the Control Question Test are mixed. However, when the ecologically valid laboratory studies and the high quality field studies are considered, both indicate high validity for the CQT. The ecologically valid laboratory studies and the high quality field studies of the Concealed Knowledge Test converge on a conclusion that the CKT is prone to false negative errors. Moreover, in the field the CKT seems to produce extreme numbers of false negative errors.
Given the generally favorable findings of both the empirical laboratory and field literature on the CQT, our review of Introductory Psychology textbooks appears to have revealed a distressing lack of balance. None of the textbooks accurately noted the important distinctions in the literature concerning the validity of the three techniques. Moreover, the general negative tone of the textbooks appears to be unjustified by the literature. This lack of balance is typified by the fact that the most commonly cited field study of PDD was the study by Kleinmuntz and Szucko (1984). Of all the field studies available in the literature, regardless of quality, this study is one of the two confession studies (the other is Horvath, 1977) that produced notably lower accuracy estimates. Of the eight confession confirmation studies in the OTA report, these are the two with the worst accuracies.
Given that it is so frequently cited, it may be illustrative to describe the methodology of the Kleinmuntz and Szucko (1984) study. Unfortunately, the most cited form of this study is a 1984 publication in the journal Nature that is only about one page in length. Very few details are provided in that publication. However, the study has been described in detail elsewhere (OTA, 1983; Kleinmuntz & Szucko, 1982). From those descriptions we can determine the following facts about the Kleinmuntz and Szucko (1984) study. The subjects of this study were individuals who were tested by a private company regarding employee theft as a condition of their employment. None of the subjects was under criminal investigation at the time of testing. The physiological data were evaluated by students of a polygraph school who had not completed their training. The polygraph school these students were attending is one that stresses the evaluation of the case facts and the subject's overt behavior; the independent quantitative analysis of the physiological data is not stressed. Finally, the student evaluators were given only one ninth of the data they would usually have in making an evaluation, and they were forced to use an unfamiliar rating scale with which they had no prior experience or training. That rating scale is never used in the field, and the students were not allowed to arrive at an inconclusive outcome, as they would be allowed to do in the field. The cases used were confirmed by confession, but the method of case selection was not specified in the report. There is no indication that any additional confirmatory information was sought or obtained. If the criteria for a useful field study described above are consulted, it can readily be seen that the Kleinmuntz and Szucko (1984) study fails on almost every count. However, none of these methodological shortcomings was mentioned by any of the Introductory Psychology textbook authors who referenced this study.
Another problematic field study that is frequently cited is one by Horvath (1977). One problem with that study is that the cases were selected for inclusion in the study on the basis of the quality of the recordings, not on some random sampling basis. Moreover, although it is not indicated in the Journal of Applied Psychology publication, the dissertation (Horvath, 1974) upon which it is based states that some of the innocent subjects were crime victims who were being tested to verify their statements to the police. Subsequent analyses indicated that all but one of the false positive errors occurred with innocent victims, not suspects (see Raskin, 1986).
We realize that the authors of Introductory Psychology textbooks do not have the time to read each dissertation upon which an empirical report is based, or to read all the available overlapping sources. However, the critical information about the Kleinmuntz and Szucko (1984) and Horvath (1977) studies discussed above was available to the Introductory Psychology textbook authors discussed here through several published reviews (notably, Raskin 1986; 1987; 1989). The 1987 review by Raskin would have been readily revealed by even a cursory search on PsycLit.
Unfortunately, similar biases are evident in the descriptions of laboratory studies. One of the two most frequently cited laboratory studies (Szucko & Kleinmuntz, 1981) was the only study in the Kircher et al. (1988) meta-analysis that produced chance discrimination. As such, it was an extreme outlier in the negative direction. The other frequently cited laboratory study was by Honts et al. (1985). Although this study produced moderate discrimination rates in its control conditions, it was cited in the Introductory Psychology textbooks because it demonstrated that under certain circumstances PDD tests could be distorted and/or defeated by countermeasures. Thus, this article was also used to paint PDD testing in a negative light. Numerous laboratory studies published in readily available first tier journals were available to the Introductory Psychology textbook authors, but were ignored or overlooked in favor of an outlier in the negative direction.
Through their choice of citations, the authors of Introductory Psychology textbooks have painted a very negative picture of the science of PDD testing. Our review of the scientific literature shows that this extreme negative view is not justified. Although there is controversy, we strongly believe that the empirical literature supports the validity of polygraph testing with the Control Question Test. Moreover, scientific surveys indicate that the majority of psychophysiologists agree. We believe that most of the current treatments of PDD in Introductory Psychology textbooks are doing an injustice to newcomers to psychology by painting a distorted and biased view of this important area of applied psychology. At a minimum, it could be argued that Introductory Psychology textbook authors should note that there is controversy and describe data from both sides. If studies such as Kleinmuntz and Szucko (1984) are cited, the criticisms of such studies should always be mentioned. Such a neutral position would seem to be defensible.
It would appear that Introductory Psychology textbook authors would do well to actually examine the research literature in controversial areas they write about, rather than relying on secondary sources that may have been written by extreme proponents for one side or the other in an ongoing controversy. Truth, rather than bias, should be the criterion for inclusion in this important format that introduces most people to scientific psychology.
Amato, S. L., & Honts, C. R. (1994). What do psychophysiologists think about polygraph tests? A survey of the membership of SPR. Psychophysiology, 31, S22. (Abstract).
Atkinson, R. L., Atkinson, R. C., Smith, E. E., & Bem, D. J. (1993). Introduction to psychology (11th ed.). Orlando, FL: Harcourt Brace Jovanovich.
Balloun, K. D., & Holmes, D. S. (1979). Effects of repeated examinations on the ability to detect guilt with a polygraphic examination: A laboratory experiment with a real crime. Journal of Applied Psychology, 64, 316-322.
Barland, G. H., Honts, C. R., & Barger, S. D. (1989). Studies of the Accuracy of Security Screening Polygraph Examinations. Department of Defense Polygraph Institute, Fort McClellan, Alabama.
Barland, G. H., & Raskin, D. C. (1976). Validity and reliability of polygraph examination of criminal suspects (Contract No. 75-N1-99-0001). Washington D. C.: National Institute of Justice, Department of Justice.
Baron, R. A. (1992). Psychology (2nd ed.). Needham Heights, MA: Allyn and Bacon.
Benjamin, L. T., Hopkins, J. R., & Nation, J. R. (1990). Psychology. New York: Macmillan.
Ben-Shakhar, G., & Furedy, J. J. (1990). Theories and applications in the detection of deception. New York: Springer-Verlag.
Bersh, P. J. (1969). A validation study of polygraph examiner judgments. Journal of Applied Psychology, 53, 399-403.
Bernstein, D. A., Clarke-Stewart, A., Roy, E. J., Srull, T. K., & Wickens, C. D. (1994). Psychology (2nd ed.). Boston: Houghton Mifflin.
Bootzin, R. R., Bower, G. H., Crocker, J., & Hall, E. (1991). Psychology today: An introduction (7th ed.). New York: McGraw-Hill.
Bourne, L. E., Jr., Ekstrand, B. R., & Dunn, W. L. S. (1988). Psychology: A concise introduction. New York: Holt, Rinehart, and Winston.
Bradley, M. T., & Ainsworth, D. (1984). Alcohol and psychophysiological detection of deception. Psychophysiology, 21, 63-71.
Bradley, M. T., & Warfield, J. F. (1984). Innocence, information, and the guilty knowledge test in the detection of deception. Psychophysiology, 21, 683-689.
Carlson, N. R. (1993). Psychology: The science of behavior (4th ed.). Needham Heights, MA: Allyn and Bacon.
Churchill, S. D. (1988). Humanistic psychology in introductory textbooks. Humanistic Psychologist, 16, 341-357.
Crooks, R., & Stein, J. (1991). Psychology: Science, behavior, and life (2nd ed.). Orlando, FL: Holt, Rinehart, and Winston.
Darley, J. M., Glucksberg, S., & Kinchla, R. A. (1991). Psychology, 5th ed. Englewood Cliffs, NJ: Prentice-Hall.
Davidson, P. O. (1968). Validity of the guilty knowledge technique: The effects of motivation. Journal of Applied Psychology, 52, 62-65.
Davidson, W. A. (1979). Validity and reliability of the cardio activity monitor. Polygraph, 8, 104-111.
Dawson, M. E. (1981). Physiological detection of deception: Measurement of responses to questions and answers during countermeasure maneuvers. Psychophysiology, 17, 8-17.
Department of Defense (1991). Fiscal Year 1990 Report to Congress on the DoD Polygraph Program. Washington, D. C.: Department of Defense.
Douglass, H. J. (1990). Transactional analysis in American college psychology textbooks. Transactional Analysis Journal, 20, 92-110.
Doyle, C. L. (1987). Explorations in psychology. Monterey, CA: Brooks/Cole.
Dworetzky, J. P. (1991). Psychology (4th ed.). St. Paul, MN: West.
Elaad, E. (1990). Detection of guilty knowledge in real-life criminal investigations. Journal of Applied Psychology, 75, 521-529.
Elaad, E., Ginton, A., & Jungman, N. (1992). Detection measures in real-life criminal guilty knowledge tests. Journal of Applied Psychology, 77, 757-767.
Employee Polygraph Protection Act of 1988. Public Law 100-347, 29 U.S.C. 2001 (1988).
Feldman, R. S. (1993). Understanding psychology (3rd ed.). New York: McGraw-Hill.
The Gallup Organization (1984). Survey of members of the Society For Psychophysiological Research concerning their opinions of polygraph test interpretation. Polygraph, 13, 153-165.
Gay, J. (1988). The incidence of photographs of racial minorities in introductory psychology texts. The Journal of Black Psychology, 15, 77-79.
Gerow, J. R. (1992). Psychology: An introduction (3rd ed.). New York: Harper Collins.
Giesen, M., & Rollison, M. A. (1980). Guilty knowledge versus innocent associations: Effects of trait anxiety and stimulus context on skin conductance. Journal of Research in Personality, 14, 1-11.
Ginton, A., Netzer, D., Elaad, E., & Ben-Shakhar, G. (1982). A method for evaluating the use of the polygraph in a real-life situation. Journal of Applied Psychology, 67, 131-137.
Gleitman, H. (1991). Psychology (3rd ed.). New York: W. W. Norton.
Goldstein, E. B. (1994). Psychology. Pacific Grove, CA: Brooks/Cole.
Gray, P. (1991). Psychology. New York: Worth.
Honts, C. R. (1991). The emperor's new clothes: Application of polygraph tests in the American workplace. Forensic Reports, 4, 91-116.
Honts, C. R., (1992). Counterintelligence scope polygraph (CSP) test found to be a poor discriminator. Forensic Reports, 5, 215-218.
Honts, C. R. (1994a). The psychophysiological detection of deception. Current Directions in Psychological Science, 3, 77-82.
Honts, C. R. (1994b). Field validity study of the Canadian Police College polygraph technique. Psychophysiology, 31, S57. (Abstract)
Honts, C. R. (1996). Criterion development and validity of the Control Question Test in field application. The Journal of General Psychology, 123, 309-324.
Honts, C. R., Hodes, R. L., & Raskin, D. C. (1985). Effects of physical countermeasures on the physiological detection of deception. Journal of Applied Psychology, 70, 177-187.
Honts, C. R., & Perry, M. V. (1992). Polygraph admissibility: Challenges and changes. Law and Human Behavior, 16, 357-379.
Honts, C. R., & Quick , B. D. (1995). The polygraph in 1996: Progress in science and the law. North Dakota Law Review, 71, 997-1020.
Honts, C. R., & Raskin, D. C. (1988). A field study of the validity of the directed lie control question. Journal of Police Science and Administration, 16, 56-61.
Horvath, F. A. (1974). The accuracy and reliability of police polygraphic ("lie detector") examiners' judgments of truth and deception: The effect of selected variables. Unpublished doctoral dissertation, Michigan State University.
Horvath, F. S. (1977). The effect of selected variables on interpretation of polygraph records. Journal of Applied Psychology, 62, 127-136.
Horvath, F. S., & Reid, J. E. (1971). The reliability of polygraph examiner diagnosis of truth and deception. The Journal of Criminal Law Criminology and Police Science, 62, 276-281.
Huffman, K., Vernoy, M., Williams, B., & Vernoy, J. (1991). Psychology in action (2nd ed.). New York: Wiley.
Hunter, F. L., & Ash, P. (1973). The accuracy and consistency of polygraph examiners' diagnoses. Journal of Police Science and Administration, 1, 370-375.
Iacono, W. G., (1991). Can we determine the accuracy of polygraph tests? In P. K. Ackles, J. R. Jennings, & M. G. H. Coles (Eds.) Advances in psychophysiology (Vol. 4). Greenwich, CT: JAI Press.
Iacono, W. G., & Patrick, C. J. (1988). Assessing deception: Polygraph techniques. In R. Rogers (Ed.), Clinical assessment of malingering and deception. New York: Guilford. (205-233).
Kalat, J. W. (1993). Introduction to psychology (3rd ed.). Belmont, CA: Brooks/Cole.
Kircher, J. C., Horowitz, S. W., & Raskin, D. C. (1988). Meta-analysis of mock crime studies of the control question polygraph technique. Law and Human Behavior, 12, 79-90.
Kircher, J. C. & Raskin, D. C. (1988). Human versus computerized evaluations of polygraph data in a laboratory setting. Journal of Applied Psychology, 73, 291-302.
Kleinmuntz, B., & Szucko, J. J. (1982). On the fallibility of lie detection. Law and Society Review, 17, 84-104.
Kleinmuntz, B., & Szucko, J. J. (1984). A field study of the fallibility of polygraphic lie detection. Nature, 308, 449-450.
Laird, & Thompson (1992). Psychology. Boston: Houghton Mifflin.
Lefton, L. A. (1994). Psychology (5th ed.). Needham Heights, MA: Allyn and Bacon.
Lehr, E., & Spilka, B. (1989). Religion in the introductory psychology textbook: A comparison of three decades. Journal for the Scientific Study of Religion, 28, 366-371.
Leong, F. T., & Poynter, M. A. (1991). The representation of counseling versus clinical psychology in introductory psychology textbooks. Teaching of Psychology, 18, 12-16.
Lykken, D. T. (1959). The GSR in the detection of guilt. Journal of Applied Psychology, 43, 385-388.
Lykken, D. T. (1981). A tremor in the blood: Uses and abuses of the lie detector. New York: McGraw-Hill.
McConnell, J. V., & Philipchalk, R. P. (1992). Understanding human behavior (7th ed.). Orlando, FL: Holt, Rinehart, and Winston.
Meyers, D. G. (1992). Psychology (3rd ed.). New York: Worth.
Office of Technology Assessment (1983). Scientific validity of polygraph testing: A research review and evaluation -- A technical memorandum (OTA-TM-H-15). Washington, DC: U. S. Government Printing Office.
Ornstein, R., & Carstensen, L. (1991). Psychology: The study of human experience (3rd ed.). New York: Harcourt Brace Jovanovich.
Patrick, C. J., & Iacono, W. G. (1991). Validity of the control question polygraph test: The problem of sampling bias. Journal of Applied Psychology, 76, 229-238.
Paul, D. B., & Blumenthal, A. L. (1989). On the trail of Little Albert. The Psychological Record, 19, 547-553.
Peterson, C. (1991). Introduction to psychology. New York: Harper Collins.
Pettijohn, T. F. (1992). Psychology: A concise introduction (3rd ed.). Guilford, CT: Dushkin Publishing.
Podlesny, J. A., & Raskin, D. C. (1978). Effectiveness of techniques and physiological measures in the detection of deception. Psychophysiology, 15, 344-358.
Raskin, D. C. (1976). Reliability of chart interpretation and sources of errors in polygraph examinations (Report No. 76-3, Contract No. 75-NI-99-0001). Washington, D. C., National Institute of Law Enforcement and Criminal Justice, Law Enforcement Assistance Administration, U. S. Department of Justice.
Raskin, D. C. (1986). The polygraph in 1986: Scientific, professional and legal issues surrounding the application and acceptance of polygraph evidence. Utah Law Review, 1986, 29-74.
Raskin, D. C. (1987). Methodological issues in estimating polygraph accuracy in field applications. Canadian Journal of Behavioral Science, 19, 389-404.
Raskin, D. C. (1989). Polygraph techniques for the detection of deception. In D. C. Raskin (Ed.), Psychological methods in criminal investigation and evidence. New York: Springer. (247-296).
Raskin, D. C., & Hare, R. D. (1978). Psychopathy and detection of deception in a prison population. Psychophysiology, 15, 126-136.
Raskin, D. C., Honts, C. R., & Kircher, J. C. (in press). The scientific status of research on polygraph techniques: The case for polygraph tests. Chapter to appear in D. L. Faigman, D. Kaye, M. J. Saks, & J. Sanders (Eds.), The West companion to scientific evidence.
Raskin, D. C., Kircher, J. C., Honts, C. R., & Horowitz, S. W. (1988). A study of the validity of polygraph examinations in criminal investigation. (Grant No. 85-IJ-CX-0040, National Institute of Justice). Salt Lake City: University of Utah, Department of Psychology.
Roediger, H. L., III, Capaldi, E. D., Paris, S. G., & Polivy, J. (1991). Psychology (3rd ed.). New York: Harper Collins.
Roig, M., Icochea, H., & Cuzzucoli, A. (1991). Coverage of parapsychology in introductory psychology textbooks. Teaching of Psychology, 18, 157-160.
Rubin, Z., Peplau, L. A., & Salovey, P. (1993). Psychology. Boston: Houghton Mifflin.
Santrock, J. W. (1988). Psychology: The science of mind and behavior (2nd ed.). Dubuque, IA: Wm. C. Brown.
Saxe, L., Dougherty, D., & Cross, T. (1985). The validity of polygraph testing: Scientific analysis and public controversy. American Psychologist, 40, 355-366.
Shaver, K. G., & Tarpy, R. M. (1993). Psychology. New York: Macmillan.
Shepard, R. N. (1983). "Idealized" figures in textbooks versus psychology as an empirical science. American Psychologist, 38, 855.
Slowik, S. M., & Buckley, J. P. (1975). Relative accuracy of polygraph examiner diagnosis of respiration, blood pressure, and GSR recordings. Journal of Police Science and Administration, 3, 305-309.
Smith, R. E. (1993). Psychology. St. Paul, MN: West.
Soper, B., & Rosenthal, G. (1988). The number of neurons in the brain: How we report what we do not know. Teaching of Psychology, 15, 153-156.
Steller, M., Haenert, P., & Eiselt, W. (1987). Extroversion and the detection of information. Journal of Research in Personality, 21, 334-342.
Stern, R. M., Breen J. P., Watanabe, T., & Perry, B. S. (1981). Effects of feedback of physiological information on responses to innocent associations and guilty knowledge. Journal of Applied Psychology, 66, 677-681.
Suedfeld, P., & Coren, S. (1989). Perceptual isolation, sensory deprivation, and rest: Moving introductory psychology texts out of the 1950s. Canadian Psychology, 30, 17-29.
Szucko, J. J., & Kleinmuntz, B. (1981). Statistical versus clinical lie detection. American Psychologist, 36, 488-496.
Wade, C., & Tavris, C. (1993). Psychology (3rd ed.). New York: Harper Collins.
Waid, W. M., Orne, E. C., Cook, M. R., & Orne, M. T. (1978). Effects of attention, as indexed by subsequent memory, on electrodermal detection of deception. Journal of Applied Psychology, 63, 728-733.
Weiten, W. (1992). Psychology: Themes and variations (2nd ed.). Belmont, CA: Brooks/Cole.
Weiten, W. (1994). Psychology: Themes and variations (2nd ed., briefer version). Belmont, CA: Brooks/Cole.
Wicklander, D. E., & Hunter, F. L. (1975). The influence of auxiliary sources of information in polygraph diagnoses. Journal of Police Science and Administration, 3, 405-409.
Winton, W. M. (1987). Do introductory textbooks present the Yerkes-Dodson Law correctly? American Psychologist, 42, 202-203.
Worchel, S., & Shebilske, W. (1992). Psychology: Principles and applications (4th ed.). Englewood Cliffs, NJ: Prentice-Hall.
Wood, E. R. G., & Wood, S. E. (1993). The world of psychology. Needham Heights, MA: Allyn and Bacon.
Wortman, C., Loftus, E., & Marshall, M. (1992). Psychology (4th ed.). New York: McGraw-Hill.
Zimbardo, P. G., & Weber, A. L. (1994). Psychology. New York: Harper Collins.
We wish to thank Eric Landrum and Marc Pratarelli for their assistance in reviewing and rating the textbook excerpts for this study. Thank you also to Peter Rosenfeld and the anonymous reviewers for their helpful comments and suggestions on earlier drafts of this article. Address correspondence to Mary Devitt, Department of Psychology, 215 N. Murray, Stillwater, OK 74078.
Appendix A: Factual Errors and Misdescriptions in the Text Excerpts

Atkinson et al.
Error: States that a relaxed baseline is taken for comparison to later responses. Correction: No polygraph tests do this.
Error: A person may be able to beat the test by causing reactions during the neutral questions. Correction: This would have no impact on the evaluation of a polygraph.
Error: The recording shown in the figure is referred to as a heart rate recording. Correction: It is a relative blood pressure recording.
Error: Persons who are less socialized may be less aroused and harder to detect. Correction: All of the empirical evidence suggests that this is not the case.

Baron
Error: Control questions are described as name, place of birth, and where someone works. Correction: These are neutral questions, not control questions.

Bernstein et al.
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.
Error: For the polygraph to be effective, the person being tested must believe that the machine is infallible in its ability to detect lies. Correction: No one who does research in this area states this position. There is no empirical research to support it and a great deal of research to refute it.

Bootzin et al.
Error: The text suggests that you can beat the test with countermeasures to neutral questions. Correction: Countermeasures against neutral questions would have no effect.

Carlson
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.
Error: The text describes a directed lie control test, but calls it a control question test.
Error: The text states that the chance of a false positive error on a 3-key, 5-item GKT is 8/1000. Correction: The correct value is 1/125, i.e., 1/5 x 1/5 x 1/5, if the items are truly independent.

Crooks and Stein
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.

Darley et al.
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.

Doyle
No errors.

Dworetzky
Error: States that there are separate channels for respiratory rate, heart rate, blood pressure, and GSR. Correction: Heart rate is not measured unless it is derived from the blood pressure recording.
Error: The text indicates that subjects will be monitored while giving narrative answers to questions like "Where were you last night?" Correction: In actual tests all questions are answered "Yes" or "No".
Error: The date for Marston supporting the polygraph was given as 1932. Correction: Marston testified in U.S. v. Frye in 1923.
Error: The text states that most polygraph tests are given by employers and gives an example of a grocery store employee taking a screening test. Correction: Such tests were outlawed by the U.S. Congress in 1988.
Error: The text states that Honts, Hodes, and Raskin (1985) showed that it was "quite easy" to beat the polygraph by creating responses to truthful questions. Correction: Honts et al. instructed their subjects to increase their responses to deceptively answered control questions in the context of a training session where subjects were fully informed about the nature and scoring of the test. With this intensive training only about half of the subjects could beat the test. Without training, none of the subjects were able to beat the test.
Error: States that Floyd Faye failed two polygraph tests. Correction: Faye failed one polygraph; the other was so distorted by Faye's deliberate movements that it could not be scored.

Feldman
Error: The polygraph measures irregularity in breathing pattern and increases in heart rate. Correction: The polygraph measures respiration, but irregularities are not scored. Heart rate is not scored.
Error: Biofeedback can be used to defeat the polygraph. Correction: There is no evidence in the studies cited to support this assertion. Moreover, there are no credible data to support it in any source.
Error: States that Honts, Hodes, and Raskin (1985) indicates that pressing on a tack in the shoe will allow people to beat the test. Correction: No such manipulation was included in Honts et al. (1985).

Gleitman
No errors.

Huffman et al.
Error: States that polygraph tests can be fooled by people who take tranquilizers, who have consumed high levels of alcohol, or who are psychopathic. Correction: No citations are provided to support these statements. The empirical literature does not support any of them. The data on psychopaths are particularly clear: they have no special ability to fool the polygraph.

Kalat
No errors.

Laird and Thompson
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.
Error: Faye's story about teaching other inmates how to beat the test is presented as fact. Correction: In reality, Faye's story is hearsay of hearsay from convicted felons. There is no evidence that anyone even took a polygraph and talked to Faye about it. This clearly is not scientific evidence.

Lefton
Error: Habitual liars show little or no autonomic reactivity when they lie. Correction: The citation provided does not address this issue empirically. The literature indicates that psychopaths are just as detectable as normals.

Meyers
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.

Peterson
No errors.

Pettijohn
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.

Roediger et al.
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.

Rubin et al.
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.

Santrock
Error: The polygraph relies on heart rate. Correction: Heart rate is not used.
Error: Drugs and biofeedback can be used to beat the test. Correction: The Waid et al. study failed to replicate, and all other drug studies have failed to find effects. The Corcoran et al. study addresses the guilty knowledge test, which is not in use in the field. There is no evidence to suggest that biofeedback can be used as a countermeasure against actual field techniques.
Error: Honts et al. is reported as showing that 80 percent of physical countermeasures could be detected by examiners. Correction: Honts et al. actually reported that most physical countermeasures could NOT be detected.

Shaver and Tarpy
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.

Smith
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.
Error: The recording shown in the figure labels one tracing as Pulse Rate Averaging. Correction: PRA is not used in polygraph testing. The tracing shown is a relative blood pressure tracing.
Error: Faye's story about teaching other inmates how to beat the test is presented as fact. Correction: In reality, Faye's story is hearsay of hearsay from convicted felons. There is no evidence that anyone even took a polygraph and talked to Faye about it. This clearly is not scientific evidence.

Wade and Tavris
Error: Increased heart rate is used as an indicator. Correction: Heart rate is not used.
Error: People can learn to beat the machine by tensing muscles or thinking about an exciting experience during neutral questions. Correction: This would have no impact on the evaluation of a typical field polygraph.
Error: States that there are problems with reliability. Correction: The literature shows that the reliability of numerical scoring of the Control Question Test is very high; interrater reliabilities are almost always reported to be above 0.90.

Weiten
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.
Error: States that critical questions are compared to nonthreatening questions. Correction: Critical questions are compared to control questions that are probable lies.
Error: Kleinmuntz and Szucko (1984) is described as an experiment. Correction: Kleinmuntz and Szucko is an archival field study.

Weiten (briefer version)
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.
Error: The text indicates that test questions have narrative answers. Correction: In the field all questions must be answered with either a "Yes" or a "No".
Error: Kleinmuntz and Szucko (1984) is described as an experiment. Correction: Kleinmuntz and Szucko is an archival field study.

Wood and Wood
Error: The text suggests that there is no pretest, that subjects are unaware of the wording of questions, and that subjects give narrative answers. Correction: There is a lengthy pretest where the test is explained and all of the questions are reviewed. Subjects must give "Yes" or "No" answers.
Error: The nature of the answer to the control questions is unimportant. Correction: The subject is maneuvered into answering the control questions with a deceptive response. The test is based on differential reactivity between relevant and control questions.
Error: Heart rate is listed as a dependent measure. Correction: Heart rate is not used in the evaluation of polygraph tests.
Error: Habitual liars are more likely to pass. Correction: There is no empirical evidence that this is true.
Error: Waid et al. (1981) is cited as the source of a mental countermeasure study (counting backward by sevens). Correction: This study was actually Honts (1986).
Error: Lykken's (1981) popular book is cited as the source for drug and countermeasure studies. Correction: Although some countermeasure studies are discussed in Lykken (1981), no original data by Lykken are presented.
Error: Countermeasures during neutral questions are described as effective. Correction: Countermeasures during neutral questions would have no effect.

Worchel and Shebilske
Error: Heart rate is described as a dependent measure. Correction: Heart rate is not used as a dependent measure in the field.
Error: Operators avoid asking did-you-do-it questions. Correction: Relevant questions are did-you-do-it questions. They are asked in virtually all tests.
Error: The guilty knowledge test is described as if it were the most common test in the field. Correction: The GKT is rarely used in the field.
Error: Faye's story about teaching other inmates how to beat the test is presented as fact. Correction: In reality, Faye's story is hearsay of hearsay from convicted felons. There is no evidence that anyone even took a polygraph and talked to Faye about it. This clearly is not scientific evidence.

Wortman et al.
Error: Neutral questions are described as control questions.
Error: The control question test and the guilty knowledge test are mixed together in the general description of the techniques.
Article Submitted: 17 September 1996
Revision Submitted: 16 December 1996
Accepted for Publication: 27 December 1996
Published: 9 February 1997