When the Supreme Court ruled that, to determine the admissibility of scientific evidence, judges must evaluate the scientific reliability of that evidence (although the Court really meant scientific validity; Daubert v. Merrell Dow Pharmaceuticals, 1993), the decision prompted us to investigate a variety of psychological questions about the ability of legal decision makers to differentiate valid science from junk science and about the efficacy of procedural safeguards against junk science. More recently, our laboratory’s efforts have turned to investigations of adversarial allegiance (i.e., the tendency of expert witnesses to evaluate evidence in a way that is favorable to the party that hired them) and possible methods of correcting for this bias.
Legal Decision Makers’ Abilities to Evaluate Scientific Evidence
One psychological assumption underlying the Daubert decision is that judges are capable of differentiating valid science from junk science. We surveyed judges, presenting some of them with valid research and others with research containing methodological flaws (e.g., missing control groups, confounds, experimenter bias). Judges were equally likely to admit the flawed research as the valid research (Kovera & McAuliff, 2000). Attorneys show a similar inability to identify flawed research (Kovera, Russano, & McAuliff, 2002). Without intervention, jurors also fail to identify confounds and experimenter bias in expert evidence (McAuliff, Kovera, & Nunez, 2009). There is some evidence that jurors’ Need for Cognition, an individual difference variable that measures a person’s willingness to engage in cognitive effort, moderates their ability to identify certain flaws. Specifically, jurors high in Need for Cognition appear to understand the problem of missing control groups better than do jurors low in Need for Cognition (McAuliff & Kovera, 2008).
Effectiveness of Procedural Safeguards against Junk Science
The Daubert decision highlighted several procedural safeguards that may prevent flawed research that is admitted at trial from influencing jurors’ decisions. Cross-examination does not seem to increase juror sensitivity to methodological flaws (Kovera, McAuliff, & Hebert, 1999). Although opposing experts do identify flaws when asked to testify against an expert whose research is flawed (Kovera et al., 2002), opposing expert testimony about flawed methodology appears to increase juror skepticism about expert evidence generally rather than increasing sensitivity to methodological quality (Levett & Kovera, 2008, 2009). Opposing experts appear to activate jurors’ heuristics that experts are hired guns and that competing experts indicate a lack of consensus within the scientific community; these heuristics mediate the skepticism effect.
Concurrent Expert Testimony and Adversarial Allegiance
Some courts in Australia and Canada, recognizing concerns about adversarial allegiance and expert partisanship, have begun to use an alternative to traditional methods of presenting adversarial expert testimony. This alternative is known as concurrent expert testimony or, more colloquially, “hot tubbing.” Under typical adversarial conditions, each party in a case hires its own expert, who evaluates the relevant evidence and forms an opinion about an issue in the case. Concurrent expert testimony instead requires the experts hired by each side to work together, independent of their attorneys, to develop a joint report before trial that outlines their points of agreement and disagreement (Edmond, 2009). The judge then stipulates at trial the issues on which the experts agree. The experts testify together during trial only on the issues on which they disagree, detailing their opinions, asking each other questions, and taking subsequent questions from the judge and the attorneys (Edmond, 2009). Concurrent expert testimony is intended to assist jurors in their evaluations of expert evidence (Sanders, 2007). We have recently concluded data collection on three studies that examine the extent to which concurrent expert testimony reduces adversarial allegiance in clinical psychologists’ evaluations of criminal responsibility, improves juror decision making, and alters attorneys’ strategies for hiring expert witnesses. Data analysis is ongoing.
Selected Publications on Expert Testimony
- Cutler, B. L., & Kovera, M. B. (2011). Expert psychological testimony. Current Directions in Psychological Science, 20, 53-57. doi:10.1177/0963721410388802
- Levett, L. M., & Kovera, M. B. (2009). Psychological mediators of the effects of opposing expert testimony on juror decisions. Psychology, Public Policy, and Law, 15, 124-148.
- McAuliff, B. D., Kovera, M. B., & Nunez, G. (2009). Can jurors recognize missing control groups, confounds, and experimenter bias in psychological science? Law and Human Behavior, 33, 247-257.
- Levett, L. M., & Kovera, M. B. (2008). The effectiveness of educating jurors about unreliable expert evidence using an opposing witness. Law and Human Behavior, 32, 363-374.
- McAuliff, B. D., & Kovera, M. B. (2008). Juror Need for Cognition and sensitivity to methodological flaws in expert evidence. Journal of Applied Social Psychology, 38, 385-408.
- Kovera, M. B., Russano, M. B., & McAuliff, B. D. (2002). Assessment of the commonsense psychology underlying Daubert: Legal decision makers’ abilities to evaluate expert evidence in hostile work environment cases. Psychology, Public Policy, and Law, 8, 180-200.
- Kovera, M. B., & McAuliff, B. D. (2000). The effects of peer review and evidence quality on judge evaluations of psychological science: Are judges effective gatekeepers? Journal of Applied Psychology, 85, 574-586.
- Kovera, M. B., McAuliff, B. D., & Hebert, K. S. (1999). Reasoning about scientific evidence: Effects of juror gender and evidence quality on juror decisions in a hostile work environment case. Journal of Applied Psychology, 84, 362-375.
- Kovera, M. B., Levy, R. J., Borgida, E., & Penrod, S. D. (1994). Expert witnesses in child sexual abuse cases: Effects of expert testimony and cross-examination. Law and Human Behavior, 18, 653-674.
This material is based upon work supported by the National Science Foundation under Grant Numbers 9711225, 0453197, 1023796, and 1155251. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.