[Live Blogging] Neuroscience in the Courtroom
Starting at 5:30 pm, the Stanford Interdisciplinary Group for Neuroscience and Society is sponsoring a discussion of neuroscience evidence in the courtroom. The event will feature experts in neuroscience and law, including: David Faigman (Professor of Law, Hastings College of Law), Marcus Raichle (Professor of Radiology, Neurology, Neurobiology, and Biomedical Engineering, Washington University), Anthony Wagner (Professor of Psychology, Stanford) and Hank Greely (Professor of Law, Stanford Law School).
Panelists will be “discussing recent attempts to introduce expert testimony based on brain imaging tests, including fMRI lie detection”.
[UPDATE: This discussion is being recorded and will be available online at a later time: I will post the link as soon as it is published.]
I will be providing live updates during the course of the discussion – note timestamps for the correct temporal progression of events.
5:31 pm: We’ll be getting started in just a few minutes. Looking around the room, I see a couple of neuroscientists are present, including the formidable Bill Newsome. There also appear to be many law students present, so I expect a broadly tuned presentation of both neuroscience and law.
5:34 pm: A note that those interested in the sponsoring organization, the Stanford Interdisciplinary Group for Neuroscience and Society, can find out more about them at their website.
5:36 pm: Hank Greely is introducing the various panelists and the topics for the evening.
The first speaker will be Anthony Wagner. He studies memory using fMRI. The second speaker will be Faigman. He is a leading expert on scientific evidence. The third speaker is Dr. Marcus Raichle – he is a neurologist by training who is well known for his work on functional imaging (in particular PET scanning).
The stated objective for SIGNS is to study how neuroscience will affect our culture. Greely points out that a revolution in neuroscience has special implications for our culture and our laws because discoveries about how the brain works will directly bear upon our knowledge of how our subjective mental behaviors are generated. In particular, discoveries in 6 areas will have particular impact: Prediction; “Mind-reading” [i.e. correlating and interpreting physical changes in brains in terms of thoughts]; Criminal Responsibility [i.e. the question of free will and its implications for criminal law; the use of neuroscience to determine whether defendants possess the mental state necessary for prosecution]; End of Life Care [discussing the recent New England Journal of Medicine article regarding vegetative states and brain imaging]; Treatment [of neurological diseases and conditions, from Alzheimer's to kleptomania to other "social pathologies"]; and lastly Enhancement [e.g. "memory pills"]. Today, these issues have begun to find their way into courtrooms.
Since Jan 2006 (the first introduction of neuroimaging in the courtroom) there have been over 30 cases in which neuroimaging was brought up as evidence. Some examples of cases: the use of neuroimaging to “prove” that the defendant was a psychopath, to show that defendants were in chronic pain (particularly useful in deciding cases of disability), and of course, lie detection. Of note: except in New Mexico, courts do not accept polygraphs as evidence. fMRI lie detection has been around in peer-reviewed journals since 2002, and there are currently 2 companies that will use fMRI to declare whether you are telling the truth. In May of this year, two court cases (one a sexual harassment case, the other a fraud case) nearly allowed reports generated by these companies as evidence to support the defendants. In both of these cases, the judges decided not to allow the evidence, though for different reasons.
5:54 pm: Now Anthony Wagner will discuss research on using fMRI to detect lies. Tony introduces himself as a cognitive neuroscientist who primarily studies executive function. A few years ago, he became involved in the intersection between neuroscience and law, and today he will be presenting a high-level summary, requested by a judge, of whether fMRI can be used to detect lies.
Overview of published literature: “There are no relevant published data that unambiguously answer whether fMRI-based neuroscience methods can detect lies at the individual-instance level. No relevant data on the sensitivity and specificity of fMRI-based lie detection.” In his background research, Tony found 32 peer-reviewed papers with 28 unique data sets. These papers fall into 2 categories: one (21 papers) that exclusively reports group-level data (these cannot answer whether fMRI can detect individual lies); the other (11 papers) report whether they can detect if an individual is “lying”, but Tony will argue that fundamental methodological limitations render these studies uninformative.
The main strategy for these studies involves subjects being instructed to lie. The prevalent paradigm is that of the guilty knowledge/concealed information paradigm.
Conclusions from the group level studies: there is an activation difference between lie and truth conditions somewhere in the brain, there is considerable across-study variability in brain regions (this may be due to differences in the methodologies and analyses), meta-analyses reveal some across-study consistency, regions observed are not specific to deception, and lastly some of these studies attempt to figure out why certain brain areas are active during deception. However, none of these studies document specificity and selectivity at the individual-subject and individual-question level, and so have relatively little legal relevance.
Of the 11 peer-reviewed studies that do examine the individual-subject/question level, three tasks are generally used. The first are modified Guilty Knowledge Tasks: subjects are presented an envelope containing two cards, or pick a number between 3 and 8, and are then instructed to deny possession of one of the two cards, or deny having chosen the number. A study by Langleben and Davatzikos reported 90% sensitivity and 85% specificity, but there is a motor confound because one response is more frequent than the other: the classifier could just be detecting a difference in action-selection demands between lie and truth trials. If you eliminate the motor confound (a la Monteleone et al., 2009), performance at the individual-subject level drops: only 71% of subjects showed greater medial prefrontal activation on lie vs. truth trials. This suggests above-chance discrimination between lie and truth within an individual, but doesn't tell us whether fMRI can discriminate between subjects who are lying and those who are telling the truth, as all subjects in the study were instructed to lie. Another study (Hakun et al., 2008) reported greater activity to target vs. control stimuli when subjects were instructed to lie about the target, but also greater activity when simply passively viewing target vs. control stimuli – suggesting that deceptive behavior is NOT required to observe the brain response. Gamer et al. (2009) saw a difference in BOLD activity for stimuli that subjects were asked to remember, more so than for novel stimuli, irrespective of demands to lie.
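For readers less familiar with the sensitivity/specificity figures quoted above, here is a minimal sketch of how those two numbers are computed from a classifier's confusion matrix. The counts are hypothetical, chosen only to reproduce figures like the reported 90% sensitivity and 85% specificity:

```python
# Worked example: computing sensitivity and specificity from a confusion
# matrix. All counts below are hypothetical, chosen only to illustrate
# the 90% sensitivity / 85% specificity figures quoted above.

def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
    """Sensitivity = fraction of actual lie trials flagged as lies;
    specificity = fraction of truthful trials correctly left unflagged."""
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Hypothetical study: 100 lie trials (90 correctly detected) and
# 100 truth trials (85 correctly classified as truthful).
sens, spec = sensitivity_specificity(true_pos=90, false_neg=10,
                                     true_neg=85, false_pos=15)
print(sens, spec)  # 0.9 0.85
```

Note that neither number, on its own, says anything about performance on uninstructed, real-world deception – which is exactly Wagner's point about these studies.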
A couple of studies used mock-crime paradigms, where subjects are asked to lie about the location of money in a room. These studies found high variability in whether individual subjects demonstrated lie>truth effects in the brain regions observed in the group-level analyses. In other studies, subjects were asked to lie about a mock theft, denying possession of items they were instructed to “steal”. In these studies, the analysis observed three brain regions showing differential activation: ACC, OFC, and IFG. The company using this paradigm compares the number of voxels activated in these areas during the task against a baseline, and assumes that if more voxels are activated during the task than during a truth trial, then the subject is lying. Detection rates are estimated to fall between 71% and 86%. A similar voxel-counting approach was used to discriminate whether subjects had destroyed a CD or had merely watched a video of someone else destroying a CD – reporting 100% accuracy for identifying those who had destroyed the CD, but 67% false positives for those who had watched the video. This suggests a major confound: these methods may pick up memory signals that have nothing to do with active participation in a crime – richly imagining an event may be enough to trigger a false-positive identification of a lie.
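The voxel-counting decision rule described above is simple enough to sketch in a few lines. This is my paraphrase of the rule as presented in the talk, not the company's actual algorithm; the threshold and activation values are hypothetical, and the pooling of the three regions (ACC, OFC, IFG) is an assumption:

```python
# Sketch of the voxel-counting rule described above: count supra-threshold
# voxels in the regions of interest during a probe question and compare
# against a truthful-baseline trial. Threshold and data are hypothetical.

def count_active_voxels(roi_values, threshold=2.0):
    """Count voxels whose activation statistic exceeds the threshold."""
    return sum(1 for v in roi_values if v > threshold)

def voxel_count_verdict(probe_voxels, baseline_voxels, threshold=2.0):
    """Classify the probe as 'lie' if more voxels are active than during
    the truthful baseline; otherwise 'truth'."""
    probe_count = count_active_voxels(probe_voxels, threshold)
    baseline_count = count_active_voxels(baseline_voxels, threshold)
    return "lie" if probe_count > baseline_count else "truth"

# Hypothetical activation statistics for voxels pooled across the ROIs:
baseline = [1.1, 2.3, 0.8, 1.9, 2.1]   # two voxels above threshold
probe    = [2.5, 2.4, 1.0, 2.2, 1.8]   # three voxels above threshold
print(voxel_count_verdict(probe, baseline))  # lie
```

The memory confound Wagner raises is obvious in this form: anything that raises activation in those regions – including vividly remembering or imagining the event – pushes the count past baseline and gets labeled a lie.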
Wrapping up, Tony Wagner reiterates that many forensically relevant factors have not been investigated, including the magnitude of the stakes, the effects of stress, the retention interval (time between the critical events and fMRI scanning), the effect of having practiced telling the same lie, the content of the lie (emotional valence), susceptibility to countermeasures, robustness of methods across subpopulations, and the difference between instructed vs. subject-initiated deception.
6:00 pm: Now David Faigman will discuss admissibility standards from the perspective of a law professor. He notes that from a lawyer's perspective, science is a tool, and the important issues include what confidence intervals are necessary to meet admissibility standards. Raichle contends that in certain contexts, it might be useful to admit neuroscience evidence without strong confidence intervals.
But to back up, he notes the important factors in determining admissibility.

1) Qualifications: what are the minimum qualifications for calling an expert witness to present neuroscience evidence? From the law's perspective, researchers are expected at first, but as the technology becomes generally accepted, technicians (possibly without a graduate degree) can be called as expert witnesses – DNA evidence is a good example of this transition. So qualifications become a question of what the testifiers are testifying to.

2) Relevancy: the science must be able to respond to a specific legal question. For example, neuroscience testimony about lack of volitional control is all well and good, but is it relevant to culpability and responsibility in the legal sense? To claim insanity you cannot rely on lack of volitional control in a criminal context; instead it depends on showing that the defendant could not tell right from wrong. Note, though, that lack of volitional control is a component of civil commitment law (as in labeling a defendant as a sexual predator, and determining whether the person in question should be confined).

3) Reliability and validity: judges must evaluate this – they have the power to determine whether a particular scientific process is valid – and several factors can be used in this determination, including testing, error rate, peer review, publication, and general acceptance. This standard (the Daubert standard) applies to all expert witnesses in federal courts, as dictated by a Supreme Court decision. Many state courts use the Frye Rule, which merely requires that the methods be generally accepted in the field – this depends on how rigorous the individual field is, and there are some obvious issues with this (an example from David Faigman is that tea-leaf reading is generally accepted by tea-leaf readers).
However, another standard (Rule 403) says that if the evidence would be too prejudicial, substantially outweighing its probative value, the science should not be admitted into the case.
David Faigman notes that in general, juries do not understand science, and so are not adequately prepared to evaluate expert scientific testimony.
6:41 pm: Now Marcus Raichle will discuss his experiences as an expert witness. He notes that he has only been an expert witness twice: once during a malpractice suit involving Stanford Hospital, and then for his knowledge of neuroscience, as a counter witness against the director of Cephos, one of the two companies offering fMRI lie detection. He says that he felt unprepared for the experience, not knowing what would be expected of him. He describes the lawyer for the defense, and the experience of having lawyers manipulate science for the purposes of the law. In particular, he recalls the difficulty of describing a complex scientific story to a judge. Raichle states that if he had to act as an expert witness again, he would need to more carefully consider how he would present the scientific story.
6:49 pm: Questions are now being solicited from the audience.
Question 1 for Marcus: Did he feel that any arguments played better than others against fMRI lie detection? Answer: He felt that the distinction between group data and individual results in the peer-reviewed literature was lost in the context of the court, and that Cephos is generalizing findings from group data onto the individual, without repeating the paradigm utilized at the group level with the individual. In addition, he notes that discussions of the statistical issues by the scientists went over the heads of the judge.
Question 2: On the distinction between for-profit companies and academic neuroscientists: what are the roles of academics in making statements about the validity of fMRI lie detection? Answer: David Faigman notes that most scientists avoid being expert witnesses. Regarding consensus statements, they may be less important for individual cases and more useful as general guidelines for judges, although the usefulness of such statements will likely be context-dependent.
Question 3: Would Marcus recommend that scientists be expert witnesses? Answer: He wouldn’t dissuade people (later saying, yes). He found the experience highly educational, although he says that he wouldn’t want to do it all too often, if only because of the inordinate amount of preparation necessary.
Question 4: The studies on the validity of fMRI seem a bit simplistic, not leaving many gray areas. What can the science claim about more complex scenarios? For the neuroscientists: does complexity matter in the brain? Answer: Tony Wagner and Hank Greely note that this question lies at the heart of the unresolved issues in neuroscience and detection of deception. As time passes, the representation of memories is altered at the neuronal level – they reference research showing that re-consolidation of memory (recalling a particular memory) results in alteration of that memory. This has major implications if the defendant has told the same lie multiple times, with a true memory and a false memory potentially unresolvable via fMRI – although this question has not been directly tested in scientific settings. Marcus notes that an important factor is the behavior being studied, not just the picture from the fMRI.
Question 5: How might fMRI alter a defendant's right to remain silent/avoid self-incrimination? Answer: Faigman states that constitutional law trumps evidence law, but if the test becomes incredibly accurate, then there would be an expectation that the evidence would be presented (although the lawyers would be constitutionally prevented from mentioning this expectation). A more pressing constitutional issue would be if a defendant wanted to use this evidence but was not allowed by law – in this case an argument could be made that the defendant has a constitutional right to present the evidence. Hank Greely notes that the 5th amendment only applies to spoken testimony, so fMRI images taken from passive viewing paradigms (not requiring speech) might not be covered by the amendment, although Greely predicts that the courts will eventually rule it to be covered.
Question 6: What is the legal history of denying polygraphs, and what improvements would be needed to re-allow polygraphs into the courts? Answer: It was excluded because it just wasn't reliable. Also, there is a ban on calling witnesses to provide a credibility assessment – the polygraph has been treated as a credibility machine and therefore may be banned in that context. But the ultimate reasoning has been that the polygraph is unreliable, so in order to re-allow the polygraph, you would have to show substantial improvements in reliability. In Faigman's opinion, in the future the determination of scientific evidence will need to be more context-dependent – although he notes that Hank Greely most likely disagrees with him. So depending on the outcome being determined with the help of the evidence (e.g. holding a new trial versus capital punishment), it might be more acceptable to use evidence with a possibility of a false positive. Greely notes that reliability is not the only question – there are also questions about whether the use of fMRI (or other scientific evidence) might unfairly prejudice a jury. Wagner notes that polygraphs are often used outside the context of the court – for example, in determining whether a suspect should be interrogated further by lawyers/police. Raichle notes that some suspects may confess merely after being confronted with the threat of a polygraph/fMRI. Faigman notes that from the perspective of law enforcement, the polygraph is better than fMRI, because they use it primarily as an interrogation tool.
Question 7: The flaws in the 11 individual-subject studies seem rather basic. Why hasn't a good study been designed? Answer by Wagner: He is unsure why the study designs have not been better. Over the 20-year history of neuroimaging, there have been many good studies and many poorly designed studies, and many of the poorly designed ones were conducted at the beginning. fMRI is a young field, and fMRI use in lie detection is an even younger field, and with maturity will hopefully come better and better studies. Within the field of neuroimaging, scientists are cognizant that their research is being applied within the purview of the law, and there is a realization amongst researchers that better science needs to be conducted. Greely notes that funding is very difficult to acquire – the major funding source is for-profit companies.
Hank Greely closes the discussion by thanking the panelists for their participation. He notes that this is the first of a quarterly series (the next is on Monday, Jan 24th). Those interested in getting on the SIGNS mailing list should email Hank Greely at firstname.lastname@example.org.
—End of Live Blogging Event—
— Astra Bryant