Neuroethics and Neuroscience in the News Highlights: Fall 2017

By Carlie Hoffman, Neuroscience '13


What if we could predict whether someone would be re-incarcerated by scanning their brain? What if you heard voices but weren't bothered by them? What is your dog thinking? These are some of the main questions raised in this semester's Neuroethics and Neuroscience in the News series.

Once a month, the Emory Center for Ethics hosts the Neuroethics and Neuroscience in the News seminar, which features the work of a neuroscientist whose research has broken into the media spotlight. The invited scientist presents their research, along with the response from the media and general public, before the floor is opened for discussion with the audience. These seminars draw interested students, professors, and professionals from a range of disciplines, including neuroscience, philosophy, literature, medicine, law, ethics, and religion. Prospective attendees must RSVP, and the series has become so popular that most talks develop long waitlists, sometimes within hours of a seminar being announced. Each month, an editorial intern at the American Journal of Bioethics Neuroscience summarizes the seminar and expands on the ethical discussion in a post for The Neuroethics Blog. Over the last few months, we have had three seminars: one given by Dr. Eyal Aharoni, one by Dr. Jessica Turner and Stephanie Hare (all three from Georgia State University), and one by Dr. Greg Berns from Emory University. Here are some of the fascinating subjects discussed on campus and on The Neuroethics Blog.

In September’s seminar, Dr. Eyal Aharoni from Georgia State University spoke about his research on whether biomarkers can predict the risk of recidivism in criminal offenders. Jonah Queen wrote a summary of the talk for The Neuroethics Blog entitled, “Too far or not far enough: The ethics and future of neuroscience and law.” Queen starts by describing Aharoni’s goal of using neurological data as an additional risk factor to help improve the accuracy of “risk triage.” Risk triage is the process of determining the risk an individual poses to society, and it is used when making decisions around bail, sentencing, parole, and more. Risk triage is currently done through unstructured clinical judgment (where a clinician offers an opinion based on an interview with the subject) and evidence-based risk assessment (which weighs various known risk factors, such as age, sex, criminal history, and drug use).

Aharoni’s study used functional brain imaging to determine whether brain scanning could improve the risk triage process. His results indicated yes: brain scanning, when used in conjunction with currently recognized risk factors, could improve the accuracy of risk assessment. Aharoni also discussed the ethical implications of his work: such scans might not meet the legal standard of proof, and these techniques might threaten offenders’ civil rights in ways the current risk assessment methods do not. Queen elaborates on these points in his blog post. He raises the concern that using such a technique to keep people in jail risks violating offenders’ civil rights in an attempt to increase public safety. Furthermore, what if this technology were (mis)used in ways the researchers and the science do not support? For example, the criminal justice system could buy into the hype around brain imaging and develop a process that looks only at the scans and not at other factors. Scans could also be performed on people who have not committed a crime to see if they need “monitoring” or “treatment,” possibly even involuntarily. Queen also points out the issue of stigma. How would people view someone with a “criminal brain”? How would they view themselves? And what if this technology started being used in the private sector and companies began testing people for a predisposition to criminal behavior? Check out Queen’s post for more.

In November’s seminar, Dr. Jessica Turner and Stephanie Hare from Georgia State University discussed the notion of hearing voices but not being bothered by it. Hare and Turner contrasted people with schizophrenia and people scientists call “healthy voice hearers,” and argued that hearing voices should not necessarily be considered pathological. Nathan Ahlgrim summarized this talk in a blog post entitled, “Psychosis, Unshared Reality, or Clairaudience?” Ahlgrim starts by discussing work out of Dr. Philip Corlett’s lab, which compared how people with schizophrenia and self-described psychics experience auditory hallucinations. The current definition of a “disorder” includes the criterion that the experience causes distress. Because psychics are not bothered by the voices they hear, those voices are not considered a symptom of a disorder or psychosis. However, given our society’s negative view of hallucinations and psychosis, how many people are inappropriately pathologized for similar experiences?

Non-voice hearers can balk at the idea that hallucinations are part of a typical, “normal” spectrum of experience, Ahlgrim suggests. However, anywhere between 5 and 28% of the general population experience auditory hallucinations at some point in their lives. With such high prevalence, it could be argued that auditory hallucinations are normal. If so, how should healthy voice hearers be treated by psychiatrists and by society writ large? Ahlgrim suggests that healthy voice hearers benefit from a spectrum approach: auditory hallucinations may sit at one end of a spectrum of richness of perception, inside the range of normal experience for many people. He also states that using deliberate language is key. Calling a person ‘schizophrenic,’ ‘crazy,’ or even ‘hallucinating’ instantly pathologizes them and strips them of identity. Replacing that vocabulary with inclusive language like ‘person with schizophrenia,’ ‘psychosis,’ and even ‘nonconsensual reality’ gives agency and acknowledges divergent experiences. Check out Ahlgrim’s post for more.

In the final seminar of the year, Dr. Greg Berns from Emory University discussed his pioneering work performing functional brain imaging in awake, unrestrained dogs. Berns started his talk by discussing his latest book, What It’s Like to Be a Dog, a title that summarizes the goal of Berns’s current research: to figure out how dogs think. Berns described how he first started doing brain scans in dogs, from installing a brain scanner to training dogs to sit still without physical or chemical restraint while they underwent an MRI scan, and ended by showing the compiled brain imaging data from close to 100 dogs. In short, Berns found that dogs’ brains share some similarities with human brains: dogs seem to have a facial recognition area, and they process rewards (both verbal and food-related) in the same brain area in which humans process reward. Based on his work, Berns then raised several questions, including: if we know dogs’ brains work in ways analogous to the human brain, should we use dogs in research? Is training a dog to sit in a brain scanner a form of coercion? And if we don’t feel comfortable using dogs in research, what kinds of animals are we comfortable using? Ryan Purcell’s summary of this talk will be published on The Neuroethics Blog in early January 2018. Check out Purcell’s upcoming post for more.

Sadly, Fall 2017 marked the final semester of the Neuroethics and Neuroscience in the News series, but stay tuned—we will be back in 2018 with a revamped format!  Also, to read the full versions of Queen’s, Ahlgrim’s, and Purcell’s summaries of these seminars, go to www.theneuroethicsblog.com.

Edited by Amielle Moreno, Neuroscience '12