Underrepresentation in Research: Steps Towards Progress

Jadiel Wasson

Originally published April 11, 2015

If you take a look around you, what do you see? Does your environment reflect what the real world looks like? If not, why do you think that is? And how does this difference influence the nature of science? In the past few decades, the underrepresentation of certain groups, specifically women and minorities, has become increasingly apparent. Many studies have demonstrated that this lack of diversity leads to a different kind of “brain drain”: it caps interest in STEM careers among the up-and-coming generation, ultimately costing these fields minds that could contribute significantly. Although only a few government initiatives have been put in place to address this issue, they have greatly influenced the demography of STEM degrees and careers. In addition, a handful of studies, together with media attention, have shifted diversity trends in STEM by bringing greater awareness to the underrepresentation that plagues these fields. But has enough been done to truly alleviate this problem? What steps have already been taken to address underrepresentation?

Step 1: The Civil Rights Act of 1964 was the first step taken to address inequality in the workforce. This act significantly increased the availability of equal opportunities in education and employment for women and minorities. Before the implementation of this act, women earned less than 5% of STEM PhDs. That share tripled to around 15% by the early 1980s. In the late 1950s, prior to any official census data, it was estimated that only about 500 African-Americans held a PhD of any kind. In 1975, only about 1.2% of PhDs were earned by African-Americans.

Step 2: The Science and Engineering Equal Opportunities Act of 1980 directed the National Science Foundation to increase the participation of women in science. The act aimed to do so through various outreach campaigns, which sought not only to increase the public’s awareness of the value of women in science, but also to increase support for women who chose to pursue STEM careers by establishing committees, fellowships, and programs. Since the act’s inception in 1980, the share of STEM PhDs held by women has increased from about 15% to nearly 40% in 2011, attesting to the effectiveness of such measures.

Step 3: Beyond Bias and Barriers: Fulfilling the Potential of Women in Academic Science and Engineering is a National Academy of Sciences report published in 2006. It is extensive in addressing the issues that still plague women in STEM disciplines and prevent them from advancing in their careers, regardless of academic stage. The report refuted many of the biases and “reasonings” offered to explain the gap between men’s and women’s participation in STEM careers. Its key findings include evidence of institutional biases against women and an accounting of the loss of women at each stage on the track to a STEM career. One of its main contributions was a set of recommendations for rectifying the representation problem. In 2007, the National Institutes of Health, or NIH, created the Working Group on Women in Biomedical Careers in response to the report’s findings. This group has since established many sub-committees with specific goals, such as public outreach and mentoring, all aimed at increasing the retention of women in STEM.

Step 4: Current steps to enhance diversity in the STEM disciplines include the Enhancing the Diversity of the NIH-Funded Workforce program. Established in 2012, this program aims to increase the number of underrepresented minorities in STEM through a series of initiatives that address how to keep minorities engaged in these disciplines.

Current state: Even with these steps, a gap in representation remains. For example, a recent report entitled “Double Jeopardy?”, published in February of this year, highlighted some of the biases that still plague women of color, such as differences in how they are perceived compared to their male counterparts. In addition, although the percentage of women employed in the STEM disciplines has increased steadily each decade since the 1970s, that growth has slowed since the 1990s. All in all, initiatives to increase the number of women in science have worked to a certain extent, but a large gap still remains. We need more measures to help close it.

Edited by: Brindar Sandhu

The Creation of Gods: Why We Anthropomorphize

Erica Akhter

Originally published April 10, 2015

Have you ever assigned human-like traits to animals or even inanimate objects? If so, you’ve participated in something called anthropomorphism. We often attribute emotions to animals and intentionality to mindless objects. Every time you mistake your coat-rack for an intruder or claim that your puppy loves you, you are guilty of anthropomorphizing.

Anthropomorphism is the tendency to assign human traits, such as physical qualities, emotions, intentions, or thoughts, to non-human entities such as animals or inanimate objects.

Humans naturally treat everything as if it possesses some degree of understanding or responsibility. (Have you ever cursed at your printer or encouraged your car to start?) This is a byproduct of our tendency to anthropomorphize. It permeates our perception. It is a commonality in human life, and one that can be particularly problematic given the right circumstances.

But why do we anthropomorphize?

Long story short: it’s the way we’re naturally wired. Human brains are tuned to try to understand other humans’ intentions, thoughts, and feelings, a capacity called Theory of Mind. Specific regions of the brain contain populations of ‘mirror’ neurons, which display the same activity when we perform an action as when we observe others performing it.

Damage to the regions where these mirror neurons are located corresponds to deficits in empathy and Theory of Mind. Unsurprisingly, these are the same regions of the brain that are active when a person is anthropomorphizing.

Predicting the actions of animals and inanimate objects employs the same brain regions as predicting the behavior of another human. Though we can consciously differentiate between human and non-human, the same mechanisms in our brain are activated when we are observing actions of both.

It is important to note that the way we experience our thoughts is constrained not just by our perception, but also by the language we have available to communicate that perception. Think about it this way: most people would agree that a mouse cannot think like a human. At the very least, you can probably agree that you cannot tell what a mouse is thinking. To know what a mouse is thinking, you would either have to be a mouse or be able to talk to one.

So how do we explain mouse behaviors? What is a mouse doing, if not thinking? We don’t have a word for it. To fall into my own trap: we can’t think like a mouse, so we have no words to describe what may be happening inside a mouse’s head. We’re forced to imagine things as only a human can because, after all, we are only human.

So what’s the use of anthropomorphism?

It’s quite easy to justify why we would want to understand other humans. We’re a social species, and thus need to be able to comprehend others to at least some degree. But is anthropomorphism just a byproduct of an overenthusiastic brain trying to give Theory of Mind to everything?

Doubtful. Evolutionarily speaking, it is almost always better to assume something is smarter than it is. More accurately, it is almost always better to assume that the something is out to get you and that it is intelligent enough to be worth worrying about.

Believing every shadowy figure is a robber is much safer than believing every shadowy figure is a bathrobe. Believing every spider is full of malicious hate for mankind is safer than not giving any spider a second thought.

Think of your anthropomorphic brain as a highly sophisticated better-safe-than-sorry mechanism. We’re programmed to believe, at least initially, that everything we see behaving is behaving with some degree of intentionality. The results of this can be good or bad.

What are the consequences of anthropomorphism?

As mentioned above, anthropomorphism is usually a good thing. But when can anthropomorphizing go awry?

Dr. Shannon Gourley, a professor of Neuroscience at Emory University, describes anthropomorphism as “a dual threat.”

“Firstly, we run the risk of trivializing the human condition. Can a mouse really experience the debilitating nature of schizophrenia? Of autism? We just don't know. And the related issue pertains to the limits of our ability as scientists to interpret our data. If we attribute human-like traits to an animal, we run the risk of failing to consider other possibilities. For example, is the mouse huddled in the corner because it is "depressed," or because it's simply cold? Or ill?”

Despite these risks, it is not uncommon to hear a meteorologist talk about the wrath of nature or a biologist talk about what a cell wants to do. It is especially tempting to anthropomorphize when research appears directly translatable to common human experiences.

However, Dr. Gourley reminds us that “Reporting that the mouse develops ‘depression-like’ behaviors is more scientifically accurate -- and it allows us to bear in mind the alternative possibilities, and to acknowledge the limitations of our own knowledge which are bound by the fundamental inability to directly communicate with animals.”

Our brain’s predisposition for assigning agency leads us to see intention, thought, and cause in the natural world, even where there is none. We naturally attribute intentionality to everything we see: whether it has a human brain, an animal brain, or no brain at all.

Anthropomorphism is so prevalent that some biologists and philosophers of biology claim that it is the basis for people’s perception of higher powers, or gods, acting on the world. When we think about deities, the same brain regions are active as when we attribute Theory of Mind to other humans.

Since the beginning of time, humans have been attributing unexplainable events to entities that they cannot see or feel, only sense and infer. Some scientists claim the neurological basis for anthropomorphizing contributes to this phenomenon. In essence, we could even be constructing ideas of gods in the image of ourselves.

Edited by: Anzar Abbas

The Case for Basic Science Research

Brindar Sandhu

Originally published April 6, 2015

Photo by: Jadiel Wasson

How many times have you been asked to donate $1 for juvenile diabetes, cancer, ALS, MS, or Alzheimer’s research at the grocery store checkout? What about space exploration, how bacteria fight infections, or the basis of all life? Chances are pretty high that you’ve been asked the former, and about zero that you’ve been asked the latter. We know why – diseases pull at people’s heartstrings. We most likely all know someone, or know someone who knew someone, who has had cancer. We want to cure diseases; that’s why we study biology. Obviously we’re not in it for the money or the fame. Does this mean that everyone should solely study a cure for some disease? Is it wrong to be motivated by wanting to learn more about the world we live in?

A poll conducted by the winners of a 2013 video competition sponsored by FASEB, the Federation of American Societies for Experimental Biology, asked the general public of San Francisco the following question: If you had $10 to spend on research, would you donate to research on affordable diabetes treatment, or to study how bacteria protect themselves? The public overwhelmingly chose diabetes, but in the 1960s the National Institutes of Health, or NIH, chose the latter. By doing so, scientists discovered that bacteria produce restriction enzymes to cut up foreign DNA. Now almost every molecular biology lab uses restriction enzymes for cloning. Not only that, but this discovery allowed scientists to clone human insulin - which had previously been purified only from cattle and pigs or chemically synthesized with poor yields - and to express and purify it from bacteria. The bacterium used, E. coli, quickly earned the nickname “the laboratory workhorse.” This dramatically reduced the cost of insulin for those suffering from diabetes, and today almost all diabetic people use recombinant human insulin instead of animal insulin.

Anyone who has written a grant application for the NIH knows that the proposed research has to have a translational impetus. “Why should I care?” is a question we are taught to answer. We are required to provide evidence of the contribution our research will make. If the answer is “We don’t know how this will benefit medicine, energy, or technology...yet,” does that mean it shouldn’t be pursued? A survey of the research laboratories in the Graduate Division of Biological and Biomedical Sciences, or GDBBS, here at Emory shows that about two-thirds of faculty research descriptions mention a specific disease, drug development, or the word disease itself. That number is most likely lower than the actual percentage of labs that focus on the disease state. I am not arguing that studying the disease state is not fruitful. Of course we need to know how a disease operates if we ever want to treat or even cure it. I argue, however, that sometimes the solution can be found in a way that would not be obvious if we focused our efforts solely on curing cancer. Studying how nature works in a non-disease state can tell us a lot about how nature stays healthy, and thus how we can stay healthy. If the stigma surrounding basic science research is prominent among scientists, how can we expect the public, and therefore the federal government, to support such important endeavors?

“But so much more is known about biology than 60 years ago,” one could argue. Sure. Does this mean we have learned all we need to know? Of course not. Yes, biology has seen an explosion of knowledge in the last half century, but people still suffer from disease, even if the diseases are different from the ones we saw 100 years ago. We also know that cancer is a lot more complicated than we originally thought, and the idea of a cure-all cancer drug is now just a figment of our imaginations. Although we have learned a lot in the last 60 years, much clearly remains unknown, and focusing solely on diseases can limit our ability to find solutions that could be applied to multiple problems. Most scientific advancements, especially technological ones, are based on how nature operates, so further exploration into how nature works in general, not necessarily in the disease state, is crucial for scientific advancement.

Edited by: Marika Wieliczko

The Blind and Biased Eye of Objectivity

Sara List

Originally published April 5, 2015

As scientists, we must take daily snapshots of our work, recording what we see one piece at a time. The full picture is too vast for the lens. The fisheye would distort the image if we included too much: too many variables with too few controls. We want to capture the world as it really is, and in order to do so with our limited frames of view, we use objectivity as our guide. Scientific objectivity refers to the idea of recording only those phenomena that are observable without prejudice or bias. Such an approach has vast uses in the world of science, but it also has limits.

Science seeks to categorize, to filter, and to quantify observations. When we conduct science, we try to leave our social, political, and experiential backgrounds behind in favor of the ideal of pure logic. This practice does allow us to make compelling arguments when trying to convince others that our findings, and not those of others, are reflections of the truth. If someone is equally open to obtaining one set of results or another from an experiment, then he or she has no reason outside the merit of the experiment itself to have obtained those results.

Striving for scientific objectivity can seem noble, chivalrous even. The knights of research lay down their beliefs, their emotions, their political contentions, and their self-serving motives, all in the name of science. However, there is one practical problem with this ideal. People who are completely disinterested in an experiment, or in science in general, are often also indifferent to it. Such people cannot be paid to do research, given how much effort is required. Those who are paid to do science, then, are a passionate and opinionated few.

There are multiple potential threats to scientific objectivity, which can include one or a combination of the following desires on the part of the scientist: the desire for approval, the desire for financial gain, and the desire to avoid controversy. In addition, active advocacy for a certain public policy, or a vested interest in a particular theory, can cloud the objective lens as well. These factors can and do interfere with the scientific approach if ignored by the research community. Moreover, beyond our scientific enthusiasm, many of us also depend on outside agencies to fund our work, and that funding can be contingent upon those agencies’ approval of our research. The quest for scientific objectivity may be a noble one, but it adds more problems than solutions to the ever-morphing body of knowledge we call modern science.

Striving to be the omniscient, neutral observer can lead us into territory that only leaves us blind to our own biases. Neuroscience in particular is rife with examples of social bias shaping how we study the brain, even though, or perhaps because, objectivity is the goal. The study of the brain, and by extension the human mind, makes the findings particularly sensitive. Scientific objectivity lends scientists a certain authority, and that authority is never clearer than when examined within the world of brain science. With that power comes the responsibility to be aware of our human subjectivity.

One canonical scientist in the field of neuroscience was the physician and anthropologist Paul Broca, known for the discovery of Broca’s area in the nineteenth century. Broca’s area is a brain region which, when damaged, renders the patient unable to produce intelligible speech, although he or she retains the ability to understand language. Broca had found an excellent example of a brain area responsible for a specific function, a discovery that drove the modern study of human neuroscience.

Broca’s name also appears in Wikipedia’s Scientific racism entry. In addition to his studies on stroke patients, Broca was a fan of craniometry, the measurement of skull size or brain volume. While craniometry is not inherently a discriminatory method, practitioners, Broca included, used their measurements to justify social views about women and minorities at the time, claiming that biological difference was proof of inferiority.

Broca was not ill-informed about scientific objectivity, and he strove to meet the demands of this realm of thought, stating that “there is no faith, however respectable, no interest, however legitimate, which must not accommodate itself to the progress of human knowledge and bend before truth.” Yet while Broca was a prominent scientist who strove to leave his personal opinions out in favor of the facts, he also had the clear goal of using craniometry to “find some information relevant to the intellectual value of the various human races.” With this hypothesis in mind, he concluded that “In general, the brain is larger in the mature adult than in the elderly, in men than in women, in eminent men than in men of mediocre talent, in superior races than in inferior races.”

The case of Broca is a shocking example of scientific objectivity clouding the inner skeptic. He may have thought that, by striving for scientific objectivity, he was immune to being subjective. Broca did not question the premise that craniometry could illustrate differences in intelligence. Instead, he designed experiments and interpreted the findings in ways that upheld the views of his time. His basic assumption, that measurements of the brain could rank humans on a linear scale of mental aptitude corresponding to their place in the social hierarchy, was not only false but also highly subjective, despite his support of scientific objectivity.

Broca’s time was over a century ago, and the optimist may suppose this incident was an isolated farce. Unfortunately, Broca was hardly the first, and will not be the last, person studying the brain to wear the mask of objectivity. John Money was a psychologist and sexologist best known for his work in the 1950s and ‘60s. In 1966, Money met David Reimer and his parents. The Reimers had turned to the expert after their child’s circumcision had been botched. David no longer had a penis, and Money advised that the infant be given sex reassignment surgery and raised as a girl alongside his twin brother, Brian. While Brian played with trucks, David, then known as Brenda, was given dolls. David and Brian attended multiple therapy sessions with Money geared toward convincing David that he was a girl and his brother a boy, and that each would fulfill their respective gender roles.

Money was well known for his part in supporting the theory of hormonal organization of the brain to produce sexually dimorphic behaviors in animals. Much to the relief of parents at the time, he also supported the theory that, for humans, gender identity and sexual orientation were a result of environment and upbringing alone. The doctor wrote extensively about the twins, highlighting what he described as David’s successful reassignment to a heterosexual female identity. In one of his many books, Money described a rigorous system for interviewing subjects and cataloging their data that allowed “objectivity to reside in the scoring criteria.” As an adult, David Reimer found out what had happened to him as an infant and why he had been so firmly pressed by his parents and Dr. Money into fulfilling the traditionally defined woman’s role. Brenda changed his name to David and began living as a man, but tragically committed suicide in 2004.

Even today, studies like that of Skoe, Krizman, and Kraus (2013) invoke objectivity while trying to find “the biological signatures of poverty.” The authors attempt to link socioeconomic status (SES) with differences in the brain’s response to sound, using a “neurophysiologic test used to quickly and objectively assess the processing of sound by the nervous system,” which is interpreted by an audiologist. While the brain is quite malleable and the environment can and does affect neural circuitry, studies such as these can encourage the treatment of poverty as a disease. The language of the article suggests many possible methods of targeted “intervention” for low-SES students. From this point, it is not a large leap to consider these neural differences a factor in the perpetuation of intergenerational poverty. This type of approach can lead to interpretations not far from Broca’s if we suggest that these studies are purely objective.

All of the scientists above were, and are, respected, intelligent, and creative individuals. All of them invoked scientific objectivity, and most were likely working in good faith rather than with intentional bias. However, they were sorely misguided in believing that applying objective measures made them impervious to partiality. In trying to act as the disinterested observer, these researchers stumbled into the realm of ignorance, focusing on the experiment but not the outside pressures. They aren’t the only ones.

The ideal of objective thinking can render scientists blind to the ways that the question begets the answer. Perhaps scientific objectivity, then, has no place in scientific practice. Donna Haraway, philosopher of science and feminist, argues that objectivity in science should be discarded in favor of acknowledging the individual, both researcher and participant. She advocates the idea of situated knowledges from multiple individuals, meaning that “rational knowledge does not pretend to disengagement” and is instead a collective of scientific voices that consider themselves rational but not invulnerable to their own backgrounds and biases.

Objective research as it stands has offered much improvement to the scientific community and the method of inquiry since the nineteenth century, but the time has come to allow the definition of objectivity to morph.  Let’s make an effort, in our reading and our own work, to acknowledge the subjective and take our biases into consideration.  Objectivity is an ideal, but in reality, an eye that observes is not blind and should not pretend to be.  The most beautiful photograph is crafted, not captured from the ether.  The exposure, the contrast, the angle.  The question, the hypothesis, the model.  Perhaps most important for both photography and research, the interpretation.  All of these aspects matter.  Paying attention to each detail, ensuring no one feature overtakes the others, is the quality that separates the novice from the pro.

Edited by: Marika Wieliczko

Staying in Touch

Alessandra Salgueiro

Originally published April 4, 2015

Photo by: Kristen Thomas and Jadiel Wasson

All too often, scientists get caught up in the nuances of their individual research projects. They are so focused on the function of their protein or gene of interest that they forget about the ultimate goal of biomedical research: understanding and curing human disease. However, Emory has made several efforts to make sure that this is not the case for its graduate students. Emory graduate students have access to several avenues that help them stay in touch with the human aspect of research. These include the Molecules to Mankind Doctoral Pathway, the Certificate Program in Translational Research, and interactive courses such as Cancer Colloquium.

The Molecules to Mankind Doctoral Pathway, or M2M, is an interdisciplinary effort that combines existing laboratory and population science Ph.D. tracks to create “a new breed of scientist.” Students who graduate from M2M are well suited for careers in public health, as they are able not only to design and analyze laboratory experiments but also to integrate bench science to help solve population-based health issues. Ashley Holmes, a third-year Nutrition Health Science student in the M2M pathway, shared her perspective on the importance of keeping science in context:

“I think it's pretty easy to become hyper-focused on your dissertation topic and in doing so, you can unintentionally reduce people to data or biological samples.  The M2M program addresses [this] issue in its awesome weekly seminars: the speakers usually have interdisciplinary backgrounds and interesting collaborations that address basic, clinical, and population sciences.  Even when the details of their experiments or statistical analyses get tedious, they "bring it home" by reminding us of the public health implications of their work and how they are helping people.”

Rachel Burke, an Epidemiology student also on the M2M pathway, agrees.

“I like how the M2M seminars try to bring things back to the practical application of the research — the ‘so what’ factor. I think that having this background has helped me in turn think about what are the implications of my research and how can I focus those towards helping mankind.”

Emory’s Certificate Program in Translational Research provides Emory graduate students, postdoctoral fellows, and faculty with an opportunity to bridge the gap between basic bench science and clinical research. The program requires 14 credits, including a clinical medicine rotation that allows participants to shadow a clinician and interact with current patients. Katherine Henry, a third-year student in the Molecular Systems Pharmacology program, appreciates the unique perspective of translational research:

“I have always been more interested in the translational aspects of science. I like science that I can explain to my family and it's a lot easier to do that when you can relate your work to some disease or physiological process. The hope is that this program will set me up for a career in clinical/translational science, for example at a clinical trials firm (CRO), or a public health agency like the CDC.”

A third way Laney Graduate School students can stay in touch with the human side of bench research is through courses with context, such as the Cancer Colloquium course. This course is the capstone for the Cancer Biology Graduate Program. The course director is clinician Dr. Ned Waller, who treats patients as well as running his own basic research laboratory. Dr. Waller brings oncologists and patients into the classroom to create an interactive and collaborative learning environment. The goal of this course is not only to explain to students how the cancers they research are treated, but also to remind them of why they are performing this research. Katie Barnhart, a third-year Cancer Biology student currently enrolled in Cancer Colloquium, says:

“Courses like Cancer Colloquium allow students to make a connection between what they learn in a lecture setting and apply it to real world applications. We learn about molecular pathways and drug development, but to hear about how these therapies are affecting the lives of cancer patients helps put what we do in the laboratory into perspective. Courses like this help students to take a step back and remember the big picture.” 

Cancer Colloquium is offered every other Spring under the listing IBS 562.

Emory students want their research to make an impact on the lives of patients and their families. M2M, the Certificate Program in Translational Research, and Cancer Colloquium provide pathways for students to reach out from their lab benches and stay in touch with the context of their research. In the age of interdisciplinary research, programs like these will become critical for advancing medicine.

Edited by: Brindar Sandhu