Sickle and Flow: Connecting Science, Music, and Public Health

Chris Lewis

Background: Chris Lewis introduces Park Cannon to deliver the welcome address.

Midground: Lisa Mills manages cheek swabs for Be the Match.

Foreground: Sickle cell warriors Constance and Aaron speak with attendees. 

As the last rays of sunlight faded on World Blood Day, a diverse crowd of scientists, rappers, clinicians and artists gathered at Boulevard and Edgewood to advocate for social justice in healthcare and to celebrate the lives of those affected by sickle cell disease. Steps away from the birthplace of Dr. Martin Luther King, in the heart of the Old Fourth Ward, the crowd was spurred into action by a dedicated group of Emory students and Atlanta community members who seek to leverage their talents in music and the arts to promote basic and translational research for this often-stigmatized and chronically under-funded disease.


Sickle cell disease is an inherited disorder of red blood cells that is most commonly diagnosed in people of color. One in 360 African-Americans is born with sickle cell anemia, suffering intensely painful crises or strokes when sickled red cells become trapped in blood vessels throughout the body. Beyond the preventive-care focus of pediatrics, many adults receive haphazard, symptom-managing care at best, complicated by a healthcare system that often dismisses them as painkiller addicts; life expectancy is 36 years for men and 48 for women. However, cell therapy research at Emory, developed in part by Dr. Edmund Waller, has shown great efficacy, resulting in cures for an increasing number of patients nationwide.


Sickle and Flow was born from the mind of Marika Wieliczko, a PhD candidate in the Chemistry department at the Laney Graduate School, with help from Chris Lewis, an MD/PhD student studying immunology. They were joined in the planning and coordinating efforts by Moji Hassan (MD/PhD: Immunology) and Becky Bartlett (PhD: Chemistry). Billed as a hip hop benefit concert, the event featured performers from the ATL music scene, running the gamut from hip hop and soul to electronic-infused R&B and indie rock. The event was hosted at two venues on Edgewood, the Sound Table and Peaceful Clouds, and featured participatory graffiti art, live DJs, the BBQ King and King of Pops. State Rep. Park Cannon, the recently re-elected 25-year-old Georgia legislator from the Old Fourth Ward, delivered a welcome address in which she stressed the importance of community engagement to effect social justice. The evening also featured special guests from the sickle cell patient advocacy community, including Ms. Aaron Washington and Ms. Constance Benson. Both have been cured of sickle cell by Emory scientists, but they emphasized that, though now sickle-free, they are sickle cell warriors for life.


Atlanta native Dr. Margo Rollins (Children's Healthcare of Atlanta) also spoke about her work in pediatric hematology and sickle cell care, encouraging guests to donate blood or sign up with Be the Match to become stem cell donors. Contrary to popular belief, selecting "Organ Donor" on one's driver's license does not register a person as a stem cell donor. Be the Match was on-site, staffed by ATL-based students, many of them from Emory, including Lisa Mills (Laney PhD: Immunology), Amaka Uzoh and Roberta Gomez (MD program), and Fabrice Bernard (Emory/GT PhD program in Biomedical Engineering). Together these Emory students provided educational materials and collected cheek swabs from potential donors. Twenty-six people signed up to become stem cell donors, partnering with cell therapists worldwide to better treat and cure a variety of hematologic diseases. Be the Match was elated by the turnout, especially because many of the swabs came from donors of ethnic backgrounds that are under-represented in the registry.


It was an inspiring opportunity for scientists, painters, rappers and clinicians to engage one another in uniquely intersectional conversations, united by the spirit of science and art for social justice. All proceeds from the evening, $2,500 in total, were donated to the Sickle Cell Foundation of Georgia to aid its public health outreach activities. What's next for Sickle & Flow? Marika is founding her own 501(c)(3) and is planning to host future events where artists and scientists can come together to BE the CURE. "Caring for patients with chronic disease requires an interdisciplinary team of clinicians, and in sickle transplant, this team must extend to those with African-American blood cells," said Chris Lewis. "Together, we can all help translate bench-to-bedside research into meaningful social change that saves lives."


--Read more at www.Sickleandflow.org

Chris Lewis is an MD/PhD student studying immunology at Emory University's School of Medicine.

Edited by: Anzar Abbas and Brindar Sandhu

Mass Incarceration is a Women's Health Issue

Rebecca Fils-Aime

Imagine this: you wake up one morning and notice that something is wrong. It could be an abnormal pain in your side or the sudden onset of flu symptoms. Naturally, you would go see a doctor or visit a clinic. We, as humans, have a fundamental right to life and therefore the right to seek care that helps extend it. However, there is a growing population in our country who cannot make that claim with confidence – those who reside in jails and prisons.

Mass incarceration strongly affects the health of prisoners, particularly with regard to STIs, HIV/AIDS, and mental health. In fact, prisoners tend to be among the unhealthiest individuals in today's society. Yet many studies focus on men, since incarceration rates are far higher among men than among women (women represent only 9-10% of the correctional population). Nevertheless, the number of women in jails and prisons has increased by over 700% since 1980, a growth rate that has outpaced the male imprisonment rate by more than 50%.

Women have unique health needs that must be addressed, especially women who are imprisoned. With female imprisonment rising at such an alarming rate, mechanisms should be in place to address these needs and improve the overall health of people in correctional facilities. But how exactly does mass incarceration affect women's health?

Many incarcerated women are under the age of 50. Because these women are of reproductive age, they have health needs (including pregnancy) that differ from men's in many ways. In a correctional facility, gynecological exams are not required upon entrance, nor are women required to receive one every year. In fact, most correctional facilities do not have an OB/GYN on site, which leads to inadequate and inconsistent care. As a result, women in prison have a higher risk of undetected breast cancer, ovarian cancer, and other illnesses because screenings such as Pap smears are not regularly administered. Beyond lacking access to women's healthcare, women prisoners often do not even have regular access to feminine hygiene products.

Although many incarcerated women are under the age of 50, female prisoners over 50 have special health care needs as well. Not only are these women likely to be going through menopause, they are at higher risk than men for conditions like osteoporosis and other chronic diseases that require ongoing care and treatment. Women in jails also have higher rates of STIs and HIV due to limited access to services, risky substance use, higher rates of unprotected sex, and high rates of sexual assault. Jails need to provide not just screenings, but treatment options.

At any point in time, 6% to 10% of incarcerated women are pregnant. Unfortunately, many don't even know they're pregnant until they take a pregnancy test upon arrival at the correctional facility. There is a lack of prenatal care, which is necessary for positive birth and maternal outcomes. Pregnant women in jail are more likely to have high-risk, complicated pregnancies because of higher rates of alcohol and drug use; despite this, only 54% of pregnant prisoners received prenatal care in 2008. Pregnant women in prison also have higher levels of psychological stress but usually do not receive appropriate counseling and support services. The biggest health concern for pregnant women is treatment after giving birth. Women in jail do not have anywhere near enough time to recover: on average, they return to the general population a day or two after delivery, while the recommended recovery period is about six weeks. These women are thrown back into prison to manage psychological and physical stress along with postpartum recovery, without the education and services needed to take care of themselves. Women who give birth and return to prison have higher rates of postpartum depression and psychosis for several reasons: underlying mental health disorders (diagnosed or not), emotional trauma, and the stress of being separated from their child. Women also have specific nutritional needs while pregnant, including certain foods as well as iron and folate supplements. But as the prison protests in Michigan and Alabama have shown, prisons are not supplying food that is nutritious for the average person, much less a pregnant woman.

Between 70 and 80% of incarcerated women have abused alcohol and/or drugs. Imprisoned women are more likely than men to have used hard drugs, and 70% of them are considered to have a substance abuse problem. More than 40% of incarcerated women were under the influence of drugs when they committed the crimes that put them in a correctional facility. In addition, risky behaviors like unprotected sex and sharing needles while using drugs put them at much higher risk for HIV. Unfortunately, many women who are released return to drug use shortly after.

Incarcerated women report higher rates of alcohol and drug abuse, STIs, sexual and physical abuse, and mental illness than incarcerated men. Mental illness affects 61-75% of incarcerated women, compared to 44-63% of incarcerated men. The problem is that many of these women should be in mental health facilities, not prison or jail. They will not receive the care they need to truly get better, and as a result they cycle in and out of the prison system when they should have been sent to a mental health facility. Solitary confinement is a punishment used disproportionately on people with mental illness, a group in which incarcerated women are over-represented, and it only exacerbates underlying mental health conditions. Punishments used in correctional facilities, such as solitary confinement, increase the chances of depression, anxiety, hallucinations, paranoia, and suicide. These women are punished for behavior that is going untreated and is beyond their control. About half of women in prison and jail have been physically and/or sexually abused, which can lead to depressive disorders, stress disorders, anxiety disorders, substance abuse, and behavioral disorders.

Many women in prison come from disadvantaged environments and are therefore at higher risk of chronic illness, substance abuse and other undetected health problems. While prison is a punishment for breaking the law, it should not allow these health issues to simply fester and worsen. Prisons should invest in improving the health of the women in their care. It is recommended that the prison system increase the number of cancer, STI and other gynecological screenings given to inmates upon arrival. Prisons need to completely revamp how they administer prenatal and postnatal care in order to reduce adverse birth outcomes and possibly lower rates of complicated pregnancies, psychological stress, postpartum depression and psychosis. Behavioral interventions need to be incorporated with drug counseling; unfortunately, studies show that the drug counseling prisons and jails currently provide is simply not enough. Lastly, women enter correctional facilities with mental illness, and develop mental illness while in jail, at much higher rates than men. Many of these women need to be sent to a mental health facility, not jail; that environment only adds to the psychological stress and, in turn, can make mental illness even worse. All in all, being incarcerated does not make someone any less human. Those who are incarcerated are people too, and it's time the prison healthcare system started treating them as such.

 

References:

The Sentencing Project. (2016) Incarcerated Women and Girls. Retrieved June 01, 2016, from http://www.sentencingproject.org/wp-content/uploads/2016/02/Incarcerated-Women-and-Girls.pdf

Women’s and Children’s Health Policy Center – Johns Hopkins School of Public Health. (2014) Issues Specific to Incarcerated Women. Retrieved June 01, 2016, from http://www.jhsph.edu/research/centers-and-institutes/womens-and-childrens-health-policy-center/publications/prison.pdf

Women's Health Care Physicians. (n.d.). Retrieved June 01, 2016, from http://www.acog.org/Resources-And-Publications/Committee-Opinions/Committee-on-Health-Care-for-Underserved-Women/Reproductive-Health-Care-for-Incarcerated-Women-and-Adolescent-Females

National Commission on Correctional Health Care. (2014) Women’s Health Care in Correctional Settings. Retrieved June 01, 2016, from http://www.ncchc.org/women’s-health-care

Edited by: Brindar Sandhu

Rebecca is a first year student in the Rollins School of Public Health in the Health Policy and Management program. She can be contacted at: rebecca.fils-aime@emory.edu.

The Low Down on Zika

Rebecca Fils-Aime

In the last couple of months, it seems as if the hysteria over Zika virus has increased ten-fold. Individuals in South America, Central America and the Caribbean have been advised by public health agencies to avoid pregnancy due to the potential connection between Zika virus infection and birth defects. The consequence of Zika eliciting the most fear is microcephaly, a condition in which babies are born with abnormally small heads, which has been associated with infection early in pregnancy. Cases have been reported of traveling Americans returning to the States with Zika, and as a result, American officials are preparing to increase mosquito control. Mosquito control is a huge concern, especially in the South, due to the fast-approaching spring and summer seasons. But what exactly is the Zika virus, and where did it come from?

Zika virus was discovered in 1947 in a monkey residing in the Zika forest of Uganda. The virus was first detected in humans in the 1950s, but the first outbreak of Zika in humans to receive international attention occurred in Micronesia in 2007. A few years later, in 2014, there was another outbreak in French Polynesia. The current outbreak began last year in Brazil and has spread to the Caribbean, South America, Central America and Mexico. Everyone in these areas is at risk and should take all necessary precautions to avoid infection. There are two strains of the Zika virus, the African strain and the Asian strain; the virus currently circulating in the Americas is most closely related to the Asian strain.

Daytime-active mosquitoes spread the Zika virus, just as they spread Dengue and Chikungunya. The most common carrier, a mosquito called Aedes aegypti, is found in tropical and subtropical regions around the world. A growing body of evidence shows that Zika can also be transmitted sexually and from mother to child during pregnancy. Like other sexually transmitted infections, sexual transmission of Zika can be prevented by abstinence or safe-sex practices like condom use. Zika virus infection in pregnant women has been linked to miscarriage and microcephaly. Microcephaly can cause seizures, developmental delay, feeding problems, hearing loss and vision problems. In very rare instances, Zika has been linked to severe dehydration, neurological conditions like Guillain-Barré syndrome and even death. More common effects of Zika are fever, rash, joint pain, conjunctivitis, muscle pain and headaches. The illness caused by the Zika virus is generally mild and people rarely die from it; a large proportion of infected people do not even show symptoms.

If you suspect that you or someone you know has Zika, the first step is to see a doctor for a diagnosis. A history of illness will be recorded, the doctor will carry out a physical exam, and a blood test will be ordered. A blood test is the only way to differentiate Zika virus from related illnesses such as Dengue or Chikungunya. Unfortunately, there is no vaccine or cure for the Zika virus as of now, so all current treatment is aimed at reducing symptoms: plenty of rest, pain medication for fever and body aches, and plenty of fluids to prevent dehydration. President Obama and U.S. health officials have requested a large amount of emergency funding – $1.8 billion – in order to combat Zika in the United States and to help protect pregnant women from the terrible effects the virus may have on unborn children.

There are several methods being implemented in attempts to stop the spread of the Zika virus. Mosquito prevention measures, like window and door screens, insect repellent, insecticides, and automatic misting systems, are highly recommended. Even something as small as emptying water from containers outside can help prevent mosquito breeding. The question of how Zika virus spread so quickly may still be unanswered, but here's hoping that a cure is discovered before the summer months bring more mosquitoes – and Zika cases – to the United States.

Edited by: Mallory Ellingson

Rebecca is a first year student in the Rollins School of Public Health in the Health Policy and Management program. She can be contacted at: rebecca.fils-aime@emory.edu.

 

From Friends to Foes: The Story of Hans Bethe and Edward Teller

Mallory Ellingson

Originally published December 4, 2015

Photo by: Paula Tyler


On a summer afternoon, two young couples drove across America enjoying the sun and sights as they took a break from the rigors of academic life. Hans Bethe was falling in love with Rose Ewald while Edward Teller and his wife Mici looked on, perhaps laughing and congratulating themselves on their matchmaking abilities. It was the summer of 1937 and the four friends were taking a cross-country road trip to California. Nearly sixty years later, Hans Bethe would eat breakfast at the Los Alamos Inn. Moments later, as another physicist, Ralph W. Moir, observed, Teller would enter and seat himself merely two tables away from Bethe. Although each could clearly see the other, neither man bothered to acknowledge the other; a notable chill sat between them. In the intervening decades, the two men had gone from the closest of friends to the bitterest of rivals. One would come to be seen almost as a villain, part of the supposed inspiration behind Stanley Kubrick's titular mad scientist in Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, while the other would become a Nobel laureate and respected scientific advisor, maintaining influence in the White House late into his nineties. The tale of the friendship between Edward Teller and Hans Bethe is inextricably intertwined with the history of nuclear physics and science policy in the United States, an epic tale spanning decades and involving numerous presidents and multiple wars.

In some ways, the entire history of nuclear policy and technology can be traced back to one man who had a revolutionary thought while crossing a London street in 1933. Leo Szilard's concept of a nuclear chain reaction, the idea that the splitting of an atom could produce massive amounts of energy under the right conditions, was the first domino in a long line of discoveries and innovations that eventually led to the atomic bomb and nuclear energy. The later creation and use of the atomic bomb set the stage for the Cold War, a period of high tensions between the United States and the Soviet Union that spawned a number of bitter policy debates over the future of nuclear research, nuclear energy, and defense initiatives. Today the US still finds itself embroiled in international nuclear policy: Secretary of State John Kerry recently negotiated with other world powers and clashed with the GOP over the future of restrictions on Iran's nuclear program.

Both Bethe and Teller got their start at Los Alamos working side by side on the infamous Manhattan Project, constructing the atomic bombs that would later end World War II, but their paths diverged as they took two very different stances on nuclear policy. Bethe was a staunch technological skeptic, a position strongly associated with the President's Science Advisory Committee (PSAC) of the 1950s and 1960s. Technological skeptics focused on the boundary between science and technology, encouraging basic research and shedding light on the limits of technological solutions to social and political problems. Teller, on the other hand, was perhaps one of the most renowned technological enthusiasts, supporting technology as the primary solution to most problems. Technological enthusiasm was a prevalent belief in the Cold War era and led to several of the largest technological initiatives in American history, such as the Apollo Project and the hydrogen bomb.

For two scientists so divided later in life, the early days of Hans Bethe and Edward Teller were remarkably similar. Both were born to Jewish families in the early 1900s, Teller in Hungary and Bethe in Germany. Teller began his scientific career in Hungary but quickly moved to Germany, the center of scientific advancement in the first few decades of the 20th century. The two most likely first met at the University of Munich in 1928, where Bethe was studying under the renowned physics professor Arnold Sommerfeld when Teller arrived and began attending Sommerfeld's lectures. However, as Hitler and the Nazi party gained popularity and power in Germany and the surrounding countries in the 1930s, both promising young physicists joined the veritable flood of great minds that left Europe to avoid Nazi persecution. The refugees eventually made their way to the United States, where Teller joined the faculty at George Washington University and Bethe took a position at Cornell University. It was during these years at Cornell that Hans Bethe began his work on the production of energy in stars, which would eventually win him a Nobel Prize. Teller was exploring astrophysics as well, focusing on the nuclear processes of stars and collaborating with Bethe on numerous occasions.

Nuclear physics was a burgeoning, yet still relatively small, field in America at the time, and it is no surprise that Teller and Bethe maintained their friendship from their student days. The two often visited each other at their respective universities and collaborated on multiple projects. Edward Teller and his wife, Mici, played matchmaker for Bethe and Rose Ewald, who married in the late 1930s, and the two couples remained very close for many years. Their paths also crossed with numerous other famous figures in nuclear physics. Upon visiting the University of California, Berkeley at the end of a California road trip with Hans Bethe and Rose Ewald, Teller met J. Robert Oppenheimer, who would eventually lead the Manhattan Project, for the first time. Bethe made multiple trips to Columbia University to visit I.I. Rabi, who would later work on the Manhattan Project and win a Nobel Prize for his work with nuclear magnetic resonance, and out to Chicago to see Arthur Compton, another Nobel laureate in physics, whom he had met at a conference in London. Teller also enjoyed a close friendship with Leo Szilard, the man who first conceived the idea of a nuclear chain reaction, whom he had met while working in Germany. In 1938, Edward and Mici Teller planned a trip back to Hungary to see family; however, they were forced to postpone the trip as German forces arrived on the borders of Hungary. Neither would see their families again for nearly twenty years, nor return to Hungary for half a century.

At the very end of the 1930s, the dominoes began to fall, and Edward Teller, by virtue of his close friendship with Leo Szilard, found himself at the center of what we now mark as the beginnings of the atomic bomb. Teller was recruited to Columbia to work as an intermediary between Leo Szilard and Enrico Fermi as the two worked on nuclear fission and the production of a nuclear chain reaction using uranium. All three men, and many of their colleagues, recognized the potential of what they were doing and the need to alert the government. They turned to Albert Einstein, who wrote a famous letter to President Franklin D. Roosevelt. In 1940, prior to the official entry of the United States into World War II, the US government began to look into the possibility of an atomic bomb and recruited a variety of physicists to assess the issue. Bethe recalled that when he first heard the rumors in 1940, he was not yet convinced that a bomb was possible, unlike his friend Teller. His mind was changed after he visited Enrico Fermi's laboratory in 1942 and witnessed a nuclear chain reaction, although he remained uneasy about working on the project. After hearing President Roosevelt declare at the Eighth Pan-American Scientific Conference that it was the responsibility of scientists to use all of the knowledge at their disposal to protect America, Edward Teller saw the atomic bomb as his only path.

In the summer of 1942, J. Robert Oppenheimer began to recruit scientists from across the country to join him at Berkeley to begin work on the design of an atomic bomb. Despite his initial misgivings, Bethe was desperate to contribute to the fall of the Nazi party, so he and his wife Rose piled into a car, picked up their best friends, Edward and Mici Teller, and made their way to California. Shortly after, Oppenheimer recruited scientists to Los Alamos and offered Hans Bethe the position of Head of Theoretical Physics. Teller was asked to join as a group leader in the same department, but he had wanted the post for himself. He felt slighted and chafed under the leadership of his old friend. This marked the beginning of the end of the friendship between Hans Bethe and Edward Teller. Their inability to work together stemmed from this perceived slight but was also rooted in very different methodologies. In his Memoirs, Teller described Bethe's methodology as detail-oriented, and Enrico Fermi said that Bethe worked on "little bricks." Teller, on the other hand, considered himself more of a bricklayer: he liked to look at the larger picture rather than focus on the little details. So when Bethe asked Teller to work on the minutiae of the implosion scheme of the bomb, Teller declined, choosing instead to focus on the possibility of a thermonuclear, or hydrogen, bomb. Bethe did not take this well, and Oppenheimer was forced to separate the two former friends and move Teller's team to work under Fermi.

After working non-stop on the atomic bomb, many of the scientists on the project – both at Los Alamos and at other facilities – were forced to pause and think about their creation following the surrender of Germany in the spring of 1945. Beating Germany to the bomb had been the primary motive behind many of the scientists' participation, Bethe included. With Germany out of the picture, a new question had to be asked: do we use the bomb on Japan? Leo Szilard worked very closely with James Franck in writing a petition, later known as the Franck Report, to President Truman, warning of the arms race that was sure to follow if the United States decided to use the bomb. Szilard sent the petition to Teller, asking him to sign it and share it with the rest of the Los Alamos scientists. Unsure, Teller consulted Oppenheimer, who – unbeknownst to Teller – was sitting on a military committee convened to answer the very same question. Oppenheimer told Teller that the report was based on incomplete information and that the decision needed to be left in the hands of the politicians and the military. Bethe felt this way as well, appreciating Franck and Szilard's moral objections and foresight but not wanting to protest without fully understanding the situation. The Franck Report was published and signed by 68 scientists; however, the signatories did not include Teller, Bethe, or any other Los Alamos scientists, something that Teller deeply regretted and held against Oppenheimer for the rest of his life. Testing of the atomic bomb proceeded in July of 1945, and once it was complete, the decision to use the bomb was made. When asked about this on multiple occasions later in his life, Teller firmly replied that the US government was wrong to use the bomb without giving Hiroshima proper warning. Bethe believed that the destruction, however horrible, was necessary to make the Emperor surrender and to save the lives of the nearly one million Americans it was estimated would have died if the United States had been forced to invade Japan, as he expressed in an interview with the Cornell Chronicle in 1994.

Oppenheimer resigned his directorship of Los Alamos shortly after the successful testing of the atomic bomb. The new director, a reserve naval officer by the name of Norris Bradbury, wanted to keep Teller on in the theoretical division; however, he would not meet Teller's condition that the hydrogen bomb, Teller's pet project at Los Alamos, be made a priority. Teller then made his way to Chicago, where he joined Fermi at the Institute for Nuclear Studies. Bethe was asked to stay on as well; however, he chose to return to teaching at Cornell. It was during these immediate postwar years that Teller became involved in politics, which would later become one of the largest parts of his life. For a few years it seemed that life would return to normal for the two scientists; however, everything changed again when the Soviet Union conducted its first successful atomic bomb test in 1949. Teller began to lobby the Atomic Energy Commission (AEC) for the construction of the hydrogen bomb and compiled a list of scientists he wanted to join him, including Hans Bethe. Bethe initially refused, as he was morally opposed to any continued work on nuclear weapons. However, he later joined the effort at Los Alamos when he realized that he might be a better advocate for disarmament if he were at the center of the nuclear weapons world.

Teller had long been vehemently anti-communist, even prior to his involvement with the Los Alamos project. In addition, he had multiple personal conflicts with Oppenheimer, including feeling betrayed over the Franck Report. It is therefore not so surprising that when J. Robert Oppenheimer was falsely accused of disloyalty and brought before a security clearance hearing in 1954, Teller was the only major scientist who agreed to testify against him. In his youth, some of Oppenheimer's family members and friends had flirted with communism. This association would haunt him through his years on the Manhattan Project and formed part of the basis for the hearing in the post-war era of anti-communism led by Senator McCarthy, among others. All of the other scientists called to testify against Oppenheimer refused and chose instead to stand by their colleague. Although Teller did not directly accuse Oppenheimer of espionage or of being a communist, during his cross-examination he expressed that he did not trust the nation's secrets in Oppenheimer's hands. This testimony permanently damaged Teller's relationship with the majority of his scientific peers. As Bethe later observed, it was no surprise that scientists would blame Oppenheimer's downfall on the only witness they knew well. However, it also allowed Teller to step into Oppenheimer's role as a scientific statesman. He became very politically vocal and even appeared on the cover of Time Magazine's Men of the Year issue in 1960, which acknowledged the accomplishments of the scientists of the past decade. Around this same time, Bethe was entering the political sphere as well; however, he generally found himself politically at odds with his ex-friend as the two established the camps of technological skepticism and technological enthusiasm, respectively. Bethe played a key role in the creation of, and subsequently served on, the President's Science Advisory Committee (PSAC), an organization that often served as a stronghold of technological skeptics.

Prior to the establishment of the President's Science Advisory Committee under President Eisenhower, presidential science advising was inconsistent and often came from the military. With the creation of PSAC, Eisenhower became the first President to acknowledge the necessity of scientific authorities in the White House as developments in physics, technology and the biomedical sciences began to intersect more with the role of Commander in Chief. President Nixon abolished PSAC in 1972, and scientific advisors were once again relegated to the sidelines until President George W. Bush established the President's Council of Advisors on Science and Technology (PCAST) in 2001.

The successful launch of the satellite Sputnik by the Soviet Union in 1957 sparked the first major clash between the technological skepticism espoused by PSAC and the technological enthusiasm advocated by Teller and other scientists (collectively known as the "Teller-Lawrence Group"). Sputnik came around the same time that the Soviet Union and the United States were in the midst of discussing a nuclear test ban. Teller strongly believed that Sputnik represented a technological defeat for the United States, and that the only way for the country to regain its prestige was through continued nuclear research and testing; he gained the support of the military-industrial complex with these convictions. Bethe, on the other hand, stood firmly with PSAC in support of a complete nuclear test ban. He was called upon to lead an interagency panel on the test ban in 1958 because of his experience at Los Alamos, his position on PSAC, and his longtime seat on the Air Force panel in charge of monitoring Soviet nuclear tests. Although the Bethe Panel did not come to a conclusion as to whether a nuclear test ban would be a detriment to the United States, its conclusions clearly reflected Bethe's firm belief in disarmament as well as the limits of the realms in which scientists can provide advice. When negotiations for a test ban with the Soviet Union stalled after a US spy plane was shot down over Soviet territory, Bethe expressed his disappointment in Eisenhower publicly in an article in the Atlantic Monthly in 1960. This critique, albeit gentle, of Eisenhower's nuclear policy enraged Teller so much that he challenged Bethe to a televised debate, which Bethe refused, believing that nothing good could come of such a spectacle. It was clear at this point that the two former friends had become rivals – fierce opponents, each strongly advocating for a different side of the same issue.

Both men remained influential in nuclear policy throughout the changing political climates of the next few decades while continuing to make meaningful contributions to the field of nuclear physics. Bethe's work won him the Fermi Award from the Kennedy administration in 1961 and a Nobel Prize in 1967 for his work on the production of energy in stars. Bethe typically found himself politically aligned with more liberal candidates, arguing for nuclear disarmament and the Limited Test Ban Treaty that was enacted in 1963. Teller, on the other hand, stood staunchly with the conservatives and was very active in several presidential campaigns, including Richard Nixon's. In fact, he became a close scientific advisor to Nixon and was almost given membership of PSAC during Nixon's term, but was told by the then-chairman of PSAC, Edward David Jr., that putting Teller on the still very Oppenheimer-loyal PSAC would destroy the committee. However, despite his lack of an official position, Teller still had the president's ear on scientific and nuclear policy matters, including the debate over the implementation of an anti-ballistic missile system.

The idea behind the anti-ballistic missile system was that it would be a defensive measure, consisting of a number of thermonuclear missiles oriented toward China, that would serve to intercept any nuclear attack the Chinese might launch against the United States. Bethe strongly believed that the implementation of such a system would only serve to increase tensions between the United States and various communist nations and, moreover, would do nothing to deter an attack from China, as he expressed in an article in Scientific American in 1968. Teller, unsurprisingly, strongly advocated that the United States needed such mechanisms in place in order to properly defend itself. This debate drove a deep wedge between the Nixon administration and its science advisors and perhaps, along with other cultural factors, led to a decline in the influence of science advising in the White House, including the disbandment of PSAC. Bethe and Teller clashed again during the Reagan administration over the Strategic Defense Initiative; however, for the most part they spent the last couple of decades of their lives in relative peace.

Hans Bethe and Edward Teller both stamped their legacies across nuclear physics and scientific advising, albeit in very different manners. Bethe's staunch stance against nuclear weapons development and use led him to be revered among his peers, and he maintained strong friendships with his fellow scientists throughout his life. Teller, on the other hand, was vilified and vituperated by many of his colleagues for his pursuit of thermonuclear weapons. He is remembered as the father of the hydrogen bomb and, despite his many contributions to the field of nuclear physics, was overlooked for many prestigious awards. Looking back over their lives, it seems the friendship of Hans Bethe and Edward Teller was doomed from the beginning. Although each respected the mind and scientific accomplishments of the other, they could never come to any compromise on the issue of nuclear policy. At the cost of their friendship, both men made extremely meaningful contributions to American history and played an important part in keeping the United States safe during the Cold War.

Edited by: Brindar Sandhu

Mallory is a student in the Rollins School of Public Health and can be reached at mallory.ellingson@emory.edu.

Blue on the Big Red - A History of Water and Life on Mars

Tej Mehta

Originally published November 26, 2015

Image of Mars as taken by the Hubble Space Telescope


Are we alone in the universe? Humanity has been searching for evidence of life elsewhere in the cosmos for hundreds of years, and recently the key ingredient for life on Earth, liquid water, was found on our neighboring planet Mars. In September of 2015, the National Aeronautics and Space Administration (NASA) confirmed evidence of liquid, flowing water on Mars. The lead author of the new report, Lujendra Ojha, noted the presence of hydrated minerals in dark streaks on some Martian slopes, streaks that appear to flow and change over time. Given the growing public interest in the Red Planet, evidenced in part by the success of modern space dramas like The Martian, we wanted to highlight the importance of NASA's new discovery in the context of previous water-related discoveries on Mars. While few people expect to find little green men, the discovery of even single-celled organisms on the Red Planet would finally answer one of humanity's greatest questions.

Scientists have speculated about the possibility of water on Mars since the early days of telescopic observation, when William Herschel, the famous German-born British astronomer, recorded his observations of the Red Planet in the 1780s, noting striking similarities between Mars and Earth. Like other scientists of the time, Herschel believed that Mars's polar ice caps were evidence of water, but he also believed that large, dark spots on Mars's surface were oceans and clouds. Herschel went so far as to postulate that seasonal changes in the planet's appearance were caused by Martian inhabitants growing vegetation.

Herschel's popularization of the theory of Martian residents likely influenced one of the greatest Mars sensationalists, Percival Lowell. Taken in by a popular scientific idea of the 1890s, Lowell constructed the Lowell Observatory in Flagstaff, Arizona to observe and make detailed drawings of the planet's surface, believing that the "inhabitants" of Mars were building visible canals on the Martian surface to direct water flow. His ideas were rejected by later astronomers, though the Lowell Observatory was ultimately used by Clyde Tombaugh to discover the dwarf planet Pluto, and Lowell himself generated tremendous public enthusiasm for the hunt for life on the Red Planet.

Despite ongoing public and scientific interest, water-related discoveries on Mars did not truly accumulate until modern analytical techniques for astronomical observation were developed. In the 1930s, astronomers began to observe Mars using spectroscopy, a technique that works by splitting light into different wavelengths and analyzing those wavelengths separately. Early experiments by Walter Adams and Theodore Dunham showed effectively no water vapor or oxygen in the Martian atmosphere; later, more refined experiments determined the amount of oxygen to be around one percent of Earth's, and water vapor was eventually detected in small quantities, though not until 1963. Eventually, after years of contention, a number of observations determined that Mars has two polar water-ice caps: one with a perennial dry-ice coating and one that acquires a dry-ice coating during the Martian winter.

Modern spacecraft-based Mars exploration began with the Mariner 4 flyby in 1965. The spacecraft's images and measurements showed a very thin Martian atmosphere and a pockmarked surface, indicative of many asteroid collisions and little to no geologic activity. Such a lack of geologic activity suggested a lack of flowing water on the Martian surface, and a thin atmosphere implied that any liquid water on the surface would either quickly boil away or freeze. These observations led many to question the possibility of any significant water on the surface, and the chance of finding life on Mars seemed unlikely to much of the scientific community.

The view of Mars as a "dead" planet hampered further exploration; however, in 1971, six years after the Mariner 4 mission, Mariner 9 became the next probe to reveal significant information about past water on Mars. This discovery rekindled scientists' hopes of finding water and life on Mars. While previous spacecraft had only conducted flybys, Mariner 9 was the first to enter the orbit of another planet and remain there for its entire mission, which proved to be a much more effective method of planetary observation. Mariner 9 revealed not only the presence of riverbeds and canyons, but also weather fronts, fog, and other past and present indicators of liquid water on Mars.

The success of the Mariner program, and Mariner 9 specifically, influenced the design of the subsequent Viking missions to Mars. Between 1976 and 1982, the probes Viking 1 and Viking 2 provided a wealth of information about Mars and included the first landers to operate successfully on the Martian surface. Chemical analysis of the soil by the landers indicated the possible presence of organic materials and water in the surface, though it was noted that the presence of strong UV light and perchlorate in the soil would make it extremely difficult for life to exist in the Martian topsoil. Combined data from the Viking orbiters and landers provided a wealth of evidence for water-based erosion on the Martian surface. Strong evidence was found for past river valleys, natural dams, streams, rainfall, and even mud formed when select locations were heated by meteor strikes or volcanism.

The Viking program was retired on November 13, 1982, and the information it gathered remains in use to this day. Since then, a variety of data about water on Mars has been collected by multiple probes and rovers. In the late 1990s, the Mars Global Surveyor discovered evidence of past lava flows, implying geologic activity and warming. In 1997, Pathfinder found evidence of wet soil. Between 2002 and 2008, Mars Odyssey, Phoenix, and Mars Express each found evidence of past water distribution, while in 2004, Opportunity found evidence of past oceans and coastlines. Some of the latest, most influential evidence for water on Mars comes from the Mars Reconnaissance Orbiter (MRO): the September 2015 announcement by NASA came in light of the flowing, hydrated minerals detected by MRO.

Warm seasonal flows in Newton Crater on Mars as captured by Mars Reconnaissance Orbiter


Using spectroscopic techniques similar to those Walter Adams and Theodore Dunham had used nearly 80 years earlier, MRO was able to detect these hydrated minerals on "recurring slope lineae" when local temperatures were above -10 degrees Celsius. While flowing water was previously thought to be impossible given current conditions on the Martian surface, the extreme saltiness of these hydrated minerals could allow liquid water to exist, much like how salt spread on roads can keep water liquid below 0 degrees Celsius. Of course, the big question on everyone's minds is now "Can we find evidence of life on these slopes?", to which the answer is "Maybe". The hydrated minerals are not present on the surface year-round, and evidence for ground water beneath the slopes is still unclear. Additionally, the extreme saltiness of these minerals, and of any water around them, is toxic to all but the most resistant forms of known life. Future expeditions will attempt to characterize more recurring slope lineae as well as other potential sources of present-day water on Mars, and the Curiosity rover is currently searching for fossilized bacteria on the Martian surface when it's not too busy taking selfies.
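
To put a rough number on that road-salt analogy, the textbook freezing-point depression relation, evaluated here for an ordinary saturated table-salt brine rather than for the specific salts detected on Mars, shows how dissolved salts keep water liquid well below 0 degrees Celsius:

\[
\Delta T_f = i\,K_f\,m \approx 2 \times 1.86\ ^{\circ}\mathrm{C\,kg/mol} \times 6.1\ \mathrm{mol/kg} \approx 23\ ^{\circ}\mathrm{C}
\]

In practice a saturated sodium chloride brine stays liquid down to about -21 degrees Celsius, and more exotic salts can push the freezing point lower still, which is consistent with hydrated minerals appearing on slopes where temperatures reach only about -10 degrees Celsius.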

Self-portrait of Curiosity rover taken on November 1, 2012

Faced with the current worldwide climate of economic instability and crisis, the value of Mars research and space exploration in general is often questioned, as with the Obama administration's 20% budget reduction for NASA's planetary science division – but the importance of Mars research and space exploration cannot be overstated. From an economic standpoint, the discovery of water on Mars would make colonization and resource-extraction efforts profoundly more plausible and could be a long-term answer to humanity's population explosion and resource mismanagement. Not only is Mars exploration economically enticing, but humanity has been fascinated by the Red Planet since ancient times, and the prospect of life on Mars feeds our curiosity about the universe and could help tell us whether or not Earth is the only planet capable of harboring life. From William Herschel with his telescope to NASA's Jet Propulsion Lab, we have spent hundreds of years studying Mars and launched over 50 missions to investigate the planet. The discovery of water on Mars does not simply bring closure to decades of speculation; it is the beginning of new possibilities and a chance to challenge our current solitude in the universe.

Edited by: Marika Wieliczko

Tej Mehta is a graduate student in the Rollins School of Public Health and can be contacted at tej.ishaan.mehta@emory.edu.

Protons for Patients at Emory - Cancer Treatment with a Big Cost?

Tej Mehta

Originally published October 14, 2015

Photo by: Paula Tyler

In May of 2013, Emory Healthcare and the Winship Cancer Institute (in partnership with a private funding entity) began construction of the Emory Proton Therapy Center. The $200+ million, 107,000 square-foot facility, which is slated to open in January 2017, will be the first of its kind in Georgia. Based on current construction estimates, the Emory facility will be the 17th operating center nationwide, and will prominently feature new “pencil beam scanning” technology, a major development in the field of proton therapy. The technique itself holds significant promise for many patients in the Emory Healthcare system, but is the potential benefit of proton therapy worth its additional price? The Emory facility has partnered with Advanced Particle Therapy LLC, a proton therapy developer, to help fund the construction and operational expenses.

In 1946, Robert R. Wilson, at the Harvard Cyclotron Laboratory, first proposed protons as a therapeutic tool, and the first treatment in the United States occurred in 1954 at the Berkeley Radiation Laboratory. Due to major engineering limitations, proton therapy continued to be used sparingly for many years, predominantly for research purposes. In 1989, the first hospital-based center was developed at the Clatterbridge Centre for Oncology in the UK, followed by the first US hospital-based center in 1990 at Loma Linda University Medical Center. Since that time, proton therapy centers have been developed at a slow but steady pace, currently numbering 15 operating facilities with an additional 20+ centers under construction. This implies a current access rate of roughly 1 facility for every 20 million US citizens. Globally, there is a dramatic upsurge of interest in proton therapy, with nations like Norway developing widespread access by commencing construction on approximately 1 facility for every 1 million inhabitants.

Like other forms of radiotherapy, proton therapy kills cancer cells by causing direct or indirect DNA strand breaks. In modern external beam radiotherapy, tumor specificity is achieved by physically shaping the beam to minimize the exposure of normal tissue to unnecessary radiation; however, because high-energy X-rays continue to travel through tissue, some radiation is always deposited in the normal tissues along the exit path of these beams. Protons, like high-energy X-rays, can also be shaped to match the shape of the tumor, but because of their unique physical property of losing energy while traversing tissue and then depositing most of their energy at a pre-defined depth (determined by the initial energy of the beam), there is close to zero exit dose. The result is that while other forms of radiation therapy deliver radiation to healthy tissue beyond the tumor, proton therapy delivers little to no excess radiation to healthy tissue downstream of the tumor. This special ability of protons is due to a phenomenon called the "Bragg Peak," the precipitous burst of energy deposition that occurs just before the protons stop. Modern proton therapy clinics, such as the Emory facility, also use the previously mentioned technique, pencil-beam scanning, which is analogous to 3-D printing. Just as 3-D printing repeatedly applies thin layers of material to build a larger 3-D shape, pencil-beam scanning applies thin layers of protons to a tumor until the entire tumor has been treated with radiation. This capacity not only eliminates radiation damage past the stopping point of the beam, but also reduces radiation damage to the normal tissues surrounding the tumor. These advantages of proton therapy have been lauded by clinicians, and the therapy is generally recognized as particularly useful in patients with a likelihood of long-term survivorship, such as children and young adults, who are typically at greater risk of developing organ dysfunction and secondary cancers as a result of radiation to their normal tissues. Additionally, in situations where an adequate dose to control the tumor simply cannot be safely delivered with conventional modalities, proton therapy is often the best recourse.
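
As a rough illustration of how beam energy sets treatment depth, the Bragg-Kleeman rule, a standard textbook approximation for protons in water rather than anything specific to Emory's planning system, relates the range R to the initial beam energy E:

\[
R \approx \alpha E^{p} \approx 0.0022\ \mathrm{cm \cdot MeV^{-1.77}} \times (150\ \mathrm{MeV})^{1.77} \approx 16\ \mathrm{cm}
\]

In other words, a roughly 150 MeV proton beam stops about 16 centimeters deep in water-like tissue, and pencil-beam scanning stacks many such energies so that the superimposed Bragg Peaks cover the tumor from its deepest layer to its shallowest.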

Despite these apparent advantages, proton therapy has received significant criticism, both with regard to cost and to the level of evidence documenting its true effectiveness. Currently, the primary concern about proton therapy is whether its obvious dosimetric advantages translate into "cost-effective" clinical advantages. The upfront construction cost of a multi-room proton therapy facility is well over $100 million, and various analyses have projected that treatment with protons is twice as expensive as X-ray radiotherapy. However, other studies have found proton therapy to be reasonably priced for many cases, particularly given the potential costs of treating adverse effects caused by standard radiotherapy. For example, one study published in the journal Cancer in 2005 found the average cost of treatment to be $5,622 for conventional radiation therapy and $13,552 for proton therapy. However, the same study found the cost of treating adverse events to be $44,905 for conventional radiotherapy and $5,613 for proton therapy, suggesting that while the upfront cost of proton therapy may be higher, the total cost can be significantly lower.
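
Summing the two figures reported in that study, and treating total cost simply as treatment plus adverse-event care (a simplification of a full cost-effectiveness analysis), gives:

\[
\$5{,}622 + \$44{,}905 = \$50{,}527\ \ \text{(conventional)} \qquad \$13{,}552 + \$5{,}613 = \$19{,}165\ \ \text{(proton)}
\]

Under that simple accounting, the proton course totals less than half the cost of the conventional one, which is the sense in which the overall cost is reduced despite the higher upfront price.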

The other primary criticism of proton therapy is whether it is truly as effective as its proponents claim. Thus far, there have been few controlled, randomized clinical trials demonstrating improved survival or quality of life with proton therapy over other forms of external beam radiotherapy. One such trial randomized patients with ocular melanoma to particle therapy with helium ions (similar to proton therapy) versus a localized form of radiation known as plaque brachytherapy, and reported superior clinical results for patients on the helium therapy arm. Several non-randomized trials have also supported the clinical benefits of proton therapy. Detractors point to the lack of randomized clinical trial data as cause for concern, while proponents argue that no clinician has true equipoise to conduct such a randomized trial, stating that to knowingly withhold proton therapy from patients who could benefit from the treatment would be unethical. Other critics raise concerns over the potential inaccuracy of the exact placement of the Bragg Peak within the tumor: because of the sudden deposition of energy at the Bragg Peak, small errors in measurement or slight movements of the patient could shift the dose. Most modern proton systems reduce this error through a series of technical refinements, by performing what is known as "robustness evaluation," and by accounting for the remaining uncertainty through a process known as "robustness optimization."

Emory Healthcare and its affiliates have weighed the costs and benefits of proton therapy and elected to build the facility. The construction of a proton therapy center in Atlanta demonstrates Emory's commitment to patient care and treatment options and aims to improve the survival and quality of life of many patients. As the first proton therapy center in Georgia and one of only a handful of treatment centers in the country, Emory has again distinguished itself as a nationwide leader in healthcare.

Edited by: Carson Powers

Tej Mehta is a student in the Rollins School of Public Health and can be contacted at tej.ishaan.mehta@emory.edu.

Letter from the Editor

Brindar Sandhu

Originally published September 22, 2015

Hi there! Welcome to Inscripto, the popular science website that hosts content created, written, and produced by Emory graduate students! My name is Brindar and I am a fifth year graduate student in the Genetics & Molecular Biology program. Inscripto is a part of Emory Sci Comm, a graduate science communications group that aims to relay complex scientific topics to the general public.

Our website has been live for 5 months now, and we are constantly evolving. Starting this academic year, we hope to bring you other forms of media in addition to articles. We will venture into podcast-land by producing interviews with top Emory scientists and graduate students, so be on the lookout for those!

As scientists working in a lab all day, it’s easy for us to disconnect ourselves from the world and forget that what we do is important for the public to know. At Inscripto, we hope to bridge the general scientific knowledge gap between scientists and the public. We don’t want our politicians thinking vaccines are bad for us, do we? We want those politicians to vote in favor of funding our research, not thinking we’re wasting federal money because there’s no cure for cancer (spoiler alert: there is no cure-all).

In order to inform the misinformed or even the uninformed, we have to effectively communicate what we do. We welcome students to pitch an idea for any type of media we can put on a website, on topics ranging from current issues in their field to topics of public interest to their own research. So pitch an idea already!

Baby Crazy: The Mind Control of Motherhood

Amielle Moreno

Originally published August 13, 2015

Photo by: Jadiel Wasson

Any wilderness expert will tell you the most dangerous animal to see in the wild is a baby bear. Accidentally stumbling between a mother bear and her cubs is a sure way to get mauled. And none of us would be here today if it weren’t for the selflessness of our ancestors, who sometimes put the survival of their offspring before their own.

So apparently, the most basic drive for self-preservation can be trumped by babies. While we can’t live forever, we can pass on our genes. Thus, what Richard Dawkins termed “selfish genes” have created animals built for their own survival, and that drive for self-preservation can be redirected to reproduction and then parental care.

The Power of Hormones

I’m sorry to be the one to tell you this, but the areas of your brain responsible for decision-making can be overpowered by hormone-driven signals from deeper brain regions. During development, hormones influence the structure of our bodies, including our brains. During puberty, the same hormones can act again on these existing systems to make you feel awkward during gym class. But perhaps the largest natural shift in hormone concentrations is during pregnancy.

The milieu of hormones pregnant women experience can make long-lasting structural changes to neurons in the brain. The neurons in deep brain regions responsible for maternal behavior can grow in size when exposed to pregnancy hormones such as estrogens. So, at the same time motherhood arrives, the brain is undergoing significant changes. The growing neurons start to communicate with areas of the brain that make the signaling molecule dopamine.

Dopamine rules what neuroscientists call “the reward pathway,” and it’s the reason you like anything… ever… in your entire life. Your body releases this magical molecule when you perform activities that will keep you and your genes alive and spreading. Dopamine is released while consuming food or having sex, and because your genes want to be passed on, the drive for parental care relies on this reward pathway too. Large doses of estrogen, such as those occurring during the late stages of pregnancy and labor, trigger the release of dopamine, stimulating the reward system. This makes new mothers primed and ready to love that 7-pound, 5-ounce screaming, floppy pile of responsibility you named “Aden.”

Hormones lead to new and permanent changes in brain circuitry, which is how areas of the brain interact and respond to one another’s activity. Perhaps surprisingly, animals that haven’t been around babies are not initially fond of infants. Virgin, pup-inexperienced female mice have a natural avoidance of infant stimuli, which is not completely unreasonable. Think about what a baby would seem like if you didn’t know what it was: they cry for seemingly no reason, smell, and demand a lot of time, money and attention. In mice, researchers have characterized a natural avoidance and defensive response in animals that are new to infant care, and there are defined circuits in the brain responsible for it. The hormones of pregnancy silence this circuit, allowing the neural circuits responsible for maternal responses to become more active.

Changes in Behavior

You might have heard from your friendly neighborhood neuroscientist that you don’t have free will. Let me reassure you that yes, you’re a slave to the power of babies. The immediate changes in a mom’s behavior after childbirth suggest major changes are occurring in brain circuitry. This new baby addiction, or “sensitization,” is caused by changes in the reward system’s dopamine release. Cocaine and other addictive drugs trigger the reward system and release dopamine throughout the brain. Like a drug, the allure of babies is so strong that, when given the choice, rats with maternal experience prefer to press a lever that delivers infant pups over one that delivers cocaine. Using this knowledge, let’s take care of two societal problems at once: “Orphanages: The New Methadone Clinic!”

Sensitization causes mothers to act differently. Mother rodents show increases in risk-taking behavior: for example, mother mice on an elevated maze with enclosed and open arms will spend more time exploring the potentially dangerous open arms than virgin mice do. On the plus side, new mothers display improvements in memory. In a maze, mother rats were better than virgins at remembering where the food was and were faster to retrieve it. The researchers concluded that improved foraging memory increases the chance of survival for a mother’s pups. If this held true in humans, the concept of “baby brain” might be unfounded. However, funding cuts have halted the construction of the human-sized maze stocked with baby supplies.

Even abstaining from motherhood won’t save you from becoming a slave to baby overlords. Mere exposure to infants can activate changes in the brain regions responsible for maternal behavior and start the process of sensitization in rodents. The process does take more exposure time than in natural mothers, without the surge of hormones to speed things along. This suggests that women become “baby crazy” through exposure to infant stimuli. While we all might inherently be ambivalent about or avoidant of infants, with enough exposure, babies become conditioned, highly rewarding stimuli. Do you want kids? Then it might already be too late.

Not being female won’t save you either. A study of brain activity using fMRI found that when fathers are shown images of their children, they display brain activity similar to that of mothers. Recent research out of Emory made headlines when it found that this increase in activity in the reward pathway was inversely correlated with testes size and blood testosterone concentration. The conclusion: more parental care equals less testosterone and smaller balls, fellas!

Against Logic

The combination of a higher consciousness and a desire to reproduce means that, unlike other animals, humans are presented with the question of whether we should reproduce. However, the reason they instruct you on airplanes to put the air mask on yourself before you assist young children is that the human drive to protect our genes, I mean, children, sometimes overrides logic. There’s also an illogical drive to have our own children. On a planet with millions of orphaned children, you would assume that the baby-loving masses (and cocaine addicts) would decrease the supply of foster kids overnight. However, our biological nature has a way of convincing humans that we don’t just want a child, but we want our child.

In a modern environment where motherhood is a choice, it’s illogical for anyone to be pressured to give birth to an eighteen-year commitment. Because not all people (or laboratory animals) naturally become sensitized to infants, it might be better for everyone if people who don’t want children aren’t pressured to have them. By simply understanding the literally mind-altering process of parenthood, individuals can make decisions that benefit everyone, including our baby overlords.

Edited by: Bethany Wilson

Recap: The Second Annual Atlanta Science Festival

Anzar Abbas

Originally published April 15, 2015

The sun’s barely out in downtown Atlanta on this cold Saturday morning, but David Nicholson, a graduate student studying Neuroscience at Emory University, is carrying a box labeled ‘Teaching Brains’ to one of what seems like a sea of white tents set up in Centennial Olympic Park. The banner outside his stall reads, ‘Hey, You Touched My Brain!’

“What’s in the box?”

“Human Brains! We’re going to give people a chance to see what a brain actually looks like and try to teach them a little bit about how it works.” David is one of hundreds of scientists, engineers, and enthusiasts setting up their stalls to prepare for the Exploration Expo, which is the last hurrah hosted by the second annual week-long Atlanta Science Festival.

In its inaugural year, the Atlanta Science Festival brought together 30,000 people into Atlanta’s streets, classrooms, auditoriums, concert halls, squares, breweries, parks – you name it – to teach the public about science.

“Why do you think what you’re doing is important?” I ask David as he puts on gloves to show me the brains. “Put quite simply, people want to do stuff. They don’t want to just be told about it. What we’re doing here today is hands-on work, and that makes a greater impact on people. You tell me, how many people have held an actual human brain in their hands?”

And he was exactly right. Within a few hours, the Exploration Expo was bustling with families, students, tourists, and enthusiasts, all entertained by over a hundred interactive exhibits, hands-on experiments, mind-blowing demos and a full line-up of science-themed performances. But the Expo was just the Festival’s way of ending with a bang.

Within a week, the Atlanta Science Festival hosted over 140 events celebrating science and technology, attended by thousands of people. The events approached hands-on science in a plethora of ways, ranging from talks on the science of beer to robot demonstrations, and were geared toward people of all ages and interests.

I got a chance to speak with Jordan Rose, one of the founders and directors of the Festival. Though he claimed he was exhausted, you couldn’t have guessed it. Springing with energy, he described how this year’s Atlanta Science Festival was different.

“The events are just bigger, better, and more collaborative. And this is only the second year we’ve been hosting this.”

I asked him what importance the Festival holds in the greater mission of communicating science to the public.

“Nobody gets to see science outside of the classroom or the lab. Science always happens behind closed doors. This is a way to get scientists and engineers outside those walls and into the community, giving them opportunities to interface with the public so that people can get excited about local opportunities for educational and scientific advancement. Science is usually lectures, talks, and panels, but that’s not what this festival is about. This is science in your face.”

The effect that the festival had on the community was evident when I spoke with Lula Huber, an 8-year-old attending the Expo, about her experience.

Having learned about the effects of pollution, she told me she wanted to organize a club in school to pick up litter on the streets so that she could contribute towards making earth a cleaner place.

It didn’t seem that science class next week was going to be as boring for her anymore.

Explained At Last: Why Alkali Metals Explode in Water

Benjamin Yin

Originally published April 14, 2015

Photo by: Kristen Thomas and Jadiel Wasson

In the pilot episode of the iconic ’80s TV show MacGyver, the titular character made his debut as a resourceful secret agent by building a sodium bomb to take down a wall and rescue a couple of scientists. For MacGyver, with his extensive knowledge of the physical sciences, the process was simple: he immersed pure sodium metal in a bottle of water, and the explosive reaction between sodium and water made for great entertainment for viewers of all ages.

Today, this little display of pyrotechnic shenanigans is often seen in high school chemistry demos. Alternatively, one can find dozens of internet videos documenting this violent reaction between alkali metals like sodium or potassium and water, often accompanied by exclamations and whistles of joy. It’s no surprise that some of these videos have gone viral. This amusing diversion of chucking alkali metals into water to watch them explode has been around since the 19th century, and scientists have had a seemingly solid description of the nature of the reaction for about as long. Or so we thought.

The classic explanation of elemental sodium’s volatile reaction with water involves the simple reduction-oxidation chemistry of sodium and water: electrons flow from sodium metal into the surrounding water, forming sodium hydroxide and hydrogen gas. This is a very fast reaction that produces a lot of heat. Hydrogen gas is extremely flammable in air, and in the presence of a heat source, this mixture can lead to a hydrogen explosion, not unlike the infamous incident that allegedly set the Hindenburg zeppelin aflame. The release of the large amount of energy in these reactions results in rapid expansion of the surrounding gas, which is what causes chemical explosions.
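
For readers who want the textbook chemistry spelled out, the classic picture corresponds to the standard balanced equations below (general-chemistry equations, not taken from Mason's paper):

\[ 2\,\mathrm{Na} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{NaOH} + \mathrm{H_2} \]
\[ 2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O} \]

The first equation is the fast, heat-releasing reaction at the metal surface; the second is the hydrogen-air explosion that the heat can ignite above the water.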

Generations of chemists have accepted this seemingly obvious explanation without much deliberation. It is perhaps surprising then, that one curious soul decided to look at this century-old reaction more in-depth.

Philip Mason earned his PhD in chemistry and has co-authored more than 30 scientific papers, but he is probably better known for his YouTube channel, where he regularly posts videos, often in vlog format, under the pseudonym “Thunderf00t” (yes, that’s two zeros substituting for the letter “o”). His favorite topics are often pieces of popular science he encounters, and his channel has earned him a huge public following. In 2011, using donations from some of his more than 300,000 YouTube subscribers, Mason purchased the materials and consumer-grade high-speed cameras necessary to look at what he thought would be “home chemistry.”

The YouTube project, it turns out, raised many questions for which Mason found the traditional answers unsatisfactory, chiefly about the explosive nature of alkali metals in water. Compelling footage showed a secondary gas explosion above the water surface that resembles a hydrogen explosion, but the initial, stronger and faster explosion couldn’t be explained by our traditional understanding of the reaction. Some scientists have suggested instead that the explosion is caused by the sheer amount of heat released during the reaction: the heat would boil the water, and the rapid generation of steam would lead to an explosion. Mason remained unconvinced. A key insight by Mason and his colleagues was that as hydrogen and steam are generated when the alkali metal comes into contact with water, the interface between the metal and the water should be blocked off by the products, inhibiting further reaction. This would result in the exact opposite of the explosive reactions being observed. Crucially, immersing solid chunks of sodium and potassium under water still results in rapid explosions, so this, too, could not explain how the explosion is initiated. These enigmas led Mason to bring his YouTube project into the lab.

To get a better look at the reaction, Mason and his colleagues turned to research-grade high-speed cameras. Filming at around 10,000 frames per second, they were able to capture the beginning of the reaction between alkali metals and water in astounding detail. What they captured is striking: the reaction is immediate, and the metal shatters on contact with the water surface. Within two ten-thousandths of a second, spikes of metal are flying apart from anywhere the surface touches water. As the sheer force of the rupturing metal bursts forth, a brilliant blue wash appears to stain the blast of water in the very next frame. This stunning blue color is due to solvated electrons in water, which are usually far too short-lived for people to see.
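
To put that speed in perspective (our own arithmetic, not a figure from the paper): at 10,000 frames per second, each frame lasts

\[ \frac{1}{10{,}000}\ \mathrm{s} = 1\times10^{-4}\ \mathrm{s}, \]

so the metal spikes appear within roughly two frames of the metal touching the water.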

What isn’t so easy to interpret are the metal spikes flying apart, piercing the water in the process. However, with some chemical intuition and computing time on supercomputers, Mason and his colleagues came up with an explanation for this observation that ultimately describes the explosive nature of alkali metals in water.

When large numbers of electrons escape from the alkali metals into the surrounding water, the metal itself becomes extremely positively charged. Like the static charges that can make our hair spike up for that mad scientist look, the positive metal atoms now repel each other, except with much more violent force. Atoms that were previously bonded together as a solid now suddenly fly apart at extraordinary speed. This, in turn, exposes fresh metallic surfaces to water for the explosive reaction to take place. This little-known phenomenon is called Coulomb explosion.
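
A rough way to see why the suddenly positive metal tears itself apart is Coulomb's law, the standard textbook relation for the force between two charges (a general physics formula, not a calculation from the study):

\[ F = k_e\,\frac{q_1 q_2}{r^2} \]

Here \(q_1\) and \(q_2\) are the like (positive) charges, \(r\) is the distance between them, and \(k_e\) is Coulomb's constant; because the newly charged metal ions sit extremely close together, \(r\) is tiny and the repulsive force \(F\) becomes enormous.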

This knowledge has an immediate application in preventing explosions during the industrial use of alkali metals. Just as important, the discovery of a new explosion mechanism in a chemical reaction more than a century old reminds us not only of how little we know, but also of how much we simply fail to consider. In the face of public apathy toward science, it is encouraging that such a significant scientific discovery should come from a YouTuber, funded partially by the YouTube community, and documented in vlog format throughout the research process. It leaves us wondering what other remarkable discoveries such public engagement could lead to.

Mason and his colleagues published their research in the February issue of Nature Chemistry, where they acknowledged the support of his YouTube followers.


Link to the article: http://www.nature.com/nchem/journal/v7/n3/full/nchem.2161.html

Edited by: Marika Wieliczko

Towards Precision Medicine: Promises and Hurdles

Hyerim Kim

Originally published April 13, 2015

Photo by: Jadiel Wasson

If you’ve ever seen an ad on the internet that seemed as if it were intentionally catered to your interests, you’ve probably been subject to customization in marketing. While mass customization has become commonplace in fields such as marketing and manufacturing, its scope is extending beyond business. Advances in genetics are allowing doctors to start customizing medical treatments for individuals through a new field called “precision medicine.”

Before precision medicine, the diagnosis and treatment of disease were determined by broad categorization, without consideration of individual diversity. This approach, though a huge improvement over non-scientific treatment, has raised concerns about diagnostic accuracy and the side effects of prescribed medicines.

Some of these concerns can be resolved by studies of the genetic variation between different people. Scientists have launched international projects to map the common patterns of human genetic variation. These ongoing efforts have helped explain why certain populations are susceptible to a particular disease and how they respond to certain types of drugs.

A medical movement applying individual genetic data to clinical practice is considered a milestone on the journey toward precision medicine. The movement is gaining momentum, with the President’s 2016 Budget allocating it a $215 million investment. In particular, the approach is expected to reap enormous benefits in cancer treatment.

Hence, we can now ask ourselves: what is the success story we’re looking for from precision medicine, and what are the current challenges facing this medical movement?

Back in the 1960s, chronic myeloid leukemia (CML) was a devastating disease, and the average lifespan of patients was 3-7 years after diagnosis. However, a new genetic technique in 1970 enabled scientists to identify the cause of the deadly disease: an abnormal molecule called Bcr-Abl. The elucidation of this abnormal molecule spurred the development of a new targeted therapy for CML patients, “Gleevec.” Since nearly all CML patients carry the fusion molecule, the therapeutic outcome was outstanding, with higher efficacy and fewer side effects compared to conventional chemotherapy.

The success of Gleevec brought to the field a concept of targeted therapy in cancer treatment, resulting in the development of many similar therapies. A wider breakthrough in targeted therapy, however, could not be accomplished without further technological innovation.

As technologies did develop, they were unfortunately too costly for routine use. However, newer sequencing platforms have led to a rapid reduction in DNA sequencing costs, and in the near future, $100 genome sequencing may be within reach for most people. As such, one can imagine genomic mutation profiling becoming routine in determining the best therapeutic regimens for individual cancer patients.

Yet even setting aside the expense of mapping a personal genome, the extraction of biological information from complex human genomes remains a major obstacle to the start of a precision medicine era.

To illustrate, the detailed biological functions of protein-coding genes (about 1% of the human genome) still need to be explored, although the ENCODE Project, launched to identify all functional elements of the genome after the completion of the Human Genome Project, has brought about a substantial understanding of the genome. In addition, most genomic regions outside of what codes for proteins (the other 99% of the human genome), such as the promoters, enhancers, and insulator regions that regulate gene expression, remain to be elucidated. In particular, intergenic regions once dismissed as “junk” DNA are now thought to play regulatory roles in gene expression, yet most of them remain uncharacterized.

In other words, current genomic data are incomplete. In addition, there are no standard bioinformatic programs for analyzing raw sequencing data, and data storage and sharing raise practical issues that remain to be worked out. Overcoming these limitations will require international collaboration.

In the case of cancer research, The Cancer Genome Atlas in the US, the Cancer Genome Project in the UK, and the International Cancer Genome Consortium have been launched to aim for a deeper understanding of individual cancer patients by managing and sharing the data from these projects. Such combined efforts to elucidate the human genome, standardize data processing, and generate publicly available datasets will pave the way for precision medicine.

Precision medicine is not an empty dream. Along with technological innovation in genome research, diseases will be finely categorized depending on an individual’s genetic makeup, and treatments will be carefully chosen based on that information. In addition, patients will be able to access their own genomes so that they can play an active role in the prevention and treatment of diseases to which they are predisposed. Moreover, the pharmaceutical industry will have to restructure itself toward a more patient-oriented outlook. In terms of medical costs, we can save money by avoiding unnecessary examinations to identify the cause of disease. As such, precision medicine is expected not only to deliver optimal medical care to patients but also to transform the medical industry in the future. This, nevertheless, cannot be accomplished by a single institute or country: collaborative research across the world to clarify the vast amount of unexplored genomic data is essential.

Edited by: Anzar Abbas

What We Talk About When We Talk About Sex

Edward Quach

Originally published April 12, 2015

Photo by: Jadiel Wasson

The manner in which societies and cultures construct gender and gender identity has been changing for ages. Although an academic or philosophical dichotomy was not acknowledged for several thousand years, the separation of physiology and gender identity has existed perhaps since the dawn of man. It may have its origins in the very moment early hominids forwent stark individualism and entered into John Locke’s social contract. A couple of days ago, I had the opportunity to chat with Sarah Richardson, the John L. Loeb Associate Professor of the Social Sciences at Harvard University. Dr. Richardson is a historian and philosopher of science with an impressive collection of research concerning gender and the social dimensions of science. According to Dr. Richardson, the academic sex/gender distinction traces to the 1960s, to sexologists dealing with gender identity disorders and feminist theorists of the same era. Nonetheless, she traces less specific philosophical sparks of this distinction to the 19th century and earlier.

As we can see, for several decades now, social scientists have been studying countless aspects of gender. The sociology, psychology, history, politics, and performance of gender have been under scrutiny in humanities classrooms around the world. However, one thing that has seemed to remain relatively static is the biological understanding of “sex,” a steadfast counterpart to the fluid and constructed idea of gender. Or so it was presumed. Growing up in the fairly progressive time period that I did, my childhood was at least minimally shaped by the idea that a person’s gender may not actually match up with what they have between their legs. That being said, it was fairly well established that your sex was your sex, and that the distinction was primarily binary (male or female). However, a recent Nature News feature by Claire Ainsworth examines several studies, both new and old, which may complicate the issue of biological sex in the way that the social sciences have unpacked and examined gender. According to Ainsworth, the binary male-versus-female distinction is antiquated, and biology requires a more comprehensive spectrum.

When I asked Dr. Richardson about this, she elaborated on the concept, explaining how a hard-line distinction between the biological sexes was often an underpinning of more traditionalist ideologies which use this dimorphism to reinforce restrictive gender roles. She feels it is “important to really allow the scientific data to speak for itself and to learn from the great degree of variation…”

But what are these data and wherein lies the variation? We have all heard about individuals born without physiologically distinct male and female parts. In the medical field they are referred to as individuals suffering from Disorders of Sex Development, or DSD, but you might have heard the term “intersex”. These individuals, while not entirely uncommon (some form of DSD occurs in approximately 1% of individuals), are likely not going to rewrite government and medical forms or completely change the way we understand sex. Besides, the fact that many of you will recognize the term “intersex” implies that this condition (or set of conditions) is something that isn’t outside the realm of general knowledge. A lot of us are already aware of intersex individuals, but we’ve yet to adjust our concept of biological sex.

A more broad-scale change in our attitudes toward biological sex may require a more radical challenge. Coincidentally, there are some interesting cellular and molecular events that may give us a little more food for thought. Take, for example, the axiom that human males have one X and one Y sex chromosome in their cells while females have two X chromosomes. This is one of the most universally accepted facts about biological sex, especially among non-scientists. You don’t have to have a degree in a life science to know that this difference in our chromosomes makes us male or female. And this guiding principle has a good basis: it is true in general that if you snatch a single cell from anywhere in my body and look at the chromosomes, you’ll see one X and one Y. However, when twin embryos merge in the womb, an individual can be born with some cells bearing XX and some bearing XY. Indeed, under certain circumstances of this chimerism, as we call it, a person may not even notice that they are living with two distinct groups of cells in their body.

Certainly in the case of would-be twins that fuse in the womb, this is an interesting phenomenon. However, chimerism in humans is not so rare. There are a number of much more common processes by which we can acquire the cells of another genetically distinct individual and grow with that person’s cells becoming, quite literally, a part of us. Consider the significant interchanges that can occur between mother and fetus during gestation. These exchanges of materials can and often do include cells, specifically stem cells which are multipotent or pluripotent, a term we use to mean that they can turn into many different kinds of cells. We call this phenomenon microchimerism, and it means that you can have cells from your mother inside of your body right now. In fact, if you have any older siblings, it is possible that their cells continued to grow in your mother’s body, and were subsequently transferred to your body during your growth. I can sense the outcry from younger siblings already. These cells are not just mooching off your energy, either. In many cases, they are earning their keep by working. Cells from your mother or siblings may mature into cardiac tissue, neurons, immune cells, and the like, lending a whole new meaning to the term “you’ve got your mother’s eyes”.

In addition to actually having cells in your body from another person of a different sex, there are instances where the sex differences may be even sneakier. For decades, scientists believed that sex development was a pretty clean switch from female to male, with female being the default program. It was thought that the female programming had to be suppressed by the male programming in order for genes responsible for testes, male sex hormones, and other sex characteristics to win out. However, more recent studies have identified a signal for testicular development which female programming must suppress in order for feminine characteristics to develop. Development of one sex over the other (although expressing them in a binary appears to be getting harder and harder) is not one program overriding the default program. Rather, it is a constant competition of factors. There isn’t just one “yes” or “no”, but rather a chorus of “yes” and “no” shouting in a cacophony that may well come out sounding like a “maybe”.

When I first began researching this issue, these phenomena all seemed like fun or intriguing biological quirks. I thought it was fascinating that some people could be born with genitals not matching their chromosomes or with a cellular makeup that was a mosaic of male and female. However, I began to wonder what these new understandings meant for us as a country, a society, and a species.

One issue, which Ainsworth and many before her have highlighted, is the common practice of genital “normalization” procedures, which allow intersex babies to go on and develop as one sex or the other. We have come a long way in our societal treatment of gender. Many people no longer care what pronouns you use to refer to yourself, whom you choose as a sexual partner, or the way you choose to dress. If I am being too optimistic about this, then there are at least signs of progress in that direction. On the other hand, no such advances are being made in the world of medicine and biological sex. Babies are too young to consent to this change in their genitals, which often occurs just days or hours after birth. Do parents have the right to decide which sex their intersex child continues down the path of? Is this in the same vein as trying to change someone’s sexuality or gender identity? Richardson was quick to emphasize that there are clear differences between genital normalization procedures and something like gay conversion therapy. Nevertheless, she underscored that a healthier way to approach this kind of surgery would be to ensure it comes from a place of informed and empirical science, perhaps for individuals who are old enough to understand what is unique about their bodies, and not out of our sense of panic that intersex does not conform to the binary.

Another issue may arise in individuals with chimerisms, which may describe many of us. There are certain diseases, both genetic and acquired, which affect one sex (or perhaps I should say “chromosomal profile”?) more severely than the other. Let’s say that certain cells in my brain developed from my mother’s cells which I acquired in the womb. If I were suffering from a brain disease affecting XX individuals more severely than XY individuals, doctors might not necessarily diagnose me correctly until they had exhausted many other options, simply because I’d checked M on the form in the waiting room.

It would appear that the issue of sex development and biological sex has not quite reached a critical mass, but this growing body of work certainly complicates sex in ways we could not have anticipated during the development of our medical system and our societal opinions on sex. While pressure to adhere to a given gender is beginning to alleviate, pressure to conform to a single, specific sex is alive and well. It can be a little discomforting to think so differently about a concept we often consider black and white, but understanding the intricacies which govern sex development can help us to appreciate the beauty of gray.

Edited by: Brindar Sandhu

Underrepresentation in Research: Steps Towards Progress

Jadiel Wasson

Originally published April 11, 2015

If you take a look around you, what do you see? Does your environment reflect what the real world looks like? If not, why do you think that is? And how does this difference influence the nature of science? In the past few decades, the disparity in representation of different groups, specifically women and minorities, has become increasingly apparent. Many studies have demonstrated that this lack of diversity leads to a different kind of “brain drain”: it caps interest in STEM careers among the up-and-coming generation, ultimately taking away minds that could contribute significantly to STEM. Although only a few government initiatives have been put in place to address this issue, they have greatly influenced the demography of STEM degrees and careers. In addition, a handful of studies, in conjunction with media attention, have altered diversity trends in STEM by bringing greater awareness to the underrepresentation that plagues these fields. But has enough been done to truly alleviate this issue? What steps have already been taken to address underrepresentation?

Step 1: The Civil Rights Act of 1964 was the first step taken to address inequality in the workforce. This act significantly increased the availability of equal opportunities in education and employment for women and minorities. Before the implementation of this act, fewer than 5% of STEM PhDs were earned by women. That number tripled to around 15% by the early 1980s. In the late 1950s, prior to any official census data, it was estimated that only about 500 African-Americans in total held a PhD of any kind. In 1975, only about 1.2% of PhDs were earned by African-Americans.

Step 2: The Science and Engineering Equal Opportunities Act of 1980 directed the National Science Foundation to increase the participation of women in science. The act aimed to do so through public outreach campaigns, which sought not only to increase the public’s awareness of the value of women in science, but also to increase support for women who chose to pursue STEM careers by establishing committees, fellowships and programs. Since the inception of this act in 1980, the share of STEM PhDs held by women has increased from about 15% to just under 40% in 2011, speaking to the effectiveness of such measures.

Step 3: Beyond Bias and Barriers: Fulfilling the Potential of Women in Academic Science and Engineering is a National Academy of Sciences report published in 2006. It is incredibly extensive in addressing the issues that still plague women in the STEM disciplines and prevent them from advancing in their careers, regardless of academic stage. The report ultimately refuted many of the biases and “reasonings” offered to explain the gap between men’s and women’s participation in STEM careers. Its key findings include evidence of institutional biases against women and an examination of the loss of women at each stage on the track to a career in STEM. One of its main contributions came from the recommendations it made to rectify the representation problem. In 2007, the National Institutes of Health, or NIH, created the Working Group on Women in Biomedical Careers in response to the report’s findings. This group has subsequently established many sub-committees with specific goals, such as public outreach and mentoring, all aimed at increasing the retention of women in STEM.

Step 4: Current steps to enhance diversity in the STEM disciplines include the Enhancing the Diversity of the NIH-Funded Workforce program, established in 2012 to increase the number of underrepresented minorities in STEM through a series of initiatives aimed at keeping minorities engaged in these disciplines.

Current state: Even with these steps, a discrepancy in representation remains. For example, a recent report entitled “Double Jeopardy?”, published in February of this year, highlighted some of the prevalent biases that still plague women of color, such as differences in how they are perceived compared to their male counterparts. In addition, although the percentage of women employed in the STEM disciplines increased steadily each decade from the 1970s, this trend has declined since the 1990s. All in all, initiatives to increase the number of women in science have worked to a certain extent, but a large gap remains. We need more measures to help reverse this trend.

Edited by: Brindar Sandhu

The Creation of Gods: Why We Anthropomorphize

Erica Akhter

Originally published April 10, 2015

Have you ever assigned human-like traits to animals or even inanimate objects? If so, you’ve participated in something called anthropomorphism. We often attribute emotions to animals and intentionality to mindless objects. Every time you mistake your coat-rack for an intruder or claim that your puppy loves you, you are guilty of anthropomorphizing.

Anthropomorphism is the tendency to assign human traits, such as physical qualities, emotions, intentions or thoughts, to non-human entities such as animals or inanimate objects.

Humans naturally treat everything as if it possesses some degree of understanding or responsibility. (Have you ever cursed at your printer or encouraged your car to start?) This is a byproduct of our tendency to anthropomorphize. It permeates our perception. It is a commonality in human life, and one that can be particularly problematic given the right circumstances.

But why do we anthropomorphize?

Long story short: it’s the way we’re naturally wired. Human brains are tuned to try to understand other humans’ intentions, thoughts and feelings, a concept called Theory of Mind. Specific regions of the brain contain populations of ‘mirror’ neurons, which display the same activity when we’re performing an action as when we observe others performing that action.

Deficits in the regions where these mirror neurons are located correspond to deficits in empathy and Theory of Mind. Unsurprisingly, these are the same regions of the brain that are active when a person is anthropomorphizing.

Predicting the actions of animals and inanimate objects employs the same brain regions as predicting the behavior of another human. Though we can consciously differentiate between human and non-human, the same mechanisms in our brain are activated when we are observing actions of both.

It is important to note that the way we experience our thoughts is constrained not just by our perception but also by the language we have available to communicate that perception. Think about it this way: most people would agree that a mouse cannot think like a human. At the very least, you can probably agree that you cannot tell what a mouse is thinking. To know what a mouse is thinking, you would either have to be a mouse or be able to talk to one.

So how do we explain mouse behaviors? What is a mouse doing if not thinking?  We don’t have a word for it. To fall into my own trap, we can’t think like a mouse, so we have no words to describe what may be happening inside a mouse’s head. We’re forced to imagine things like only a human can because after all, we are only humans.

So what’s the use of anthropomorphism?

It’s quite easy to justify why we would want to understand other humans. We’re a social species, and thus need to be able to comprehend others to at least some degree. But is anthropomorphism just a byproduct of an overenthusiastic brain trying to give Theory of Mind to everything?

Doubtful. Evolutionarily speaking, it is almost always better to assume something is smarter than it is. More accurately, it is almost always better to assume that the something is out to get you and that the something is intelligent enough to be worried about.

Believing every shadowy figure is a robber is much safer than believing every shadowy figure is a bathrobe. Believing every spider is full of malicious hate for mankind is safer than not giving any spider a second thought.

Think of your anthropomorphic brain as a highly sophisticated better-safe-than-sorry mechanism. We’re programmed to believe, at least initially, that everything we see behaving is behaving with some degree of intentionality. The results of this can be good or bad.

What are the consequences of anthropomorphism?

As mentioned above, anthropomorphism is usually a good thing. But when can anthropomorphizing go awry?

Dr. Shannon Gourley, a professor of Neuroscience at Emory University, describes anthropomorphism as “a dual threat.”

“Firstly, we run the risk of trivializing the human condition. Can a mouse really experience the debilitating nature of schizophrenia? Of autism? We just don't know. And the related issue pertains to the limits of our ability as scientists to interpret our data. If we attribute human-like traits to an animal, we run the risk of failing to consider other possibilities. For example, is the mouse huddled in the corner because it is "depressed," or because it's simply cold? Or ill?”

Even despite the risks, it is not uncommon to hear a meteorologist talk about the wrath of nature or a biologist talk about what a cell wants to do. It is especially tempting to anthropomorphize when research appears directly translatable to common human experiences.

However, Dr. Gourley reminds us that “Reporting that the mouse develops ‘depression-like’ behaviors is more scientifically accurate -- and it allows us to bear in mind the alternative possibilities, and to acknowledge the limitations of our own knowledge which are bound by the fundamental inability to directly communicate with animals.”

Our brain’s predisposition for giving agency leads us to see intention, thought, and cause in the natural world, even when it is not explainable. We naturally attribute intentionality to everything we see: whether it has a human brain, an animal brain, or no brain at all.

Anthropomorphism is so prevalent that some biologists and philosophers of biology claim that it is the basis for people’s perception of higher powers, or gods, acting on the world. When we think about deities, the same regions of the brain are active as when we attribute Theory of Mind to other humans.

Since the beginning of time, humans have been attributing unexplainable events to entities that they cannot see or feel, only sense and infer. Some scientists claim the neurological basis for anthropomorphizing contributes to this phenomenon. In essence, we could even be constructing ideas of gods in the image of ourselves.

Edited by: Anzar Abbas

Does Fat Play A Role In The Development of Alzheimer’s Disease?

Claire Galloway

Originally published April 9, 2015

Photo by: Kristen Thomas

Due to the growing prevalence of Alzheimer’s disease and limited efficacy of drugs for the associated memory-loss symptoms, scientists and non-scientists alike are interested in the potential for practical lifestyle changes such as diet to reduce the risk of developing Alzheimer’s disease and memory loss.

The food you eat can alter the chances you and your loved ones will develop diseases such as Alzheimer’s or dementia. Yet even if you’ve merely dipped your toe into the vast ocean of information on nutrition and Alzheimer’s disease, it’s easy to become confused about which foods or food-types you should add to or remove from your diet if you want to reduce your risk of developing Alzheimer’s disease.

Unlike the protective effect of some foods, such as leafy green vegetables, the role of fat and fatty foods in Alzheimer’s disease risk seems to be less well understood, and more controversial. What does seem clear is that its role in Alzheimer’s disease is a little complicated. Luckily, despite the present uncertainty about whether some types of fats or specific fatty foods are actually beneficial or harmful, the research on diet and Alzheimer’s disease risk does seem to coalesce around some common themes that can be translated into real-life changes. What seems most certain is that not all fats are created equal.

Photo by: Kristen Thomas

What fatty foods should I avoid? Red meat & Dairy.

Consumption of saturated fats – including those found in red meats and high fat dairy – has been linked to lower cognitive performance in healthy elderly people, as well as an increased risk of developing dementia or being diagnosed with Alzheimer’s disease.

Saturated fat intake may increase Alzheimer’s disease risk or exacerbate cognitive decline by degrading the integrity of the blood brain barrier (which usually protects the brain from potentially harmful agents in the blood), increasing inflammation in the brain, or decreasing the ability of brain regions important for memory to use glucose for energy.

Saturated fats also increase cholesterol, which is involved in the regulation of the beta-amyloid proteins that are thought to play a major role in driving the disease process. In short, you may want to hold off on the cheeseburgers.

What fatty foods should I eat? Fatty Fish.

Photo by: Kristen Thomas


Several epidemiological studies have found that the more people report consuming fatty fish, the less likely they are to develop cognitive impairments and be diagnosed with Alzheimer’s disease or dementia.

The “fattiness” of the fish is likely key to the protective benefit, as the consumption of lean fish has been linked to an increased risk of developing dementia. It also seems likely that the particular type of fat found in fish, mainly polyunsaturated fats rich in Omega-3 Fatty Acids, plays a role in keeping the neurons in your brain healthy and communicating effectively.

Indeed, positive results from clinical trials with just Omega-3 Fatty Acid supplements corroborate the role of these fatty acids in improving cognition – or at least slowing cognitive decline in healthy elderly and Alzheimer’s disease patients. Studies in animals have found that Omega-3 Fatty Acids may also prevent Alzheimer’s disease by enabling the synthesis of Acetylcholine, a neurochemical important for attention and memory that is drastically reduced in the brains of Alzheimer’s disease patients. Omega-3 Fatty Acids may also promote the clearance of some of the pathological proteins (e.g. beta-amyloid) that likely drive the disease process.

However, the benefits of eating fatty fish may go beyond their fattiness: fatty fish are also a rich source of minerals that may protect against the oxidative stress that occurs during aging and Alzheimer’s disease. For example, sardines contain high levels of selenium, a mineral with antioxidant properties that is decreased in Alzheimer’s disease patients. To reap some of these promising benefits, pile your plate high with fatty fish such as salmon, sardines, and mackerel.

 

What fatty foods should I be cautious about? Coconut Oil.

Photo by: Kristen Thomas

Excitement over the health benefits of coconut oil seems to have taken the internet by storm. Its purported role in Alzheimer’s disease is largely driven by the testimony of one physician, who reported that giving her husband coconut oil actually reversed his Alzheimer’s disease symptoms.

However, coconut oil specifically has not been linked to Alzheimer’s disease risk in epidemiological studies, and there seem to be no clinical trials that have systematically evaluated the effects of coconut oil.  

In animal studies, some have found beneficial effects on cholesterol levels (e.g. less of the “bad” LDL cholesterol) and cognition, whereas others report harmful effects on cholesterol levels and cognition. There could be many reasons for these inconsistencies, but one possibility is that the optimal amount of dietary coconut oil to consume in order to ward off dementia lies within a narrow range.

Indeed, a mere tablespoon of coconut oil has 12g of saturated fat – which is over half of your recommended daily intake. So what is driving all the hype? The saturated fat in coconut oil mostly consists of medium-chain-triglycerides, which may not be as harmful as the long-chain-triglycerides found in cow milk, for example. In fact, medium-chain-triglycerides are converted into ketones, which serve as an alternative energy source for the neurons in your brain. This could be especially helpful in Alzheimer’s disease, as the neural and cognitive dysfunction may be partly due to the decreased ability of Alzheimer’s disease brains to properly metabolize and use the primary energy source of the brain, glucose.
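
As a quick sanity check on that “over half” figure, here is a minimal sketch of the arithmetic, assuming a 2,000-calorie diet, the common guideline that saturated fat supplies no more than about 10% of calories, and roughly 9 calories per gram of fat (these assumptions are ours, not the article’s):

    # Rough check of the saturated-fat claim (illustrative assumptions in comments)
    daily_calories = 2000                           # assumed daily intake
    sat_fat_limit_g = daily_calories * 0.10 / 9     # ~22 g/day under a 10%-of-calories cap
    coconut_oil_sat_fat_g = 12                      # saturated fat in one tablespoon (from the text)
    print(coconut_oil_sat_fat_g / sat_fat_limit_g)  # ~0.54, i.e. just over half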

Even more promising, some studies have also found that ketones may be able to protect neurons from beta-amyloid and its associated attacks on neuronal function. Nevertheless, you may want to refrain from dousing your food in coconut oil until scientifically rigorous research has time to catch up to the enthusiastic anecdotes that can be found on the internet. In the meantime, coconut oil in low amounts is probably a good alternative to butter and margarine, but only in those recipes in which a healthier unsaturated fat, such as olive oil, simply will not do.

An important caveat to remember is that your genetic background can play an important role in the efficacy of nutritional interventions. For example, many nutritional correlations or interventions show no effect, or even an opposite effect, in carriers of the APOE ε4 allele, a variant of a gene that codes for a protein involved in blood cholesterol transport. It may be worthwhile, then, to find out whether you or your loved one is a carrier before implementing any dietary change to lower Alzheimer’s disease risk.

Another important thing to remember is that Alzheimer’s disease normally occurs late in life, when nutritional status and dietary patterns reflect decades of eating habits, not to mention other lifestyle habits, such as physical activity levels and coping with stress, that may synergistically or antagonistically interact with your diet to affect your overall risk. That is, it may not be possible to reverse 70 years of cheeseburgers with 2 years of sardines.

All things considered, it is very unlikely that we will find a secret, super-diet that will protect us all from Alzheimer’s disease and dementia. If anything, dietary changes are more likely to delay the onset, decrease the speed of cognitive decline, or otherwise lessen the severity of Alzheimer’s disease in subtle ways.

However, given the limited treatment options for Alzheimer’s disease, incorporating a few relatively inexpensive and tasty diet changes is worthwhile. Especially when a particular diet or food has other known benefits – such as reducing your risk of developing diabetes and heart disease or promoting healthy weight loss – what do you have to lose?

Edited by: Anzar Abbas

Solar Water Splitting: Future Paths to Clean Unlimited Energy

Marika Wieliczko

Originally published April 8, 2015

Photo by: Kristen Thomas

With recent advances in science, the ability to artificially split water – much like what plants do all the time – might be humanity’s way of finally having access to unlimited clean energy.

Looking back, one of the most important scientific developments of the 20th century was the industrial synthesis of ammonia from nitrogen and hydrogen. Just as water, H2O, is the simplest combination of oxygen with hydrogen, ammonia, NH3, is the most basic combination of nitrogen with that simplest element. Developed initially for military applications, the revolutionary Haber-Bosch process compresses nitrogen gas and hydrogen gas under extreme conditions to yield ammonia.
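
Written out, the overall Haber-Bosch reaction is the standard textbook equation, with nitrogen and hydrogen combined under those extreme conditions of pressure and temperature:

\[ \mathrm{N_2} + 3\,\mathrm{H_2} \rightarrow 2\,\mathrm{NH_3} \]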

The ability to produce ammonia from its elements facilitated an unprecedented boom in human population. Living things require usable forms of nitrogen to make proteins and DNA, but most of the nitrogen on the planet is in the form of unreactive nitrogen gas, which makes up the bulk of the air we breathe. Unlike oxygen, which most organisms use for metabolism, nitrogen gas is almost completely unreactive. Only a handful of microorganisms are capable of “fixing” N2 by breaking its very strong nitrogen-nitrogen bond so that the individual nitrogen atoms can be used in the more complex molecules the organism requires. Before the Haber-Bosch process was developed, the amount of usable nitrogen on the planet was limited by the rate at which these microorganisms could enrich the soil with nitrogen to be incorporated into the food chain, a rate that by then had essentially plateaued. Once human beings mastered the ability to fix nitrogen on their own, the bottleneck that had been keeping population growth in check was gone, and in only one hundred years, 1.6 billion people grew to over 6 billion.

Such a dramatic and sudden increase in population has led to fears of overpopulation and of the consequences for the environment of the additional demand for resources and surplus waste. Scientists have long recognized the need for sustainable energy; most current sources, such as burning fossil fuels, are not renewable and release waste that is unlikely to be inconsequential at this massive global scale. One of the most promising sources of sustainable energy is sunlight, which, provided we can find ways to convert that light energy efficiently into power we can use, could sustain all of the planet’s energy needs for generations to come.

Plants and a handful of microorganisms have been capturing and harvesting solar energy for millions of years through photosynthesis. In this process, energy from sunlight is used to convert carbon dioxide and water into carbohydrates, or sugars. The waste product is molecular oxygen, the essential component of the air we breathe. The energy is stored in the chemical bonds until they are broken during metabolism. In effect, all of the oxygen on Earth originated through this process, which allowed for the evolution of complex life.
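
In its most compact textbook form, the overall process can be summarized as follows (a simplified summary equation that glosses over the many intermediate steps):

\[ 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\text{light}}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \]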

Photo by: Kristen Thomas

Whether it occurs in plants or bacteria, photosynthesis begins with the absorption of light. The light energy is transferred to an electron in the light-absorbing molecule. In this energetically excited state, the electron can be captured by a neighboring acceptor molecule, leaving a positive “hole” behind. The electrons that neutralize these positive holes are harvested from water by the oxygen-evolving complex, or OEC. The OEC contains four manganese ions and one calcium ion, arranged in a cube-like shape along with several oxygen atoms and water molecules. The cluster takes two water molecules and combines them into molecular oxygen, O2. The process has been studied thoroughly, but many of the details are not completely understood. We do know that it occurs in steps, and that the positively charged hydrogen ions, H+, are released in sequence and kept separated from the negatively charged electrons in order to generate a potential across the membrane that the cell can use for energy.
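
The water-oxidation step performed by the OEC can be summarized with the standard half-reaction (again a textbook summary rather than a full mechanism):

\[ 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \]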

Both metabolism and the combustion of fossil fuels are effectively the reverse of photosynthesis: in the presence of oxygen, molecules that contain carbon and hydrogen are converted into carbon dioxide and water. Whatever the state of public debate over the existence or causes of climate change, scientists have long anticipated the need for sustainable, renewable sources of energy, and have recognized that one of the most promising paths is to learn from nature and develop technology for artificial water splitting and solar energy capture.

Significant progress has been made in recent years, with, for instance, the appearance of so-called “artificial leaves.” In artificial systems and plants alike, each of the overall reaction steps is usually accomplished by a separate component: one component absorbs light, another generates oxygen, and still another manages protons or hydrogen production, with additional components sometimes needed to keep the oxygen and hydrogen reactions separated, since the two do not usually occur under the same conditions.

Photo by: Kristen Thomas

One particularly exciting system was recently reported by researchers in China and Israel. These scientists, led by Zhenhui Kang, discovered that combining two materials, carbon nitride and carbon nanodots, opens an indirect route involving hydrogen peroxide and produces a synergistic effect. Each component alone is a capable catalyst, but carbon nitride on its own is quickly deactivated by the products it generates; combined, each component alleviates the shortcomings of the other. When this solid material is placed in water and irradiated with light, it generates a 2:1 mixture of H2 and O2 gas. The system is difficult to understand, and almost seems too good to be true.
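
One way to picture the indirect hydrogen-peroxide route, offered here as a simplified sketch rather than the authors’ detailed mechanism, is as a pair of two-electron steps that sum to ordinary water splitting: the carbon nitride photocatalyst turns water into hydrogen and hydrogen peroxide, and the carbon nanodots then decompose the peroxide, regenerating water and releasing oxygen.

\[ \mathrm{2\,H_2O \;\longrightarrow\; H_2O_2 + H_2} \]
\[ \mathrm{2\,H_2O_2 \;\longrightarrow\; 2\,H_2O + O_2} \]

Adding the steps (with the first taken twice) gives the net reaction 2 H2O → 2 H2 + O2, consistent with the observed 2:1 ratio of the gases.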

The major drawback is that this mixture of gases is explosive; ideally, the two would be generated in separate, isolated compartments. Another weak point is the overall efficiency: the energy stored in the chemical bonds of the O2 and H2 formed is small compared to the light energy needed to drive the process. Still, while the efficiency is far from optimal, the material is made of abundant elements and is extremely stable; after more than 200 days of continuous use, it continues to steadily split water using sunlight.
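
The efficiency comparison described above is usually quantified as a solar-to-hydrogen efficiency, roughly the chemical energy stored in the H2 produced divided by the solar energy supplied over the same period:

\[ \eta_{\mathrm{STH}} \;=\; \frac{\text{chemical energy stored in the H}_2\ \text{produced}}{\text{incident solar energy}} \]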

Nature often serves as a source of inspiration or guidance, but artificial systems address different needs. While it is amusing to imagine putting water into the gas tank of a solar-powered car, with only water vapor coming out the tailpipe, the need for technology to split water into O2 and H2 extends well beyond that. The Haber-Bosch process, after all, relies on the reaction of N2 and H2, and the ultimate sources of that H2 are typically fossil fuels. Water-splitting technology, especially if powered by sunlight, holds enormous promise for changing the way we use resources by closing inefficient loops and reducing or even eliminating unnecessary waste. Solar water splitting remains an active and exciting field of research, and given the growing urgency for green technology, it may one day be called the most important scientific development of this century.
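
To make the fossil-fuel connection concrete: most industrial hydrogen today is produced by steam reforming of natural gas followed by the water-gas shift, a route whose net reaction consumes methane and releases CO2, and this is precisely the step that solar water splitting could replace:

\[ \mathrm{CH_4 + 2\,H_2O \;\longrightarrow\; CO_2 + 4\,H_2} \]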

Edited by: Anzar Abbas

A Scientist’s Best Friend: The Biology Behind The Human-Dog Relationship

Kristen Blanchard

Originally published April 7, 2015

Photo by: Kristen Thomas

If you had visited Tokyo’s Shibuya Station in the 1920s, chances are you would have met Hachikō. At the end of every day, this purebred Akita Inu faithfully waited for the train to deliver his master, Hidesaburō Ueno. Yet when Ueno died in May 1925, Hachikō continued to wait. For the rest of his life (nine years, nine months, and fifteen days, to be exact), Hachikō kept arriving at the station to wait for his owner’s train. Hachikō may be the most famous symbol of canine loyalty, but he is by no means alone. Capitán, a German Shepherd in Argentina, stayed by his owner’s grave for six years after the man passed away. Orlando, a black lab guide dog in New York City, helped save his owner from an oncoming subway train after the man fell onto the tracks. In addition to these “celebrity” dogs, we rely on countless others to help us track missing persons, locate explosives, and assist those with disabilities. In return for their services, we love these creatures like no other on the planet. More than one third of American households own a dog, and we spend more than $50 billion annually on our pets. It seems obvious that dogs are man’s best friend, but why does the bond between human and dog seem so different from our bond with any other animal?

The answer to this question likely lies in dogs’ unique history. Domesticated dogs, as we know and love them today, evolved from ancient wolves. Unlike the evolution of virtually every other animal species, the evolution of dogs was shaped artificially: humans essentially “created” this species by domesticating wolves. Through lucky accidents in different regions of the planet, ancient humans noticed that some wolves were more cooperative than their peers. They took in these unusually helpful wolves and cooperated with them to hunt prey and avoid predators. Biologists believe that by continually selecting for better companions, humans and wolves evolved side by side, eventually creating the unique bond between modern humans and dogs.

This strong evolutionary pressure has had significant consequences for the way humans and dogs interact. In the past few decades, scientists have been able to quantify what dog lovers have long understood intuitively: the bond between canines and humans is unique in the animal kingdom. Scientists have demonstrated that dogs are adept at communicating with humans; they can pick up on auditory cues and even physical signals. Dogs understand the meaning behind human pointing, while even our closest relatives, the great apes, cannot interpret this gesture. Despite the fact that great apes are more intelligent than dogs and more closely related to humans, dogs are better at communicating with us. This skill goes beyond exchanging basic information: some evidence shows that dogs can understand human emotion. Scientists at the University of Veterinary Medicine in Vienna, Austria tested dogs for their ability to recognize human facial expressions. The study found that dogs could distinguish between “happy” and “angry” human faces, even when shown only part of the face in a photograph.

Photo by: Kristen Thomas

What do dogs do with this information about human emotion? Do they “care” whether a human is happy or angry? Do dogs feel some sort of empathy for their human companions? Scientists have recently tried to answer these questions as well. Contagious yawning (i.e., the tendency to yawn when witnessing someone else yawn) is thought to be a sign of empathy, and it is usually more common between people who are emotionally close than between strangers. Scientists at The University of Tokyo recently investigated whether dogs are also affected by the contagious yawning phenomenon. They found that not only did dogs yawn contagiously when watching humans yawn, but the effect was also more common between a dog and its owner than between the dog and a stranger. These results suggest that dogs do display this measure of empathy.

The idea that dogs possess empathy for their human companions is further supported by an interesting study by researchers at the University of Otago in New Zealand. These scientists measured the human physiological response to a baby crying. Humans respond to this trigger with an increase in cortisol and heightened alertness. Fascinatingly, they discovered that a human baby crying affected dogs in the same way. In addition to the increase in cortisol and heightened alertness, dogs also became more submissive. While it is difficult to quantify whether or not dogs “love” their human companions, there is certainly some evidence that dogs have empathy for humans and an understanding of human emotion.

But what about the other side of this relationship? It certainly seems as though humans’ artificial selection of ancient wolves contributed to the way dogs relate to humans today, but were humans also affected by this parallel evolution? Is the bond between humans and dogs hard-wired into us the way it seems to be in dogs? Some recent studies suggest that this evolution was a two-way street: just as dogs evolved to cooperate better with us, we evolved to cooperate better with them. While dogs are adept at recognizing human facial expressions, we are adept at recognizing theirs as well. A recent study found that even people without any experience with dogs were able to infer a dog’s emotion from a photograph. That even those without dog experience could complete this task suggests the recognition may be innate, an artifact of our co-evolution with domesticated dogs.

Fortunately for us, whatever influence dogs have had on our evolution appears to have been worth the tradeoff. Our relationship with dogs provides us numerous benefits as a species. Dogs have a well-documented stress-reducing effect, and the American Heart Association, after reviewing the scientific literature, has agreed that dog ownership may slightly reduce the risk of heart disease. These health benefits explain why dogs are frequently used as therapeutic aids: they can help soldiers recover from trauma and help children with autism improve their socialization. Thanks to our truly unique interspecies bond, these creatures really are man’s best friend.

Edited by: Bethany Wilson

The Case for Basic Science Research

Brindar Sandhu

Originally published April 6, 2015

Photo by: Jadiel Wasson

How many times have you been asked to donate $1 for juvenile diabetes, cancer, ALS, MS, or Alzheimer’s research at the grocery store checkout? What about space exploration, how bacteria fight infections, or the basis of all life? Chances are pretty high that you’ve been asked the former, and about zero that you’ve been asked the latter. We know why – diseases pull at people’s heartstrings. We most likely all know someone, or know someone who knew someone, who has had cancer. We want to cure diseases; that’s why we study biology. Obviously we’re not in it for the money or the fame. Does this mean that everyone should solely study a cure for some disease? Is it wrong to be motivated by wanting to learn more about the world we live in?

A poll conducted by the winners of a 2013 video competition sponsored by FASEB, the Federation of American Societies for Experimental Biology, asked the general public of San Francisco the following question: if you had $10 to spend on research, would you donate it to research on affordable diabetes treatment, or to the study of how bacteria protect themselves? The public overwhelmingly chose diabetes, but in the 1960s the National Institutes of Health, or NIH, chose the latter. By funding that work, scientists discovered that bacteria produce restriction enzymes to cut up foreign DNA. Now almost every lab uses restriction enzymes for cloning. More than that, this discovery allowed scientists to clone human insulin, which had previously been purified from cattle and pigs or chemically synthesized with poor yields, and to express and purify it from bacteria. The bacterium used, E. coli, quickly earned the nickname “the laboratory workhorse.” This dramatically reduced the cost of insulin for those suffering from diabetes, and today almost all diabetic people use recombinant human insulin instead of animal insulin.

Anyone who has written a grant application for the NIH knows that the proposed research has to have a translational impetus. “Why should I care?” is a question we are taught to answer, and we are required to provide evidence of what contribution our research will make. If the answer is “We don’t know how this will benefit medicine, energy, or technology...yet,” does that mean it shouldn’t be pursued? A survey of the research laboratories in the Graduate Division of Biological and Biomedical Sciences, or GDBBS, here at Emory shows that about two-thirds of faculty research descriptions mention a specific disease, drug development, or the word “disease” itself, and that number is most likely lower than the actual percentage of labs that focus on the disease state. I am not arguing that studying the disease state is not fruitful; of course we need to know how a disease operates if we ever want to treat or even cure it. I argue, however, that sometimes the solution comes from a direction that would not be obvious if we focused our efforts solely on curing cancer. Studying how nature works in a non-disease state can tell us a lot about how nature stays healthy, and thus how we can stay healthy. If the stigma surrounding basic science research is prominent even among scientists, how can we expect the public, and therefore the federal government, to support such important endeavors?

“But so much more is known about biology than 60 years ago,” one could argue. Sure. Does this mean we have learned all we need to know? Of course not. Biology has seen an explosion of knowledge in the last half century, yet people still suffer from disease, even if those diseases are different from the ones we saw 100 years ago. We also know that cancer is far more complicated than we originally thought, and a single cure-all cancer drug is now recognized as a figment of our imagination. Although we have learned a lot in the last 60 years, much clearly remains unknown, and focusing solely on diseases can limit our ability to find solutions that could be applied to multiple problems. Most scientific advancements, especially technological ones, are based on how nature operates, so further exploration of how nature works in general, not just in the disease state, is crucial for scientific advancement.

Edited by: Marika Wieliczko

The Blind and Biased Eye of Objectivity

Sara List

Originally published April 5, 2015

As scientists, we must take daily snapshots of our work, recording what we see one piece at a time. The full picture is too vast for the lens; a fisheye would distort the image if we included too much, too many variables with too few controls. We want to capture the world as it really is, and in order to do so with our limited frames of view, we use objectivity as our guide. Scientific objectivity refers to the idea of recording only those phenomena that are observable without prejudice or bias. Such an approach has vast uses in the world of science, but it also has limits.

Science seeks to categorize, to filter, and to quantify observations. When we conduct science, we try to leave our social, political, and experiential backgrounds behind in favor of the ideal of pure logic. This practice does allow us to make compelling arguments when trying to convince others that our findings, and not someone else’s, reflect the truth. If a researcher is genuinely open to obtaining one set of results or another, then he or she has no reason, outside the merit of the experiment itself, to have arrived at the results obtained.

Striving for scientific objectivity can seem noble, chivalrous even. The knights of research lay down their beliefs, their emotions, their political contentions, and their self-serving motives, all in the name of science. However, there is one practical problem with this ideal: people who are completely disinterested in an experiment, or in science in general, are usually also uninterested in it, and no salary can make them put in the effort research demands. Those who are paid to do science, then, are a passionate and opinionated few.

There are multiple potential threats to scientific objectivity, which can include one or a combination of desires on the part of the scientist: the desire for approval, the desire for financial gain, and the desire to avoid controversy. In addition, active advocacy for a certain public policy, or a vested interest in a particular theory, can cloud the objective lens. These factors can and do interfere with the scientific approach if the research community ignores them. Moreover, beyond our scientific enthusiasm, many of us depend on outside agencies to fund our work, and that funding can be contingent upon those agencies’ approval of our research. The quest for scientific objectivity may be a noble one, but it adds more problems than solutions to the ever-morphing body of knowledge we call modern science.

Striving to be the omniscient, neutral observer can lead us into territory that leaves us blind to our own biases. Neuroscience in particular is rife with examples of social bias shaping how we study the brain, even though, or perhaps because, objectivity is the goal. The study of the brain, and by extension the human mind, lends the findings particular sensitivity. Scientific objectivity also lends scientists a certain authority, and that authority is never clearer than when examined within the world of brain science. With that power comes the responsibility to be aware of our human subjectivity.

One canonical figure in the field of neuroscience was the physician and anthropologist Paul Broca, known for the nineteenth-century discovery of Broca’s area, a brain region that, when damaged, renders the patient unable to produce intelligible speech even though he or she retains the ability to understand language. Broca had found a striking example of a brain area responsible for a specific function, a discovery that helped drive the modern study of human neuroscience.

Broca’s name also appears in Wikipedia’s Scientific racism entry. In addition to his studies on stroke patients, Broca was a fan of craniometry, the measurement of skull size or brain volume. While craniometry is not inherently a discriminatory method, practitioners, Broca included, used their measurements to justify social views about women and minorities at the time, claiming that biological difference was proof of inferiority.

Broca was not ill-informed about scientific objectivity, and he strove to meet its demands, stating that “there is no faith, however respectable, no interest, however legitimate, which must not accommodate itself to the progress of human knowledge and bend before truth.” Yet while Broca was a prominent scientist who strove to leave his personal opinions out in favor of the facts, he also had the explicit goal of using craniometry to “find some information relevant to the intellectual value of the various human races.” With this hypothesis in mind, he concluded that “In general, the brain is larger in the mature adult than in the elderly, in men than in women, in eminent men than in men of mediocre talent, in superior races than in inferior races.”

The case of Broca is a shocking example of scientific objectivity clouding the inner skeptic. He may have thought that by striving for scientific objectivity, he was immune to being subjective. Broca did not question the premise that craniometry could illustrate differences in intelligence. Instead, he designed experiments and interpreted the findings in ways that upheld the views of the time. His basic assumption, that measurements of the brain could rank humans on a linear scale of mental aptitude equivalent to their place in social hierarchy, was not only false, but also highly subjective, despite his support of scientific objectivity.

Broca’s time was over a century ago, and the optimist may suppose this incident was an isolated farce. Unfortunately, Broca was hardly the first, and will not be the last, person studying the brain to wear the mask of objectivity. John Money was a psychologist and sexologist best known for work in the 1950s and ’60s. In 1966, Money met David Reimer and his parents, who had turned to the expert after their child’s circumcision was botched. David no longer had a penis, and Money advised that the infant be given sex reassignment surgery and raised as a girl alongside his twin brother, Brian. While Brian played with trucks, David, then known as Brenda, was given dolls. David and Brian attended multiple therapy sessions with Money geared toward convincing David that he was a girl, and his brother a boy, who would fulfill their respective gender roles.

Money was well known for his part in supporting the theory that hormones organize the brain to produce sexually dimorphic behaviors in animals. Much to the relief of parents at the time, he also supported the theory that, in humans, gender identity and sexual orientation are the result of environment and upbringing alone. The doctor wrote extensively about the twins, highlighting David’s supposedly successful reassignment to a heterosexual female identity. In one of his many books, Money described his rigorous system for interviewing subjects and cataloging their data, a system that allowed “objectivity to reside in the scoring criteria.” When David Reimer eventually found out what had happened to him as an infant, and why he had been so firmly pressed by his parents and Dr. Money into fulfilling a traditionally defined woman’s role, he abandoned the name Brenda, began living as a man, and tragically took his own life in 2004.

Even today, studies like that of Skoe, Krizman, and Kraus (2013) uphold objectivity while trying to find “the biological signatures of poverty”. The authors attempt to link socio-economic status (SES) and differences in the brain’s response to sound. They use a "neurophysiologic test used to quickly and objectively assess the processing of sound by the nervous system," which is interpreted by an audiologist. While the brain is quite malleable and the environment can and does affect neural circuitry, studies such as these can encourage the treatment of poverty as a disease. The language in the article suggests many possible methods of targeted “intervention” for low SES students. From this point, the leap is not large to consider these neural differences a factor in the perpetuation of intergenerational poverty. This type of approach can lead to interpretations not far from Broca’s if we suggest that these studies are purely objective.

All of the scientists above were, and are, respected, intelligent, and creative individuals. All of them appealed to scientific objectivity, and most were likely working in good faith rather than with deliberate bias. However, they were misguided in believing that the use of objective measures made them impervious to partiality. In trying to act as disinterested observers, these researchers stumbled into ignorance, focusing on the experiment but not on the outside pressures. They aren’t the only ones.

The ideal of objective thinking can render scientists blind to the ways that the question begets the answer. Perhaps scientific objectivity, then, has no place in scientific practice. Donna Haraway, a philosopher of science and feminist, argues that objectivity in science should be discarded in favor of acknowledging the individual, both researcher and participant. She advocates the idea of situated knowledges from multiple individuals, meaning that “rational knowledge does not pretend to disengagement” and is instead a collective of scientific voices that consider themselves rational but not invulnerable to their own backgrounds and biases.

Objective research as it stands has offered much improvement to the scientific community and the method of inquiry since the nineteenth century, but the time has come to allow the definition of objectivity to morph.  Let’s make an effort, in our reading and our own work, to acknowledge the subjective and take our biases into consideration.  Objectivity is an ideal, but in reality, an eye that observes is not blind and should not pretend to be.  The most beautiful photograph is crafted, not captured from the ether.  The exposure, the contrast, the angle.  The question, the hypothesis, the model.  Perhaps most important for both photography and research, the interpretation.  All of these aspects matter.  Paying attention to each detail, ensuring no one feature overtakes the others, is the quality that separates the novice from the pro.

Edited by: Marika Wieliczko

Staying in Touch

Alessandra Salgueiro

Originally published April 4, 2015

Photo by: Kristen Thomas and Jadiel Wasson

All too often, scientists get caught up in the nuances of their individual research projects. They become so focused on the function of their protein or gene of interest that they forget the ultimate goal of biomedical research: understanding and curing human disease. Emory, however, has made several efforts to ensure that this is not the case for its graduate students, who have access to several avenues for staying in touch with the human side of research. These include the Molecules to Mankind Doctoral Pathway, the Certificate Program in Translational Research, and interactive courses such as Cancer Colloquium.

The Molecules to Mankind Doctoral Pathway, or M2M, is an interdisciplinary effort that combines existing laboratory and population science Ph.D. tracks to create “a new breed of scientist.” Students who graduate from M2M are well suited for careers in public health: they are able not only to design and analyze laboratory experiments but also to integrate bench science into solving population-based health issues. Ashley Holmes, a third-year Nutrition Health Science student in the M2M pathway, shared her perspective on the importance of keeping science in context:

“I think it's pretty easy to become hyper-focused on your dissertation topic and in doing so, you can unintentionally reduce people to data or biological samples.  The M2M program addresses [this] issue in its awesome weekly seminars: the speakers usually have interdisciplinary backgrounds and interesting collaborations that address basic, clinical, and population sciences.  Even when the details of their experiments or statistical analyses get tedious, they "bring it home" by reminding us of the public health implications of their work and how they are helping people.”

Rachel Burke, an Epidemiology student also on the M2M pathway, agrees.

“I like how the M2M seminars try to bring things back to the practical application of the research — the ‘so what’ factor. I think that having this background has helped me in turn think about what are the implications of my research and how can I focus those towards helping mankind.”

Emory’s Certificate Program in Translational Research provides Emory graduate students, postdoctoral fellows, and faculty with an opportunity to bridge the gap between basic bench science and clinical research. The program requires 14 credits, including a clinical medicine rotation that allows participants to shadow a clinician and interact with current patients. Katherine Henry, a third-year student in the Molecular Systems Pharmacology program, appreciates the unique perspective of translational research:

“I have always been more interested in the translational aspects of science. I like science that I can explain to my family and it's a lot easier to do that when you can relate your work to some disease or physiological process. The hope is that this program will set me up for a career in clinical/translational science, for example at a clinical trials firm (CRO), or a public health agency like the CDC.”

A third way Laney Graduate School students can stay in touch with the human side of bench research is through courses with context, such as the Cancer Colloquium course, the capstone for the Cancer Biology Graduate Program. The course director is clinician Dr. Ned Waller, who treats patients as well as running his own basic research laboratory. Dr. Waller brings oncologists and patients into the classroom to create an interactive and collaborative learning environment. The goal of the course is not only to explain to students how the cancers they research are treated but also to remind them why they are performing this research. Katie Barnhart, a third-year Cancer Biology student currently enrolled in Cancer Colloquium, says:

“Courses like Cancer Colloquium allow students to make a connection between what they learn in a lecture setting and apply it to real world applications. We learn about molecular pathways and drug development, but to hear about how these therapies are affecting the lives of cancer patients helps put what we do in the laboratory into perspective. Courses like this help students to take a step back and remember the big picture.” 

Cancer Colloquium is offered every other Spring under the listing IBS 562.

Emory students want their research to make an impact in the lives of patients and their families. M2M, the Certificate Program in Translational Research, and Cancer Colloquium provide pathways for students to reach out from the lab bench and stay in touch with the context of their research. In this age of interdisciplinary research, programs like these will become critical for advancing medicine.

Edited by: Brindar Sandhu