"What would you say is the most important thing you've gained from this ethics course?" the assistant chemistry professor asked her mixed audience of chemistry, bioengineering, molecular biology, and neuroscience graduate students.
"It's a bit scary how much we rely on an honor system in order to root out scientific fraud," I recall saying, three years ago. "The system is not designed to correct itself. I wish we could do better."
I don't remember the professor's response, other than that it was not reassuring. I left the classroom even more tormented about the Ph.D. to which I had already dedicated two years of my life.
Graduate students in the sciences are required to take courses called "Responsible Conduct in Research" when their funding comes from the National Institutes of Health (NIH), the most common source of academic science funding in the US. The NIH began formally requiring ethics training for all recipients of its funding in the early '90s, and in 2009 it codified these requirements even further. Now, in my fifth year of my graduate program, I'm part of the first cohort of graduate students required to take a refresher course in "Responsible Conduct in Research." It's curious that the NIH thinks nearly-ripe Ph.D.s need a refresher in the responsible conduct of research.
We've learned plenty about the responsible conduct of research in our classes; we've been on the front lines of research in world-class labs; and we continue to read about scientific misconduct in the news. None of that requires review. Young scientists have every reason to be skeptical that mere classroom training in ethics is sufficient for upholding the rigor of the scientific method. It's time for the leaders of science - from faculty to universities to the NIH - to put their money where their mouths are and act to prevent and fight research misconduct.
Sezen, Obokata, and Imanishi-Kari
[caption id="attachment\\_12492" align="aligncenter" width="640"][![](http://berkeleysciencereview.com/wp-content/uploads/2015/04/107868154\_71a21d11c9\_z.jpg)](https://www.flickr.com/photos/stevec77/107868154/) Steve Calcott - CC BY-NC 2.0[/caption]
Examples of research misconduct, and of science's meek and opaque handling of it, dot much of scientific history. But when weighing whether teaching ethics can substitute for consistently enforcing it, it's best to consider how misconduct has been handled in the recent past.
Bengu Sezen was a graduate student in chemistry at Columbia University in the early 2000s, working in the lab of Dalibor Sames, a young chemistry professor. Over the course of her Ph.D., Sezen deftly fabricated the chemistry data needed to publish multiple high-profile papers, with Sames' name on every one. Even within Sames' lab, several other graduate students couldn't replicate Sezen's work - so Sames fired them, ruining their careers. Sezen was his prized student, and he was determined to defend her work.
Sezen graduated in 2005, but by 2006 Columbia had begun an investigation - none of Sezen's supposedly landmark experiments could be replicated by outside labs. But Columbia wasn't interested in investigating Dalibor Sames, the mentor who let three fake papers slip through his fingers. Bengu Sezen, the graduate student, was the sole target of the investigation. Sezen was eventually stripped of her Ph.D., but not before she had fled to Europe to obtain another one. After disappearing from mainstream science for at least half a decade, she apparently now works in her native Turkey as an assistant professor of bioengineering (cue unhappy face).
Sames, who received tenure in 2003 while Sezen was publishing her now-retracted, fraudulent papers, was never punished for letting such obvious fraud slide - let alone for retaliating against those who had caught on to the fraud early. Moreover, Columbia slyly prohibited its members from discussing the case. Sames is currently a tenured professor at Columbia, and you can bet that he still hides behind Columbia's gag order to avoid discussing the case.
Now consider the more recent case of Japanese scientist Haruko Obokata. Obokata, a rising star in stem cell biology, claimed to have discovered a startlingly simple way of generating stem cells from regular adult cells in the body, a technique dubbed stimulus-triggered acquisition of pluripotency (STAP). In two papers published in Nature, Obokata, along with dozens of other scientists, presented data showing that exposing mouse blood cells to a simple, mildly acidic bath could convert them into stem cells.
Such a technique would revolutionize biology and medicine, and possibly warrant a Nobel Prize. But any excitement over the finding soon evaporated as dozens of labs failed to replicate Obokata's work, while others found blatant inconsistencies in certain figures in the two Nature papers (like photographs copied from Obokata's Ph.D. thesis that had been cropped and rotated). Obokata defended her work even as RIKEN, the prestigious Japanese research institution where her research had taken place, began a formal investigation of misconduct. Meanwhile, American scientist Charles Vacanti, Obokata's former mentor and a co-author on the papers, publicly claimed to have his own recipe for generating STAP stem cells. That recipe, too, failed in the hands of numerous other labs.
Eventually, RIKEN found Obokata guilty of scientific misconduct: most damningly, she may have re-labeled embryonic stem cells in an attempt to fake their derivation from adult mice. RIKEN overhauled its staff and made public moves to demonstrate reform. Still, soon afterwards, one of Obokata's mentors and co-authors - Japanese researcher Yoshiki Sasai, a leader at RIKEN - committed suicide, leaving a note asking Obokata to prove that her discovery was real. Obokata was allowed to work under strict supervision to try to reproduce her findings, but she gave up after eight months, insisting to the end that she had not knowingly falsified her data. Meanwhile, the dozens of other co-authors on the STAP papers - namely Vacanti - seem to have slipped away unscathed, as did Nature, the journal that published the papers in the first place.
Obokata's work was the fruition of Vacanti's ideas about stem cells. In fact, Vacanti, an anesthesiologist hailing from Brigham and Women's Hospital in Boston, had brought Obokata into his lab to more rigorously test out his hypotheses, as he lacked expertise in basic stem cell biology. As John Rasko and Carl Power noted in The Guardian a few months ago:
"Did Obokata begin cooking data in order to please her supervisor? Did Vacanti ever suspect that her results were too good to be true? Whatever the case, the Stap cell scandal is their monster child. > > > > It makes you wonder why Vacanti hasn’t been dragged over hot coals like Obokata and her Japanese colleagues, and why Brigham hasn’t followed Riken’s example by publicly flogging itself. > > > > The answer is simple: in the US, investigations into scientific misconduct usually take place under a veil of secrecy. In all likelihood, Brigham has begun its own inquiry but, in stark contrast to the one carried out by Riken, we probably won’t learn anything about it – even the fact of its existence – until after a verdict is reached. > > > > The Stap cell case is not yet closed."
Both cases suggest that while fraudulent science can be detected after the fact - if enough noise is made - investigations rarely target the higher-ups involved in publishing that work. Moreover, in the US, universities are loath to admit wrongdoing, particularly among their faculty, and are anything but transparent when investigating scientific misconduct.
Granted, justice is a two-way street. The famous case of Theresa Imanishi-Kari and David Baltimore, in which Imanishi-Kari was accused of fabricating data for a 1986 paper in the journal Cell, is a testament to the need for careful, balanced investigation of any accusations of misconduct. Baltimore, a Nobel laureate who co-authored that 1986 paper, suffered significant damage to his reputation in the early '90s for supposedly not catching the misconduct when it occurred. But in 1996, after a decade of investigation by a federal panel, Imanishi-Kari was exonerated, as was Baltimore by proxy. False accusations can do just as much harm as failing to catch misconduct. Regardless of the nuances of these three cases, something is amiss with how ethical standards are upheld in science.
Addressing misconduct through transparent investigations and honest reflection
As my refresher ethics course approaches, I find myself uninterested in learning anything more about ethics. From Sames to Vacanti, we see scientists scurry away unharmed from misconduct carried out under their supervision. What I do want to learn about, though, is whether change is possible in how science handles misconduct.
How can we assure scientists the freedom to carry out their research without burdensome oversight, while also fairly considering accusations of misconduct? How should academic departments, universities, journals, and the scientific community at large handle misconduct that mostly falls on the shoulders of researchers, but also their supervisors and collaborators on publications?
First, there need to be explicit safeguards for whistle-blowers - particularly graduate students. Others have written about whistle-blowing in science. Sometimes, whistle-blowers see their accusations recognized and investigated, and the misconduct punished. Other times, cases fall apart, or false accusations are disproven. While some accusations of misconduct may prove unfounded, honest whistle-blowers should be reasonably protected from retaliation by their superiors - something that clearly did not happen in the Sezen/Sames debacle.
In fact, even beyond the scope of research misconduct, it remains very difficult for young scientists to level ethical complaints of any sort against their superiors. If scientific culture has nothing to hide, graduate students and post-docs deserve protection when trying to rectify less-than-responsible behavior of their peers or of faculty.
Second, along the same lines, there should be transparent mechanisms for punishing not only the direct perpetrators of misconduct, but also the scientists who shield it and the universities that fail to adequately enforce ethical standards. Talking about ethics in graduate school is a silly exercise if fraud is only exposed and dealt with when it reaches the highest levels of publicity.
Some have argued that research misconduct should be prosecuted legally as well. This might not be necessary if misconduct were handled publicly - but currently, universities like Columbia are free to make preposterous demands of their employees, like gag orders. Should we really accept a Tweed Code of Silence, akin to the Blue Code of Silence of many police departments, to cover up discrepancies in our taxpayer-funded work? If RIKEN can make public amends for misconduct on its watch, so can we.
[caption id="attachment\\_12499" align="aligncenter" width="640"][![858381180\_74e802627a\_z](http://berkeleysciencereview.com/wp-content/uploads/2015/04/858381180\_74e802627a\_z.jpg)](https://www.flickr.com/photos/svenwerk/858381180/in/photolist-8iUVNB-2iRqPw-qTrov-5btVAJ-fJnVCF-9NZyt-jLgLG1-cuuKqj-5T7Vts-nHqLWZ-c5CgKb-99UUiu-ciirvy-7WXmd1-dQfQog-5D6NiM-6EipE-9XgP2-dxsmUF-6a8Ko1-mJ4Ydt-7Jzzq4-bVh76d-dA3x33-pF5VbF-4muygi-gXzR2M-ht9e4y-pVmjmJ-pXzTCm-mJ4MSM-p1Jngn-p1FhTb-5qL5rM-b8sPe8-5huKab-8c5rCZ-91Qy4g-nKip3r-dCBZ6R-o5c1ot-6a8KiJ-h987kS-ePUJ8a-WuiiL-7pFDc7-o4THe8-4vq5sR-9o2KoV-5hqqgF) svenwerk - CC BY-NC-ND 2.0[/caption]
Lastly, we need to have frank conversations about ethics with scientists at all stages of their careers. Faculty should be held to the same standards of face-to-face discussion about ethics as their students - and in fact, those discussions should be held with their students. Not much can be gained from simply reflecting on the worst misdeeds of scientists over the last century (Shame on those lying scientists! We're nothing like them!), but much could be gained by building vertical trust within departments and discussing present-day ethical issues, big and small.
Research shows that misconduct by even a single superior can damage an organization's reputation. Similarly, scientists who are forced to retract their published work can expect a dip in citations of their other papers - but scientists who publicly self-correct their errors sometimes see an increase in citations of their work. Only with honest, collective discussion - and an acceptance that sometimes even great scientists err - can science hope to decrease its (admittedly low) rates of misconduct. As a friend pointed out to me recently, it will also become increasingly difficult for fraudulent papers to slip by in the internet age, with the advent of websites like Retraction Watch and PubPeer. Hubris is a defining quality of many a rockstar scientist, but it is a quality that opposes the self-correction that should theoretically undergird all of science.
If graduate students, post-docs, or faculty are committing scientific misconduct, it is not occurring in a vacuum. Competition is a healthy fuel for science, but when it backfires - pushing scientists to fudge their data for money and prestige - we should be prepared to investigate transparently and work to prevent recurrence. No one needs to review the definition of Responsible Conduct in Research. But scientists do need to be reminded of what is at stake when misconduct occurs (the reputation of science as a whole), and we must incentivize honesty, and deter misconduct, in the most transparent ways possible.
Top image: Michael Gallagher CC BY-SA 3.0