In most academic environments, retractions are a taboo topic. They are only mentioned as break room gossip or in cautionary tales meant to scare early-career scientists away from research misconduct. Ivan Oransky is determined to change that.

Oransky is a professor of medical journalism at New York University and editor-in-chief of The Transmitter—the editorially independent neuroscience blog funded by the Simons Foundation. He is also the co-founder of Retraction Watch (along with fellow journalist Adam Marcus), a blog entirely dedicated to reporting on retracted papers and the often bewildering stories behind them.
“Back in 2010, we thought we’d start this thing, there would be a couple of retractions a month, and only our mothers would read it,” said Oransky. He and Marcus were surprised to find that retractions were rampant—just that year, there were over 400 retraction notices in scientific journals—and, maybe even more surprisingly, people actually wanted to read about them. Since 2010, they have reported on thousands of retracted papers, with their retraction database containing over 50,000 entries.
Oransky recently gave a talk at Rockefeller for the R3 lecture series. The lectures are part of a broader effort by the Center for Clinical and Translational Science and the Markus Library to assist researchers in enhancing scientific rigor, reproducibility, and reporting (the three titular R’s) in their work. Natural Selections sat down with Oransky after his talk to chat about his views on the current landscape of paper retractions and the role they play in science. Our conversation has been edited for length and clarity.
Natural Selections: So I guess the first question we have for you is what would you qualify as a retraction?
Ivan Oransky: A retraction just means that a journal publisher says a paper is unreliable for some reason, and what they’re supposed to do is mark it as retracted. There are actually best practices [for doing this] … from the National Information Standards Organization. One of the things they recommend is putting “retracted” in front of the title and also having a notice that says why it was retracted. Most of the time, the PDF [of the paper] will also have a big red watermark over it. The journals are not supposed to remove it from the world. There are some very rare cases where they do that, but those have to do with privacy. Let’s say that you published a case report, or some kind of clinical data where someone could identify themselves. In those cases, the studies actually get removed from journals, but in general they are supposed to remain available for people, with a big warning about the reliability of the information in them.
NS: What role do you think retractions play in science?
IO: I think that retractions, when they work properly, are a way of cleaning up the literature. I liken it sometimes to sewage treatment plants. There are papers in the literature that people should know have problems, and they should know what the problem is. They may still want to read them, but they will read knowing the information there is not completely reliable. I think that the purpose of retractions in general is to make sure that there’s a higher likelihood that something you’re reading that isn’t retracted is reliable.
NS: How do you think reporting on retractions affects the general public’s trust in science?
IO: [At Retraction Watch], we try to put context into everything that we write about. We put caveats like, “Listen, we think that peer review in its current form is pretty problematic, and we think the system’s overwhelmed.” But I think we have to be honest about that, so that we can actually make it better. Richard Nixon taught us that the cover-up can be worse than the crime. So whatever happened at Watergate wasn’t good, but pretending it didn’t happen and then trying to distance yourself, that’s where trust was really lost. Our main thesis for fifteen years has been that talking about the problems in peer review is the only way forward. The same way it is in science, you have to be honest about what’s happening and work together to try and fix it, as opposed to pretending it isn’t happening. Unfortunately, most human endeavors, and most human institutions, take the latter approach.
NS: Within science, are there things we could do to reduce the stigma around retracting papers?
IO: That’s why I spent several slides in my talk highlighting cases where people had done the right thing. I think if you increase the number of retractions that are for what we colloquially think of as “honest error,” you actually end up eventually overcoming the stigma. Because instead of it being that two-thirds are due to misconduct, maybe only one-third is due to misconduct, and the other two-thirds are just like, “Hey, I made a mistake.” We actually have a list on our site of Nobel Prize winners who have retracted papers. If you look among that list, you will see Frances Arnold. Not only did she not hide her retraction, she publicly announced the retraction before it even appeared in Science. Things like that are a great way to normalize retractions.
NS: In your lecture, you used the metaphor of cancer screening to talk about retractions. Do you think journals need to adopt some kind of screening mechanisms for research misconduct or data manipulation when reviewing papers?
IO: I do think so. In fact, what’s happening now is this whole industry, mostly for profit, of companies creating screening tests for papers, using our database as well as others.
What they’re doing is looking for signals that predict papers that might get retracted. Some examples are if the author of a paper has retracted a paper in the past, or if the paper cites a lot of retracted papers, or if there were multiple changes in authorship throughout the submission process. They’re all good screening tests, but you know, back to the metaphor, just like with a screening test for cancer, a human has to interpret those signals. Because you can get both false positives and false negatives. One of the things that authors have started to do is try to evade these systems. With plagiarism, for example, what overlap-detection software does is give a percentage of how much overlap a certain paper has with something else. So people do a little rewrite and shave a little bit off until it’s 29% instead of 30% overlap. I worry that any system, no matter how sophisticated, can eventually be overcome, because it becomes an arms race.
NS: You kind of posed this question in your talk, but do you think there are more papers being published that need to be retracted, or are we just better at catching them?
IO: I don’t think those two are mutually exclusive. It can be a wave and a particle, you know? It’s very clear that we’re catching more of it. On the other hand, it’s starting to feel like there’s more of it, but it’s clustered and less serious. In other words, a lot of retracted papers are coming from so-called “paper mills”—companies that are hired to produce research papers, often through plagiarism or data fabrication—and are just complete junk anyway.
NS: Do you think there’s a difference between a paper getting something wrong and a paper that needs to be retracted?
IO: I always go back to the Committee on Publication Ethics guidelines, and they’re pretty clear. They focus a lot on misconduct and fraud. Even if you come to the right answer, which has happened, if people speculated and didn’t actually have the data to reach that conclusion, that paper should still be retracted. If there’s a significant error, you know, like you ordered the wrong mice or you made the wrong calculation, and it affects the conclusion, then the paper should be retracted. But just getting it wrong, if you didn’t do it intentionally, then that paper shouldn’t be retracted. The data is still there, and people should trust it, even if you’re interpreting something wrong. Somebody should do another paper that follows up on it, links to it clearly, and cites it. The problem is that a lot of journals really discourage that kind of give and take. People will say, “Well, if something’s wrong, just write a letter to the editor.” Sure, but a lot of places won’t publish those letters. They’ll find reasons not to. They’ll say it has to be 600 words, when the original paper was 12,000 words. They’ll say you’ve missed the three-month deadline to comment on a paper, or they will send your letter to be reviewed by the original authors. I don’t think that really helps anybody.
NS: Out of everything you’ve covered, is there one story that’s stuck with you the most?
IO: One narrative that’s stuck with us is the story of what happened at Duke around 2013, involving what turned out to be likely falsified data. When we first saw the retractions in 2013, we learned that one of the authors had been charged with embezzlement at Duke. She was using a lab card to purchase supplies at places like Staples and Target. She would then return the items she bought, but instead of having the money refunded to the lab card, she would ask for it back in cash. She did this a bunch of times, and ended up getting caught.
Now, this had nothing to do with fraud, but it made people sort of think, “If she’s embezzling money, maybe we should take a look around and see what else she’s doing.” So when [the university] went to the lab, they saw huge stacks of pipette tip boxes. So then they said, “Wait a second, if you did all the experiments that you claim to have done, you would not have so many pipette tips.”
Duke actually tried to cover up the whole case. Unfortunately for Duke, there was another lab tech in the same lab whose brother was a whistleblower attorney, and they sued the university under the False Claims Act [a law that allows members of the public to sue people or institutions that are attempting to defraud the U.S. government]. Eventually, Duke settled that case for $112.5 million. The judge was so impressed with the whistleblower in this case that he awarded $33.75 million of the settlement money to him personally.