SPEAKER 1: What are you doing, Peter? PETER: Charles, your editor, told me to check his sources next time. SPEAKER 1: It's a fake. Empire State Photographic Department confirms it. Pack your things. Get out of my building. PETER: I was just-- SPEAKER 1: You're fired. You know we're going to have to print a retraction now. I haven't printed a retraction in 20 years. [MUSIC PLAYING] CHARLIE BENNETT: You are listening to WREK Atlanta. And this is Lost in the Stacks, the Research Library Rock 'n' Roll Radio Show. I'm Charlie Bennett in the Virtual Studio with Ameet Doshi, Fred Rascoe, Wendy Hagenmaier, and the Ghosts of the Replacements. Each week on Lost in the Stacks, we pick a theme and then use it to create a mix of music and library talk. Whichever you are here for, we hope you dig it. AMEET DOSHI: That's right, Charlie. Our show today is called Retractions. CHARLIE BENNETT: Oh, are we going to take back all of the stuff we've said in previous episodes about like Radiohead and Flannery O'Connor? AMEET DOSHI: Maybe on another episode. This time we're going to talk about retractions in published scholarly research. WENDY HAGENMAIER: The scholarly record is definitely not a permanent record. Things change all the time and sometimes corrections have to be made. FRED RASCOE: And with more and more scholarly papers being published every year, keeping track of those corrections can be a challenge. WENDY HAGENMAIER: Which is where Retraction Watch comes in. Today we'll be speaking with Ivan Oransky, the co-founder of the Retraction Watch website and database, about keeping the public informed about unreliable research. AMEET DOSHI: And our songs today are about lying, taking shortcuts, and bad behavior getting exposed. Those are pretty negative music themes, I know. But while there are sometimes honest and reasonable reasons for scholarly research to be retracted, sometimes it's because somewhere along the way, somebody did something they weren't supposed to do. So let's start with "Something Bad" by Fox and the Law right here on Lost in the Stacks. [MUSIC PLAYING] AMEET DOSHI: This is Lost in the Stacks and joining us is Dr. Ivan Oransky, co-founder of Retraction Watch. Dr. Oransky is also Editor in Chief of Spectrum and Professor and Distinguished Writer in Residence at NYU's Carter Journalism Institute, where he teaches medical journalism in the Science, Health and Environmental Reporting Program. He also serves as the President of the Association of Health Care Journalists. Ivan, welcome to the show. IVAN ORANSKY: Thanks for having me. AMEET DOSHI: Well, let's start off with the basics because there's already some jargon here. What is a retraction? IVAN ORANSKY: So in the scientific or academic scholarly literature, a retraction is a notification to readers, maybe subscribers to a journal, anyone who might come across something, some people just searching for it on the web even, that a particular piece of research is unreliable for some reason. Now, there are lots of reasons why research might be unreliable. Some of them are honest error. People make mistakes and they want people to know that. And that's actually a good reason for what's known as a retraction. But often, actually about 60% or even about 2/3 of the time, depending on what data set you look at, it's because there was some misconduct involved, something that might be considered fraud. And that could be someone made up the data or they made the data look better than they really were.
They sort of cherry picked it, if you will, or they did something like they plagiarized. They sort of stole someone else's work and presented it as their own. So that's what a retraction is. And again, it's the sort of, if you will, nuclear option of correction in academia, in the scholarly literature. AMEET DOSHI: And you have a very interesting background. You're an M.D. You have worked in journalism, you're teaching journalism. How did you end up doing this kind of work? How did you create Retraction Watch? What was the impetus there? IVAN ORANSKY: Well, as part of my background, I actually became very interested in two, if you will, parallel streams without realizing I was becoming interested in them throughout college and medical school. And one was just academia and how it all works. I covered the faculty when I was an undergraduate at my college paper, my daily college newspaper. And I'd see tenure fights and all sorts of things happening, good behavior, bad behavior. The other thing I did, when I was in medical school, I was the co-editor in chief of something called Pulse, which was the medical student section of JAMA, the Journal of the American Medical Association. And so I learned how peer review works, or in many cases, doesn't work, and sort of really learned from the best there. And so those two things were always in my mind as I was figuring out what to do with my career. And so what I did was about, now it's 11 years ago. So in August of 2010, Adam Marcus and I co-founded Retraction Watch. And what had happened was Adam, like me, is a medical journalist, and had worked, and still does, in trade professional publishing. He edits a magazine for gastroenterologists. At the time, he was editing a magazine, a publication for anesthesiologists. And he had broken a really big story about a guy named Scott Reuben, who was a pain researcher, so in anesthesiology, studying a particular painkiller. And it turns out he had made up all the data in all the clinical trials. These were all the trials on humans that he allegedly had done, and in fact he published a lot of them, dozens of them. All the data were made up. And Adam found out about this because he saw retractions start happening, he was talking to all these editors and these researchers all the time, very well connected there. And so he broke this story in his own publication. And I followed that up myself; I was at Scientific American at the time. And Adam and I knew each other just from being trade medical journalists and all that. And I said to him one day, there's something going on here. These retractions, I mean, nobody's paying attention to them. You are, I said to him. And I have over the years periodically, but you've really shown I think how these can be huge stories that nobody's paying attention to and they're important stories because often they're about misconduct, fraud, et cetera, bad behavior. In fact, often with taxpayer dollars. So I said, and the other problem that we noticed was that all these retraction notices, they didn't say anything or they didn't say what had really happened, which is a transparency problem. Because if you're thinking about science and you're thinking about it being self-correcting and academia sort of always wanting to get closer to the truth, it's never quite at the truth because how can you know exactly what's true? But you can get really, really close to it and always learn and improve your knowledge, all knowledge is provisional.
Well, that's a problem because retractions are supposed to be, they're pretty serious and everyone should take them seriously. So he said sure, that sounds great, whatever, we'll start a blog. And frankly at the time, we didn't really know what we were getting into. We were off by an order of magnitude. In other words, off by about a factor of 10 in the number of retractions we thought there were. We were sort of off to the races almost immediately and really just couldn't even keep up with all the retractions. And so I like to think we still would have done it if we'd known there would be so much work, and it's become in many ways what I'm known for, and certainly a share of my passion and my life's work. On the other hand, there's a lot of work and effort, but it's really rewarding. FRED RASCOE: So it started as a blog, but Retraction Watch now has-- it's a database. Now, you can like really track retractions over the years, track trends. There's a lot of data. How did you come to the decision that this was like serious enough? This needed to be a database, not just something that I report on like as a news person or journalist? IVAN ORANSKY: So we very quickly realized we couldn't keep up. And this is within probably a few months. And we'd have these massive spreadsheets, not well organized, just sort of so we could list all the retractions and keep up, or try and keep up, with them. And again we quickly realized, hey, we couldn't keep up, and none of the other sort of databases, the ones that do other things as well but would include retractions, were keeping up either. There were lots of retractions that just weren't making it into the typical places that people look for them. So 2010, we co-founded Retraction Watch. It was around probably 2013/2014 that we really started thinking, OK, what is a better solution for this? And as it happened, 2014 is when three different foundations really came to us and said, hey, we think what you're doing is important and we'd like to support you and help you grow and all of that. And we said great because we've got this need for a database. And so we sort of thought through what would be included in the database, the usual sort of metadata, and there was authors and institutions and journals and all that. But we also added reason for retraction, which is not available in any other database, and so just because that's what we do. Otherwise you'd have to go click through to the retraction notice to find that out. And so we started creating that. We hired someone whose thesis actually was all about retractions. So perfect. There's someone who essentially had a PhD in retractions and was looking for a job as we were thinking about what do we need to actually make this thing work? And wow, that was terrific, sort of a bit of, I don't know, fate or just luck or coincidence. And Alison Abritis is her name. And she's still with us, she still runs our database. And so we launched it officially at the end of 2018 and we continue it. Right now there are more than 28,000 retractions in the database. AMEET DOSHI: If you're just joining us, we're speaking with Ivan Oransky of Retraction Watch. And we'll be back with more about retractions after a music set. AMEET DOSHI: File this set under hv6117.b41. [MUSIC PLAYING] WENDY HAGENMAIER: You just heard "Burn the Witch" by Queens of the Stone Age and before that, "False Claims" by Nar. Songs about not being truthful. [MUSIC PLAYING] FRED RASCOE: Welcome back. Today's Lost in the Stacks is Retractions.
And we are talking to Ivan Oransky from Retraction Watch about misbehavior and mistakes in science and how to correct the record, which is the point of Retraction Watch. So Ivan, you've got this blog that turned into a website that turned into a database of retracted papers. And I think you mentioned 28,000 retracted papers that you have in your database. When one of those papers is retracted in the journal, what happens next? I mean, obviously you catalog it in your site, but what happens at the journal? Does what you do have any repercussions that make something else happen at the journal or how it's covered in the press? IVAN ORANSKY: Yeah. So there's sort of a set of best practices, which are not always followed, to put it mildly. If a paper is retracted, it should be clearly marked as retracted. You should not be able to access it, or even see sort of an abstract from it in a particular place you would look, without seeing that it's been retracted. I mean for example, researchers in life sciences, in other words, in biomedicine, often start their searches with something called PubMed. So that's MEDLINE, and there's PubMed Central. There's all sorts of things wrapped into that. But that's a government database, with many, I mean, millions of abstracts about biomedicine from around the world. And so when you go there, you should be able to tell that a given abstract, even before you click on the full text or the rest of the paper, you should be able to tell it's been retracted. In fact, PubMed is one of the places that has done a really nice job when they get that information. There's actually a big banner that comes up, that pops up, that says this is, I don't remember the exact language, but retracted. So you kind of can't miss it. I'm told it's salmon colored. But I'm colorblind, so I just know that it's big and very obvious. So that sort of thing should happen. And then when you click on the paper, you should get-- if it's a PDF, a big watermark over it, maybe even in red letters: retracted, you know, retracted paper, this article's been retracted. There should be a notice, something that actually explains why the paper is retracted in some level of detail, so that researchers and others know why the paper is retracted. All those things should happen. Frankly, from the point of view of journalists, and I am, of course, a journalist, if you find out that a paper you have covered has been retracted, well, I think you're obligated to sort of let your readers, listeners, viewers know somewhere. And maybe that means putting a note on top of the paper, excuse me, on top of the article or the broadcast. Sometimes it's writing a whole new article because it's that important. And so journals should also, I think, put out press releases about retractions, certainly if they've put out a press release about the original paper. So all those things should happen. Most of the time, well, not most of the time, but many of the times, they don't. And this, by the way, has been demonstrated, if you will, with evidence, peer reviewed evidence. And it's always ironic I think a little bit when I talk about peer reviewed evidence because we talk about a lot of the issues in peer review. But there are for example librarians such as yourselves who have studied this and have actually done analysis of how often the journals, the publishers, the databases like PubMed et cetera, something called Web of Science from Clarivate and things like that, how often do they actually note that a paper is retracted?
And depending on the study you're looking at, the evidence, it's just not good. And so that's actually one of the reasons we created the database, to make sure it was all there in one place. There are other follow-ons from it in terms of what happens to that research and whether or not people cite that researcher, or work by that researcher, or cite that work, of course. All those things are also downstream effects that I think we need to pay attention to. We actually have a sort of leaderboard of the most cited retracted papers on RetractionWatch.com. And so if people are interested in that, they can see what kinds of things happen after something's retracted in terms of the literature. But those are all the things that should happen. But frankly, they often don't. AMEET DOSHI: Is there a debate within Retraction Watch, or just perhaps within the community, about the approach of watermarking, which you described, big salmon colored letters [CHUCKLES] everywhere this paper might be seen, versus treating it like, we'll say, a really terrible thing that you used to be able to find on the internet, and the large Silicon Valley internet companies will do everything in their power, perhaps even mandated by law, to remove that content wherever they can find it? Because there is evidence that some of these retracted papers kind of take on a life of their own in spite of the watermark. Like is there a debate there? Can you actually remove a retracted paper from the servers across the globe or is that infeasible? IVAN ORANSKY: So it is certainly feasible, although challenging, and to be fair, it does happen in very rare cases. For example, if someone has realized that a photo that someone took of them, a clinical photo, is identifiable, and they either didn't give-- they didn't give permission, or they thought they'd given permission, or there's something else in terms of a patient privacy issue, well, those are removed. They're disappeared. And there's a note about why. Or sometimes with sort of legal issues, let's say someone has published a proprietary data set that they didn't have the right to. That's not really, it's not on the same scale, but you can understand why. OK, well then, sorry, that's got to be removed, or at least the data set has to be removed. But those are the vanishingly rare cases. And best practices are that you leave the paper up but watermarked, because you actually don't want it to turn into this sort of, well, that was an unpopular idea and we therefore should remove it. No, we should be vociferous about what was wrong with it. But again, unless it is illegal in some way or again violates patient privacy. And so you're absolutely right that some of these papers take on a life of their own. I'm thinking in particular about, of course, the paper that sort of spawned, if you will, the anti-vaccine movement and turned out to be fraudulent. This was published in 1998 in the Lancet, alleging that the MMR vaccine-- the Measles, Mumps, Rubella vaccine-- was linked to autism in children, which of course is not true. And this paper, it took 12 years to be retracted. And even after it was retracted, it continues to be cited. Now sometimes it's cited to say this is wrong. But that has sort of-- the horse, never mind, has left the barn. The horse has probably traveled around the world by that time, maybe it's back at the barn by now. But the sort of damage of that paper, the after effects of that paper, are long lasting and don't seem to be affected by the fact that it was retracted.
Maybe in some ways bolstered by that, because people can say, oh, this was somehow a vast conspiracy to get away from the truth, which is again not true. So I think that it's a little bit of a free speech issue, although most of these are not government, it's not a First Amendment issue. These are private publishers, whether for profit or not. But it's sort of, do you want the record to reflect what happened or do you want the record to reflect some sort of vaguely sanitized version of what happened? And I think we need to be conscious of that. FRED RASCOE: When you're talking about the anti-vaccine movement, the lie gets halfway around the world before the truth gets its shoes on, according to, I think, a Mark Twain quote. IVAN ORANSKY: Well, I looked that up actually at one point because I was quoting it, and it turns out Twain may or may not have said it, but it was somebody else, I think. FRED RASCOE: Oh, misattribution. I retract! IVAN ORANSKY: No, no. Most, not most, but many quotes attributed to Mark Twain turn out not to be Mark Twain. Mark Twain and Winston Churchill apparently said most of the clever quotes in the universe, but it turns out that they didn't. And so anyway, that's a minor detail. But we all make that [LAUGHS] sort of error. FRED RASCOE: You're listening to Lost in the Stacks and we'll talk more with Ivan on the left side of the hour. [MUSIC PLAYING] MARK RIEDL: Hi there. This is Mark Riedl of Georgia Tech's Computer Science Department. Or is it? Maybe my voice has been deep faked and this is just a digital forgery created by a neural network. Either way, you're definitely listening to Lost in the Stacks on WREK Atlanta. [MUSIC PLAYING] CHARLIE BENNETT: Today's show is called Retractions. Retractions remain a problem because, whether caused by honest error or fraud, the end result of a paper being retracted is a stigma on the author, as well as the journal that published it. As science journalists Jeffrey Brainard and Jia You wrote in an article about retractions in the journal Science in 2018, "Ironically, the stigma associated with retraction may make the literature harder to clean up. Because a retraction is often considered an indication of wrongdoing, many researchers are understandably sensitive when one of their papers is questioned." But now we should say that perhaps removing that stigma could encourage the retraction of honest mistakes. As Quan-Hoang Vuong wrote in an article in Nature in 2020, "Right now stigma keeps researchers from admitting their mistakes, yet multiple examples show that researchers who act to correct mistakes are lauded rather than shamed. If such transparency were routine, it might ease the pain of retraction and increase the public's understanding of how science works." Well, while we try and parse that mix of shame and pride, let's file this set under q175.37.j84. [MUSIC PLAYING] You just heard "Sneaking Sally Through the Alley" by Robert Palmer. Before that, "Lobachevsky" by the venerable Tom Lehrer, by request of our guest today. Those were songs about people who took dubious shortcuts instead of doing the right thing. [MUSIC PLAYING] AMEET DOSHI: We're talking about retractions on Lost in the Stacks today and our show is all about how to improve research and the scientific record. Our guest is Dr. Ivan Oransky, co-founder of Retraction Watch. So we didn't get to this in the last segment, but I am really curious about some of the trends you're seeing with respect to retractions.
You alluded to this earlier in the show, that there have been instances of fabricated data and willful misconduct, sometimes using taxpayer funds. I imagine that's very exciting for politicians [CHUCKLES] to say, look at what your tax dollars are funding. But it also has implications for just real lives, especially in biomedicine. But do you see these as being the predominant source of retraction or are there other kinds of error happening? IVAN ORANSKY: So the first thing in terms of trends to note is that retractions have risen dramatically, the number of retractions has risen dramatically, over the past two decades. So there were about 40 in 2000, and we're still entering some from last year actually, because they keep flowing in. But something like 2,300, 2,400 from last year. So this is a pretty dramatic increase. Now again, to be fair, the denominator has changed. So there are more papers being published. But it certainly isn't 40 or 50 times the number of papers being published. It's maybe three or four times. So clearly, there's been an increase. And a lot of that has to do with the fact that people are looking for these things and people are more aware of retractions. I mean, we became more aware of retractions and we created a blog about them. So we have readers who are obviously interested in these issues. In terms of what leads to a retraction, depending on your definition of misconduct and also which data set you're looking at, it's something like 2/3 of retractions are due to misconduct. So you could say that predominates. About 20% of the time it's due to something like an honest error. I mean, this morning as we're speaking, the post we had up today was about someone who made an error essentially using Excel, which unfortunately happens a fair amount. Honest error: they realized they made a mistake, and they fessed up about it because they found it, retracted the paper. We think that's actually an example of, quote unquote, "doing the right thing." But 2/3 of the time, sometimes it's sort of the tried and true recipe. You make up the data, or you do something which is known as p-hacking, which technically isn't misconduct but is pretty damn close. The idea is that you're kind of beautifying the data. You're making them look better or you're cherry picking the data in some way. But then there are weird things like making up email addresses so that you can do your own peer review, which is a story we first reported on 10 years ago now and still fascinates me, because it's sort of, if you will, ingenious even though it's fraudulent and identity theft in some cases. But you have someone who's dedicating, either an individual researcher or something known as a paper mill, they're dedicating all this energy to doing that instead of the science. Well, wouldn't it be great if they actually were that clever and could do the science? And they probably can. But now it's tainted forever. So there are sort of weird examples of misconduct which may not be covered by the federal codes, if you will. But about 2/3 of the time, it's due to misconduct. So the stigma that retractions have, which is part of the reason why a lot of journals and researchers fight against retraction, in other words, find reasons not to retract, the stigma you could say is well earned. FRED RASCOE: So retractions, as you say, are increasing, from just a handful two decades ago up to, I think you said, 2,300 in 2020.
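A minimal sketch of the p-hacking Ivan describes above: if you run enough comparisons on data that contain no real effect and only report the one that crosses the significance threshold, something will eventually look "significant." The group sizes, the 20 comparisons, and the 0.05 cutoff below are illustrative assumptions, not anything from the show or from Retraction Watch.

```python
# Illustrative only: 20 comparisons of pure noise, reporting the first one
# that looks "significant" -- the selective-reporting core of p-hacking.
import random
from statistics import mean, stdev
from math import sqrt, erf

def approx_p_value(a, b):
    """Rough two-sided p-value for a difference in means, using a normal
    approximation to the t statistic (adequate for an illustration)."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = abs(mean(a) - mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

random.seed(1)
# "Treatment" and "control" are drawn from the SAME distribution: no real effect.
comparisons = [([random.gauss(0, 1) for _ in range(30)],
                [random.gauss(0, 1) for _ in range(30)]) for _ in range(20)]

for i, (treated, control) in enumerate(comparisons, start=1):
    p = approx_p_value(treated, control)
    if p < 0.05:
        print(f"Comparison {i}: 'significant' difference, p = {p:.3f} (pure noise!)")
        break
else:
    print("No comparison crossed p < 0.05 this run; try another seed.")
```

With 20 independent looks at a 5% threshold, the chance that at least one noise-only comparison appears "significant" is roughly 1 - 0.95^20, about 64%, which is why quietly reporting only the comparison that "worked" is, as Ivan puts it, pretty close to misconduct.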
And researchers are coming to you for that database because, as you say, you can't always get the full retraction information from the individual journals. Is it your, I guess, hope or ambition that Retraction Watch is like the one trusted source for retraction information, like everybody would be able to come to Retraction Watch? Or maybe that idea kind of frightens you a little bit, I don't know. IVAN ORANSKY: Well, no. I mean look, I'm a big believer in diversity in all forms. And so the same way I don't think that there should be only one car company or even one electric car company or one company that makes my laptop, I mean, I think there should be lots of high quality information about retractions. It happens that right now we have far more retractions in our database and far more descriptive information and metadata than anyone else does. That doesn't need to stay the same. And frankly, maybe it shouldn't stay the same. I often say, look, if publishers and indices and lots of others were doing what is considered best practices uniformly and universally, I have lots of other things that I could be doing with my life. Maybe they'd be as quixotic as creating a Retraction Watch database. But the point is, I often say that. And so in many ways, no, I hope that others adopt this sort of approach to things. On the other hand, right now I'm very happy that we can provide such great data and metadata, whether it's to researchers who are doing research on retractions or scholarship or publishing, or researchers who are just looking through their own libraries of information, and librarians, for example, are helping them use something like Zotero, which we partner with in order to basically flow our data into whatever you look at. Zotero is a sort of bibliographic management software that does lots of other things too. If you have your library there, you will get an automatic ping. You don't even have to set anything up. You'll get a ping whenever anything in your library has been retracted. Why? Because they pull in our database once a day and we partner with them, which I'm delighted about. And we partner with others as well. And so we just want it to be useful. And if others can make it more useful, whether because they're studying it or adding to it or sort of making it better, that's fantastic. Again, the other thing though is we've noticed, even on the Retraction Watch blog side, lots more reporters are covering these issues now, which we think is terrific. So it's not that we were the only people covering it, but there weren't that many at the time that we launched it in 2010. Well now, we compete and others compete with us for stories, which is terrific. I mean again, there shouldn't be only one news outlet like Retraction Watch covering scientific misconduct. And everyone's going to cover it a bit differently, make different choices. That's terrific. Again, you're going to buy this car or that car or the other car. But you wouldn't want only one car in the world, one car company in the world, either. FRED RASCOE: We've been speaking today with Ivan Oransky. He is the co-founder of Retraction Watch, the website that tracks journal articles and scientific research that have been retracted due to either scientific error or misconduct or fraud. Ivan, thank you so much for joining us today. IVAN ORANSKY: Thanks so much for having me. This was a lot of fun. [MUSIC PLAYING] CHARLIE BENNETT: File this set under z7405.f73.h54. [MUSIC PLAYING] You just heard "Outlaw Song" by 16 Horsepower.
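A rough sketch of the kind of check Ivan describes a reference manager doing against the Retraction Watch data: compare the DOIs in a personal library against a regularly refreshed list of retracted DOIs and flag any matches. The file names and CSV columns here are hypothetical, not the real Retraction Watch export format and not Zotero's actual implementation.

```python
# Hypothetical sketch: flag library items whose DOIs appear in a daily export
# of retracted papers. File names and columns are illustrative only.
import csv

def load_retracted_dois(export_path="retraction_export.csv"):
    """Build a lookup of retracted DOI -> stated reason from a CSV export."""
    retracted = {}
    with open(export_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            doi = (row.get("doi") or "").strip().lower()
            if doi:
                retracted[doi] = row.get("reason", "unspecified")
    return retracted

def flag_library(library_path="my_library.csv", export_path="retraction_export.csv"):
    """Print a warning for every library item whose DOI is in the retracted set."""
    retracted = load_retracted_dois(export_path)
    with open(library_path, newline="", encoding="utf-8") as fh:
        for item in csv.DictReader(fh):
            doi = (item.get("doi") or "").strip().lower()
            if doi in retracted:
                print(f"RETRACTED: {item.get('title', doi)} (reason: {retracted[doi]})")

if __name__ == "__main__":
    flag_library()
```

The daily pull Ivan mentions is just what keeps that lookup fresh; the matching itself is a simple membership test on normalized DOIs.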
And we started that set with "Your Cheatin' Heart" by Hank Williams. Those are songs about bad behavior becoming known to all. And we went cowboy for that set. [MUSIC PLAYING] Today's show is called Retractions. Let's close with a few facts about retractions, courtesy of analysis from the Retraction Watch database team. Ameet, why don't you start? AMEET DOSHI: All right. Kicking off, did you know that nearly half of retracted papers involved errors or problems with reproducibility, not fraud? WENDY HAGENMAIER: Authors from the US and China lead all countries in the overall number of retractions. CHARLIE BENNETT: High impact journals are taking the lead in issuing retractions. FRED RASCOE: And retractions overall remain relatively rare, with a rate of about four retractions for every 10,000 published papers. CHARLIE BENNETT: And after we've heard those few facts, let's roll the credits. [MUSIC PLAYING] AMEET DOSHI: Lost in the Stacks is a collaboration between WREK Atlanta and the Georgia Tech Library, written and produced by Ameet Doshi, Charlie Bennett, Fred Rascoe, Marlee Givens, and Wendy Hagenmaier. FRED RASCOE: Today's show was edited and assembled by me. So if you noticed any errors, please bring them to my attention for immediate correction. CHARLIE BENNETT: Fred has no shame. Legal counsel and a book of verified Mark Twain and Winston Churchill quotes were provided by the Burrus Intellectual Property Law Group in Atlanta, Georgia. WENDY HAGENMAIER: Special thanks to Ivan for being on the show, to Soo-Kyoon for the show idea, to everyone who wants more transparency in research. And thanks as always to each and every one of you for listening. AMEET DOSHI: Find us online at lostinthestacks.org and you can subscribe to our podcast pretty much anywhere you get your audio fix. [MUSIC PLAYING] WENDY HAGENMAIER: Next week on Lost in the Stacks: there is power in the union. What could a library and archives do with that power? Hmm, we'll find out. [MUSIC PLAYING] AMEET DOSHI: It's time for our last song today. As the Retraction Watch blog says, "Science prides itself on being self-correcting," but sometimes the corrections don't get the visibility they should. So thankfully, there are services like Retraction Watch to make sure that we don't lose track when some scholar does wrong. So let's close with, uh, "Done Somebody Wrong" by Elmore James. Dust my broom, right here on Lost in the Stacks. Have a great weekend everyone. [MUSIC PLAYING] Got to love Elmore James. CHARLIE BENNETT: Yeah. [MUSIC PLAYING]