
Why retractions are on the rise.


Ivan Oransky, MD

Ivan Oransky: I really want to emphasise, as Leeza did, that this is very much a conversation. I'm speaking with you, not at or to you, as it were. And I'm going to talk for a few minutes, because I want to set the stage a little bit in terms of giving you a sense of where I, and more importantly we at Retraction Watch, see certain trends, certain things happening, and to foster conversation. Please, as was just mentioned, challenge me, grill me, all of those things. I always look forward to great conversations and to learning from all of you. I see some familiar names and faces among you, so I look forward to those conversations, as well as with those of you I've never spoken with. My main thesis for the next few minutes is that retractions are on the rise. That is really not controversial; I'll show you some data anyway, because we should all be evidence based. But not enough papers are being retracted, which I suspect is not all that controversial among this group either. I do want to at least give you some food for thought in that direction. So, to set the stage a little: this is a plot of the rate of retraction, the percentage of science and engineering publications retracted. The background and context data are from the NSF; the retraction data are from our database, which you can visit at retractiondatabase.org. We're actually coming close to 39,000 retractions there, compared with about a third of that, about 13,000, in PubMed, and fewer than that in Crossref, probably nine or ten thousand at this point. So I don't think it's controversial to say that we have the leading and most comprehensive database of retractions. What you see here is that over the past two decades, the rate of retraction has done two things. One is that it has remained fairly small.
So on the left there, the y-axis is the percentage of papers published in a given year that have been retracted. We're still below a tenth of a percent; we spiked to about a tenth of a percent several years ago, which I'll talk about, but the rate is increasing, with fairly linear growth. I want to note that spike around 2015. This is a little bit in the weeds, but these are papers retracted that had been published in that year. In other words, this is not the number of retractions that happened that year; it's the number of papers from that year that were eventually retracted. Since there is a lag, which I will talk about very quickly on the next slide, I would fairly comfortably predict that you're going to see that curve continue to climb, because, as I will make clear shortly, there's a lot more scrutiny, including by some of you who are at this meeting today. So again: quite rare, but definitely increasing. As for the actual numbers, we didn't quite get to 5,000 last year: in the year 2000 there were about 40 retractions, and in 2022 there were about 4,600. As a rate, it didn't go up quite that dramatically, not a hundredfold, but it still went up fairly significantly. Why is this happening? People often ask me, does that mean there's more misconduct happening? That's certainly possible, and I would in no way rule it out. But if you want to be evidence based about it, the evidence is much clearer that it is because of what we refer to as sleuthing: people finding issues in the literature, and people able to do that because papers are online now in ways that they weren't 20 or 25 years ago, even 15 years ago. There's PubPeer, of which, full disclosure, I'm a volunteer member of the Board of Directors of the PubPeer Foundation.
I mentioned that, and Elisabeth Bik, whom I'm sure all of you either know personally or know of through her work, is perhaps the most well-known sleuth, but we have a page on Retraction Watch of dozens of sleuths, of which Elisabeth is of course one. That page is always growing; in fact, it should probably grow now, as there are probably some people we need to add to it. That kind of work, which is largely if not exclusively unpaid post-publication work, has really grown. The other thing that has happened, certainly in the last five or six years, but even going back more than a decade, and I know at least one of you on this call knows this quite well, having been involved in it, is that journals, starting about 15 years ago with a few and with more doing it now, are actually hiring people to look at all these allegations, to look at some of the problems, to look at papers, and to find mechanisms to screen papers before they even go into peer review, before they're published, to actually cut down on these problems. All of that has really allowed for, and I think fostered, this growth in retractions, which overall is a good thing. It's just a question of how much more it should grow. And to get a little more granular in terms of trends, and maybe you could also argue this is the bad news: if the growth in retractions is good news (even though publishers don't always agree with me when I say that, they may have to admit it through gritted teeth), the bad news, if you will, is that retractions take far too long, when they happen at all. These are just headlines from Retraction Watch, to give you a little insight into some of the work we do. I should say, for those of you who don't know, Adam Marcus and I launched Retraction Watch about 12 and a half years ago, in August of 2010.
It has, in fits and starts, sometimes been growing, sometimes shrinking; we're in a bit of a growth phase again, which I really like, thanks to some generous donations, which we can always use more of. I'm a volunteer. One of the things we started doing some years ago is filing public records requests for correspondence between universities (and sometimes individual corresponding authors) and journals, where we could in the case of public universities, to see how long these things were taking. What we often found was that, contrary to what journals would like you to think, that they act so quickly, they would sit on things.

And these are, by the way, official requests. This is not a whistle-blower making an allegation that needs to be investigated; these allegations have been investigated. Authors are asking for retractions, universities are asking for retractions, and yet years later, nothing's happening. And that's when things actually do get retracted at all. You probably can't read that, I apologise, but it's a paper from mBio in 2016 that Elisabeth Bik, whom I mentioned, and two others published. The punch line is that about 2% of the papers Elisabeth looked at, and that was about 20,000 different papers, had images that exhibited features suggestive of deliberate manipulation. That meets the Committee on Publication Ethics (COPE) criteria for retraction, and yet the vast majority of them have not been retracted, and Elisabeth has kept track of all of those. That's true if you talk to lots of people who do that kind of work: routinely, in fact universally, most of what they have pointed out, and these are people with excellent track records, has not been acted on. And finally, the even further downstream problem, even when a paper is retracted, is that, again consistently, and this is from a paper that Alison Abritis, our full-time researcher at Retraction Watch who runs our database, co-authored with several librarians and library researchers, more than 90% of the time papers continue to be cited as if they hadn't been retracted. This is in the biomedical literature, which is where a lot of this work tends to focus, for better or for worse. But the point is: why would you want to cite a retracted paper in support of your idea? I don't know. Maybe you have a reason for that; I'd love to know what it is. But if you were to, for example, do that in a court case, certainly here in the US, where I know the system a little bit, nothing good would happen; you would lose the case.
You probably would be sanctioned if the judge knew that you were citing, not a scientific paper so much, but a precedent that had been overturned; there's something called Shepardising that's supposed to prevent that. And people can use our database, with Zotero or EndNote or Papers or other software and methods that actually ingest our database, and we're very excited and thrilled about that, to try to cut down on those citations. But so far that doesn't seem to be doing all that much, frankly. I think we still have to let it all be ingested and see what happens. So I'm going to stop there. Again, lightning round: I just wanted to share a couple of data points, thoughts, maybe conclusions, and with that, I just want to have a conversation, and I am looking forward to that.
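To make the ingestion point above concrete, here is a minimal sketch of how a reference manager might flag retracted items, assuming Crossref-style metadata in which a retraction notice record carries an "update-to" list naming the DOI(s) it retracts. The records, DOIs, and function name below are invented for illustration, not a real API response or a real tool's implementation.

```python
# Hypothetical sketch: flag retracted references using Crossref-style
# "update" metadata, where a retraction notice record lists the DOI(s)
# it retracts under "update-to". Sample data is illustrative only.

def find_retractions(notice_records, reference_dois):
    """Return the subset of reference_dois that some notice retracts."""
    retracted = set()
    for rec in notice_records:
        for upd in rec.get("update-to", []):
            if upd.get("type") == "retraction":
                retracted.add(upd.get("DOI", "").lower())
    return [d for d in reference_dois if d.lower() in retracted]

# Illustrative records shaped like Crossref works entries.
notices = [
    {"DOI": "10.1000/notice.1",
     "update-to": [{"type": "retraction", "DOI": "10.1000/paper.A"}]},
    {"DOI": "10.1000/erratum.2",
     "update-to": [{"type": "correction", "DOI": "10.1000/paper.B"}]},
]

refs = ["10.1000/paper.A", "10.1000/paper.B", "10.1000/paper.C"]
print(find_retractions(notices, refs))  # ['10.1000/paper.A']
```

A real implementation would pull notice records from an indexer's API or from an ingested copy of a retraction database, rather than from hard-coded samples; the matching logic, though, is this simple, which is why the speaker's point is about data flow rather than technology.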

Leeza Osipenko: So the first question is in the chat, from Caroline: why do you think Cochrane Reviews, which are extremely influential and sell themselves as being kept up to date, are hardly ever, as far as I can tell, retracted, even when they are a decade or more out of date and are never going to be updated? I would call this misconduct, as it potentially harms patients when used to inform decision making.


Ivan Oransky: It's a great question. I have to push back on the premise a little bit, though, and I'm not here to defend Cochrane by any stretch; if anyone's here from Cochrane and wants to respond, of course they're welcome to. If you look at our database, Cochrane is a pretty regular contributor to it. I was actually going through some country-specific data recently for a different purpose, and realised that the most retractions from that particular country, which does have a Cochrane Centre, were from Cochrane. And they're different: those are not for fraud or misconduct. They're withdrawn, sometimes while they're being reviewed or updated, and sometimes they're never updated, which is probably the bigger problem. So they are usually withdrawn in a slightly different way than a lot of other publishers would use; Cochrane, of course, publishes in collaboration with Wiley. I'm aware, and I think, Caroline, you certainly are as well, of cases that really should be updated, or maybe withdrawn or retracted, so I don't dispute that at all. But I don't think it's an overall reluctance to retract so much as a reluctance to retract certain kinds of reviews, and just the process not working the way we all want it to.

Leeza Osipenko: Another question, from Till in the chat as well: if a journal takes three years to retract a paper, what is the workflow within the journal? What desks does a request move across, and who is involved in that decision? It is hard to understand how a decision can take that long.


Ivan Oransky: Sure, it's a great question, and the first thing I should say is that there is no system. There are guidelines, or best practices, which are followed sometimes; other times they're pointed to but not actually followed, while people claim they're following them. This is the way life is. I see at least a few faces that I know are based in the US, and it's sort of like when my students say, "Oh, tell us about the health care system in the US," and I throw up my hands. Well, I don't throw up my hands; I speak with them, they're students. But we don't have a health care system here; we have a massive crisis. There are rules and guidelines, but again, they're not always followed. It really depends on the journal. It's not a great answer, but let me give you a couple of high-level workflows. If you're talking about a journal at a major publisher, and that's Elsevier, Springer Nature, Taylor & Francis, I could go on, there is almost certainly a committee of some kind that probably has some lawyers on it, and may in fact involve mostly lawyers, who are going to be involved in that process in some way and adjudicate it, hopefully with the input of actual subject-matter experts, though not always. To be a little bit, not naive so much as Pollyannaish: sometimes those are the cases where things actually move more quickly. When you see these three-year cases, and we've even seen six years, it's usually because there's literally no one paying any attention, or they just forgot or ignored it. There was one case, and Holden Thorp and I have had frank conversations about this, he's the relatively new editor-in-chief of Science, and some things happened before his watch, to be honest. But we had a case where we put in a public records request, and we know the editor got it, because we had a response from the editor saying, okay, got this, I'll take a look. And then nothing happened for three years. So there is no sense of, okay, I'll just speak to this person.
It's not like there's a customer service desk where things can get escalated if needed. There sort of is, but nothing is written down. If I'm being cynical, I think that's probably intentional. But let me get back to the lawyers for a second before I finish answering: lawyers are hugely involved in all these processes, in ways that almost universally slow things down. For example, Nature published an editorial in 2014, which is years ago now, pointing out that one of the reasons they'd been criticised, as everyone had, even then, was how slowly these things went, and also how mealy-mouthed the retraction notices were. Lawyers will water notices down and make things move much more slowly, because everyone's afraid of being sued. And the other thing, by the way: I would call them defence lawyers, although it's not like they're exactly defending against anything; they're defending reputations. A lot of people accused of misconduct, or even accused of errors, will lawyer up, as we say here in the States, and you end up with, again, protracted and really mealy-mouthed retraction processes that don't tend to end the way you would hope if you were purely interested in the scientific record, and frankly in justice and the facts.

Leeza Osipenko: A question from Miranda: what do you foresee about the impact of AI technologies on scientific publishing? It could be both a tool for good, in detecting problematic work, and something that exacerbates the paper mill issue.


Ivan Oransky: I don't do this a lot, but I actually put out a tweet, I think it was last week, where I said, in a frankly somewhat petulant way, that I see all the attention being paid to ChatGPT. I get it, and I'm actually just as concerned. My concern, though, is with publishers and a lot of editors; we stopped including their statements in our roundups because there were so many of them, a new "this is what we think about ChatGPT" from a journal editor every day or week. It's a distraction, albeit an important one, and it's something that hopefully, maybe, will make people pay more attention. But the problem I have, and it's the same problem I have with publishers who are suddenly, you'll forgive the term, getting religion about paper mills, which we've known about for a long time, is that it is a distraction from the central issue. What are the fundamentals? Yes, it's going to create problems. It's going to industrialise even further the work of paper mills; it's going to make all of that easier for people who want to cheat. One hopes, in the arms race that we've been in for decades now in scientific publishing, that the detection tools will keep up. But the distraction of ChatGPT, and frankly the over-focus on paper mills to the exclusion of other issues in scientific publishing, allows everyone to conveniently not discuss the real problem, which is publish or perish: the completely relentless reliance on metrics, and the reliance on papers in particular as a metric. If you don't do anything about that, yes, you can go catch the horse, ChatGPT being the horse here, after it has left the barn, but there's something else in the barn, probably worse than the horse, and it's going to come along.
We need to actually figure out the barn, and I don't even know what that metaphor means, but I will say that we have to think very carefully about what is making all of this worthwhile for cheaters, in addition to looking at individual behaviour, which again is a lot of what we do at Retraction Watch. So I think it's going to have a big effect; it already is. But I would hate for it to be a distraction.

David Colquhoun: I've been going around saying, mainly on Twitter, or on Mastodon, where I now try to stay most of the time, that ChatGPT is a major failure, and I do wish people would stop being entranced by it. Frank Harrell did a good post; a great statistician, from Duke, I think. The thing gave wrong answers in the first place, and he told it it was wrong and it modified them, and in the end it got quite a good answer, but the intelligence that went into it was Harrell's, not the bot's. And I've done exactly the same thing. People should just stop being entranced by these toys, I think. But my experience with bad papers has mainly been in the alternative medicine field. I recall I did actually get invited, to their credit I suppose, to talk to the editors of BMC Chinese Medicine, which publishes a lot of nonsense, all politically correct according to the Chinese government, no doubt. And they just wouldn't admit that many of these papers were lousy, probably some of them fraudulent. They didn't behave like scientists at all; they simply denied everything. So the work that Retraction Watch does is great, and it's drawn attention to these things, but these problems are very far from solved, I'm afraid.


Ivan Oransky: I would agree with both of your themes and comments there. I appreciate the praise, and I would completely agree that we're not in a position to solve any real problems; we're in a position, hopefully, to bring attention to some percentage of them, and not even a high one. When you were talking about Frank, and I'm sure you'll take this in the spirit it's intended, just to correct: he's actually at Vanderbilt rather than Duke. Duke looms large in Retraction Watch for various other reasons. But I was reminded, as you were saying, that the intelligence is of course ours, meaning in that case Frank's, and Frank has lots of intelligence. There's a famous scene from Seinfeld here in the US, you can all look it up, I won't take the time, from when it was still on: you used to be able to call 777-FILM for movie listings, and through crossed phone lines the calls somehow ended up at Kramer's, one of the characters of course. He would pick up the phone and it would be somebody looking for a movie, but he could never figure out from the touch tones which numbers they were pressing. So he'd say, in that sort of robot voice, "Why don't you just tell me the movie you'd like to see?" And it's sort of like that: Frank, why don't you just tell me what you want me to spit out? And, well, of course. Anyway, sorry. Let's have other questions.

Leeza Osipenko: I have another comment in the chat, from Francesco Cappa: given that for retraction the sooner the better, should the gravity of the reason for retraction also be considered? A retraction may occur for several reasons, ranging from an error by the authors, to minor issues like copyright problems with some pictures, as per the reasons reported in Retraction Watch, to misconduct. Would it be useful to also report the gravity of what has occurred?


Ivan Oransky: Absolutely. One of the reasons that Adam and I launched Retraction Watch, in addition to the journalistic reason, which is maybe less important to this audience, is that when you find a source of stories, particularly about bad human behaviour, hiding in plain sight, that's wonderful for journalists. Obviously we want to push further than what's publicly available. But the other reason we noted was that retraction notices were awful. They were the opposite of transparent. There was a particular journal, the Journal of Biological Chemistry, that used to publish a fair number of retractions, and went on to publish even more once they got better. But they used to have one-line retraction notices: this article has been withdrawn by the authors. That tells you absolutely nothing. It doesn't tell you whether the authors realised they had made a mistake and wanted to withdraw the paper, which we should honour and celebrate, or whether the authors were told by their university: do this, it's a condition of your employment, or a condition of your sanction, because you've committed misconduct. So we have been, for 12 and a half years, shouting from every rooftop that will have us: please include better retraction notices. In fact, when you look at the COPE guidelines, which again I would argue are intentionally misinterpreted in many cases, they say very clearly that the reason for retraction should be included. This journal, JBC, actually at one time said, well, that is the reason: it was withdrawn by the authors. And I said that's a weird, tautological, circular argument that I don't have a lot of time for. They changed it after we'd beaten them up for five years; no violence, it's a metaphor, after we wrote about them critically for five years. So, in fact, the reason should be included.
I'll go further than that and say that it is actually critically important, not just for the scientific enterprise, but for the individual scientists involved. There are great data here: a number of interesting papers have come to essentially the same conclusion, usually done by economists, who really like to look at retraction notices and retraction data, because it's the kind of intersection economists love: a fairly small and circumscribed data set, fairly binary (something is retracted or not), involving human behaviour, often bad human behaviour, and incentives. What they've universally found is that when retraction notices give the reason for retraction, and it is fraud or misconduct, something on the bad side of things, which is about two thirds of retractions, I should note, entire fields, not just that author or that group, see a decline in their citations. So even unconsciously, or consciously, scientists are registering that. But the better news, and in fact I think the more important message of those studies, is that the converse is also true. When retractions happen with clear notices that say what happened, that there was an honest error, in other words, we had the wrong reagent, we made a calculation error, something where they very explicitly explain what happened and it was not fraud, those researchers and those fields actually don't see a decline in citations. One paper, which hasn't been replicated, to be fair, even found an increase. In other words, honesty is being rewarded; I wouldn't go that far, because it's just one paper, but at the very least they don't seem to see a decline. So there are lots of really good reasons to do this, beyond even the obvious transparency reasons, which I think we should all welcome.
We keep a list on Retraction Watch of the top ten most cited retracted papers. They move around a little bit, but there are some papers you'll all recognise that are always in the top few, and one of them is about a molecule called visfatin. I won't go into the details; you're welcome to look it up. But essentially, there were a couple of different groups, one of which did commit misconduct and the other of which didn't, and you can actually trust the latter's data. The paper was retracted because of misconduct by one particular group, and it continues to be cited, and that's probably not an awful thing, as long as you cite it and explain that you're citing that particular piece of it in support of what you're doing. Which again argues for transparency and fulsome, as we like to say, retraction notices.

Lydie Meheus: We had some contact years ago, and I don't know if you remember the whole story of Dr. Yamamoto and the Einstein Institute in Philadelphia. But what I would like to bring into the conversation is a totally different driver than the publish-or-perish reason for coming up with this type of paper, really intentional fraud I would call it: it's just business and money, and maybe that's very typical for the oncology field. We see it happening more and more, and for those of us who want to properly inform patients and bring evidence-based information to patients, we are getting really worried about this trend in oncology. Certainly with cell therapy and personalised vaccines, I'm afraid we're going to see a tsunami of these things, which are in a lot of cases based on scientific papers in predatory journals. It's a real business model that is coming up nowadays, and it has nothing to do with scientists wanting to have their paper published and needing those publications; it's a totally different angle. And I would be happy to hear Ivan's ideas on how to fight it. We have been fighting it intensively in the past; we kind of gave up, because I realised that even recently you can still buy it over the Internet. The old papers, as you said, are still cited on those websites, even though some of them were retracted.

Ivan Oransky: I certainly wouldn't disagree with you; I very much agree that purely commercial concerns are part of this. And this is honestly just semantics, but I would include that in publish or perish, because to me publish or perish includes the fact that, for example, the FDA would require studies done a certain way, of publication quality. You can relate all of that to the coin of the realm, and I mean that in an obviously commercial, financial way, being a paper. But again, I certainly don't disagree; this is critical. What I would say is that it points to something a lot of us have known for a long time, and I see some folks on this Zoom who were involved in some of the early cases. Remember that reprints of journal articles, particularly big clinical trials that claimed to show drug X worked, were a fundamental part, and arguably the fundamental part, of journals' business models for many years. Richard Smith, whom many of you may know, wrote about this long before we were doing anything, when he was editor of the BMJ and shortly thereafter. These reprints would earn, and we've learned this, millions of dollars. There are electronic versions of them now, but this was literally printing them out and making sure drug reps put them in the hands of physicians, or people who could prescribe. I guess what I'm saying is that to me it's all related. In other words, it's the entire model, whether it's publish or perish in order to keep your role, get promoted, get tenure, etc., or to make a point and sell something, whether you're a big pharma company or a dodgy company selling a particular compound that you're making claims about.
Again, I very much agree with you that it's an issue, and I would just see it as part of, and we've seen this in the pandemic as well, of course, the extreme over-reliance on what is a deeply flawed peer-review process as some sort of arbiter of truth. We all know that while peer review, when it's actually done, and when you know it has happened and can see an audit trail of it, often adds value, it is often just a box-ticking exercise, and that again is part of the issue to me.

Dan Larhammar: I've discussed with colleagues whether PubMed would be able to flag articles that have an expression of concern or an ongoing investigation, and whether during that period journals would not allow those articles to be cited. Could one take this one step further? Would PubMed and EndNote and other reference handlers be in a position to announce that authors have requested retraction, and thereby bypass the journal if it doesn't act fast enough? Would that be possible?

Ivan Oransky: It's certainly possible, and again, if someone is here from the National Library of Medicine or any of those organizations, I can't speak for any of them about what they can and can't do. I think what often tends to happen, and this is not to say we shouldn't do this, is that we focus on what's public, and on the people who are actually trying to do something, because they're doing it in an open way, like PubMed and EndNote, when really we should focus on making the whole system better. I don't mean those are mutually exclusive, but I think that's just a natural tendency; it's frankly why we look at retractions and publishers. I mean, we're not blameless; we do the same thing, for obvious reasons. There are a couple of things I would say. First of all, the technology exists; it has existed for years, more than a decade. In 2011, in the infancy of Retraction Watch, we were invited to write a comment piece for Nature, which we did, and I can send it around if you're interested; the title is "The paper is not sacred". We were making a number of arguments, good, bad, and otherwise, and our thinking has changed, not so much in the central idea, but as the data have changed and we've learned. One of the fundamental arguments we made in that piece was that the technology is there; at the time it was called CrossMark, and now it's called Check for Updates. If publishers, and unfortunately this does rely on publishers, were to install CrossMark, or Check for Updates, you would actually see everything that was involved, and you might add altmetrics; you might add blogs and tweets and conversations happening about the paper, though that would need to be filtered in some way. If there was a PubPeer comment, and I've mentioned PubPeer a couple of times, that would be linked, and that's already linked in some places, such as Altmetric, though not on every website. So that all exists.
I don't necessarily think we can just hope that any of the official indexers will do it, and that may be more about statutes and what people are comfortable with than anything important. But the point, more centrally, Dan, is that the technology exists. We are trying to be part of some of that software, obviously, as people ingest the database. But again, we're the end stage; we're talking about the end stage. Knowing when someone has raised an allegation would be even more important, and knowing when there's an investigation happening. Publishers, though, have actually been taking steps away from that, steps further from it. So, for example, you mentioned PubMed: they do flag expressions of concern when the publishers send the metadata to PubMed, and the same is true for other indexes. What publishers have started doing is issuing things that are actually expressions of concern in all but name, which don't involve any new metadata. Apologies to anyone if I'm getting into weird, weedy territory, but we are all scientifically inclined, if not scientists, and I think sometimes you have to look at how the sausage is made; I say that as a vegetarian. When publishers turn what we would all consider an expression of concern into a little anchor-linked editor's note at the bottom of the abstract, which, to be fair, is there on the paper when you look at the abstract, but is not new metadata that gets transmitted to the indices, that is a step backward. We've told the publishers we think it's a bad idea; sometimes they listen to us, most of the time they don't. And so, again, like so much of this, all roads lead to Rome: so much of the metadata flow ends up coming from, to, and through the publishers. We unfortunately need to either somehow divorce it from that, which would be awesome but somewhat long-range, I think, or somehow figure out another system.
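The metadata point above can be made concrete with a small sketch. Assume, hypothetically, records in which status flags such as an expression of concern travel as structured metadata fields, while an "editor's note" lives only in the article page's HTML. Everything below (record shapes, field names, DOIs) is invented for illustration; it simply shows why indexers that read only the structured record never see page-only notes.

```python
# Hypothetical sketch: an indexer reads structured metadata, not the
# article web page, so a flag that exists only in page HTML is
# invisible downstream even though a human reader can see it.

def visible_flags(record):
    """Return the flags an indexer can see: structured metadata only."""
    return set(record.get("metadata", {}).get("flags", []))

# Flag transmitted as structured metadata -> indexers see it.
good = {"doi": "10.1000/x",
        "metadata": {"flags": ["expression-of-concern"]},
        "page_html": "<p>Editor's note: concerns raised.</p>"}

# Same note, but only in the page HTML -> indexers see nothing.
bad = {"doi": "10.1000/y",
       "metadata": {"flags": []},
       "page_html": "<p>Editor's note: concerns raised.</p>"}

print(visible_flags(good))  # {'expression-of-concern'}
print(visible_flags(bad))   # set()
```

The design point is that the two records look identical to a human on the publisher's website, but only the first propagates to PubMed-style indexes; that asymmetry is the "step backward" being described.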

 

Francesco Cappa: For my first question, which was read from the chat: is there a paper, a publication, that explains or supports what you said before? That was very interesting to me: that when a retraction happens as a result of misconduct, there is a decline in citations, and that the retraction is for good. Is there a publication or some reference that I can find?

Ivan Oransky: Sure. I didn't bring them today, and I apologize for that, but I can share them; they're easy enough to find. There's one paper, and I don't remember if it was ever in a journal, I think it eventually was, but it was, I think, a National Bureau of Economic Research working paper; the title is something like "The Retraction Penalty." Pierre Azoulay is, I think, the leader of one of the groups that does that work, and he has co-authored some of those papers, along with Ben Jones. Azoulay is, I think, still at MIT, and Ben Jones is still at Northwestern. I know there are others as well. So those are a couple of things to look for, and if it's of interest, please reach out; my email will always find me. I'm happy to share that.

Francesco Cappa: Where do you think might be a good outlet to publish research on retractions and misconduct? I am conducting some research; indeed, I asked you a couple of years ago for the database, which you sent me, so we have already been in touch in the past. I'm finalizing the paper, and my co-authors and I were thinking of sending this research, about statistics on retractions across sectors, to Scientometrics. But I would also like to ask where you think there is a good outlet, or whether there are open special issues where one can submit studies about retractions.

Ivan Oransky: Sure, I'll answer generally, and I'm happy to follow up, of course. One thing to do: we actually have a page on our site listing papers that have cited our work one way or another, either the database or just something we've written, or what have you. It would probably give you a decent sense; I know there are well over 100 papers in that bibliography now, and in fact we need to add some more. It might give you a sense of the sorts of places that are interested in this work. My feeling on Scientometrics: nothing against the journal, but one thing I will note is that it has gone through what let's call an unfortunate episode, worth looking up, in terms of control of the journal and something that happened involving one of the papers there. We wrote about it; others did as well. The other thing I would say, more generally, nothing to do with Scientometrics specifically but with field-specific journals like that, is that you will probably get peer reviewers who actually, hopefully, understand what you're looking at, which is the good news. The bad news is that you're not necessarily going to get broad reach. So when we've published, very occasionally, usually because someone invites us to, we actually publish in field-specific science journals that aren't scientometrics journals: anaesthesiology journals, say, or maybe larger journals. We obviously publish on Retraction Watch, but when we want to do something in the scientific literature, we think it's more important to get in front of the people who are actually doing the work that's affected, as opposed to the people studying the work that's affected, if that makes sense.

Francesco Cappa: But if you want to do a study that is across sectors, not on management, not on technology, not on medicine, but across disciplines, what do you suggest?

Ivan Oransky: I would look at the list.

Leeza Osipenko: I have a question for you from a different perspective. Imagine students in faraway places of the globe, or probably we don't need to go too far, probably at any university. We have a very knowledgeable audience here that works on the subject, and we think it's obvious that bad research needs to be retracted, but the vast majority of people might not even know that something is wrong with science. Something gets retracted, yet the perception is that if it is published, especially under a particular journal brand, then by design it's true: fake news cannot get in there, bad stuff cannot get in there. It doesn't even occur to them to ask: if you're reading it, why don't you check whether it's been retracted, why don't you check whether it's good research? I suspect most people actually doing systematic reviews as their job don't have a task, or a checkbox, to check every single article, on a daily basis, for whether it has been flagged for retraction, so to speak. How do we get this to the masses? Because I dream of a system where, in something like PubMed, things come up with a red flag or something: hey, pay attention, learn about this. So how do we reach the public, to even let people know to dig there?

Ivan Oransky: First of all, I think you have to start by not feeding into, intentionally or unintentionally, consciously or unconsciously, this sort of, in the US we used to have the Good Housekeeping Seal of Approval, this notion that if it's peer reviewed, it's true. Let's all actually think about that; not so much this group, which I think would already agree that that's not the case, but let's really examine it, because again, that gets us into the problems we've had, some of the problems we've had in the pandemic.
So I think there's a fundamental issue there: we should be more honest about peer review, and about education. That's number one. Number two: scare them, if that fails, or just in conjunction with it. I imagine, in fact, that a number of you on this call have had the same experience so many of the people we talk to have had, which is that you actually believed all that. You wanted to believe in good intentions. You wanted to believe in science correcting itself, all of that. And then something happened, and it completely destroyed that belief. That's related to the fact that it's what you were taught; that was the socialization you had. And then it just destroys your view of all of it. Many of you have worked your way through it the way I've tried to, and said: oh, wait, okay, it means science is correcting itself, but really not as well or as quickly as we want. But then there's the software and tools problem. If you're really talking about the public, I feel like there are actually two publics here. For one public, it's: okay, let's talk about why, when someone changes their mind and comes to a different conclusion, that actually means science is working. It's not the opposite; it's not flip-flopping in the way it's been partisanized, not necessarily partisan, though sometimes it is. But for the public that is about to cite papers, or do more research into an area, or learn something, or try to learn something in terms of science, we need to get the right information into tools. Now, I'm biased in the sense that I would like everyone to use the Retraction Watch database, but I'm also very upfront that that's only a small part of it. And to Dan's question earlier about what kinds of information we need to get in front of people: PubMed does do this, right? And EndNote does do this.
They all do this, but they need the metadata from the publishers, and the publishers, it appears, do not have much of an interest in providing that metadata. There's NISO, the National Information Standards Organization, here in the States, and I'm part of a subcommittee of sorts looking at retractions and best practices. I think we're going to recommend what are, to me, fairly obvious things, but if more people pay attention because it's NISO and not Ivan Oransky ranting and raving on a Zoom or on Twitter, all the better. The material is there. It's certainly not perfect, but it is certainly much better than it would appear. We need to actually get the information into the software, into the tools, and all of that.

Lydie Meheus: I was just wondering, coming back to your question, Leeza: if we're talking about retractions for fraud, could you think of any collaboration with Quackwatch, from the FDA? Is there one, or, sorry for my ignorance, I'm not familiar with the US system, is that something you have been thinking of, to make it more generally known that something is really quackery? I don't know how well that Quackwatch system works.

Ivan Oransky: Well, Quackwatch is not a system; it's Stephen Barrett and some colleagues who have been doing a tremendous job for many years. But they're much like us: they have no official standing or authority. I like to think they've been very successful, and we've written about at least one case I can think of off the top of my head where Steve Barrett's efforts led to 20 retractions by a particular researcher. I would be happy to collaborate, but I don't know that that's really a direct line, frankly, to the larger American public, or the international public. We both have our audiences; maybe we could boost each other a bit. We actually collaborate more with other news organizations that have a much bigger reach than we do, so we publish stories there. We actually just got a green light on another project we're working on, which we're excited about, a story in another part of the world. Quackwatch is great; it's just not necessarily going to get us, or, vice versa, them, much further on its own.

Lydie Meheus: Yeah, I'm really thinking of a system to reach patients, which is our major concern, of course.

Leeza Osipenko: As you see, there's a lot of thinking to do in this area; in addition to the groundwork being done, there is a massive need for outreach to educate different publics.

Jana Christopher: I just wanted to mention paper mill products and mass retractions. Can you talk about that briefly? What do you think are the bottlenecks for those mass retractions, and do you see many journals going over their published record and trying to clean it up? And do you see a difference between journals with big publishing companies and those that publish themselves, like the Institute of Physics or the Royal Society of Chemistry?

Ivan Oransky: There are a couple of things I would say. Publishers have actually started to do these mass retractions, hundreds at a time in at least one case. That's a good thing, and I think the large publishers are getting to it, though still very slowly. Some of you on this Zoom are well aware of that, because you've brought these issues to their attention and met prolonged silence. I don't know how to say this other than: it depends entirely on whether they have the will, have made the effort, and have put in the resources, not on whether it's a non-profit or for-profit publisher, a society, big or small. As you all know from doing that kind of work yourselves. And so, again, IOP is one of the most active now, because they've taken this on board and hired some people. IEEE is sort of getting there; give them some points for moving on some things. Springer Nature and a lot of the major publishers have started to talk with one voice in ways that I find not so good, because it's clear they're sharing talking points about how they're victims of paper mills, and therefore they're taking it seriously and are going to do all these retractions and everything else. I find it a bit like the scene in Casablanca: "I'm shocked, shocked to find that gambling is going on in here!" "Your winnings, sir." They've heard me say that; they don't like it, and that's fine. So it depends almost entirely, not on big versus small or otherwise, but on whether there is someone there, or a group of people, actually taking this seriously. I will say that it is certainly boosting the number of retractions in our database, and Alison Abritis, whom I mentioned earlier, is even busier, and now has a small army of help; thankfully we have some funding for that.
I guess my last comment on that would be, as with ChatGPT, which I mentioned: even with paper mills, I wouldn't want this to distract too much from the core issue. In other words, what publishers aren't saying is: yes, you're right, there is a problem with special issues, which is where these paper mills seem to have found a real vulnerability. Instead of saying, wait, let us rethink these special issues, and the mass scale at which so many publishers run them, which are huge money makers for them, under subscription models as well as open access, by the way, because they can sell bigger bundles to libraries and so on, they don't go back to first principles, because they created that problem and they're happy to continue printing money that way. So as much attention as we should pay to paper mills, let's not lose sight of the other fundamental issues.

 

 
