Now we finally have the evidence: but can we trust the data?


Ben Mol: So if we talk about mistakes in medical research, there is a whole spectrum of things that can go wrong. On one end there is wrong observation and wrong analysis, just unintentional errors, and on the other side of the spectrum there are illegal human experiments, but just above that there is data fabrication and falsification. So it goes from unintentional to intentional, from error to fraud. And this figure also says: easily detected to difficult to detect. I'm not so sure whether it's all easily detected, but what I will do in this talk is address fabrication.

So my story starts with this article. I'm an obstetrician gynaecologist trained in the Netherlands with a vast interest in clinical epidemiology. I consider my core business making medical knowledge, and I'm passionate about it because I think it will generate a little bit of a better world if we are informed about what we are doing. So I was involved in detecting this case. This was a retracted article at some stage, but the story is that there is a study on polyps, a randomized trial published from Spain, and then a little bit later there is a study on myomas that are removed, from a Dr Shokeir from Egypt. The first one is in Human Reproduction, the second one is in Fertility and Sterility, two high-impact journals. But if you look carefully at the abstracts, there are actually a lot of similarities. It's both 215 patients, and then 93 of these patients get pregnant, and then the distribution is 64, 29, and that's exactly the same in both studies.

And if you're not convinced, then look at the baseline table of both articles. These are different interventions, claimed to be different studies, but the mean age is exactly the same, and the distribution of myomas is exactly the same. So this was discovered, and it led to retraction, with a kind of euphemistic text saying there was duplication; I think that's fabrication. But from that moment on I was alerted, and then when I worked as an editor I got more alerted. So I saw this paper submitted to the journal that I was working for, and there were inconsistencies: 340 patients in the table, but only 140 in the abstract. So I wrote to the authors and asked questions, and then the authors disappeared. But later on the paper was published elsewhere, and the authors had changed exactly the things I put my finger on, so you can clearly see that this is the same article. With the evidence of the earlier submission this paper was retracted. It took a year or so.

Then a student came to Melbourne from the Netherlands, Esme Bordewijk, and she was doing meta-analyses in polycystic ovary syndrome, the topic of her PhD. I said, well, have a look at these particular studies, and after a couple of days she comes back and says, well, there is a problem with these studies, because this Dr. Badawy also copies tables from one article to the other. So what you see here is 2 different articles reporting on 2 different studies and different interventions, but the mean values are very comparable. Each green dot means exactly the same number, each red dot means 1 point difference, and that didn't happen once; it happened in a lot of tables. So Esme summarized it all, and here you can see the 18 studies of Badawy, where red means that there are more than 10 similarities in baseline or outcome variables. I think this proves that at least a substantial part of these papers is fabricated, and the problem, obviously, is that these papers are published in high-impact journals, and they are cited. If you look into the influential guidelines on PCOS, which also come from Melbourne, then 13 citations actually come from these authors like Badawy.
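The comparison described here can be made concrete with a small sketch. This is illustrative only: the helper function and the numbers below are invented, not the actual tables from the papers in question; it just shows how one would count identical ("green dot") and one-point-different ("red dot") cells between two supposedly independent baseline tables.

```python
# Count cell-level similarities between two baseline tables that are
# claimed to come from independent trials. Values here are invented.

def compare_tables(table_a, table_b):
    """Return (identical, off_by_one) counts for paired numeric cells."""
    identical = off_by_one = 0
    for a, b in zip(table_a, table_b):
        if a == b:
            identical += 1
        elif abs(a - b) == 1:
            off_by_one += 1
    return identical, off_by_one

# Hypothetical rows: mean age, BMI, cycle length, outcomes per arm
study_1 = [27.4, 24.9, 33, 64, 29]
study_2 = [27.4, 24.9, 34, 64, 29]

print(compare_tables(study_1, study_2))  # (4, 1): 4 identical cells, 1 off by one
```

A high count of identical cells across papers that claim different patients is the signal being described; by chance, two genuinely independent trials should almost never line up like this.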

So that is the summary of the problem I was confronted with. I will not talk too long about it, but what can you do about it? Well, I think an important moment is submission of papers. I saw a couple of editors and people working at editorial boards among the audience; that's great, because it's really important that they are alerted to this problem. Secondly, during meta-analysis you can study the work of more than one author. That was one of the problems with the papers I just showed: if you see one paper of that author you're not going to see it, but if you assess them all at the same time, you see problems. Ask the author, and in the end, in my experience, asking for individual participant data will usually do the job, because it's really very difficult to mimic all this false data the right way.

So then, when I was alerted, I looked at other papers that were published. This is a randomized clinical trial from Doctor Maher, from Menoufia University. It was unfeasible: a big sample size with fast recruitment, and a positive treatment effect. So I wrote a letter to the editor; I bring things into the public domain because it helps give you timelines, etc. And then I asked the editor for the data, because the data set is included. I knew the editor, actually. He said, "We have seen the data and we said, yeah, it looks okay." So I said, "Well, can I see the data?", and they said, "Not allowed." Then there was a bit of email correspondence, also with the author, and at some stage the author said: please ask the editor to provide you with a copy of the data, thanks.

I got the data set, and I was nervous; I thought, maybe I'm not going to find anything, having made all this noise. But this is actually the data set underlying that randomized clinical trial I just showed, where you can already see that poor outcomes such as low pH are ordered as yes and no. But then, if you look into the amniotic fluid index, that is 4, 9, 8, 7, 6, 5, 4, so it counts down. This doesn't happen in a genuine data set. And maybe you could say the data set is sorted in a particular way, but this colour sheet indicates how many of these strings we found in the data set. So that paper was retracted; it was also a long story, but this shows that if you get raw data, you know whether there is a problem.
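The "counting down" pattern is easy to screen for mechanically. A minimal sketch, with the column values taken from the talk: it flags the longest run in which each value is exactly one less than the previous, the kind of consecutive countdown that essentially never appears in genuine, unsorted patient data.

```python
# Flag suspicious "countdown" runs in a data column: long stretches where
# each value is exactly one less than the previous one.

def longest_descending_run(values):
    """Length of the longest run where each value is one less than the previous."""
    best = current = 1
    for prev, nxt in zip(values, values[1:]):
        current = current + 1 if nxt == prev - 1 else 1
        best = max(best, current)
    return best

afi = [4, 9, 8, 7, 6, 5, 4]  # amniotic fluid index column described above
print(longest_descending_run(afi))  # 6: the run 9, 8, 7, 6, 5, 4
```

In a screening tool you would run this over every numeric column and compare the observed run lengths against what chance ordering would produce; a run of 6 in a short column is already a strong red flag.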

So we have worked on it, and I have now raised concerns about 900 different articles, and from that experience we have developed a checklist that we call the TRACT checklist. If you see a paper, and specifically a randomized clinical trial, look at the following things. Governance: is the study registered, and how does the registration compare to the published paper? Author group: a small number of authors is suspicious, as are authors with previous problematic papers. Plausibility of the intervention used. Time frame: I work in obstetrics and reproductive medicine, and if you study IVF and report on live birth, then you need at least 9 months before you can report it, and there was a remarkable number of studies that don't fulfil this criterion. Dropouts: papers with 0 dropouts are suspicious. Rounded numbers: a study with all even numbers. You can look at baseline characteristics: randomized clinical trials should have a particular distribution of baseline characteristics, and if they're too close together or too far apart it doesn't work. And you can look at the outcomes and compare the effect to other studies. So I think this works quite well. It's not proof, but it alerts you that something may be wrong. And then, as I said, the proof is in the pudding: you should ask for ethics documents and raw data.
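The checklist items above can be expressed as a simple screening function. This is my own illustrative encoding, not the published TRACT checklist: the field names and thresholds are assumptions, and a flag is a prompt to look closer, not proof of fraud.

```python
# Illustrative TRACT-style screen: score a trial report against a few of
# the red flags listed above. Field names and thresholds are assumptions.

def tract_flags(trial):
    """Return a list of red flags raised by a trial-report summary dict."""
    flags = []
    if not trial.get("registered"):
        flags.append("not registered")
    if trial.get("n_authors", 99) <= 2:
        flags.append("very small author group")
    if trial.get("dropouts") == 0:
        flags.append("zero dropouts")
    sizes = trial.get("group_sizes", [])
    if sizes and all(n % 2 == 0 for n in sizes):
        flags.append("all even group sizes")
    if trial.get("outcome") == "live birth" and trial.get("months_followup", 99) < 9:
        flags.append("implausibly short follow-up for live birth")
    return flags

suspect = {"registered": False, "n_authors": 2, "dropouts": 0,
           "group_sizes": [110, 110], "outcome": "live birth",
           "months_followup": 6}
print(tract_flags(suspect))  # raises all five flags
```

A clean trial (registered, larger author group, realistic dropouts, mixed group sizes, plausible follow-up) would return an empty list; the point is triage, deciding which papers are worth asking for ethics documents and raw data.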

So this could be a thing that you talk about during a drink, and you say, oh, it's not very nice, but, I mean, there's so much literature, so don't worry about this incidental paper. But the important question is: how big is this problem? One thing you can do is look at retracted papers; Retraction Watch is actually a fantastic website where all papers with retractions and expressions of concern are exposed. But the problem is that the process of retraction is very slow, very time intensive, and there is, I must say, a reluctance among editors, and specifically publishers, to retract papers. So this is not a good measure; I think the number of retracted papers is one in a thousand, or even lower. Better estimates come from other sources. This is John Carlisle. He's an anaesthesiologist who works for the journal Anaesthesia, and he is one of the few editors who has for a long time been assessing the underlying data of the studies that are submitted. He published a couple of years ago that the rate of false data varies depending on the country, somewhere between 20 and 90%. He also talked about zombie studies: studies that completely do not exist. And he found that the percentage of such studies submitted to his journal is high, I would say; China, for example, is 36%. So John Ioannidis wrote an editorial about this and said, well, if this number is true, then there must be hundreds of thousands of these RCTs around us.

So I'll take you back now to my own field, gynaecology, and show what meta-analysis does. The best way to assess the prevalence of the problem is individual participant data analysis, where you ask for the original data, and together with a group of people led by Leslie Stewart, we have done that; I was part of that group. These were studies on progesterone to prevent preterm birth, and of the 49 available studies, 31 shared data. From Europe, 9 out of 10; from North America, 14 out of 15. Australia was the best: 1 out of 1, 100%. But the problem came, unfortunately, from the Middle East, 3 out of 10, and Africa and Egypt, where 1 out of 6 shared data. Another topic is endometrial scratching, a treatment that is given to infertile couples. A meta-analysis was published with about 25 papers. We approached all the authors, and we could only get the datasets of 4 of them, 3 from India and one from the UK; the others do not share data. That was around spontaneous conception. We have also done it for IVF, in vitro fertilization, and there are similar numbers. So more than half of the people do not share data, and the people who do not share come from Iran, Egypt, China, and 5 from individual other countries.

This is our journal club every Thursday, where meta-analyses are discussed, and that is also infiltrated with these problematic studies, including Cochrane. This is a meta-analysis of tranexamic acid for the prevention of postpartum haemorrhage. So we are talking about the most important cause of maternal death in pregnancy around the world: haemorrhage during labour. If you look into the meta-analysis published in the American Journal, a high-impact journal, the estimate is that the use of tranexamic acid reduces blood loss of more than a litre by more than 60%. The majority of the studies are from India, Iran, and Egypt, and 2 from China. This week the New England Journal of Medicine published a large study from the US with 5,000 women, which estimates that blood loss of more than one litre is actually not reduced by TXA. The difference here is 7.3% versus 8%, a 0.7% reduction, with a non-significant relative risk of 0.91. So this slide summarizes the problem. This is the most important cause of death of women around the world. If you look into conventional meta-analysis, you think that TXA is a fantastic treatment; it is reported in one of our leading journals, the American Journal. But if you do the big, trustworthy RCT, then the treatment effect is completely not there. Keep that in mind. Everybody online, and I saw a couple of names of people who are really involved in this problem: this is what you are dealing with. This is my role as an editor; I do the systematic reviews. Almost every systematic review has potential problematic papers in it.
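As a quick sanity check on the numbers quoted from the large US trial: 7.3% versus 8.0% corresponds to a relative risk of about 0.91 and an absolute reduction of only 0.7 percentage points, nothing like the 60% reduction the meta-analysis suggested.

```python
# Arithmetic behind the quoted comparison: event rates of 7.3% (TXA)
# versus 8.0% (control) for blood loss over one litre.
treated, control = 0.073, 0.080

relative_risk = treated / control    # 0.9125, reported as 0.91
risk_difference = control - treated  # 0.007, i.e. 0.7 percentage points

print(f"RR = {relative_risk:.4f}, ARR = {risk_difference * 100:.1f} percentage points")
```

The same arithmetic on the meta-analysis claim (a "more than 60%" reduction implies a relative risk below 0.4) shows how far apart the two estimates are.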

The work of others: a meta-analysis on tocolytics to prevent preterm birth, with a Cochrane review coming out of it. 122 trials were included, but 44, so about 25%, were excluded because they don't trust the data. Ivermectin for Covid: a similar share of papers. And recently we have compared, for a series of topics, the number of papers that we lose in different individual participant data meta-analyses, and that is somewhere between 40 and 60%, and it has a major impact on the conclusions of these meta-analyses.

So based on what I just showed, I think that an estimated 30% of the randomized clinical trials in my field, women's health, are fabricated, and it's for you to assess whether you think that is a concern or not.

Another sensitive issue is that, unfortunately, these studies come in majority from particular parts of the world. If you look at the total number of randomized clinical trials in women's health in relation to gross domestic product, then Egypt and Iran are producing the most RCTs, maybe much more than you could actually expect from the number of resources available, and China and India are also big contributors in absolute terms. If you look at Retraction Watch, at all scientific papers retracted, then China, Russia, Iran, Malaysia, Egypt, and India are the main contributors.

So how do we deal with this problem? I go back to the studies of Badawy and Hashim where I started. More than 3 years ago we published an overview of what was happening. Some authors, some journals have responded or have investigated, and retracted or put out an expression of concern. But the process of assessment by the journals is going incredibly slowly. This is a Kaplan-Meier curve that estimates the time until a decision, from the moment that I write to the journal until I hear something, and after one year only 25% of the journals have come up with a response. So 75% of the journals take more than a year to respond. We have no system in place such that if an author has one problematic article, we assess all the work of that author, right? Lance Armstrong, when he lost his Tour de France victories because of doping, lost everything at some stage, but he had not tested positive in every single race. This is how some universities respond: they say you should go back to the institute. Menoufia University has now come up with a statement where they say data are kept no longer than 2 years in our centre. This is another trial, also from Menoufia, a big study on hypertension in pregnancy. The left paper was published first. It is a study of 520 women with 3 arms. I had already asked questions in a letter to the editor because I thought the study was unfeasible, but then a year later, in another journal, the same group published a similar study where 2 of the 3 treatment arms are the same. Well, this is really completely impossible, obviously, and if you're not convinced, look into the 2 outcome tables: there are actually many numbers that are completely the same, and all the numbers are even numbers. I flagged these 3 years ago, and these papers are still not retracted, and here we go: here comes the meta-analysis in the American Journal, and they use these papers.
So I wrote to the American Journal and said: listen, this is like Boeing; we know that the software of our planes is wrong, and we keep on flying them. The American Journal refuses to retract or adjust this meta-analysis until the originals are retracted, and Elsevier is taking 2 years. Completely unacceptable, of course.

This is another response. So I told you already, we make these overviews of particular authors, and the reason to do that is to bring things into the public domain, which gives an extra dimension, so to say. So we made an overview of one author, I can't remember the name here, who had I don't know how many problematic papers, and that paper was rejected because they didn't dare to publish it; they said, oh, it has to go through an internal legal read at a cost of approximately $500. And we had another paper where the editor had accepted the paper for publication, but then Springer said we are not going to publish this overview, and the editor then wrote to me saying Springer has informed me the publication of the manuscript would be inadvisable for legal reasons, for myself as editor in chief, yourselves as authors (we already had it out there as a pre-publication), and the journal and the publishers, and therefore they are not going to publish. Given the fact, as I just showed you, that we misinform women and practitioners about the wrong treatments, for example for postpartum haemorrhage, I think this is completely criminal.

I'm now going to give you an overview of the successful legal claims of an author against a publisher or editor over an expression of concern or a retraction. Here is the overview: it is completely empty, right? So these lawyers are really afraid of a risk that does not exist, and the interest of patients and the interest of science is not on the table.

One story, an interesting journey also. This was an overview; I do that often with students. May M Linn, who was working as a student in Australia and is now a young doctor, wrote an overview of the work of Dr Torky, and we actually got it published in a French journal. So here are the RCTs that Dr. Torky publishes; we always plot the moment of registration, the running of the trials, and when the trials were submitted. There was one case of clear plagiarism where the paper was retracted. One other paper was retracted. There is a paper with only even numbers, so that should be retracted. But interestingly, somebody in Egypt actually read our paper and told us: well, if you talk about Doctor Torky, then I can give you a little bit more information. So he went to the library, and he copied for us the original theses that Doctor Torky uses to create his articles. But the theses are not written or supervised by Torky. He just gets them from another supervisor: this Prof Mustafa supervises a thesis, and then sends it to Torky, and Torky makes an article of it, and it doesn't happen once or twice. I won't give you all the details, but this happens in an enormous number of papers, and this has also been known to the publishers of the relevant journals since December, and there is no expression of concern published. This is the German journal Geburtshilfe und Frauenheilkunde. They also have a copied paper; I wrote to the editor, and the editor, I'm going to speak out his name, Professor Matthias Wilhelm Beckman, says: no further action.

Another example as we close off. These are 2 RCTs published in Fertility and Sterility, with the same author involved, Al-Inany. Interestingly, the affiliation of the last author, Abou-Setta, is in Canada, which, you would hope, allows a little bit more ground for investigation. The problem with these 2 RCTs is similar to what I showed in the beginning: copying of numbers in a way that is not explained by chance. Red is one digit different; green is the same number. I've written to the authors, and never got an explanation. As this affiliation is in Canada, I wrote to Alberta, and the Dean wrote back to me: the research presented in the manuscript in question was neither conducted nor approved by the University of Alberta, so I am not going to do anything about it. But this last author, Abou-Setta, actually had an appointment in Alberta when it happened, and with this logic you can never investigate zombie trials, because they were obviously never conducted anywhere. Then I wrote to Ryan Zarychanski, who has published with Dr. Abou-Setta, and I asked him to maybe ask Abou-Setta, and he was saying, I don't have time for that, it's not in my bandwidth, blah blah blah. But interestingly, I wrote to this Ryan Zarychanski because he has actually published in JAMA a meta-analysis where he looks at the impact of fabricated papers, and look who is the second author on that JAMA article: Dr. Abou-Setta, who is a fabricator. I mean, it is really unbelievable that people do not stand up against this. So these are the responses of editors: I really don't have time for this, the journal gets 80 submissions a week. It exists in all specialties in every country. We cannot keep up with the number of complaints you send us; yes, that's because you publish so many problematic articles. We are working on it, be patient. And the last one, this week, was: your assertions about us and about Elsevier are therefore ill informed and wrong.
More over they’re insulting implications do not serve any productive collaboration. So this is a journal that has published 100% fake papers. And they're still out there more than 3 years after I flagged that. The consequences for myself is that all my papers are being attacked by various people, which in itself should be okay, because I'm critical on others. I should also be exposed. But I wonder specifically, I would say not so much even to the fabricators but to the wider academic community. Why I and couple of others, Jim Thornton, and a few others in here at the moment, speak out. Why only a few people speak out, and not everyone. This was a nice email I got from a group of editors, including Roberto Ramiro from the American Journal. You can't constantly lie and expect people to trust you in the paper that we discussed is retracted now, and this is another kind of the type of emails that are sent by fabricators, will start to accuse me in very wide CCs. And the last product is from last week BJOG, one of the leading obstetric journalists has now published an international multi-centre stakeholder consensus statement on clinical trial integrity done by the Cairo consensus group on research integrity, where, if there are not fabricators in that group, then they are very close to fabric characters, and the problem here is not so much the stakeholder paper. The problem is that we don't speak clearly out to these fabricators that this type of behaviour is unacceptable. We give them a feeling that they can get away with it. I think that is so wrong from BJOG to do that.

Then I was referred to in the Economist article; so there was attention for this topic in the Economist, in a very well-written article, I think. One of the comments was: I recently saw your name in the article; it seems crazy to me that the topic of fraudulent medical research was brought up by a non-medical journal. I think that is a very important sentence; medicine has a lot of problems in addressing this issue. So this keeps me going: facts do not cease to exist because they are ignored. The problem is there. In summary: data fabrication happens in areas where there is a lack of governance; peer review does not detect it; but the main problem is that the majority of the academic community is looking away. I think the problem in RCTs, which are the most informative, most important studies, is as high as 30%. We are able to detect it, and the biggest challenges are at the level of governance.

Leeza Osipenko: So there is a question in the chat from Florence, who proposes replication study publication as one of the solutions, along with data availability. The question is: what can we do to change medical journal culture, to actually publish replication papers and to improve on transparency?

Florence LeCraw: I was just going to say, coming from the world of patient safety, trying to prevent recurrence of medical errors: economic journals do it by not only requiring their authors to make their data and code available to others, but by then publishing the replication papers. Medical journals, I have found out through many people, and editors have admitted it, don't publish them, the individual replications. And most people, unlike our great speaker, cannot do the arduous work if it won't be published; they've got to pay their bills, and they're not going to be promoted if they're not published. So my question is, how do we encourage medical journal editors to be transparent and publish replication papers? What have you all seen? JAMA Internal Medicine is the best at doing it that I've seen, but at most of JAMA and the New England Journal of Medicine, I've been told, authors just blow them off.

Ben Mol: Replication papers are important, but they are also important in the absence of fraud. I hear many people discuss this; people say, well, industry is involved in research, so in itself it's not reliable. But I'm particularly motivated to address this topic of fabrication, because I think the other problems should be resolved by discussion and by improving our systems, but if we can't trust the underlying data, that's the end of every discussion. So what I say is: we know that industry tries to manipulate their findings, driven by profit obviously, because they want positive findings, but I believe, and I have had no indication otherwise, that the data underlying the industry studies are not wrong or fabricated. Similarly, replication studies: yes, they are needed. It is said that the New England Journal and JAMA only want to publish new findings for the first time, so the replication studies end up in lower-impact journals. But what we are talking about here is completely fake data. That's a different level.

Leeza Osipenko: There's another point from Florence in the chat, and I guess my response to Florence is: yes, perhaps it doesn't give academic credit, but replication papers can definitely be uploaded to preprint servers, and the information can be available there for extra support.

Susan Bewley: I just want to ask your advice about following you on this road. Our friend Jim Thornton and I, we are all looking at these things. I've got a paper where I had to pay $599 to get a letter to the editor published about something that made no sense, riddled with errors. And, you know, there's been an investigation, supposedly a statistician looked at it, and supposedly they haven't bothered to answer the letter. I think it must be made up; I think it must be fraud, and I am getting absolutely nowhere with the journal's editor. I'll put the tweet in if anyone wants to look at it. I don't want to waste my time and I don't want to give up, but where do you go when you've got authors who refuse to discuss something which is not replicable, because they haven't even put the basic data in the methods to replicate it? I think it is unscientific, possibly fraud, to have a paper whose methods are not replicable and full of errors, and I'm dealing with a chief editor who said: we followed all the COPE guidelines. I mean, it's nonsense, but where do we appeal if we have no international rules or governance systems? Won't you just be battered back forever? I don't want to make me or you dismal, but please give me your advice.

Ben Mol: Well, I would say don't give up, and bring what you think is important to light. I think transparency is an important word. You referred to COPE. I think the COPE processes are not equipped to deal with the mass fabrication that I discussed here; COPE, I think, was set up by people who thought there might be an individual dispute between 2 authors in academic competition, where nobody knows exactly who's right and who's wrong, etc., so you have to go to the institute and they have to investigate. But that completely doesn't deal with the mass fabrication that we are talking about here, and sometimes I find myself in a position where I'm the accuser and the editor takes on a kind of neutral role. I mean, to all the ethicists, and I saw a couple online too: I'm a researcher who just wants to do research, right? I'm just speaking out because it's unacceptable. I'm not a party here. I think it's really ridiculous if people are going to sit somewhere in the middle to judge it. And the other thing: calling us sleuths? I don't like this word. Sleuths, as if you have to look carefully to see it. You really must be a blind cow not to see the problems in these papers. And the answer is there should be much more transparency: all the data should just be available. I saw Julie Hastings from the journal Reproduction online; there is an author there who hides behind an expert report from someone else and says, I'm not going to share the data. The journal should just put that online, right? They should say: we have asked the author for the data and he is not sharing it. And then everybody knows that you shouldn't use this stuff.

Leeza Osipenko: Yes, it would be an ideal world if we obliged every journal to require full data access.

Ben Mol: Well, they should oblige themselves. I saw one comment in the chat that I don't want to miss, from a PhD student in the Netherlands, which I think was Doctor Mariam Shokralla, who apologized that she is from Egypt. Let me apologize to Dr Shokralla. I struggle with this issue, in that inevitably I point at particular countries, etcetera, and I just don't know how to deal with it, because if I speak out, I point at particular countries, and if I don't speak out, I'm not honest about the quality of care, patients, science, etc. But let me say that beyond the importance for patients and the importance for science, there is also the argument of justice to the many people in these countries who actually do honest research, right? They are affected too. So the editors and the publishers should realize that if they don't speak out, they are also harming the many people in these countries who do trustworthy research and actually make a lot of effort.

Ian Tannock: One of the things that occurred to me is that the fabricators you described are not very good at it, and it really raises the question, I am not a fabricator, but if I were, I think I'm bright enough to do a much better job. That suggests that your 30%, although I think it's a little bit biased by the fact that they are coming from some of these countries where there's less control than in others, may be an underestimate: those who are better at it are probably getting away with it much more. Do you have comments on that?

Ben Mol: I think that's true, and the other concern is that if we put the information that I just shared with you out there, the fabricators will read this checklist and produce better papers. So I think the checklist and the fraud detection will specifically be good for what has happened in the last 10 to 15 years. But moving forward, we should be much more proactive, and just get all the data sets on the table, so to say. I believe, however, that if you ask for original data, that's going to be pretty reliable. And you can also go and ask institutes whether studies were really done. If we really get more transparency, and also involve people from these countries who are trustworthy, then we know whether studies actually happened or not. You can ask. I showed one example where somebody saw my article and gave a lot of extra information. That has actually happened 3 times to me already, indicating that there are also a lot of people around who want to do the good thing, and I think if you reach out to these people, then this problem is repairable.

David Colquhoun: I've published mostly in the Journal of Physiology and Royal Society journals, and it never occurred to me that people would actually make up data. 30% is an astonishing number. Is it something to do with medicine?

Ben Mol: I mean, the reason I see it is that I am a researcher and a clinician, so in my own field I can see more than others see, and I could not see in cardiology or neurology what I see in obstetrics. But you touch on a deeper point. Given that last editorial response I got last week, that they think I'm insulting by asking about their investigations going on for 3 years: that editor actually hid behind the publisher, and I said, well, you have an editorial responsibility too. I'm afraid that there might even be a layer deeper, that the problem is not only in research, but in medicine as a profession in itself. This is going to be tricky, but maybe we are even afraid to really allow good science to do the job in medicine, because that threatens medicine as a field, and that's the reason why we don't want to speak out. I'm afraid these mechanisms play a role; medicine is a very conservative field, so to say. Once you are a doctor, you're part of that group. That group gives you a good life, but don't dare to be critical outside the group. I think these mechanisms play a much bigger role than we think they do.

Leeza Osipenko: I pasted in the chat a YouTube link to a really fascinating particle physics talk. I don't know, they might have more than 30% fabricated research; really interesting. The difference is that with particle physics you don't change clinical practice. So I think with medicine you actually have a lot more impact when you fabricate research; in physics they just waste money and hopes. So yes, very important comments. I want to get back to the question that was posted in the chat about artificial intelligence and synthetic data arms being produced for data sets. What is the situation with that? Might synthetic arms work in the situation with fraud? Or could the transformations brought about by artificial intelligence be fruitful, a chance to take the problem of fraud into account as well? And that follows with my question to Ben: considering the patterns you look for in the data, I would think it would be quite easy to train artificial intelligence to do that, and screen big volumes of articles to identify these articles systematically and automatically, and then send them to people like yourselves to recheck. Which leads me to another question: how do you check articles? Do you just pick something, or do you only check articles which you happen to be reading? Sorry, it's a lot of questions.

Ben Mol: So, my response on detection in relation to AI and better production of problematic articles. Yes, I hear this question a lot, but if you listened carefully to my talk, our problem is not detection. Our problem is the response. So that is the second question for me. If we really had a good system and we responded well to what we see, then you would get to the question: do we really see everything? I mean, these papers that I showed, these 900 papers, it is really unbelievable that these are still out there, right? I could fill an hour every week with all these examples of what these editors are accepting and letting go. So the discussion is not what happens if better fabricators come. The discussion is: why are we not responding to all the clearly problematic work that is currently out there? So that is one answer. I think your other question was: how do I work? Well, I work with my checklist, not automatically. So we talk with people who can help us transfer the algorithms into automatic searches, but I have worked now with a student from the University of Amsterdam, who has access to the university's plagiarism checker. I think the things he finds are also unbelievable, things the journals have not found themselves, right? So that is one thing. And then, always put things into Google; always put an author's email address into Google, and that gives a lot of surprising findings. But on the other hand, the problem is also not so big. If we just went to 10 universities in Egypt and collectively said, this is not acceptable, then the problem would be solved. But we don't dare to do that. I mean, another problem I touched upon is that the journals do not collaborate in terms of investigation. There is no mechanism that says: now we are going to collect everything from one author, and at once we're going to ask that author to show his or her underlying data, and at once we're going to approach his or her institute.
I mean, that should be so simple to organize, but we don't do that.

Question from chat: I observed that most of the fraudulent studies were clinical trials, and not other types of studies. Is that true, and why would that be the case?

Ben Mol: So my estimate, or actually even Ivan Oransky's estimate from Retraction Watch, is that 2% of all medical papers are problematic, but for RCTs it is 30%, and there are 2 reasons for that. RCTs, if you do them for real, are expensive. So the percentage of RCTs in the trustworthy literature is low, because they are so expensive to do. But if you're going to fabricate, you're not going to be hampered by that, because fabricating an RCT is just as cheap as fabricating a non-RCT, so you're going to have many more RCTs in the fabricated stuff. And the second reason is that RCTs are attractive, so if you want to increase your publication chances, you can fabricate an RCT. So that explains for me the 2% in the general literature as compared to the 30% for RCTs. And you saw it also in the picture that we made in relation to GDP: Egypt and Iran are just countries that you would not expect as the number 1 and 2 RCT countries around the world. If you look at the number of papers and the number of RCTs in terms of the medal ranking of the Olympics, at every Olympic Games it's always the US, China, Russia and Europe; you expect them there based on their size. But Egypt and Iran are not to be expected there based on their size, and the reason is that fabrication costs them nothing.

Leeza Osipenko: Fabrication and zombie trials: what do people put up as funding for zombie trials? My assumption from your talk is that fabricated trials could have had a proper funder, and if the results were not as impressive they might change a few numbers. But zombie trials, is there any money behind those?

Ben Mol: No, nothing, but that is another reason why these editors and publishers should take action. How can you accept that somebody submits an RCT with 1,500 women without any funding, right? That is just impossible. I think one of the problems is that the majority of the editors and the publishers don't know what they're talking about themselves. I mean, with the papers they let go, everybody can see that it could never have happened, but because they don't have enough understanding of research, and I really make myself unpopular here, they accept these papers, so to say. And one criterion should be: if you submit a large RCT and you don't have any funding, then you should explain how that happened. That in itself, actually, you just enriched my checklist with another item; I'm going to adjust the checklist.