The Impact of the COVID Pandemic on Prematurity Rates: Conflicting Results, Publication Ethics and Academic Frustration
Acta Paediatrica (2021)
Abstract
In 2002, Olsen et al. suggested that editors were less inclined to publish negative results,1 and we believe that our experience during the COVID-19 pandemic illustrates that this is still a fundamental issue. In August 2020, a leading medical journal published a paper reporting that only one extremely preterm infant had been born in a specific country during the first five weeks of its COVID-19 lockdown.2 The paper stated that this was an unexpected result, as 9–13 such infants had been born during the same periods in the previous five years. This was presented as a dramatic 90% reduction in the rate of extremely preterm birth and was said to be a new and surprising finding on a hot topic. The results were reported by a prominent American newspaper and the findings were subsequently quoted by other media as one of the good news stories about the pandemic. Three months after that paper was published, and motivated by it, we submitted a manuscript to the same journal that reported the number of extremely preterm infants admitted to 47 neonatal intensive care units (NICUs) in 17 countries. The 47 NICUs were partners in a consortium undertaking the SafeBoosC-III trial, which tested a neonatal intervention for extremely preterm infants.3 The study was pre-registered at clinicaltrials.gov with a defined primary outcome and a statistical analysis plan. A total of 428 extremely preterm infants were admitted to the 47 NICUs during the three months of the most severe lockdown restrictions in early 2020, compared with 457 during the corresponding three months in 2019. This modest 8% decrease during lockdown was small and not statistically significant.
The manuscript was placed on a preprint server.4 The submission to the leading journal was not accepted, and the letter that we received from the editor one week after submission stated that, although the paper was well presented, it would not influence readers as strongly as other papers that had been submitted at that time. The editor said that population-based studies were needed to assess the impact of the pandemic on preterm birth and outcomes. However, the editor further stated that looking at this population in different countries with different healthcare systems and lockdown arrangements was complicated, as important differences in some systems may not have been present in others. The feedback concluded that combining such data may not be appropriate and that it was therefore difficult to reach a general conclusion from the study. Similar responses were obtained from the editors of two further medical journals in October and December 2020. All three journals rejected the paper without putting it through the peer-review process. We were not asked to resubmit our paper, and we concluded that some editors feel less inclined to publish negative than positive findings. The paper was finally published, in June 2021, by the fourth journal to which we submitted it.5 Whilst that journal felt that the paper was limited by the absence of obstetric data, such as the miscarriage rate, it pointed to the fact that the study was carried out by an established worldwide collaborative network. The protocol was registered beforehand and the admission numbers for the target group and the total admission numbers were provided. In addition, the strengths and limitations were extensively presented and appropriately discussed. The authors were told that their clear and straightforward observational study addressed a previously raised assumption that the preterm rate had fallen during the first wave of the pandemic, by providing data from the worldwide network.
Those data showed that there was no real difference in the rate of extremely preterm birth when the three months during the first COVID-19 wave were compared with the same period in the preceding year. The strengths of the first study that inspired our research were its national population and high-quality data. However, its limitations were the small number of infants and the lack of an a priori defined question and statistical analysis plan, which we believe detracted from the value of the statistical tests. A number of other reports have been published in the interim. Our preprint was included in an editorial on 11 published studies, which concluded that the balance between positive and negative results was about 50/50.6 The epidemiological and methodological problems of the papers were outlined, including author publication bias: clinicians may have noticed that the number of preterm births was lower than usual, used their clinical databases to test this impression and then submitted, or did not submit, a report for publication, depending on the results. Publication bias is a central issue. Meta-analyses are affected by publication bias, and this is not necessarily remedied by reporting more data. A lack of negative results may give false hope and reduce the chance of future discoveries of better interventions. It is also a disservice to the general public and the scientific community. Although funnel plots may be used to look for evidence of publication bias, this is less likely to be effective for observational studies, which may use large, routinely collected datasets that require little effort to analyse. This is different from meta-analyses of randomised clinical trials, where the big investments involved in large trials make it unlikely that investigators will not attempt to publish the results, even when they are negative.
Furthermore, pre-registration of clinical trials is required, including predefined outcomes and statistical analysis plans. This makes it less likely that investigators will publish unplanned outcomes that yield positive findings rather than planned outcomes that yield negative findings. However, our case suggests that the role editors play in deciding whether to publish negative and positive findings is also important. Poor science should not be published. Editors play an important role in rejecting manuscripts of insufficient methodological quality. Poor science adds wrong data to the body of scientific literature and sets incorrect standards of work. We believe that none of the papers discussed here should be categorised as poor science, and we are not criticising the speedy acceptance of a surprising positive finding during the pandemic. The COVID-19 pandemic was on the rise, and making clinicians aware of unforeseen effects was appropriate. However, in our opinion, the unwillingness to rapidly re-open the debate is harder to defend. The first journal that rejected our paper pointed out that it would not have as strong an influence on readers as other papers that had been submitted. But surely the readers who were influenced by the first paper should have had the chance to see the other side of the debate? Are there different standards for the first reports that are published? Are imperfect studies with negative results less worthy of publication than imperfect studies with positive results? How should editors navigate these decisions? Are positive results more certain than negative results? How can we be sure that results are truly negative? How small should an effect be to be considered non-existent? How narrow should the confidence limits be? When should editors look for yes or no answers? Is the size of the effect a more appropriate question? We think that the cause of the problem is straightforward.
Editors of biomedical journals depend on their readers, and researchers, clinicians and lay readers like breaking news. The true or false angle adds drama, whereas business as usual is trivial. Papers that state that the answers were not as clear as we first thought tend to be disappointing. It appears that the COVID-19 pandemic has not changed those sad facts. The question for editors, as well as for clinicians and investigators, is how to deal with this. Most editors and authors work for free, but both groups need academic credit. Some efforts have been made to focus on the value of negative results, but it appears to be an uphill struggle, since journals dedicated to publishing them do not tend to thrive.7 The present viewpoint was submitted to the first journal that rejected our negative report and was subsequently rejected by the editors of other journals. This version was accepted after revisions and after paraphrasing, rather than quoting, the review comments. The first journal that rejected our paper stated that it was not biased against it: the editors did not think it was sufficiently robust, positive or negative, to answer the question that had been posed, but not definitively answered, by the initial report. We can only guess why editor colleagues make decisions about which papers should and should not be published. The motivation behind this real-world story is the hope that it will generate some debate and that people in a position to make a difference will reflect on our report. The authors wrote the manuscript that is discussed in this paper and may be positively biased towards its value.