Review of Statistical Methods Used in Biomedical Literature
How often do leading biomedical journals use statistical experts to evaluate statistical methods? The results of a survey
- Tom E. Hardwicke,
- Steven N. Goodman
- Published: October 1, 2020
- https://doi.org/10.1371/journal.pone.0239598
Abstract
Scientific claims in biomedical research are typically derived from statistical analyses. However, misuse or misunderstanding of statistical procedures and results permeates the biomedical literature, affecting the validity of those claims. One approach journals have taken to address this issue is to enlist expert statistical reviewers. How many journals do this, how statistical review is incorporated, and how its value is perceived by editors is of interest. Here we report an expanded version of a survey conducted more than 20 years ago by Goodman and colleagues (1998) with the intention of characterizing contemporary statistical review policies at leading biomedical journals. We received eligible responses from 107 of 364 (28%) journals surveyed, across 57 fields, mostly from editors in chief. 34% (36/107) rarely or never use specialized statistical review, 34% (36/107) used it for 10–50% of their articles, and 23% used it for all articles. These numbers have changed little since 1998 in spite of dramatically increased concern about research validity. The vast majority of editors regarded statistical review as having substantial incremental value beyond regular peer review and expressed comparatively little concern about the potential increase in reviewing time, cost, and difficulty identifying suitable statistical reviewers. Improved statistical education of researchers and different ways of employing statistical expertise are needed. Several proposals are discussed.
Citation: Hardwicke TE, Goodman SN (2020) How often do leading biomedical journals use statistical experts to evaluate statistical methods? The results of a survey. PLoS ONE 15(10): e0239598. https://doi.org/10.1371/journal.pone.0239598
Editor: Despina Koletsi, University of Zurich, SWITZERLAND
Received: April 30, 2020; Accepted: September 10, 2020; Published: October 1, 2020
Copyright: © 2020 Hardwicke, Goodman. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are available within: https://osf.io/a43ut/.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Scientific claims in the biomedical literature are usually based on statistical analyses of data [1, 2]. However, misunderstanding and misuse of statistical methods is prevalent and can threaten the validity of biomedical research [2–8]. Statistical practices used in published research, particularly in leading journals, powerfully influence the statistical methods used by both the prospective contributors to those journals and the larger scientific community. These practices are in turn shaped by the peer review and editing process, but most biomedical peer reviewers and editors do not have expert statistical or methodologic training. Many biomedical research journals therefore enlist statistical experts to supplement regular peer review [9], input that empirical studies have consistently shown to improve manuscript quality [10–17].
Some biomedical journals have employed statistical review since at least the 1970s. Leading journals such as the Lancet [12], the BMJ [18], Annals of Internal Medicine [19], and JAMA [20] all employ statistical review. Two surveys, one in 1985 and another in 1998, sought to systematically characterize biomedical journal policies and practices regarding statistical review [21, 22]. Since the last survey in 1998, concerns about the validity of research findings have risen dramatically, with poor statistical practice being recognized as an important contributor [7]. We were interested to see to what extent these concerns had spurred changes among leading biomedical journals in the use of, or attitudes towards, statistical review.
Methods
Sample
From the complete list of Web of Science discipline categories (228), we identified all 68 sub-domains representing biomedicine. We selected the top 5 journals by impact factor within each sub-domain. We supplemented this list with 68 additional journals previously included in the survey by Goodman and colleagues [22], and assigned each of these to their relevant sub-domain. Finally, we removed any duplicates that appeared in multiple sub-domains. This resulted in a sample of 364 journals.
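The sampling procedure described above can be sketched as follows. This is a minimal illustration under assumed data structures (journal records as dicts with invented `name` and `impact_factor` keys), not the authors' actual code:

```python
def build_sample(categories, prior_journals):
    """Sketch of the sampling procedure: take the top 5 journals by impact
    factor in each biomedical sub-domain, add the journals from the 1998
    survey, and de-duplicate by journal name."""
    sample = {}
    for journals in categories.values():
        # Top 5 by impact factor within this sub-domain
        top5 = sorted(journals, key=lambda j: j["impact_factor"], reverse=True)[:5]
        for j in top5:
            sample.setdefault(j["name"], j)  # dict keys de-duplicate across sub-domains
    for j in prior_journals:
        sample.setdefault(j["name"], j)      # add 1998-survey journals, skipping duplicates
    return sorted(sample)                    # de-duplicated journal names

categories = {
    "a": [{"name": f"J{i}", "impact_factor": i} for i in range(7)],
    "b": [{"name": "J6", "impact_factor": 6}],   # J6 appears in two sub-domains
}
prior = [{"name": "Old", "impact_factor": 0}, {"name": "J6", "impact_factor": 6}]
result = build_sample(categories, prior)
```

Applied to the real data, this yields the 364-journal sample after de-duplication.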
Survey instrument
The digital survey instrument (see https://osf.io/dg9ws/) was an adapted and expanded version of the survey previously conducted by Goodman and colleagues [22]. There were 16 questions in total; however, the exact number presented to a respondent depended on their response to the first question: "Of original research articles with a quantitative component published in your journal, approximately what percentage has been statistically reviewed?" If respondents indicated that fraction was less than or equal to 10%, they skipped to a question about why they rarely use statistical review (Q12). If the fraction was greater than 10%, they completed a detailed series of questions relating to statistical review policies at their journal (Q2–Q11). A question about ability and willingness to use statistical review (Q13) was asked of all respondents except those who indicated that their journal's articles rarely or never require statistical review, or that statistical aspects of the article are adequately handled during regular peer review and/or by editors (for Q12). Three questions concerned journal characteristics (Q14–Q16). Finally, all participants were asked to share additional comments in a free-text response (Q17). The full survey instrument is available online (https://osf.io/dg9ws/). The questions and response options reported here are paraphrased for brevity. This survey was approved by Stanford IRB #42023.
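The skip logic described above can be sketched as follows; the question labels follow the text, while the function name and the Q12 reason strings are invented for illustration:

```python
def survey_route(pct_reviewed, q12_reason=None):
    """Sketch of the questionnaire's branching: Q1's answer (percentage of
    articles statistically reviewed) determines which questions follow."""
    questions = []
    if pct_reviewed <= 10:
        questions.append("Q12")  # why statistical review is rarely used
        # Q13 is skipped only for two specific Q12 answers (strings invented here)
        skip_q13 = q12_reason in {"articles rarely require it",
                                  "handled in regular review"}
    else:
        questions += [f"Q{i}" for i in range(2, 12)]  # policy questions Q2-Q11
        skip_q13 = False
    if not skip_q13:
        questions.append("Q13")  # ability/willingness to use statistical review
    questions += ["Q14", "Q15", "Q16", "Q17"]  # journal characteristics + free text
    return questions
```

For example, a journal reporting 50% statistical review sees Q2–Q11, Q13, then Q14–Q17, while one reporting 5% is routed to Q12 first.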
Survey process
The survey was developed and hosted on the 'Qualtrics' platform and distributed via e-mail. The invitation e-mail (see https://osf.io/9px8r/) outlined the purpose of the survey and included a link to the survey instrument. Depending on availability, we e-mailed either the Editor-in-Chief, the Managing Editor, or used the general journal contact address (in that order of priority). The first wave of e-mails was sent on August 9th 2017 and data collection was finalized on December 9th 2017. All respondents were told that the journal name would not be reported or associated with any answers. Non-respondents were sent up to three reminder e-mails as required, dispatched at approximately two-week intervals.
Results
Sample characteristics
We received responses from 127 (35%) of the 364 biomedical journals surveyed. Of the 68 subject areas, at least one journal responded in 57 areas, with a range from 1 to 6 journals (Median = 2, see S1 Table). Twenty respondents were excluded for providing minimal information: 11 opened the survey but did not fill it out and 9 only completed question 1. This left 107 responses (29%) suitable for further analysis. Journals were classified into types by S.N.G. based on journal contents. There were 5 basic research journals, 86 clinical research journals, 2 hybrid (basic and clinical) journals, 3 methods journals, 2 policy journals, and 9 review journals.
The vast majority of respondents identified themselves as having editorial roles: Editors-in-Chief (n = 77/107, 72%), managing editors (n = 12, 11%), deputy editors (n = 4, 4%), associate editors (n = 7, 7%), three statistics or methodology editors, one production editor, one peer review coordinator, and 2 missing descriptors.
The median number of original research articles published annually by these journals was 164 (10th–90th percentiles 48 to 300; 15 missing). Median journal acceptance rate was 18% (10th–90th percentiles 6 to 45; 6 missing).
How frequent is statistical review?
36 (34%) of 107 respondents indicated that statistical review was used for 10% or fewer of articles, 36 (34%) for between 10% and 50% of articles, 10 (9%) for between 50% and 99% of articles, and for 25 (23%) statistical review was used for all articles (S1 Fig). Clinical and hybrid journals (N = 88) used statistical review for a greater proportion of articles (median = 30%) compared to other journal types (N = 19, median = 2%).
For the 36 journals where statistical review was rare, 14 respondents indicated that statistical review is not required for the types of articles they handle, 9 respondents said they lacked the necessary resources or access to statistical reviewers, and 8 respondents indicated that statistical aspects of manuscripts are adequately handled by regular peer review and/or by the editors. Five responses were "other" or missing.
Ability/willingness to use statistical review
All 107 respondents were asked to rate the extent to which various factors influenced their ability/willingness to use statistical review (see Fig 1).
Responses to the question 'To what extent do the following factors affect your ability or willingness to use statistical review (or use it more) at this journal?' N = 107. Percentages sum to about 80% because 21 (20%) responses were missing.
Statistical review policies
Further questions about statistical review policies and procedures were only asked of the 71 journals reporting that they statistically reviewed more than 10% of submitted articles.
Source and training.
56% of 71 respondents indicated that statistical reviewers are selected from members of the editorial team (Fig 2 Panel A). The median number of statistical reviewers on the editorial team was 2 (10th–90th percentile 1 to 5; 6 missing). 34% relied on a pool of external reviewers, median size 11 (10th–90th percentile 4 to 48; 2 missing). It was uncommon to identify statistical reviewers on an ad-hoc basis (7%).
Percentage of responses (N = 71, including 1 missing response for all questions, not shown) for questions about policies relating to statistical reviewers. Panel A: 'How are statistical reviewers chosen?' Panel B: 'What proportion have doctoral-level training in a quantitative discipline (e.g., biostatistics, epidemiology, computer science, outcomes research)?' Panel C: 'Are statistical reviewers compensated for their work?'.
86% of respondents indicated that most or all of their statistical reviewers have doctoral-level training in a quantitative discipline (Fig 2 Panel B), and a narrow majority (55%) paid statistical reviewers (Fig 2 Panel C).
Review logistics. 59% of the 71 journals did not require statistical reviewers to complete a software template or ask them to follow general guidelines (Fig 3, Panel A). 31% provided guidelines, 4% provided a software template, and 4% provided both. 72% of journals using statistical reviewers "Always" or "Usually" have them see a revised version of the manuscript (Fig 3 Panel B). It was rare for statistical reviewers to never see revised manuscripts.
Percentage of responses (N = 71; including 1 missing response for all questions, not shown) for questions about policies relating to statistical review procedures. RPR = Regular Peer Review. Panel A: 'Do you have a formal structure for statistical review that you ask statistical referees to follow?' Panel B: 'How often does the statistical reviewer see a revised version of the manuscript, to assess whether their initial comments were addressed?' Panel C: 'When you do obtain a statistical review, at what stage is the review usually solicited?'.
35% of journals solicited statistical review contemporaneously with peer review, 27% after regular peer review but before an editorial decision, 17% "ad-hoc", and 6% only after an editorial decision had been made (Fig 3 Panel C).
Outcomes of statistical review.
The majority of respondents (73%) indicated that statistical review results in important changes to the reviewed manuscript 50% or more of the time (Fig 4 Panel A). Roughly one quarter reported a delay in decision time of zero, less than a week, 1–2 weeks, and greater than 2 weeks respectively (Fig 4 Panel B).
Percentage of responses (N = 71, including 3 missing responses for Panel A and 1 missing response for Panel B, not shown) for questions about outcomes of statistical review. For the boxplot, the dark horizontal line represents the median, the lower and upper hinges correspond to the 25th and 75th percentiles, and the whiskers extend to ±1.5 times the interquartile range. Panel A: 'When you do obtain a statistical review, approximately what percentage of the time does it result in what you consider to be an important change in the manuscript?' Panel B: 'When you use statistical review, what is the approximate median increase in time to final decision?'.
Value of statistical review.
Substantial majorities of respondents believed statistical review to have considerable incremental value beyond regular peer review. This extended to critical manuscript elements supporting proper conclusions, beyond statistics per se, including results interpretation, presentation, consistency of conclusions with the evidence, and the reporting of study limitations (Fig 5).
Responses to the question 'In your journal, how would you rate the incremental importance of the statistical review (i.e., what it adds to typical peer and editorial review) in assessing these elements of a research report?' N = 71; percentages do not sum to 100% because of 1–2 missing responses.
Discussion
Concerns about the validity of published scientific claims, coupled with the recognition that suboptimal or frankly erroneous statistical methods or interpretations are pervasive in the published literature, have led to active discussions over the past decade of how to ensure the proper use and interpretation of statistical methods in published biomedical research [2, 7, 25]. Most of the proposed remedies tend to focus on improving study design (e.g., sample sizes), statistical methods, inferential guidance (e.g., the use of p-values), transparency, and statistical training. Comparatively little attention has focused specifically on how journals themselves can improve their performance. The fact that so many problems persist suggests that extant editorial procedures are not adequate to the task [2–8]. That was the motivation for this survey of the highest impact factor biomedical journals across 57 specialties to find out whether and how they use methodologic experts to help them adjudicate and revise manuscripts [9].
The results suggest that although statistical review of some kind is fairly common, it is far from universal; of the 107 eligible journals, 34% (36/107) rarely or never use specialized statistical review and 34% (36/107) used it for 10–50% of original research. Only 23% of these top journals subjected all original research to specialized statistical scrutiny. These numbers are quite similar to, and no better than, those reported in a similar survey by Goodman and colleagues [22] in 1998, where 33% of 114 surveyed biomedical journals employed statistical review for all original research manuscripts and an additional 46% employed statistical review at the editor's discretion. Thus, it seems that there may not have been substantial changes in the use of statistical review over the last 20 years, in spite of the fact that the vast majority of editors in this survey regarded statistical review as having substantial incremental value beyond regular peer review, improving not just the statistical elements but interpretation of the results, strength of the conclusions, and the reporting of limitations. This impression is supported by empirical assessments of statistical review [10–17]. Interestingly, there was comparatively little concern about the potential increase in reviewing time, cost, and difficulty identifying suitable statistical reviewers.
We did not attempt in this survey to address the quality of statistical review or its implementation, which can vary with the journal model. Adding methodologists to the editorial board may be most effective at facilitating the two-way transfer of knowledge and of journal culture between the statisticians and the other editors [18, 23], improving the methodological sophistication of the entire editorial team over time, and ensuring that reviews target the most critical issues and are communicated and implemented accordingly. This editorial board model was the most common reported here (56%), as it was previously [22]. By contrast, one-third of journals drew their statistical reviewers from an external pool. This model risks using statistical reviewers without adequate domain knowledge, or whose methodologic expertise or preferences are narrow or idiosyncratic. Just like other peer reviewers, individual statistical reviewers have their own limitations, and if there is not a statistician internal to the journal, an editor may not know if statistical reviewer requests are reasonable or how to adjudicate disputes between the statistical reviewer and the authors, who might have a statistician of their own.
Specialized statistical review is just one part of a multistep editorial process. Schriger et al. examined dedicated statistical review at a leading emergency medicine journal, and found that while there was a measurable improvement in statistical quality, a sizable number of errors flagged by statistical reviewers persisted in the published article [17]. This occurred because authors claimed they had fixed issues that were not in fact corrected, comments were not transmitted to authors, authors ignored comments, or the author rebutted the comments, all without follow-up by a decision editor. Statistical review processes were subsequently altered to make this less likely at that journal, but this demonstrates that to be effective, the initial statistical review must be enforced by other editorial processes.
Statistical reviewers do not necessarily need to be PhD statisticians; a domain expert with sufficient quantitative training may also take on the role. In our survey, it was reported that most statistical reviewers had doctoral-level training in a quantitative discipline, which could include such fields as statistics, epidemiology, informatics, health services research, and economics. About half received financial compensation for their work, somewhat more than the one-third reported by Goodman et al. [22]. Unlike other reviewers, financial compensation is often necessary to employ statistical reviewers because they are in wide demand and are not reviewing for their own academic discipline, for which they would not expect compensation. Typically, only the most prominent journals in a field have the resources to pay statistical reviewers, and by targeting high impact-factor journals, this survey may have selected journals most likely to have those resources.
The use of reviewer guidelines or templates was relatively uncommon, as was the case 20 years ago [22]. Guidelines or templates might help to standardize the review process and prompt reviewers to address pertinent statistical problems, improving overall review quality and consistency across reviewers and papers. The Nature journal group has instituted a formal statistical reporting checklist for authors that is electronically linked to the article (https://www.nature.com/documents/nr-reporting-summary.pdf).
This study has several important limitations. Although the absolute number of responses was comparable to those obtained in previous surveys on this topic [21, 22], the response rate (35%) was low enough to be concerned about selection bias, albeit probably towards journals more likely to use statistical review. The survey focused on high impact factor journals, again probably an upward bias, as lower profile journals are unlikely to use statistical review more often [22]. This is supported by a survey of 30 dermatology editors, where 24 (80%) rarely used statistical review for original research with data, and only 3 (10%) reported statistically reviewing more than 75% of manuscripts [24]. Only one dermatology journal had an editor primarily responsible for statistics. So, while the fraction of journals (35%) using statistical review for more than half of their articles could be substantially improved, the corresponding number for the non-respondents and for the tens of thousands of other biomedical research journals is probably far lower. Finally, statistical review may be less valuable at review journals, of which there were 9 among the respondents; we did not explicitly verify whether these journals could potentially benefit from statistical review.
Recommendations and conclusion
Overall, the findings reported here suggest that statistical review has not dramatically changed at leading biomedical journals over the past 20 years [22], even as concerns about statistical misuse in biomedical research have markedly increased [2]. Most editors seem convinced of the value of statistical review and apply the process to some, or sometimes all, of the articles that undergo regular peer review.
Efforts to reduce poor statistical practice through statistical review might be best focused on improving standardization, potentially through the provision of guidelines or templates. Facilitating a more productive two-way dialogue between the statistical and applied research communities may help to mitigate poor practices [7]. Meta-research can be used to elucidate which models of statistical review are more or less effective in different scenarios [25].
New models of peer review and editorial practice might help to address persistent statistical problems in the biomedical literature. Recognizing that statistical review is time intensive, limited by both reviewer supply and expense, perhaps new centralized resources of experienced or vetted methods reviewers could be developed that would supply pre-publication statistical reviews, whose content could be transmitted to any journal to which the paper is submitted. While this might not supplant the statistical reviews at leading journals, it would raise the bar overall for the statistical quality of submitted manuscripts across the publishing landscape. Just as open-access fees of several thousand dollars are now routinely included in federally funded research grants, perhaps a much lower standard fee for independent statistical review could be supported by such funding, which could be used to support the centralized resource and take the burden off journals that cannot afford high quality review. Alternatively, either individual journals or their publishers could collectively subscribe to such a service. Review procedures at leading biomedical journals show that even papers with statistician authors can still benefit from independent methodologic review. Finally, it would be critical for such a service to provide feedback to a statistician's home institution, whether it be academic or in the private sector, on the quality and value of their contribution, to provide additional professional incentive to provide such reviews.
The increasing use of open peer review, where all peer review and editorial correspondence is made openly available, might help amplify the effect of statistical reviews. Currently, such reviews serve only to improve individual papers, and their content and effect are effectively hidden. Having a public archive of formal statistical reviews could potentially serve as a valuable scientific and didactic resource.
Other models of peer review have been proposed to improve methodologic rigor, but they are unlikely to meet the demand. Preprint archives and models that promote transparency, code and data sharing, and post-publication peer review purport to facilitate the ability of the broader scientific community to probe the cogency of methods and claims. However, while this might indeed be effective for a small proportion of articles, particularly those that garner special attention, it is unlikely to induce change in the vast majority of articles, for which there simply are not enough methodologic readers who will offer in-depth critiques, particularly without incentive to do so. Also, editors use the leverage of possible rejection to require changes that authors might not otherwise accept, but neither preprint nor post-publication review has that leverage. Primary findings and conclusions have a much longer lasting effect than ones amended later, as evidenced by retracted articles that continue to be cited, or errata that are ignored, so it is important that the initial publication of record be as accurate as possible.
Given that human expertise is in short supply, what role could artificial intelligence play in improving review of the methodologic aspects of a paper? There have been a few attempts to develop programs that examine statistical aspects of a paper, but these are of extremely limited scope, e.g. checking whether the reported degrees of freedom and F or chi-square statistic are consistent with a reported p-value [26], which is mainly of value in the psychological literature, which has a structured way to present such information that is rarely used in biomedical publications. Some publishers are also experimenting with software that evaluates the use of reporting standards, but other functionality is unclear [27]. Given that methodologic reviewers ideally provide an integrated assessment of the research question, design, conduct, analysis, reporting, and conclusions, it is highly unlikely that AI applications will be able to provide substantive help in the near or medium future.
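As a concrete illustration of the kind of narrow consistency check such programs perform (a minimal sketch, not the cited tool itself; function names are invented), the following recomputes the p-value implied by a reported chi-square statistic with 1 degree of freedom and flags a mismatch with the reported p-value:

```python
import math

def chi2_sf_1df(x):
    """Survival function of a chi-square distribution with 1 degree of
    freedom: P(X > x) = erfc(sqrt(x / 2)), since X = Z^2 for standard normal Z."""
    return math.erfc(math.sqrt(x / 2.0))

def check_reported_p(statistic, reported_p, decimals=2):
    """Flag an inconsistency if the recomputed p-value, rounded to the
    reported precision, differs from the reported one. Returns
    (is_consistent, recomputed_p)."""
    recomputed = chi2_sf_1df(statistic)
    return round(recomputed, decimals) == round(reported_p, decimals), recomputed

# A report of "chi2(1) = 3.84, p = .05" is internally consistent...
ok, p = check_reported_p(3.84, 0.05)
# ...while "chi2(1) = 3.84, p = .20" is not.
bad, _ = check_reported_p(3.84, 0.20)
```

As the text notes, checks like this require the test statistic, degrees of freedom, and p-value to be reported in a structured, extractable form, which is common in psychology journals but rare in biomedical ones.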
Journal review is only one component of a larger ecosystem that needs changing [25]. Improving the quality of statistical education for researchers and readers of the scientific literature is of paramount importance, particularly in light of documented misunderstanding of foundational statistical concepts in both groups [28]. It is critical to note that statistical education goes well beyond computational training, which is necessary but not remotely sufficient to properly design and analyze research. Research funders are sending this message, with the announcement of new NIH and AHRQ requirements for "rigor and reproducibility" training in T32 grants starting in May 2020 [31]. Training and published research are synergistic; the quality of statistical analyses reported in the highest profile journals creates a de facto standard, sending an important message to young investigators that robust training in statistical reasoning and design will be recognized and rewarded when they submit their research to the best journals in their fields.
Open practices statement
The study was not pre-registered. All data exclusions, measurements, and analyses conducted during this study are reported in this manuscript. Our survey also included an additional group of psychology journals; however, due to differences between the two disciplines, those results are reported elsewhere [29]. All anonymized data (https://doi.org/10.17605/OSF.IO/NSCV3), materials (https://doi.org/10.17605/OSF.IO/P7G8W), and analysis scripts (https://doi.org/10.17605/OSF.IO/DY6KJ) related to this study are publicly available on the Open Science Framework. To facilitate reproducibility, we wrote this manuscript by interleaving regular prose and analysis code, using knitr [30], and have made the manuscript available in a software container (https://doi.org/10.24433/CO.3883021.v2) that re-creates the computational environment in which the original analyses were performed.
Supporting information
S1 Fig. Histogram showing distribution of estimates for the percentage of original quantitative research articles that undergo statistical review.
The dashed line indicates the ≤10% cut-off point below which statistical review was considered 'rare' and respondents were re-directed towards the end of the survey (see Methods section for details).
https://doi.org/10.1371/journal.pone.0239598.s002
(DOCX)
Acknowledgments
We thank Lisa Ann Yu for assistance collecting journal contact details and Daniele Fanelli for discussions about the survey design. We are grateful to all respondents for taking the time to complete the survey.
References
- 1. Chavalarias D, Wallach JD, Li AHT, Ioannidis JPA. Evolution of Reporting P Values in the Biomedical Literature, 1990–2015. JAMA. 2016;315: 1141–1148. pmid:26978209
- View Article
- PubMed/NCBI
- Google Scholar
- ii. Strasak AM, Zaman Q, Marinell G, Pfeiffer KP, Ulmer H. The Use of Statistics in Medical Research. The American Statistician. 2007;61: 47–55.
- View Article
- Google Scholar
- iii. Altman DG. Statistics in medical journals. Statistics in medicine. 1982;ane: 59 71. pmid:7187083
- View Article
- PubMed/NCBI
- Google Scholar
- 4. Carmona-Bayonas A, Jimenez-Fonseca P, Fernández-Somoano A, Álvarez-Manceñido F, Castañón East, Custodio A, et al. Top ten errors of statistical assay in observational studies for cancer research. Clinical & translational oncology: official publication of the Federation of Spanish Oncology Societies and of the National Cancer Plant of Mexico. 2017;20: 954–965. pmid:29218627
- View Commodity
- PubMed/NCBI
- Google Scholar
- v. Fernandes-Taylor S, Hyun JK, Reeder RN, Harris AH. Common statistical and research design problems in manuscripts submitted to high-affect medical journals. BMC research notes. 2011;4: 304. pmid:21854631
- View Article
- PubMed/NCBI
- Google Scholar
- 6. Gore SM, Jones IG, Rytter EC. Misuse of statistical methods: critical cess of articles in BMJ from January to March 1976. British Medical Journal. 1977;i: 85. pmid:832023
- View Commodity
- PubMed/NCBI
- Google Scholar
- seven. Wasserstein RL, Lazar NA. The ASA's Statement on p-Values: Context, Process, and Purpose. The American Statistician. 2016;70: 129–133.
- View Article
- Google Scholar
- viii. Salsburg DS. The Religion of Statistics as Expert in Medical Journals. The American Statistician. 1985;39: 220–223.
- View Commodity
- Google Scholar
- nine. Altman DG. Statistical reviewing for medical journals. Statistics in medicine. 1998;17: 2661 2674. pmid:9881413
- View Commodity
- PubMed/NCBI
- Google Scholar
- ten. Gardner MJ, Bond J. An exploratory study of statistical assessment of papers published in the British Medical Journal. JAMA. 1990;263: 1355 1357. pmid:2304214
- View Article
- PubMed/NCBI
- Google Scholar
- 11. Goodman SN, Berlin J, Fletcher SW, Fletcher RH. Manuscript quality before and after peer review and editing at Register of Internal Medicine. Am Coll Physicians. 1994;121: 11 21.
- View Article
- Google Scholar
- 12. Gore SM, Jones Thousand, Thompson SG. The Lancet's statistical review process: Areas for improvement by authors. The Lancet. 1992;340:100–101.
- View Commodity
- Google Scholar
- 13. Prescott RJ, Civil I. Lies, damn lies and statistics: Errors and omission in papers submitted to INJURY 2010–2012. Injury. 2013;44: 6–11. pmid:23182752
- 14. Schor S, Karten I. Statistical evaluation of medical journal manuscripts. JAMA. 1966;195: 1123–1128. pmid:5952081
- 15. Cobo E, Selva-O'Callagham A, Ribera J-M, Cardellach F, Dominguez R, Vilardell M. Statistical Reviewers Improve Reporting in Biomedical Articles: A Randomized Trial. Scherer R, editor. PLoS ONE. 2007;2: e332. pmid:17389922
- 16. Stack C, Ludwig A, Localio AR, Meibohm A, Guallar E, Wong J, et al. Authors' assessment of the impact and value of statistical review in a general medical journal: 5-year survey results. Available: https://peerreviewcongress.org/prc17-0202
- 17. Schriger DL, Cooper RJ, Wears RL, Waeckerle JF. The effect of dedicated methodology and statistical review on published manuscript quality. Annals of Emergency Medicine. 2002;xl: 334–337. pmid:12192360
- View Article
- PubMed/NCBI
- Google Scholar
- 18. Smith R. In: Armitage P, Colton T, editors. Encyclopedia of Biostatistics. 2nd ed. John Wiley & Sons; 2005. https://doi.org/10.1002/0470011815.b2a17141
- 19. Sox HC. Medical journal editing: Who shall pay? Annals of Internal Medicine. 2009;151: 68–69. pmid:19581649
- 20. Vaisrub N. Manuscript review from a statistician's perspective. JAMA. 1985;253: 3145–3147. pmid:3999302
- 21. George SL. Statistics in medical journals: A survey of current policies and proposals for editors. Medical and Pediatric Oncology. 1985;13: 109–112. pmid:3982366
- 22. Goodman SN, Altman DG, George SL. Statistical reviewing policies of medical journals. Journal of General Internal Medicine. 1998;13: 753–756. pmid:9824521
- 23. Marks RG, Dawson-Saunders EK, Bailar JC, Dan BB, Verran JA. Interactions between statisticians and biomedical journal editors. Statistics in Medicine. 1988;7: 1003–1011. pmid:3205998
- 24. Katz KA, Crawford GH, Lu DW, Kantor J, Margolis DJ. Statistical reviewing policies in dermatology journals: Results of a questionnaire survey of editors. Journal of the American Academy of Dermatology. 2004;51: 234–240. pmid:15280842
- 25. Hardwicke TE, Serghiou S, Janiaud P, Danchev V, Crüwell S, Goodman S, et al. Calibrating the scientific ecosystem through meta-research. Annual Review of Statistics and Its Application.
- 26. Nuijten MB, Hartgerink CH, van Assen MA, Epskamp S, Wicherts JM. The prevalence of statistical reporting errors in psychology (1985–2013). Behav Res Methods. 2016 Dec;48(4): 1205–1226. pmid:26497820
- 27. Heaven D. AI Peer Reviewers Unleashed to Ease Publishing Grind. Nature. 2018;563: 609–610. pmid:30482927
- 28. Windish DM, Huot SJ, Green ML. Medicine Residents' Understanding of the Biostatistics and Results in the Medical Literature. JAMA. 2007;298: 1010–1022. pmid:17785646
- 29. Hardwicke TE, Frank MC, Vazire S, Goodman SN. Should Psychology Journals Adopt Specialized Statistical Review? Advances in Methods and Practices in Psychological Science. 2019; 251524591985842.
- 30. Xie Y. knitr: A General-Purpose Package for Dynamic Report Generation in R. 2018.
- 31. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-20-033.html, Accessed 2/17/20.