- Original Article
- Open Access
Peer review practices by medical imaging journals
Insights into Imaging volume 11, Article number: 125 (2020)
To investigate peer review practices by medical imaging journals.
Journals in the category "radiology, nuclear medicine and medical imaging" of the 2018 Journal Citation Reports were included.
Of 119 included journals, 62 (52.1%) used single-blinded peer review, 49 (41.2%) used double-blinded peer review, two (1.7%) used open peer review and one (0.8%) used both single-blinded and double-blinded peer review, while the peer review model of five journals (4.2%) remained unclear. The use of single-blinded peer review was significantly associated with a journal’s impact factor (correlation coefficient of 0.218, P = 0.022). On subgroup analysis, only subspecialty medical imaging journals had a significant association between the use of single-blinded peer review and a journal’s impact factor (correlation coefficient of 0.354, P = 0.025). Forty-eight journals (40.3%) provided an optional reviewer recommendation, 48 (40.3%) had no reviewer recommendation option, and 23 (19.3%) obliged authors to recommend reviewers on their manuscript submission systems. Sixty-four journals (53.8%) did not provide an explicit option on their manuscript submission Web site to indicate nonpreferred reviewers, whereas 55 (46.2%) did. There were no significant associations between the option or obligation to indicate preferred or nonpreferred reviewers and a journal’s impact factor.
Single-blinded peer review and the option or obligation to indicate preferred or nonpreferred reviewers are frequently employed by medical imaging journals. Single-blinded review is (weakly) associated with a higher impact factor, also for subspecialty journals. The option or obligation to indicate preferred or nonpreferred reviewers is evenly distributed among journals, regardless of impact factor.
Nearly all medical imaging journals use either a single-blinded peer review model (52.1%) or a double-blinded peer review model (41.2%).
Reviewer recommendations are optional at 40.3% of medical imaging journals and obligatory at 19.3%.
There is a positive association between the use of a single-blinded peer review model and a journal’s impact factor (correlation coefficient of 0.218, P = 0.022), also for subspecialty journals (correlation coefficient of 0.354, P = 0.025).
Peer review refers to a formal system held by scientific journals, whereby a manuscript is scrutinized by persons who were not involved in its creation but are considered knowledgeable about the topic of the manuscript [1,2,3]. Peer review is considered of crucial importance for the selection and publication of quality science [1,2,3]. All medical imaging journals listed by the authoritative Journal Citation Reports use peer review before manuscript publication. Unfortunately, the peer review process has some potential weaknesses which may undermine its effectiveness in ensuring the quality and fairness of published research. Richard Smith, former editor-in-chief of the BMJ, once mentioned: “So peer review is a flawed process, full of easily identified defects with little evidence that it works. Nevertheless, it is likely to remain central to science and journals because there is no obvious alternative, and scientists and editors have a continuing belief in peer review. How odd that science should be rooted in belief”.
There are multiple peer review models. Single-blinded and double-blinded models are best known, but there are several other models including triple-blinded, quadruple-blinded, and open review systems [7, 8]. In single-blinded peer review, the reviewers know the identity of the authors but not vice versa. In double-blinded peer review, the identities of both authors and reviewers are kept hidden from each other. In the triple-blinded peer review model, the authors’ identity is also hidden from the handling editor during the submission process, and the quadruple-blinded peer review model is further augmented by hiding the identity of the handling editor. Finally, in an open peer review model, both authors and reviewers know each other’s identity. Each system has advantages and disadvantages. Double-blinded and open peer reviews are most supported by the current literature. The single-blinded peer review system has been shown to be susceptible to bias [7, 9,10,11].
Another important issue that may affect the validity of the peer review process is the recommendation of potential reviewers by the submitting authors upon manuscript submission. In 2014, it became apparent that these practices are vulnerable to exploitation and hacking, because some authors performed “peer reviews” of their own manuscripts by using fabricated e-mail accounts. In the aftermath of the scandals involving fake peer reviewers, many journals decided to turn off the reviewer recommendation option.
Currently, there is a lack of knowledge on the peer review practices of medical imaging journals. More insight into the integrity and fairness of the peer review process is required in order to better appraise the quality of published research and to identify potential targets for improvement. This information is relevant to the readership of any medical imaging journal (even for journals which hold a high standard), because all journals publish articles that refer to some degree to studies that have been published elsewhere. The currently available evidence is supportive of double-blinded or open peer review rather than single-blinded peer review [7, 9,10,11] and discourages the use of the reviewer recommendation option for authors. It is therefore hypothesized that most medical imaging journals employ such practices and that such a trend is particularly seen for journals with a higher impact factor. Therefore, the purpose of our study was to investigate peer review practices by medical imaging journals.
Materials and methods
Our study used data available in the public domain and did not concern medical scientific research in which participants or animals were subjected to procedures or were observed. Therefore, it did not require institutional review board approval or informed consent. All 129 journals listed by the 2018 Journal Citation Reports in the category “radiology, nuclear medicine and medical imaging” as of April 2020 were eligible for inclusion. Journals that allowed submissions by invitation only were excluded.
The editorial procedure on each journal’s Web site was carefully studied for the peer review model employed by the journal (i.e., single-blinded, double-blinded, triple-blinded, quadruple-blinded, open peer review, or other). If this information was not provided on the journal’s Web site, editors-in-chief or editorial managers were contacted to request information about the peer review model. In the case of no reply within two weeks, editors-in-chief and editorial managers were contacted again in a final attempt to retrieve this information. Furthermore, the manuscript submission system of each journal was accessed to determine the presence of an optional or obligatory reviewer recommendation, and the presence of an option to indicate nonpreferred reviewers. Finally, the impact factor of each journal was determined based on the information provided by the Journal Citation Reports as of April 2020. All data were collected by a single author (T.C.K.).
The proportions of journals with single-blinded, double-blinded, triple-blinded, quadruple-blinded, open review, and other models were determined. The proportions of journals with optional or mandatory reviewer recommendations, and those with the option to indicate nonpreferred reviewers, were also assessed. Point-biserial correlation analyses were performed to determine the associations between the peer review model employed by the journal and the journal’s impact factor, between the presence of a reviewer recommendation option or obligation and a journal’s impact factor, and between the presence of an option to indicate nonpreferred reviewers and a journal’s impact factor. A subgroup analysis was performed for all medical imaging journals except radiotherapy journals, journals for physicists, engineers, and chemists, and journals related to a single country. Additional subgroup analyses were performed for general and subspecialty medical imaging journals separately, and for imaging journals with more than and with fewer than 1000 published articles per 2-year period separately. P values < 0.05 were considered statistically significant. Statistical analyses were executed using IBM Statistical Package for the Social Sciences (SPSS) version 26 (SPSS, Chicago, IL, USA).
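The point-biserial analysis described above can also be computed outside SPSS. Below is a minimal Python sketch of the point-biserial formula; the review models and impact factors are synthetic, invented purely for illustration and not taken from the study data.

```python
# Point-biserial correlation: association between a binary variable
# (peer review model: 1 = single-blinded, 0 = double-blinded) and a
# continuous variable (journal impact factor). Data are synthetic.
from math import sqrt

def point_biserial(binary, continuous):
    """r_pb = (M1 - M0) / s_y * sqrt(n1 * n0) / n, with population SD s_y.

    Equivalent to the Pearson correlation of a 0/1 variable with a
    continuous one.
    """
    n = len(binary)
    g1 = [y for x, y in zip(binary, continuous) if x == 1]
    g0 = [y for x, y in zip(binary, continuous) if x == 0]
    mean = sum(continuous) / n
    s_y = sqrt(sum((y - mean) ** 2 for y in continuous) / n)
    return (sum(g1) / len(g1) - sum(g0) / len(g0)) / s_y * sqrt(len(g1) * len(g0)) / n

# Hypothetical journals (illustration only)
model = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]                      # 1 = single-blinded
impact = [3.1, 2.8, 4.0, 1.2, 1.5, 2.6, 1.1, 1.9, 3.4, 1.3]  # impact factors

print(f"r_pb = {point_biserial(model, impact):.3f}")
```

A positive r_pb here indicates that, in this toy data, single-blinded journals tend to have higher impact factors; a significance test (as in the study) would additionally require the corresponding t statistic or a library such as SciPy (`scipy.stats.pointbiserialr`).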
Results

Medical imaging journals
Of the 129 journals listed by the Journal Citation Reports in the category “radiology, nuclear medicine and medical imaging,” ten were excluded because they allowed submissions by invitation only. The 119 journals that were included in our analyses had a mean impact factor of 2.205 (range: 0.413–10.975).
Peer review models
Of all 119 journals that were included, 62 (52.1%) used a single-blinded peer review model, 49 (41.2%) used a double-blinded peer review model, two (1.7%) used an open peer review model, and one (0.8%) used both a single-blinded and a double-blinded peer review model (depending on whether or not the submitting author disclosed the authors’ names on the first page of the manuscript), whereas for five journals the peer review model remained unclear (Fig. 1). There were no journals which used another type of peer review model. Seventy-two (60.5%) journals mentioned their peer review model on their Web site. A Box-and-Whisker plot of journal impact factor according to peer review model is shown in Fig. 2. Because nearly all journals used either the single- or double-blinded peer review model (97.4%), the correlation analysis was only performed for the single- and double-blinded models vs. journal impact factor. A point-biserial correlation coefficient of 0.218 (P = 0.022) indicated a positive association between the use of a single-blinded peer review system and a journal’s impact factor. On subgroup analysis, only subspecialty medical imaging journals had a significant association between the use of a single-blinded peer review system and a journal’s impact factor (point-biserial correlation coefficient of 0.354, P = 0.025) (Table 1).
Of all 119 journals that were included, 48 (40.3%) provided authors the option to indicate reviewer recommendations, 48 (40.3%) did not have a reviewer recommendation option, and 23 (19.3%) obliged authors to indicate reviewers on their manuscript submission systems (Fig. 3). The 23 journals with an obligatory reviewer recommendation required the suggestion of at least one reviewer (four journals), two reviewers (five journals), three reviewers (11 journals), four reviewers (one journal), and five reviewers (two journals). A point-biserial correlation coefficient of 0.032 (P = 0.727) indicated no significant association between the presence of a reviewer recommendation option or obligation and a journal’s impact factor. There were no significant associations on additional subgroup analyses (Table 1).
Of all 119 journals that were included, 64 (53.8%) did not provide an explicit option on their manuscript submission Web site to indicate nonpreferred reviewers, whereas 55 (46.2%) did (Fig. 4). Fifty-three journals with a nonpreferred reviewer option did not indicate any limit for the number of nonpreferred reviewers, whereas two journals indicated that a maximum of five nonpreferred reviewers could be listed. A point-biserial correlation coefficient of 0.064 (P = 0.492) indicated no significant association between the presence of a nonpreferred reviewer option and a journal’s impact factor. There were no significant associations on additional subgroup analyses (Table 1).
Discussion

Our study shows that the majority of medical imaging journals employ a single-blinded peer review model (52.1%), followed by a double-blinded peer review model (41.2%). However, there is ample evidence that the single-blinded peer review system is prone to bias [7, 9,10,11]. For example, it has been reported that reviewers are more likely to give higher manuscript ratings and recommend acceptance when prestigious authors’ names and institutions are visible than when they are not, that single-blinded reviewers are significantly more likely than their double-blinded counterparts to recommend papers from famous authors, top universities, and top companies for acceptance, and that single-blinded peer reviews may suffer from gender bias against women. In addition, reviewers’ knowledge of the authors’ identities may render the review process susceptible to fraud when a conflict of interest exists between the authors and the reviewers. Therefore, it is worrisome that the single-blinded peer review model is employed by most medical imaging journals. Our results also indicate a weak but significant trend that the single-blinded peer review model is more frequently used by journals with a higher impact factor. Therefore, the concerns related to single-blinded peer review are certainly not only applicable to lower-ranked medical imaging journals. Interestingly, subgroup analyses showed that the association between single-blinded peer review and the journal’s impact factor was highest for subspecialty journals. The reason for the association between the use of a single-blinded peer review system and a journal’s impact factor remains unclear. However, it can be speculated that some journals use a single-blinded peer review system for reviewers to be able to check the credentials of the authors. Papers from authors with a prestigious track record are likely to receive a more favorable review, which will increase the likelihood of (eventual) acceptance by the handling editor.
In turn, published papers from authors with a prestigious track record are probably cited more frequently. This phenomenon can be referred to as the Matthew effect: “To those who have, shall be given; to those who have not shall be taken away even the little that they have” [6, 13]. Only two journals, with impact factors of 1.622 and 0.478, used an open peer review system. Other peer review systems, including triple- and quadruple-blinded systems, were not used by any of the journals. This is probably related to widespread long-term habituation to the use of single- and double-blinded systems, and more complexity and costs associated with the use of triple- and quadruple-blinded systems. This indicates that handling editors of all medical imaging journals are currently not blinded to the identity of the authors. However, many journals reject submissions without review, and although some experienced handling editors may have the expertise to make justified “direct reject” decisions, the possibility exists that they are prone to the same type of peer review bias that has been shown to exist for reviewers [7,8,9,10,11]. Even well-intentioned editors may be subject to unconscious bias, just as reviewers are. It was also interesting to note that only a small majority of journals (60.5%) mentioned their peer review model on their Web site. The reason for this finding remains unclear, but it can be speculated that it is simply a neglected topic. This issue is another target for improvement, since transparency can be considered as one of the key components of scientific integrity.
Another important finding of our study is that there were just as many journals with and without the option to indicate reviewer preferences (both 40.3%), whereas the remaining journals (19.3%) obliged submitting authors to provide potential reviewers. This may also be considered worrisome, because recommendation of potential reviewers by the submitting authors has been shown to be vulnerable to exploitation, hacking, and peer review bias. Furthermore, the obligatory reviewer recommendation is a potentially ethically compromising situation and a violation of authors’ rights, because it forces authors to interfere with the review process. The presence of a reviewer recommendation option or obligation on a journal’s manuscript submission system was independent of a journal’s impact factor, which indicates that this issue plays a role across the entire range of medical imaging journals. Although selecting appropriate reviewers costs time, an unbiased selection of potential reviewers is essential. Another potential reason for journals to employ the reviewer recommendation option or obligation is that they do not possess a large database of potential reviewers. The use of reviewer finding software could be a solution for these journals. Yet another possibility is that recommendations for reviewers may also aid the handling editors’ job by enabling a faster turnaround time, which may in itself improve the impact factor of a journal, although this remains an issue of speculation.
A nonpreferred reviewer option was present in nearly half (46.2%) of the included journals and was not associated with a journal’s impact factor. It is currently unclear how a nonpreferred reviewer option affects peer review. It may avoid peer review bias when authors disclose individuals with whom a conflict of interest exists. However, if authors indicate their wish to exclude certain knowledgeable individuals with stringent standards from whom they expect to receive a critical review which could lead to rejection, and the journal follows this request, the review process may potentially be biased in favor of the authors’ manuscript. Further research is necessary to elucidate this element of the peer review process.
To our knowledge, there have been no previously published, similar studies on peer review practices by medical imaging journals. Nevertheless, the topic of peer review is regularly discussed [16,17,18,19] and two previous studies have investigated the efficacy of reviewer blinding in imaging journals [20, 21]. In a study by Katz et al. that was published in 2002, original manuscripts submitted to two general radiology journals with double-blinded peer review policies during a 6-month period were reviewed. They found that 34% of submitted manuscripts contained information that potentially or definitely unblinded the identities of the authors or their institutions. The most frequent unblinding violations were statement of the authors’ initials within the manuscript, referencing work “in press,” identifying references as the authors’ previous work, and revealing the identity of the institution in the figures. In a more recent study by O’Connor et al., all reviewers of manuscripts submitted to the American Journal of Neuroradiology from January through June 2015 were surveyed in order to assess whether they were familiar with the research or had knowledge of the authors or institutions from which the work originated. Their survey revealed that reviewers correctly identified the authors in 90.3% of cases and correctly stated the institutions in 86.8% of cases. Unblinding resulted from self-citation in 34.1% for both authorship and institutions. Unsurprisingly, author familiarity and institution familiarity were significantly associated with greater manuscript acceptance (P < 0.038 and P < 0.039, respectively). The studies by Katz et al. and O’Connor et al. underline the responsibility of both authors and journals in ensuring that manuscripts are adequately blinded before being sent out for peer review.
Our study had some limitations. First, it did not compare the validity of different peer review models. A randomized trial has yet to be performed to investigate whether any peer review model is more prone to bias in the medical imaging field. However, the current literature favors double-blinded and open peer reviews over single-blinded peer review models [7, 9,10,11]. There is no reason to assume that this would be different for medical imaging journals. In addition, empirical evidence has already shown the danger of using a reviewer recommendation option on a manuscript submission system. Second, our study did not assess for any temporal changes in peer review practices. As such, it remains unclear whether peer review standards in the medical imaging field have improved according to the above-mentioned insights that have appeared in the recent literature [7, 12]. Nevertheless, our study sets a benchmark which could be used to monitor and to possibly improve upon in the future. Third, metrics of peer review practices were correlated with journal impact factors. However, the impact factor can be influenced and biased by many factors, and extension of the impact factor to the assessment of journal quality may be inappropriate. Fourth, we did not compare peer review practices of journals in the medical imaging field to journals in other areas, because this was beyond the scope of the present study.
In conclusion, single-blinded peer review and the option or obligation to indicate preferred or nonpreferred reviewers are frequently employed by medical imaging journals. Single-blinded review is (weakly) associated with a higher impact factor, also for subspecialty journals. The option or obligation to indicate preferred or nonpreferred reviewers is evenly distributed among journals, regardless of impact factor.
Availability of data and materials
Data and materials are available upon request to the authors.
Abbreviations
SPSS: Statistical Package for the Social Sciences
References
1. Wager E, Godlee F, Jefferson T (2002) How to survive peer review. BMJ Books
2. Kelly J, Sadeghieh T, Adeli K (2014) Peer review in scientific publications: benefits, critiques, and a survival guide. EJIFCC 25:227–243
3. Callaham ML, Tercier J (2007) The relationship of previous training and experience of journal peer reviewers to subsequent review quality. PLoS Med 4:e40
4. 2018 Journal Citation Reports (InCites). https://jcr.clarivate.com/. Accessed 24 April 2020
5. Henderson M (2010) Problems with peer review. BMJ 340:c1409
6. Smith R (2006) Peer review: a flawed process at the heart of science and journals. J R Soc Med 99:178–182
7. Haffar S, Bazerbachi F, Murad MH (2019) Peer review bias: a critical review. Mayo Clin Proc 94:670–676
8. Shaw DM (2015) Blinded by the light: anonymization should be used in peer review to prevent bias, not protect referees. EMBO Rep 16:894–897
9. Okike K, Hug KT, Kocher MS, Leopold SS (2016) Single-blind vs double-blind peer review in the setting of author prestige. JAMA 316:1315–1316
10. Tomkins A, Zhang M, Heavlin WD (2017) Reviewer bias in single- versus double-blind peer review. Proc Natl Acad Sci U S A 114:12708–12713
11. Witteman HO, Hendricks M, Straus S, Tannenbaum C (2019) Are gender gaps due to evaluations of the applicant or the science? A natural experiment at a national funding agency. Lancet 393:531–540
12. Haug CJ (2015) Peer-review fraud – hacking the scientific publication process. N Engl J Med 373:2393–2395
13. Teixeira da Silva JA, Al-Khatib A (2018) Should authors be requested to suggest peer reviewers? Sci Eng Ethics 24:275–285
14. Journal author name estimator. https://jane.biosemantics.org/. Accessed 24 April 2020
15. Siegelman SS (1991) Assassins and zealots: variations in peer review. Special report. Radiology 178:637–642
16. Berquist TH (2012) Peer review: is the process broken? AJR Am J Roentgenol 199:243
17. Berquist TH (2014) Peer review: should we modify our process? AJR Am J Roentgenol 202:463–464
18. Berquist TH (2017) Peer review: is the process broken? AJR Am J Roentgenol 209:1–2
19. Katz DS, Gardner JB, Hoffmann JC et al (2016) Ethical issues in radiology journalism, peer review, and research. AJR Am J Roentgenol 207:820–825
20. Katz DS, Proto AV, Olmsted WW (2002) Incidence and nature of unblinding by authors: our experience at two radiology journals with double-blinded peer review policies. AJR Am J Roentgenol 179:1415–1417
21. O’Connor EE, Cousar M, Lentini JA, Castillo M, Halm K, Zeffiro TA (2017) Efficacy of double-blind peer review in an imaging subspecialty journal. AJNR Am J Neuroradiol 38:230–235
22. Liebeskind DS (2003) The fallacy of double-blinded peer review. AJR Am J Roentgenol 181:1422 (author reply 1422-1423)
23. Kurmis AP (2003) Understanding the limitations of the journal impact factor. J Bone Joint Surg Am 85:2449–2454
No funding was received for this work.
Ethics approval and consent to participate
This study used data available in the public domain and did not concern medical scientific research in which participants or animals were subjected to procedures or were observed. Therefore, it did not require institutional review board approval or informed consent.
Competing interests
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Kwee, T.C., Adams, H.J.A. & Kwee, R.M. Peer review practices by medical imaging journals. Insights Imaging 11, 125 (2020). https://doi.org/10.1186/s13244-020-00921-3
Keywords
- Medical imaging
- Peer review