Table 3 Reporting frequencies of individual STARD items for all studies, and comparison of reporting frequencies between articles published while STARD was recommended (2015) and after it became mandatory (2019) in Radiology

From: Has the quality of reporting improved since it became mandatory to use the Standards for Reporting Diagnostic Accuracy?

| STARD item No. | Item description | All articles (n = 66), % | Articles published in 2015 (n = 39), % | Articles published in 2019 (n = 27), % |
| --- | --- | --- | --- | --- |
| Title or abstract | | | | |
| 1 | Identification as a study of diagnostic accuracy using at least one measure of accuracy (such as sensitivity, specificity, predictive values or AUC) | 100 (n = 66) | 100 (n = 39) | 100 (n = 27) |
| 2* | Structured summary of study design, methods, results and conclusions (for specific guidance, see STARD for Abstracts) | 100 (n = 66) | 100 (n = 39) | 100 (n = 27) |
| Introduction | | | | |
| 3* | Scientific and clinical background, including the intended use and clinical role of the index test | 100 (n = 66) | 100 (n = 39) | 100 (n = 27) |
| 4* | Study objectives and hypotheses | 42 (n = 28) | 31 (n = 12) | 59 (n = 16) |
| Methods | | | | |
| 5 | Whether data collection was planned before the index test and reference standard were performed (prospective study) or after (retrospective study) | 92 (n = 61) | 87 (n = 34) | 100 (n = 27) |
| 6 | Eligibility criteria | 83 (n = 55) | 82 (n = 32) | 85 (n = 23) |
| 7 | On what basis potentially eligible participants were identified (such as symptoms, results from previous tests, and inclusion in registry) | 97 (n = 64) | 97 (n = 38) | 96 (n = 26) |
| 8 | Where and when potentially eligible participants were identified (setting, location and dates) | 59 (n = 39) | 56 (n = 22) | 63 (n = 17) |
| 9 | Whether participants formed a consecutive, random or convenience series | 71 (n = 47) | 56 (n = 22) | 93 (n = 25) |
| 10a | Index test, in sufficient detail to allow replication | 100 (n = 66) | 100 (n = 39) | 100 (n = 27) |
| 10b | Reference standard, in sufficient detail to allow replication | 62 (n = 41) | 62 (n = 24) | 63 (n = 17) |
| 12a | Definition of and rationale for test positivity cutoffs or result categories of the index test, distinguishing prespecified from exploratory | 64 (n = 42) | 59 (n = 23) | 70 (n = 19) |
| 12b | Definition of and rationale for test positivity cutoffs or result categories of the reference standard, distinguishing prespecified from exploratory | 35 (n = 23) | 38 (n = 15) | 30 (n = 8) |
| 13a | Whether clinical information and reference standard results were available to the performers or readers of the index test | 74 (n = 49) | 72 (n = 28) | 78 (n = 21) |
| 13b | Whether clinical information and index test results were available to the assessors of the reference standard | 30 (n = 20) | 26 (n = 10) | 37 (n = 10) |
| 14 | Methods for estimating or comparing measures of diagnostic accuracy | 64 (n = 42) | 67 (n = 26) | 59 (n = 16) |
| 15 | How indeterminate index test or reference standard results were handled | 26 (n = 17) | 28 (n = 11) | 22 (n = 6) |
| 16 | How missing data on the index test and reference standard were handled | 27 (n = 18) | 31 (n = 12) | 22 (n = 6) |
| 17 | Any analyses of variability in diagnostic accuracy, distinguishing prespecified from exploratory | 73 (n = 48) | 69 (n = 27) | 78 (n = 21) |
| 18* | Intended sample size and how it was determined | 8 (n = 5) | 5 (n = 2) | 11 (n = 3) |
| Results | | | | |
| 19 | Flow of participants, using a diagram | 62 (n = 41) | 38 (n = 15) | 96 (n = 26) |
| 20 | Baseline demographic and clinical characteristics of participants | 74 (n = 49) | 62 (n = 24) | 93 (n = 25) |
| 21a | Distribution of severity of disease in those with the target condition | 88 (n = 58) | 90 (n = 35) | 85 (n = 23) |
| 21b | Distribution of alternative diagnoses in those without the target condition | 65 (n = 43) | 59 (n = 23) | 74 (n = 20) |
| 22 | Time interval and any clinical interventions between index test and reference standard | 52 (n = 34) | 56 (n = 22) | 44 (n = 12) |
| 23 | Cross-tabulation of the index test results (or their distribution) by the results of the reference standard | 8 (n = 5) | 5 (n = 2) | 11 (n = 3) |
| 24 | Estimates of diagnostic accuracy and their precision (such as 95% CIs) | 97 (n = 64) | 95 (n = 37) | 100 (n = 27) |
| 25 | Any adverse events from performing the index test or the reference standard | 5 (n = 3) | 3 (n = 1) | 7 (n = 2) |
| Discussion | | | | |
| 26* | Study limitations, including sources of potential bias, statistical uncertainty and generalizability | 88 (n = 58) | 82 (n = 32) | 96 (n = 26) |
| 27* | Implications for practice, including the intended use and clinical role of the index test | 100 (n = 66) | 100 (n = 39) | 100 (n = 27) |
| Other information | | | | |
| 28* | Registration number and name of registry | 9 (n = 6) | 3 (n = 1) | 19 (n = 5) |
| 29* | Where the full study protocol can be accessed | 62 (n = 41) | 59 (n = 23) | 67 (n = 18) |
| 30* | Sources of funding and other support; role of funders | 100 (n = 66) | 100 (n = 39) | 100 (n = 27) |

STARD, Standards for Reporting Diagnostic Accuracy; item No., item number; AUC, area under the curve
*Indicates new STARD 2015 items
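The percentages in each column are simply the number of articles reporting an item divided by the number of articles in that column (66, 39 or 27), rounded to the nearest whole percent. A minimal Python sketch of that arithmetic, using STARD item 4 as a worked example (the counts are taken from the table; the function name and rounding convention are illustrative assumptions, not from the source):

```python
def reporting_frequency(n_reported: int, n_articles: int) -> int:
    """Reporting frequency as a whole-number percentage: n reported / n articles."""
    return round(100 * n_reported / n_articles)

# STARD item 4 ("Study objectives and hypotheses"), counts from Table 3
print(reporting_frequency(28, 66))  # all articles  -> 42
print(reporting_frequency(12, 39))  # 2015 articles -> 31
print(reporting_frequency(16, 27))  # 2019 articles -> 59
```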