CLAIM items (N = 29) | Studies reporting, n/N (%) |
---|---|
Overall (excluding item 27) | 961/1508 (63.7) |
Section 1: Title and Abstract | 53/58 (91.4) |
1. Title or abstract—Identification as a study of AI methodology | 29/29 (100.0) |
2. Abstract—Structured summary of study design, methods, results, and conclusions | 24/29 (82.8) |
Section 2: Introduction | 55/87 (63.2) |
3. Background—scientific and clinical background, including the intended use and clinical role of the AI approach | 29/29 (100.0) |
4a. Study objective | 22/29 (75.9) |
4b. Study hypothesis | 4/29 (13.8) |
Section 3: Methods | 700/1044 (67.0) |
5. Study design—Prospective or retrospective study | 29/29 (100.0) |
6. Study design—Study goal, such as model creation, exploratory study, feasibility study, non-inferiority trial | 29/29 (100.0) |
7a. Data—Data source | 29/29 (100.0) |
7b. Data—Data collection institutions | 29/29 (100.0) |
7c. Data—Imaging equipment vendors | 25/29 (86.2) |
7d. Data—Image acquisition parameters | 22/29 (75.9) |
7e. Data—Institutional review board approval | 28/29 (96.6) |
7f. Data—Participant consent | 24/29 (82.8) |
8. Data—Eligibility criteria | 22/29 (75.9) |
9. Data—Data pre-processing steps | 20/29 (69.0) |
10. Data—Selection of data subsets (segmentation of ROI in radiomics studies) | 26/29 (89.7) |
11. Data—Definitions of data elements, with references to Common Data Elements | 29/29 (100.0) |
12. Data—De-identification methods | 3/29 (10.3) |
13. Data—How missing data were handled | 6/29 (20.7) |
14. Ground truth—Definition of ground truth reference standard, in sufficient detail to allow replication | 27/29 (93.1) |
15a. Ground truth—Rationale for choosing the reference standard (if alternatives exist) | 0/29 (0.0) |
15b. Ground truth—Definitive ground truth | 29/29 (100.0) |
16. Ground truth—Manual image annotation | 17/29 (58.6) |
17. Ground truth—Image annotation tools and software | 10/29 (34.5) |
18. Ground truth—Measurement of inter- and intra-rater variability; methods to mitigate variability and/or resolve discrepancies | 9/29 (31.0) |
19a. Data Partitions—Intended sample size and how it was determined | 29/29 (100.0) |
19b. Data Partitions—Provided power calculation | 4/29 (13.8) |
19c. Data Partitions—Distinct study participants | 23/29 (79.3) |
20. Data Partitions—How data were assigned to partitions; specify proportions | 22/29 (75.9) |
21. Data Partitions—Level at which partitions are disjoint (e.g., image, study, patient, institution) | 22/29 (75.9) |
22a. Model—Provided reproducible model description | 21/29 (72.4) |
22b. Model—Provided source code | 0/29 (0.0) |
23. Model—Software libraries, frameworks, and packages | 20/29 (69.0) |
24. Model—Initialization of model parameters (e.g., randomization, transfer learning) | 23/29 (79.3) |
25. Training—Details of training approach, including data augmentation, hyperparameters, number of models trained | 16/29 (55.2) |
26. Training—Method of selecting the final model | 21/29 (72.4) |
27. Training—Ensembling techniques, if applicable (N = 14) | 8/14 (57.1) |
28. Evaluation—Metrics of model performance | 29/29 (100.0) |
29. Evaluation—Statistical measures of significance and uncertainty (e.g., confidence intervals) | 20/29 (69.0) |
30. Evaluation—Robustness or sensitivity analysis | 10/29 (34.5) |
31. Evaluation—Methods for explainability or interpretability (e.g., saliency maps), and how they were validated | 11/29 (37.9) |
32. Evaluation—Validation or testing on external data | 16/29 (55.2) |
Section 4: Results | 90/174 (51.7) |
33. Data—Flow of participants or cases, using a diagram to indicate inclusion and exclusion | 16/29 (55.2) |
34. Data—Demographic and clinical characteristics of cases in each partition | 25/29 (86.2) |
35a. Model performance—Test performance | 16/29 (55.2) |
35b. Model performance—Benchmark of performance | 8/29 (27.6) |
36. Model performance—Estimates of diagnostic accuracy and their precision (such as 95% confidence intervals) | 20/29 (69.0) |
37. Model performance—Failure analysis of incorrectly classified cases | 5/29 (17.2) |
Section 5: Discussion | 57/58 (98.3) |
38. Study limitations, including potential bias, statistical uncertainty, and generalizability | 28/29 (96.6) |
39. Implications for practice, including the intended use and/or clinical role | 29/29 (100.0) |
Section 6: Other information | 6/87 (6.9) |
40. Registration number and name of registry | 0/29 (0.0) |
41. Where the full study protocol can be accessed | 0/29 (0.0) |
42. Sources of funding and other support; role of funders | 6/29 (20.7) |
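Each percentage above is simply the numerator divided by the denominator, rounded to one decimal place. As a minimal consistency check (using a handful of rows copied from the table; the row labels are for illustration only):

```python
# Recompute reported percentages from the raw counts in the table above.
# Each tuple: (label, numerator, denominator, reported percent).
rows = [
    ("Overall (excluding item 27)", 961, 1508, 63.7),
    ("Section 1: Title and Abstract", 53, 58, 91.4),
    ("16. Manual image annotation", 17, 29, 58.6),
    ("27. Ensembling techniques (N = 14)", 8, 14, 57.1),
]

for label, n, total, reported in rows:
    pct = round(100 * n / total, 1)  # percent, one decimal place
    assert pct == reported, f"{label}: computed {pct}, table says {reported}"

print("all checked rows consistent")
```

Note that item 27 uses a smaller denominator (N = 14) because ensembling was applicable to only 14 studies, which is also why it is excluded from the overall total.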