
A European Society of Oncologic Imaging (ESOI) survey on the radiological assessment of response to oncologic treatments in clinical practice

Abstract

Objectives

To present the results of a survey on the assessment of treatment response with imaging in oncologic patients in routine clinical practice. The survey was promoted by the European Society of Oncologic Imaging (ESOI) to gather information for the development of reporting models and recommendations.

Methods

The survey was launched on the European Society of Oncologic Imaging website and remained available for 3 weeks. It consisted of 5 sections comprising 24 questions on the following topics: demographic and professional information, methods for lesion measurement, handling of diminutive lesions, reporting of baseline and follow-up examinations, choice of previous studies for comparison, and the role of RECIST 1.1 criteria in daily clinical practice.

Results

A total of 286 responses were received. Most responders followed the RECIST 1.1 recommendations for the measurement of target lesions and lymph nodes and for the assessment of tumor response. To assess response, 48.6% used the previous and/or best-response study in addition to baseline, 25.2% included the evaluation of all main time points, and 35% used only the previous study as the reference. A considerable number of responders used RECIST 1.1 criteria in daily clinical practice (41.6%) or thought that they should always be applied (60.8%).

Conclusion

Since standardized criteria are mainly a prerogative of clinical trials, reporting strategies in daily routine are left to radiologists and oncologists, who may issue local and diversified recommendations. The survey emphasizes the need for more generally applicable rules for response assessment in clinical practice.

Critical relevance statement

Compared to clinical trials, which use specific criteria to evaluate response to oncological treatments, the free narrative report usually adopted in daily clinical practice may lack clarity and useful information; more structured approaches are therefore needed.

Key points

· Most radiologists consider standardized reporting strategies essential for an objective assessment of tumor response in clinical practice.

· Radiologists increasingly rely on RECIST 1.1 in their daily clinical practice.

· Treatment response evaluation should include a complete analysis of all imaging time points, not only the most recent one.

Graphical Abstract

Introduction

Response evaluation criteria are crucial in the assessment of the efficacy of cancer drugs in clinical trials [1]. Four decades ago, at the dawn of cross-sectional imaging, the World Health Organization (WHO) introduced the first imaging criteria for the assessment of tumor burden, based on the sum of the products of diameters of the target lesions [2]. In 2000, the Response Evaluation Criteria In Solid Tumors (RECIST) working group published new guidelines, RECIST version 1.0 [3], introducing rules to make the evaluation of tumor burden more objective, such as the definitions of the minimum size and the number of measurable lesions per organ. The new criteria also introduced unidimensional measurements, a simplification with respect to the WHO criteria [4]. A revised version, RECIST 1.1, was introduced in 2009 [5]; it incorporates major changes such as a reduction in the number of lesions to assess, a new categorization of lymph nodes based on the short axis, and new recommendations for the assessment of progressive disease. RECIST 1.1 criteria are based upon imaging modalities that are globally available and easily interpretable and are therefore widely used in clinical trials [6, 7].
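The contrast between the two measurement conventions can be made concrete with a short sketch. The thresholds used in the comment (WHO: ≥ 50% decrease in the sum of products for partial response; RECIST: ≥ 30% decrease in the sum of diameters) are the published ones, but the function names and example lesion sizes below are purely illustrative.

```python
# Illustrative sketch of WHO bidimensional vs RECIST unidimensional
# tumor-burden measurement; lesion sizes are invented examples.

def who_burden(lesions):
    """Sum of products of the two perpendicular diameters (mm^2) - WHO."""
    return sum(long * short for long, short in lesions)

def recist_burden(lesions):
    """Sum of the longest diameters only (mm) - RECIST."""
    return sum(long for long, _ in lesions)

def percent_change(baseline, current):
    return 100.0 * (current - baseline) / baseline

# Two target lesions, (long axis, short axis) in mm, before and after therapy
baseline = [(40, 30), (20, 15)]
follow_up = [(28, 20), (14, 10)]

# WHO partial response requires a >= 50% fall in the sum of products;
# RECIST 1.1 partial response requires a >= 30% fall in the sum of diameters.
print(percent_change(who_burden(baseline), who_burden(follow_up)))     # -53.33...
print(percent_change(recist_burden(baseline), recist_burden(follow_up)))  # -30.0
```

With these example sizes, the bidimensional sum falls by 53.3% and the unidimensional sum by 30%, so both conventions would classify the case as a partial response under their respective thresholds while requiring only half as many measurements in the RECIST case.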

Though intended for use in the clinical trial setting, oncologists increasingly rely on RECIST 1.1-based measurements for the clinical management of patients in daily clinical practice as well [8]. The main justifications are (1) the opportunity for a more standardized and structured approach to response assessment and (2) the increased clarity of the radiological report [9, 10]. Indeed, terms such as measurable disease, tumor burden, target lesions, and response categories are now part of the radiologist’s lexicon. However, in clinical practice, reporting strategies are mostly left to the local radiologist and oncologist, who may issue their own set of rules [11].

The need for a standardized approach, and for universally applicable rules for the assessment of response in daily routine, should be considered an important priority for the oncology and oncologic imaging community. The aim of this study was to investigate and compare the opinions and preferences of radiologists with a dominant interest in oncologic imaging on treatment response evaluation in clinical practice, to gather information for the development of reporting models and recommendations.

Materials and methods

To gather the opinions and preferences of radiologists, a survey was developed by an expert panel of members of the European Society of Oncologic Imaging (ESOI) board, composed of a radiology resident and two board-certified radiologists with more than 20 and 6 years of experience in oncologic imaging, respectively, and with expertise in tumor response assessment using RECIST 1.1 and other criteria.

The survey was conducted anonymously and was launched on the ESOI website (www.esoi-society.org). ESOI members were reached on the same day by an email with a link to the survey; a week later, those who had not responded received a reminder, and a final call was sent after 20 days.

The survey consisted of 5 sections with a total of 24 questions. In brief:

  • The first section gathered demographic information (i.e., geographic distribution, age, site of main professional activity, and field of interest of participants).

  • The second section focused on how to measure lesions and lymph nodes at the baseline examination and on how to deal with diminutive lesions.

  • The third section addressed what to report in the baseline examination: which lesions to measure, how to evaluate non-measurable lesions, and which non-oncologic findings should be reported.

  • The fourth section focused on the reporting of follow-up examinations, in particular which of the previous studies should be used as the comparator and how to compare previous findings.

  • The fifth section included questions on the use of specific assessment criteria (mainly RECIST 1.1) in clinical practice, with a focus on responders’ practices and preferences, including perceived advantages and disadvantages of using RECIST 1.1.

A web-based survey tool (Google Forms, Mountain View, CA, USA) was used for data collection. The results were downloaded and processed in Microsoft Excel (Microsoft, Redmond, WA, USA), which was also used for simple descriptive analyses and graphs. Proportions were compared using the “N − 1” chi-squared test [12]; a p-value < 0.05 was considered statistically significant.
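As a minimal sketch of the comparison used above: for a 2 × 2 table, the “N − 1” chi-squared statistic of Campbell [12] is Pearson’s chi-squared with the leading N replaced by N − 1. The function name and the cell counts in the example are invented for illustration.

```python
import math

def n_minus_1_chi_squared(a, b, c, d):
    """'N - 1' chi-squared test for a 2x2 table [[a, b], [c, d]].

    Returns (statistic, two-sided p-value). The statistic is Pearson's
    chi-squared with the leading N replaced by N - 1 (Campbell, 2007).
    """
    n = a + b + c + d
    stat = (n - 1) * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # With 1 degree of freedom, chi-squared is the square of a standard
    # normal, so the upper tail probability is erfc(sqrt(stat / 2)).
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical dichotomized survey counts: yes/no answers by institution type
stat, p = n_minus_1_chi_squared(50, 50, 30, 70)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```

The N − 1 correction slightly shrinks the statistic relative to Pearson’s test, which Campbell showed gives better type I error control for small samples than either the uncorrected or the Yates-corrected test.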

Results

Two hundred eighty-six completed forms were received and evaluated. The answers to the 19 questions of Sects. 2–5 are reported in the supplementary material; demographic and professional information (Sect. 1, 5 questions) is summarized in Table 1.

Table 1 Demographic and professional information (286 responders)

In brief, most responders were from Europe (n = 199; 69.6%) and approximately half were aged between 35 and 50 (n = 145; 50.7%). Most responders had a working experience of more than 10 years (n = 171; 60%). The fields of interest of responders are summarized in Fig. 1.

Fig. 1

Main field(s) of oncologic imaging involvement of the responders. More than one response was allowed

Baseline assessment

Figure 2 summarizes the preferred measurement criteria for organ lesions and lymph nodes. Mimicking the RECIST 1.1 criteria, most responders measured only the main lesions (n = 182; 63.6%), preferably two per organ when present (132 of 182; 64.7%). Approximately half of the responders (n = 145; 50.6%) reported measuring only the short axis of lymph nodes.

Fig. 2

Radiologists’ opinions on measurement of lesions (a) and lymph nodes (b) at baseline in clinical practice

Non-measurable lesions (e.g., pleural effusion, abdominal fluid collection, peritoneal carcinomatosis) were mainly assessed qualitatively (n = 176; 61.5%); however, a minority of responders (n = 103; 36%) preferred a quantitative evaluation whenever possible.

Concerning diminutive lesions, i.e., lesions with a diameter below a predefined threshold, more than half of the responders replied that they did not measure small lesions (n = 72; 25.2% with a threshold of ≤ 5 mm, and n = 112; 39.2% with a threshold of ≤ 3 mm) but would mention them in the report (n = 168; 58.7%).

Most responders (n = 179; 62.6%) affirmed that they would report all non-oncologic findings at the baseline examination, including benign and non-clinically significant ones (e.g., hepatic or renal cysts), while a minority (n = 103; 36%) would report only the clinically significant findings.

The large majority of responders reported tumor measurements in the text of a narrative report (n = 231; 80.8%). Measurements were transferred through hyperlinks together with the images by 13.6% (n = 39) of responders, while 15.7% (n = 45) adopted a structured report.

Follow-up examination

Figure 3 shows responders’ opinions on which previous time point they use for comparison in real-world assessment. Of note, the largest group of responders (100 of 286; 35%) replied that they would compare findings only with the previous examination. For measurable lesions, nearly all responders reported measuring the same lesions as at baseline (n = 249; 87.1%).

Fig. 3

Percentage of responders selecting each previous time point for comparison with the findings annotated in the baseline examination

For non-measurable lesions, 46.2% (n = 132) of participants would continue with qualitative assessment only; conversely, 40.6% (n = 116) would perform an objective measurement when feasible. With reference to non-oncologic imaging findings, almost half of the responders (n = 137; 47.9%) replied that they would report them only in case of significant changes, with a summary sentence in all other cases (e.g., “all other findings are unchanged”). Moreover, nearly all responders (n = 245; 85.7%) conclude the report with their personal impression of the response to treatment.

Use of RECIST 1.1 in clinical practice

Table 2 reports the frequency of RECIST 1.1 use in real-world assessment. Overall, 74.1% of responders use RECIST 1.1 in their clinical practice, either always or in specific cases. Of note, responders from research institutions use imaging criteria significantly less than responders from the remaining institutions. Table 3 summarizes responders’ opinions on whether RECIST 1.1 should be used outside clinical trials. The overall rate of positive replies was 87.4%. In this case as well, responders from research institutions had a less favorable impression of the use of RECIST. Moreover, in reply to a specific question, 71% (n = 203) of responders stated that the oncologists in their institution consider response evaluation with RECIST 1.1 useful also in clinical practice.

Table 2 Responses to question 20 [Do you apply RECIST 1.1 criteria for response evaluation in clinical practice (not in clinical trials)?]. Results have been dichotomized according to geographic regions (Europe vs other countries), working experience (< 10 years vs > 10 years), and type of institutions (research facilities—university hospital and research institute vs other institutes). A p-value < 0.05 was considered statistically significant
Table 3 Responses to question 21 (Do you think RECIST 1.1 criteria should be applied in clinical practice and not only in clinical trials?). Results have been dichotomized according to geographic regions (Europe vs other countries), working experience (< 10 years vs > 10 years), and type of institutions (research facilities—university hospital and research institute vs other institutes). A p-value < 0.05 was considered statistically significant

Figure 4a and b report on the advantages and the disadvantages of reporting with RECIST 1.1 in clinical practice, respectively. Most responders consider the increased standardization with respect to the conventional report as the most important advantage. Conversely, most responders consider the use of RECIST 1.1 more time-consuming with respect to the narrative report.

Fig. 4

The charts show the perceived advantages (a) and disadvantages (b) of reporting with RECIST 1.1 criteria

Discussion

Assessment of treatment response represents an important crossroad for the oncologic patient, as it determines whether a specific drug, or combination of drugs, is effective. Within clinical trials, tumor response assessment relies primarily on the extent to which the sum of diameters of target lesions changes over time. Several imaging criteria have been developed for this purpose [13,14,15,16,17,18], RECIST 1.1 [5] being the most common. In contrast to the clinical trial environment, where patients are carefully monitored, in daily clinical practice the decision on whether to continue an oncologic treatment is left to the local multidisciplinary teams or to oncologists, who base their decision largely on their experience after gathering all useful clinical and imaging information. Cross-sectional imaging narrative reports usually provide fundamental information for decision-making. Unfortunately, narrative reports may lack standardization and clarity, and since recommendations are largely missing in this context, radiologists usually take a personal approach to reporting [19, 20]. Indeed, there is some evidence that narrative reports may not be as accurate as RECIST criteria in the assessment of response to treatment in clinical practice [21,22,23]. Feinberg et al. showed that narrative reports were associated with an overestimation of treatment response in comparison to RECIST 1.1 among patients with complete response [21]. Schomburg et al. compared the free-text report with a response assessment based on iRECIST criteria in 50 patients with metastatic renal cell carcinoma, finding only moderate agreement between the two modalities (kappa 0.38 to 0.70), with new lesions frequently not recognized in free text [22]. These works underline the need for more standardized radiological criteria in daily clinical practice.

This survey was performed to gather information on radiologists’ practice in reporting cross-sectional imaging examinations of patients with advanced disease treated with cancer drugs in daily routine, with the goal of developing a common and shareable approach to the assessment of response to treatment. When preparing the questionnaire, the expert panel was aware that, since each cancer patient has a different history, conclusions drawn from the results would be difficult to generalize. The survey also aims to highlight the differences and critical issues faced by radiologists working in different institutions and with different professional backgrounds.

The first important observation is that most responders are inclined to follow RECIST 1.1 rules, with some notable exceptions. A slightly higher number of responders (39.1% vs 36%) preferred measuring two dimensions over the lesion maximum diameter only, even though the former is more time-consuming [24, 25]. We hypothesize that responders are more confident measuring both long and short axes because they feel the measurements are more representative of the tumor, which often has an oblong and/or irregular shape. Only 7 responders (2.4%) suggested measuring lesion volume, which is certainly more accurate but time-consuming and not yet validated, unless dedicated segmentation software is available [26]. The short axis was considered the most reliable lymph node measurement by half of the responders. However, 33.6% of responders still prefer measuring both long and short axes of lymph nodes, probably because this method has been deemed more appropriate in some circumstances, for example in lymphoma assessment [15] or in predicting metastatic lymph nodes in gastric cancer [27]. Responders were not asked to define a cut-off measure for lymph nodes, as size criteria depend on the lesion site. For example, inguinal lymph nodes may be considered pathological when the short axis is 15 mm or more, while in other sites, e.g., the mesorectum or mesentery, the size cut-off is definitely smaller [28]. Moreover, morphology, shape, and borders may also be relevant in lymph node assessment [28].

Most radiologists measure lesions and lymph nodes on the axial plane, as recommended by RECIST 1.1. Following RECIST 1.1, most responders suggest measuring only the main lesions within each organ (63.6%) and a maximum of two lesions per organ (64.7%), mirroring the selection criteria for target lesions, and qualitatively evaluating diminutive and non-measurable findings, following the rules for non-target lesions.

According to RECIST 1.1 criteria, the baseline examination must be used as the comparator to define stable disease, partial response, or complete response, and the nadir should be the reference to evaluate disease progression. Interestingly, when deciding which of the previous studies should be considered the comparator, responders expressed different, sometimes controversial, opinions. About one third of responders affirmed that comparison should be performed only with the previous examination. In this regard, it must be noted that Weber et al. [11] developed a structured reporting concept for general follow-up assessment of cancer patients in clinical routine, based on RECIST 1.1 principles but including only the prior tumor measurements, a limitation that the authors acknowledged in their paper. In the authors’ opinion, even in daily routine a proper tumor response evaluation requires a careful comparison not only between the current and prior examinations but also with older examinations, to avoid evaluation errors, as in the case of slowly growing lesions. One of the reasons why RECIST 1.1 response rules are difficult to apply in routine practice is that the patient history is not always readily available, and collecting clinical and imaging data is a hard and time-consuming process, especially when electronic health records are not readily accessible. Collaboration between radiologists and referring oncologists is therefore mandatory, and we recommend that controversial cases be discussed within a multidisciplinary framework.
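The slow-growth pitfall can be sketched with a minimal response classifier. The thresholds follow the published RECIST 1.1 rules for target lesions (disappearance for CR, ≥ 30% decrease from baseline for PR, ≥ 20% and ≥ 5 mm absolute increase from nadir for PD), but the function itself is a simplification and the example sums of diameters are invented.

```python
def classify(baseline, nadir, current, new_lesions=False):
    """Simplified RECIST 1.1 target-lesion response (sums of diameters, mm)."""
    if new_lesions:
        return "PD"  # new lesions always mean progression
    if current == 0:
        return "CR"  # all target lesions disappeared
    if current >= 1.2 * nadir and current - nadir >= 5:
        return "PD"  # >= 20% and >= 5 mm absolute increase from nadir
    if current <= 0.7 * baseline:
        return "PR"  # >= 30% decrease from baseline
    return "SD"

# A slowly growing tumor: each scan is < 20% larger than the previous one,
# so comparing only consecutive scans never crosses the progression threshold...
sums = [50.0, 54.0, 58.0, 63.0, 68.0]
print(all(cur < 1.2 * prev for prev, cur in zip(sums, sums[1:])))  # True

# ...but against the nadir (50 mm) the last scan is a 36% increase: PD.
print(classify(baseline=50.0, nadir=50.0, current=68.0))  # PD
```

The example mirrors the argument above: with pairwise comparison each step looks like stable disease, while comparison against the nadir reveals progression.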

In this survey, a substantial number of responders (41.6%) declared that they systematically use RECIST 1.1 criteria in clinical practice. Moreover, an even higher percentage (60.8%) believed that RECIST 1.1 should always be applied in clinical practice. This finding reflects the highly selected and motivated population of professionals who responded to this survey, mainly imaging specialists involved in oncologic reporting. Interestingly, however, responders from research institutions use RECIST criteria less frequently than those working in other health facilities and are less inclined to believe they should be used in real-world assessment. A non-negligible percentage of responders (32.5%) use RECIST 1.1 criteria in clinical practice only in specific cases. In free-text answers, some responders affirmed that they prefer using RECIST 1.1 criteria in patients with mixed response, although the latter represents one of the well-known limitations of the criteria themselves. Other reasons that drive radiologists to use RECIST 1.1 in daily routine are discrepancies between imaging and clinical data, or cases with a high tumor burden involving different organs, where a qualitative assessment can be difficult or misleading.

According to this survey, the main strengths of RECIST 1.1 are increased standardization, clarity, and improved communication with the oncologist. Conversely, the main concern of responders is the increased reporting time (68.5%). Of note, a minority of responders (16%) believe that the process is less time-consuming. This preference might depend on radiologists’ experience and on the availability of specialized software providing lesion identification and annotation and allowing retrieval of target lesions from previous time points for comparison. Such tools can create automatic reports with visual disease timelines and reduce errors through automatic checks when specific criteria are applied, leading to a reduction of reporting time [29].

The main limitation of this study is the specific target population that was addressed, i.e., members of the European Society of Oncologic Imaging, most of whom are imaging doctors with an interest in oncologic imaging. General radiologists or oncologists might have different perspectives. Furthermore, responders represent only a small proportion of the ESOI community, which might have biased the results.

Conclusion

To overcome the lack of rules, responders suggest using either RECIST or personal criteria, usually a combination of unidimensional and bidimensional measurements of the most significant target lesions. Unlike RECIST, many responders suggest comparing the latest time point with the previous study, instead of with the baseline and nadir. A major concern of responders is that structured reporting is more time-consuming than a narrative report; this can be mitigated by specialized software. In conclusion, based on this survey, we believe it is important to define rules for the assessment of tumor response in clinical practice. The broader oncology community should take charge of their implementation.

Availability of data and materials

All data generated or analyzed during this study are included in this published article or in the supplementary material.

Abbreviations

ESOI:

European Society of Oncologic Imaging

RECIST:

Response Evaluation Criteria in Solid Tumors

WHO:

World Health Organization

References

  1. Grimaldi S, Terroir M, Caramella C (2018) Advances in oncological treatment: limitations of RECIST 1.1 criteria. Q J Nucl Med Mol Imaging 62(2):129–139. https://doi.org/10.23736/S1824-4785.17.03038-2


  2. World Health Organization (1979) WHO handbook for reporting results of cancer treatment. World Health Organization. Accessed 18 June 2023. Available: https://apps.who.int/iris/handle/10665/37200

  3. Therasse P, Arbuck SG, Eisenhauer EA et al (2000) New guidelines to evaluate the response to treatment in solid tumors European Organization for Research and Treatment of Cancer, National Cancer Institute of the United States, National Cancer Institute of Canada. J Natl Cancer Inst 92(3):205–216. https://doi.org/10.1093/jnci/92.3.205


  4. Jang GS, Kim MJ, Ha HI et al (2013) Comparison of RECIST version 1.0 and 1.1 in assessment of tumor response by computed tomography in advanced gastric cancer. Chin J Cancer Res. 25(6):689–694. https://doi.org/10.3978/j.issn.1000-9604.2013.11.09


  5. Eisenhauer EA, Therasse P, Bogaerts J et al (2009) New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer 45(2):228–247. https://doi.org/10.1016/j.ejca.2008.10.026

  6. Schwartz LH, Litière S, de Vries E et al (2016) RECIST 1.1-update and clarification: from the RECIST committee. Eur J Cancer 62:132–137. https://doi.org/10.1016/j.ejca.2016.03.081


  7. Fournier L, de Geus-Oei LF, Regge D et al (2021) Twenty years on: RECIST as a biomarker of response in solid tumours an EORTC imaging group - ESOI joint paper. Front Oncol 11:800547. https://doi.org/10.3389/fonc.2021.800547

  8. Sohaib A (2014) Response assessment in daily practice: RECIST and its modifications. Cancer Imaging 14(S1):O35. https://doi.org/10.1186/1470-7330-14-S1-O35


  9. Schwartz LH, Panicek DM, Berk AR, Li Y, Hricak H (2011) Improving communication of diagnostic radiology findings through structured reporting. Radiology 260(1):174–181. https://doi.org/10.1148/radiol.11101913


  10. Wibmer A, Vargas HA, Sosa R, Zheng J, Moskowitz C, Hricak H (2014) Value of a standardized lexicon for reporting levels of diagnostic certainty in prostate MRI. AJR Am J Roentgenol 203(6):W651–657. https://doi.org/10.2214/AJR.14.12654


  11. Weber TF, Spurny M, Hasse FC et al (2020) Improving radiologic communication in oncology: a single-centre experience with structured reporting for cancer patients. Insights Imaging 11(1):106. https://doi.org/10.1186/s13244-020-00907-1


  12. Campbell I (2007) Chi-squared and Fisher-Irwin tests of two-by-two tables with small sample recommendations. Stat Med 26(19):3661–3675. https://doi.org/10.1002/sim.2832


  13. Choi H, Charnsangavej C, Faria SC et al (2007) Correlation of computed tomography and positron emission tomography in patients with metastatic gastrointestinal stromal tumor treated at a single institution with imatinib mesylate: proposal of new computed tomography response criteria. J Clin Oncol 25(13):1753–1759. https://doi.org/10.1200/JCO.2006.07.3049


  14. Lencioni R, Llovet JM (2010) Modified RECIST (mRECIST) assessment for hepatocellular carcinoma. Semin Liver Dis 30(1):52–60. https://doi.org/10.1055/s-0030-1247132


  15. Cheson BD, Fisher RI, Barrington SF et al (2014) Recommendations for initial evaluation, staging, and response assessment of Hodgkin and non-Hodgkin lymphoma: the Lugano classification. J Clin Oncol 32(27):3059–3068. https://doi.org/10.1200/JCO.2013.54.8800


  16. Cheson BD, Ansell S, Schwartz L et al (2016) Refinement of the Lugano Classification lymphoma response criteria in the era of immunomodulatory therapy. Blood 128(21):2489–2496. https://doi.org/10.1182/blood-2016-05-718528


  17. Younes A, Hilden P, Coiffier B et al (2017) International Working Group consensus response evaluation criteria in lymphoma (RECIL 2017). Ann Oncol 28(7):1436–1447. https://doi.org/10.1093/annonc/mdx097


  18. Seymour L, Bogaerts J, Perrone A et al (2017) iRECIST: guidelines for response criteria for use in trials testing immunotherapeutics. Lancet Oncol 18(3):e143–e152. https://doi.org/10.1016/S1470-2045(17)30074-8


  19. Franconeri A, Fang J, Carney B et al (2018) Structured vs narrative reporting of pelvic MRI for fibroids: clarity and impact on treatment planning. Eur Radiol 28(7):3009–3017. https://doi.org/10.1007/s00330-017-5161-9


  20. Park SB, Kim MJ, Ko Y et al (2019) Structured reporting versus free-text reporting for appendiceal computed tomography in adolescents and young adults: preference survey of 594 referring physicians, surgeons, and radiologists from 20 hospitals. Korean J Radiol 20(2):246–255. https://doi.org/10.3348/kjr.2018.0109


  21. Feinberg BA, Zettler ME, Klink AJ, Lee CH, Gajra A, Kish JK (2021) Comparison of solid tumor treatment response observed in clinical practice with response reported in clinical trials. JAMA Netw Open 4(2):e2036741. https://doi.org/10.1001/jamanetworkopen.2020.36741


  22. Schomburg L, Malouhi A, Grimm MO et al (2022) iRECIST-based versus non-standardized free text reporting of CT scans for monitoring metastatic renal cell carcinoma: a retrospective comparison. J Cancer Res Clin Oncol 148(8):2003–2012. https://doi.org/10.1007/s00432-022-03997-0


  23. Goebel J, Hoischen J, Gramsch C et al (2017) Tumor response assessment: comparison between unstructured free text reporting in routine clinical workflow and computer-aided evaluation based on RECIST 1.1 criteria. J Cancer Res Clin Oncol. 143(12):2527–2533. https://doi.org/10.1007/s00432-017-2488-1


  24. James K, Eisenhauer E, Christian M et al (1999) Measuring response in solid tumors: unidimensional versus bidimensional measurement. J Natl Cancer Inst 91(6):523–528. https://doi.org/10.1093/jnci/91.6.523


  25. Cortes J, Rodriguez J, Diaz-Gonzalez JA et al (2002) Comparison of unidimensional and bidimensional measurements in metastatic non-small cell lung cancer. Br J Cancer 87(2):158–160. https://doi.org/10.1038/sj.bjc.6600449


  26. Bi WL, Hosny A, Schabath MB et al (2019) Artificial intelligence in cancer imaging: clinical challenges and applications. CA Cancer J Clin 69(2):127–157. https://doi.org/10.3322/caac.21552

  27. Lee SL, Lee HH, Ku YM, Jeon HM (2015) Usefulness of two-dimensional values measured using preoperative multidetector computed tomography in predicting lymph node metastasis of gastric cancer. Ann Surg Oncol 22(Suppl 3):S786–793. https://doi.org/10.1245/s10434-015-4621-1


  28. Elsholtz FHJ, Asbach P, Haas M et al (2021) Introducing the Node Reporting and Data System 1.0 (Node-RADS): a concept for standardized assessment of lymph nodes in cancer. Eur Radiol. 31(8):6116–6124. https://doi.org/10.1007/s00330-020-07572-4


  29. Sevenster M, Travis AR, Ganesh RK et al (2015) Improved efficiency in clinical workflow of reporting measured oncology lesions via PACS-integrated lesion tracking tool. AJR Am J Roentgenol 204(3):576–583. https://doi.org/10.2214/AJR.14.12915



Funding

 The research leading to these results has received funding from AIRC under 5 per Mille 2018 - ID.21091 project – P.I. Bardelli Alberto, G.L. Regge Daniele.

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization, G.C. and D.R.; methodology, G.C., V.R, E.N., L.F., M.D., A.L., G.A.Z., R.G.H.B., H.S., and D.R.; software, V.R.; validation, G.C. and V.R.; formal analysis, V.R.; investigation, G.C. and V.R.; data curation, G.C. and V.R.; writing—original draft preparation, G.C. and V.R.; writing—review and editing, E.N., L.F., M.D., A.L., G.A.Z., R.G.H.B., and H.S.; visualization, G.C. and V.R.; supervision, D.R. and G.C.; project administration, D.R.

All authors read and approved the final manuscript.

Corresponding author

Correspondence to Giovanni Cappello.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

R.G.H.B. and H.S. are members of the Insights into Imaging Advisory Editorial Board. They have not taken part in the review or selection process of this article.

The other authors declare that they have no competing interests.


Supplementary Information

Additional file 1. 

Supplementary Tables.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Cappello, G., Romano, V., Neri, E. et al. A European Society of Oncologic Imaging (ESOI) survey on the radiological assessment of response to oncologic treatments in clinical practice. Insights Imaging 14, 220 (2023). https://doi.org/10.1186/s13244-023-01568-6
