
Development and validation of core entrustable professional activities for abdominal radiology

Abstract

Objectives

To develop and validate European entrustable professional activities (EPAs) for sub-specialised hepatobiliary and gastrointestinal (HB/GI) diagnostic imaging.

Materials and methods

Both the European Society of Radiology and national curricula in HB/GI diagnostic radiology were thoroughly reviewed, resulting in preliminary EPAs drafted by a pilot group of expert radiologists from 2 different countries. Each EPA was fully described with 7 components (Specification/limitations; Potential risks of failing; Relevant domains of competence; Required experience, knowledge, skills, attitude and behaviour; Assessment information sources to assess progress and ground a summative entrustment decision; Entrustment for which level of supervision is to be reached; and Expiration date). The modified Delphi method with 3 Delphi rounds was chosen for validation. Content validity index (CVI) and median values were used for validation.

Results

There were 15 preliminary EPAs, some of them divided according to 2 levels: resident and fellow. The 37 members of the Delphi group were based in 2 different European countries, with background experience spanning 10 represented countries. After the first Delphi round, 6 EPAs were accepted (CVI ≥ 0.8, median ≥ 4), 6 needed major revisions (CVI 0.7–0.79, median ≥ 4), 3 were rejected (CVI < 0.7) and 1 was added. After the second Delphi round, both the 6 revised EPAs and the additional one met the validation criteria (CVI ≥ 0.8, median ≥ 4). Finally, 13 EPAs were validated during the 3rd Delphi round with an agreement percentage of 95–100%.

Conclusion

This study developed and validated a set of 13 EPAs for sub-specialised HB/GI diagnostic imaging.

Critical relevance statement

Thirteen EPAs for sub-specialised hepatobiliary and gastrointestinal diagnostic imaging were created with a robust methodology; as the first such set in sub-specialised diagnostic imaging, they provide a template for the creation of others.

Key points

• Competence-based teaching in medical education has recently been reintroduced through EPAs.

• Thirteen EPAs have been developed for hepatobiliary and gastrointestinal sub-specialised diagnostic imaging.

• These EPAs were validated using a modified Delphi method and provide a template for others to be created.

Introduction

Health professional education is mainly based on a learning and knowledge curriculum. The number of publications in the field of radiology has increased exponentially over the past 40 years, and the knowledge curriculum of radiology trainees has widened accordingly. However, learning in the workplace must remain part of the trainee’s curriculum. Connecting knowledge and competencies is necessary to optimise medical curricula.

Introducing competency-based education into a trainee curriculum can prove confusing, and a proper definition is crucial to translate it into daily practice. For this reason, in 2005 Olle ten Cate introduced the concept of entrustable professional activities (EPAs) in medical training to help programme directors and supervisors determine the competence of their trainees [1].

An EPA is a task and/or set of responsibilities that supervisors entrust and delegate to a trainee to carry out without supervision, once adequate competence has been obtained [1, 2]. An EPA is a whole unit of professional practice encompassing several competencies. It must not be confused with a single isolated task (e.g. “perform an MRI with a hepatospecific contrast agent”). A full EPA requires 7 components [2,3,4]:

  1. Specification and limitations

  2. Potential risks of failing

  3. Most relevant domains of competence

  4. Required experience, knowledge, skills, attitude and behaviour

  5. Assessment information sources to assess progress and ground a summative entrustment decision

  6. Entrustment for which level of supervision is to be reached at which stage of training

  7. Expiration date

EPAs are an emerging concept and have recently been developed in fields such as anaesthesiology and intensive care [5, 6], but rarely in sub-specialised diagnostic imaging. This is probably due to the difficulty of defining and assessing competencies in purely diagnostic work. In this article, we developed and validated a set of EPAs for hepatobiliary and gastrointestinal diagnostic imaging, using a modified Delphi study based on the method for EPA development previously described by ten Cate and Hennus [6].

Materials and methods

Development of preliminary EPAs

A pilot group of 4 expert radiologists from two European countries (2 professors and 2 consultants, all specialised in hepatobiliary and gastrointestinal radiology) conducted a thorough review of the ESR radiology trainee curriculum. To complement this review, the national radiology trainee curricula of two European countries (France and Ireland) were also reviewed. From these reviews, they identified competencies and created the preliminary set of EPAs. A title and specifications/limitations were defined for each EPA, together with domains of competence, knowledge, skills, attitudes and assessment methods. A preliminary list of 15 EPAs was created, each fully described according to the 7 components listed in the Introduction [4]. When appropriate, EPA content was divided into resident and fellow levels. As explained by ten Cate [4], the conditions for entrustment decisions should guide training activities; thus, each EPA must also specify the expected experience. The modified Delphi method was then used to reach group consensus and to collect expert opinion [7].

First Delphi round

One month prior to the first Delphi round, all potential panel participants were contacted and invited to a presentation of the project. Participants were carefully selected for the Delphi group according to the recommendations for the creation of a Delphi panel [7,8,9,10,11]. A wide and representative range of participants from two European countries was contacted, including professors, consultants, fellows and residents. A preparatory video-conference session was held, and an explanatory email and reference articles on EPAs were sent to participants. The one-month lead time was chosen so that all participants had enough time to contact the pilot group for further explanation if needed.

An online survey was developed for the first Delphi round [12]. An example of the survey can be seen in Additional file 1. The survey was tested by three radiologists from the two countries (one professor, one consultant and one resident) to ensure (i) the clarity and format of the questions and (ii) that the questions were comprehensive enough for an appropriate answer to be given. These three radiologists did not take part in the Delphi rounds. Some minor textual revisions were made following the survey testing. The electronic survey was then sent to 48 stakeholders, along with a second detailed email repeating the information on EPAs.

Panel members were asked to score each EPA for “indispensability”, “comprehensiveness/clarity” and “completeness” on a 5-point Likert scale: 5, Strongly agree; 4, Agree; 3, Neither agree nor disagree; 2, Disagree; 1, Strongly disagree. This 5-point scale was chosen so that the mid-point represents a neutral response, and the digits 1 to 5 were displayed alongside the words to avoid misinterpretation [13]. After each answer, an open text box was available for additional suggestions, including suggestions for an additional EPA. In addition, all preliminarily listed knowledge, skills and attitudes were approved point by point on a 2-point binary scale: 2, approve; 1, disapprove. Again, open text boxes were provided after each section for free-text suggestions. The final part of the survey asked for the number of successfully completed examinations required for entrustment (open box for numbers), the expected level of supervision (scale from 1 to 5) at both resident and fellow levels, and the expiration date (open box for numbers). The levels of supervision are as follows (a minimal sketch of how one panellist’s answers can be modelled is given after the list):

  • Level 1: Not allowed to practice EPA

  • Level 2: Allowed to practice EPA only under proactive, full supervision

  • Level 3: Allowed to practice EPA only under reactive/on-demand supervision

  • Level 4: Allowed to practice EPA unsupervised

  • Level 5: Allowed to supervise others in practice of EPA
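
To make the survey structure concrete, the following minimal sketch models one panellist’s answers for a single EPA. This is purely illustrative: Python is assumed as the language, and all type and field names (e.g. Supervision, EPAResponse) are hypothetical, not part of the study’s materials.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Dict

class Supervision(IntEnum):
    """The 5 supervision levels listed above."""
    NOT_ALLOWED = 1            # level 1: not allowed to practice the EPA
    FULL_SUPERVISION = 2       # level 2: proactive, full supervision only
    ON_DEMAND_SUPERVISION = 3  # level 3: reactive/on-demand supervision only
    UNSUPERVISED = 4           # level 4: allowed to practice unsupervised
    MAY_SUPERVISE_OTHERS = 5   # level 5: allowed to supervise others

@dataclass
class EPAResponse:
    """One panellist's answers for one EPA in the round-1 survey."""
    indispensability: int              # 5-point Likert, 1-5
    comprehensiveness_clarity: int     # 5-point Likert, 1-5
    completeness: int                  # 5-point Likert, 1-5
    item_approvals: Dict[str, int]     # knowledge/skill/attitude -> 2 (approve) or 1 (disapprove)
    examinations_for_entrustment: int  # open numeric box
    resident_supervision: Supervision  # expected level at resident stage
    fellow_supervision: Supervision    # expected level at fellow stage
    expiration_years: int              # open numeric box
    comments: str = ""                 # free-text suggestion box

# Example: a panellist expecting fellows to practice unsupervised after
# 50 completed examinations (all values invented for illustration).
resp = EPAResponse(5, 4, 4, {"liver MRI protocols": 2}, 50,
                   Supervision.ON_DEMAND_SUPERVISION,
                   Supervision.UNSUPERVISED, 5)
```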

Data analysis of the first Delphi round

After the first Delphi round, all results and comments were analysed by the 4 members of the pilot group. First, the content validity index (CVI) of each Likert scale was calculated for each EPA [14]. For each item, the CVI is computed as the number of experts giving a rating of either 4 or 5, divided by the total number of experts. The CVI was originally described with a 4-point rating scale [14, 15]. As a 5-point Likert scale was chosen for our study, CVI results were managed as previously described in the study of Hennus et al. [6]: a CVI of 0.8 or higher indicated sufficient content validity, a CVI between 0.70 and 0.79 implied that the item required revision, and a CVI below 0.70 indicated elimination of the corresponding EPA. The median score of each item (indispensability, comprehensiveness/clarity and completeness) was then calculated, and a median < 4 was deemed to indicate that the EPA needed revision. For questions involving the 2-point scale, the item was approved if the CVI was 0.8 or higher and deleted otherwise. Comments and suggestions in the open text boxes were all reviewed by the 4 members of the pilot group and dealt with as follows: (a) suggestions regarding textual clarifications and/or alterations were accepted if unanimously agreed on by the pilot group; (b) suggestions contradicting existing EPA guidelines were rejected; (c) suggestions regarding the content of an EPA made by > 5% of all panellists were accepted; and (d) suggestions regarding the content of an EPA made by < 5% of all panellists were rejected. For the number of successfully completed examinations required for entrustment, the median value was considered. For the expected level of supervision and the expiration date, the mean value was considered. Data were analysed using SPSS software, version 15.0.
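
As an illustration of these decision rules, the sketch below computes the CVI and median for each rated criterion and classifies an EPA accordingly. This is a minimal sketch under stated assumptions: Python is assumed, the function names are hypothetical, and the example ratings are invented for demonstration, not study data.

```python
from statistics import median

ACCEPT_CVI = 0.80  # CVI >= 0.80: sufficient content validity
REVISE_CVI = 0.70  # 0.70 <= CVI < 0.80: revision required; CVI < 0.70: eliminate

def cvi(ratings):
    """Content validity index: fraction of experts rating an item 4 or 5."""
    return sum(1 for r in ratings if r >= 4) / len(ratings)

def classify_epa(item_ratings):
    """Classify one EPA from its per-criterion Likert ratings.

    item_ratings maps each criterion (indispensability,
    comprehensiveness/clarity, completeness) to the panellists' 1-5
    scores. In the study, elimination was triggered in practice by the
    indispensability CVI falling below 0.70.
    """
    cvis = {item: cvi(scores) for item, scores in item_ratings.items()}
    medians = {item: median(scores) for item, scores in item_ratings.items()}
    if any(v < REVISE_CVI for v in cvis.values()):
        return "eliminate"
    if any(v < ACCEPT_CVI for v in cvis.values()) or any(m < 4 for m in medians.values()):
        return "revise"
    return "accept"

# Invented ratings from a small panel, for illustration only.
example = {
    "indispensability": [5, 4, 4, 5, 3, 4, 5, 4],           # CVI = 0.875
    "comprehensiveness/clarity": [4, 4, 5, 4, 4, 3, 5, 4],  # CVI = 0.875
    "completeness": [5, 5, 4, 4, 4, 4, 4, 5],               # CVI = 1.0
}
print(classify_epa(example))  # -> accept (all CVIs >= 0.8, all medians >= 4)
```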

Second Delphi round

All results from the first Delphi round were summarised and sent to the Delphi panellists. Emphasis was placed on clear and easy visualisation of the results and modifications, using comparison tables, graphs and colours. Retained additional suggestions were presented with the percentage of panellists suggesting them. The survey of the second Delphi round included only the EPAs needing revision according to the CVI and median values. The panellists were again asked to score each revised EPA for “indispensability”, “comprehensiveness/clarity” and “completeness” on a 5-point Likert scale. Knowledge, skills and attitudes were rated for global approval on a 2-point binary scale. In addition, one EPA was added to the survey (EPA 16, see Results section); it underwent the same process as in the first Delphi round.

Data analysis of the second Delphi round

CVI and median values were calculated for each scale. All new results and comments were again analysed by the 4 members of the pilot group. The data analysis described for the first Delphi round was applied to both the revised EPAs and the new one. Data were analysed using SPSS software, version 15.0.

Third Delphi round

For this last round, each EPA was presented as a card with all 7 components completed, as a final version. Panellists were asked for global approval of each EPA on a 2-point binary scale (“agree” or “disagree”), and for approval of the implementation of the whole EPA set into the medical imaging curriculum.

Results

First Delphi round

Of the 48 surveys sent to panellists, 38 complete responses (79%) from two different countries were received (Fig. 1). The final panel included 15 professors (39.5%), 15 consultants (39.5%; 13 public and 2 private), 5 fellows (13%) and 3 residents (8%). All participants were sub-specialised in hepatobiliary and gastrointestinal imaging (the 3 residents had experience in hepatobiliary and gastrointestinal imaging). Twenty-three (60%) of the participants were male and fifteen (40%) female (Table 1). Of the 38 participants, 13 had prior experience of sub-specialised hepatobiliary and gastrointestinal imaging in other countries for at least 6 months (mean = 2 years ± 2.7). In total, 10 different countries were represented.

Fig. 1 Flow chart of the study

Table 1 Panellist characteristics of the first Delphi round

The content validity index (CVI) and median value were calculated for indispensability, comprehensiveness/clarity and completeness (Table 2, Figs. 1, 2 and 3). All median values were ≥ 4. Six EPAs had a CVI ≥ 0.8 and were accepted (Fig. 2a). Six EPAs (EPA 1, EPA 3, EPA 4, EPA 6, EPA 7 and EPA 8) did not meet the 0.8 threshold for “indispensability”, “comprehensiveness/clarity” and/or “completeness”, with CVIs ranging from 0.70 to 0.79. These EPAs underwent major revisions according to the comments associated with each result. Three EPAs had a CVI < 0.7 for indispensability and were rejected: “Perform and interpret a specific colonic examination”, “Perform and interpret oesophageal imaging” and “Effectively contribute clinical/imaging opinion to abdominal/digestive multidisciplinary team (MDT) meetings” (see Discussion section).

Table 2 Content validity index (CVI) of indispensability, comprehensiveness/clarity and completeness for each EPA according to Delphi round 1 and 2
Fig. 2 Results for indispensability of the first (a) Delphi round and comparison with the second (b) Delphi round. EPA 2, 5 and 14 were eliminated from the second round as the content validity index (CVI) for indispensability was below 0.7. EPA 16 was added in the second Delphi round

Fig. 3 Results for comprehensiveness/clarity (a) and completeness (b) of the first Delphi round and comparison with the revised EPAs of the second Delphi round. EPA 2, 5 and 14 were previously eliminated because their content validity index (CVI) for indispensability was below 0.7 and are not represented

Moreover, 11% of the panellists asked for an additional EPA (“Perform and interpret post-operative imaging of abdominal and gastrointestinal system”), which was added in round 2 (EPA 16).

Second Delphi round

Of the 38 surveys sent, 36 (95%) responses were received, all fully completed. The revised EPAs all improved their CVIs to above 0.8 for indispensability, comprehensiveness/clarity and completeness (Figs. 2b, 3; Table 2). All median values were ≥ 4. The additional EPA 16 was directly approved for round 3, as its CVIs were above 0.8 for indispensability, comprehensiveness/clarity and completeness. All knowledge, skills and attitudes of this EPA were also approved, with CVIs > 0.8.

Third Delphi round

Of the 38 surveys sent, 38 (100%) complete responses were received. All EPAs were validated by the panel (Table 2). Agreement for implementation of the whole EPA set into the medical imaging curriculum was 92%. Following the panellists’ comments, the order of the EPAs was changed in the final presentation, so that EPAs covering more basic resident-level competencies were placed at the beginning of the curriculum and more advanced fellow-level EPAs towards the end. The table of correspondence can be seen in Supplementary materials 2. There are 13 final EPAs for hepatobiliary and gastrointestinal diagnostic imaging, all summarised in Table 3. Figure 4 shows an example of one complete EPA. All complete EPAs are accessible in Supplementary materials 3.

Table 3 The final EPAs for hepatobiliary and gastrointestinal diagnostic imaging trainees
Fig. 4 Example of EPA 3' for the trainee curriculum with the 7 components

Discussion

A competency-based curriculum is lacking in sub-specialised diagnostic imaging, probably owing to the difficulty of defining competency in a purely diagnostic, intellectual activity compared, for example, with a procedure-based activity. In our study, we developed a list of 13 EPAs for hepatobiliary and gastrointestinal diagnostic imaging, validated by an international panel using a robust methodology: the modified Delphi method.

Apart from the method, one of the main strengths of our study is the extensive validation of the content of each EPA. Panellists were asked not only to assess indispensability, comprehensiveness/clarity and completeness, but also to validate the knowledge, skills and attitudes necessary for each EPA, as well as the required experience for each examination, the level of supervision according to the stage of training (resident versus fellow) and the expiration date. To our knowledge, this level of validation and its method have not previously been reported in the literature.

Our study went further than previous studies on EPAs by subdividing the hepatobiliary and gastrointestinal diagnostic imaging curriculum into resident and fellow levels. This approach was difficult for the pilot group to design and was potentially confusing for panellists. Careful and precise explanation was given to each panellist prior to the study so that everyone shared the same definitions, and a few additional questions from individual stakeholders were answered during the first Delphi round. Moreover, many comments in the open text boxes referred to the attributed level of competencies. The resident/fellow levels were initially assigned according to the European/national curricula; when explicitly specified in these curricula, they were not changed, and this was explained before the second round to panellists who had suggested a change. However, for some competencies the level of trainee was not explicitly specified; in those cases, the comments were taken into consideration following the rules described in Materials and methods.

The three EPAs rejected after the first Delphi round were: (i) perform and interpret a specific colonic examination; (ii) perform and interpret oesophageal imaging; and (iii) effectively contribute clinical/imaging opinion to abdominal/digestive multidisciplinary team (MDT) meetings. The first two, with indispensability CVIs of 0.62 and 0.65, respectively, were rejected by the panel on similar grounds: they require highly specific skills, the throughput of such examinations is low, and they are consequently difficult to teach in many university centres, so they should not be part of each trainee’s systematic curriculum (arguments made by 16% and 19% of panellists, respectively). However, this rejection from the systematic curriculum should not discourage centres from teaching these examinations when feasible and when the trainee is interested in learning them. The third rejected EPA, concerning competencies at MDT meetings, had the lowest CVI (0.57) and drew the largest number of comments against it, arguing that this EPA was neither resident nor fellow level. The pilot group was initially surprised by this decision, as participation in MDTs was part of all 3 curricula reviewed. However, returning to the definition of an EPA, the panel was correct in its assessment, because an EPA is defined as a task that supervisors delegate to a trainee to perform unsupervised once adequate competence has been obtained [4]. In the ESR curriculum level II, for example, this competence/attitude is defined as “To participate in and to perform under supervision at multidisciplinary conferences”. Indeed, 22% of the panellists clarified their rejection by stating that trainees must participate in MDTs, but that delegation should not be considered before becoming a consultant. This example also illustrates the strength of the Delphi method.

This study has several limitations, mainly related to the Delphi group participants. A Delphi survey is designed to transform individual opinion into group consensus, and the members of the Delphi group should be individuals who have knowledge of the topic under investigation, defined as a “panel of informed individuals” or “experts” [9, 10]. Our panel members were selected for their knowledge of and commitment to abdominal radiology. The majority of them worked in university medical centres, which may have biased some of the results, but we tried to balance the panel by choosing panellists from a variety of backgrounds and levels of expertise, including professors, consultants, fellows and residents. In addition, although we developed these EPAs for inclusion in the European trainee curriculum, the members of the Delphi group came from only two countries. Although the European Society of Radiology curriculum tends to provide homogeneous recommendations across European countries, many countries also have their own national curriculum, which may differ slightly from country to country. Therefore, the implementation of these EPAs in different European environments is potentially challenging, and further studies could validate the results in a larger number of countries. It is also very likely that each country will have its own guidelines for some specific topics, depending on the structure of the health system and local practices. The inclusion of additional guidance and information on these recommendations (e.g. the purpose of the EPAs) would likely also help their implementation by building confidence in the ability to use EPAs [16, 17].

In conclusion, this study developed and validated a set of 13 EPAs which could be used as a European trainee curriculum for hepatobiliary and gastrointestinal diagnostic imaging. The robust methodology and European validation make it widely applicable and offer a potential template for EPA creation in other sub-specialities in diagnostic imaging.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CVI:

Content validity index

EPAs:

Entrustable professional activities

References

  1. Ten Cate O (2005) Entrustability of professional activities and competency-based training. Med Educ 39:1176–1177. https://doi.org/10.1111/j.1365-2929.2005.02341.x

  2. Hauer KE, Kohlwes J, Cornett P et al (2013) Identifying entrustable professional activities in internal medicine training. J Grad Med Educ 5:54–59. https://doi.org/10.4300/JGME-D-12-00060.1

  3. Mulder H, Ten Cate O, Daalder R, Berkvens J (2010) Building a competency-based workplace curriculum around entrustable professional activities: the case of physician assistant training. Med Teach 32:e453–459. https://doi.org/10.3109/0142159X.2010.513719

  4. Ten Cate O, Chen HC, Hoff RG et al (2015) Curriculum development for the workplace using entrustable professional activities (EPAs): AMEE Guide No. 99. Med Teach 37:983–1002. https://doi.org/10.3109/0142159X.2015.1060308

  5. Moll-Khosrawi P, Ganzhorn A, Zöllner C, Schulte-Uentrop L (2020) Development and validation of a postgraduate anaesthesiology core curriculum based on entrustable professional activities: a Delphi study. GMS J Med Educ 37:Doc52. https://doi.org/10.3205/zma001345

  6. Hennus MP, Nusmeier A, van Heesch GGM et al (2021) Development of entrustable professional activities for paediatric intensive care fellows: A national modified Delphi study. PLoS One 16:e0248565. https://doi.org/10.1371/journal.pone.0248565

  7. de Villiers MR, de Villiers PJT, Kent AP (2005) The Delphi technique in health sciences education research. Med Teach 27:639–643. https://doi.org/10.1080/13611260500069947

  8. Akins RB, Tolson H, Cole BR (2005) Stability of response characteristics of a Delphi panel: application of bootstrap data expansion. BMC Med Res Methodol 5:37. https://doi.org/10.1186/1471-2288-5-37

  9. Hasson F, Keeney S, McKenna H (2000) Research guidelines for the Delphi survey technique. J Adv Nurs 32:1008–1015

  10. McMillan SS, King M, Tully MP (2016) How to use the nominal group and Delphi techniques. Int J Clin Pharm 38:655–662. https://doi.org/10.1007/s11096-016-0257-x

  11. Diamond IR, Grant RC, Feldman BM et al (2014) Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol 67:401–409. https://doi.org/10.1016/j.jclinepi.2013.12.002

  12. O’Dowd E, Lydon S, O’Connor P et al (2020) The development of a framework of entrustable professional activities for the intern year in Ireland. BMC Med Educ 20:273. https://doi.org/10.1186/s12909-020-02156-8

  13. Keeble C, Baxter PD, Gislason-Lee AJ et al (2016) Methods for the analysis of ordinal response data in medical image quality assessment. Br J Radiol 89:20160094. https://doi.org/10.1259/bjr.20160094

  14. Polit DF, Beck CT, Owen SV (2007) Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health 30:459–467. https://doi.org/10.1002/nur.20199

  15. Lynn MR (1986) Determination and quantification of content validity. Nurs Res 35:382–385

  16. Gagliardi AR, Brouwers MC, Palda VA et al (2011) How can we improve guideline use? A conceptual framework of implementability. Implement Sci 6:26. https://doi.org/10.1186/1748-5908-6-26

  17. Cochrane LJ, Olson CA, Murray S et al (2007) Gaps between knowing and doing: understanding and assessing the barriers to optimal health care. J Contin Educ Health Prof 27:94–102. https://doi.org/10.1002/chp.106

Acknowledgements

We wish to thank the radiologists who tested the surveys and members of the Delphi group: Malone D, Frampas E, Quinn S, Jagut JL, McVeigh Niall, Jacquemin S, Crilly S, Savoye-Collet C, El Chammas D, MacDermott R, Lissillour PL, Dray B, Zappa M, Delagnes A, Oreistein I, Beuzit L, Hoeffel C, Millet I, Lucidarme O, Pouvreau P, Tasu JP, Ronot M, Gregory J, Guiu B, Wagner M, Calame P, Belabbas D, Laurent V, Reizine E, Rousset P, Zins M, Vermersch M, Lewin M, Chenin M, Blanc F, Barat M, Moret A, Milot L and Dillon H. We also want to thank the French network of University Hospitals HUGO (“Hopitaux Universitaires du Grand Ouest”), Guerbet and the French Society and College of Radiology (SFR-CERF).

Funding

This study has received funding by the French network of University Hospitals HUGO (“Hopitaux Universitaires du Grand Ouest”), Gore and the French Society and College of Radiology (SFR-CERF).

Author information

Contributions

AP, SS, AD and CA were part of the pilot group and analysed all the results of the Delphi group in each round. AP, SS, MC and CA created the first draft and surveys of the 3 Delphi rounds. AP did the statistical analysis. AP, SS, MC, AD and CA wrote the article.

Corresponding author

Correspondence to Anita Paisant.

Ethics declarations

Ethics approval and consent to participate

Institutional Review Board approval was not required because this study does not involve humans or animals.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: 

ESM 1: Example of survey for EPA 3. First Delphi round. ESM 2: Table of correspondence between EPA figures of the 3 Delphi rounds and the final presented EPA. ESM 3: The 13 EPAs for sub-specialised HB/GI diagnostic imaging.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Paisant, A., Skehan, S., Colombié, M. et al. Development and validation of core entrustable professional activities for abdominal radiology. Insights Imaging 14, 142 (2023). https://doi.org/10.1186/s13244-023-01482-x
