
Clinical audit—ESR perspective

Abstract

This paper provides a comprehensive outline of the audit process advocated for clinical radiologists and clinical radiology departments. The philosophy discussed is equally appropriate for interventional and diagnostic radiologists.

Introduction

Within Europe there is wide variation in the understanding and implementation of clinical audit. Interpretation of the term clinical audit, and its differentiation from regulation, quality assurance, accreditation and research, also differs across Europe. This document attempts to define and establish the scope of clinical audit in a way that is applicable across member states and radiological organisations. Participation in clinical audit has many benefits, including the demonstration of a commitment to the delivery of a high-quality service. It may also indicate areas of the service where further investment is required.

Definition

Clinical audit is a tool designed to improve the quality of patient care, experience and outcome through formal review of systems, pathways and outcome of care against defined standards, and the implementation of change based on the results. Audit uses specific methodology in which performance is compared with a preselected standard. If the standard is not achieved, reasons for this are explored, change is implemented and a re-audit is carried out to ensure improvement [1]. This methodology is often described in terms of the audit cycle, illustrated in Fig. 1.

Fig. 1 The audit cycle.
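Where a procedural sketch helps, the cycle can also be written as a loop. The following is an illustrative sketch only, with hypothetical function names; it is not part of the ESR methodology, and it assumes for simplicity an indicator for which higher values are better.

```python
# Illustrative sketch of the audit cycle as a loop.
# `measure_indicator` and `implement_change` are hypothetical callables
# supplied by the auditing team; they are not defined by the ESR document.

def audit_cycle(standard, measure_indicator, implement_change, max_rounds=5):
    """Measure performance, compare with the standard, change, re-audit."""
    for _ in range(max_rounds):
        observed = measure_indicator()   # collect data, calculate the indicator
        if observed >= standard:         # compare performance with the standard
            return True                  # standard met: loop closed
        implement_change()               # explore reasons, introduce change
        # the loop repeats: re-audit to confirm the change led to improvement
    return False                         # standard still unmet after re-audits
```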

Responsibility for audit

Departmental audit

Those who use, pay for or manage radiology departments or services will wish to ensure that these are of the highest possible standard, but clinical audit, as implied in the term, is a professionally led activity, designed and carried out by appropriately trained health professionals rather than by managers or professional auditors. Health professionals who are directly involved in service delivery, including radiologists together with radiographic, nursing and technical staff and physicists, are often best placed to know which areas are either particularly important to the delivery of a safe service or in need of improvement. Equally, they are best placed to suggest specific improvement strategies where necessary. Although the audit is professionally led, all staff within the team responsible for the part of the service under review should be able to contribute to the process.

Audit can be described as internal or external. Internal audit, which is the more common, is carried out within a department or institution; external audit is performed by professionals from outside the department or institution. Whether internal or external, audit should not be carried out without the knowledge of those involved in the delivery of the service, and should be a planned, scheduled process. All audit, whether of the unit in which one works or of others’ work, depends on professional honesty and integrity. Appropriate confidentiality should be observed, as the aim is improvement, not blame.

When organisations invite teams external to the organisation to carry out audit, the considerations outlined above related to professional leadership, confidentiality and no-blame/improvement culture still apply.

Personal or individual audit

This is related to departmental audit but is a more difficult and contentious issue, as standards for reporting accuracy have not been established. It is a professional duty of all radiologists to examine both the quality of their work and the systems within which that work is carried out [2], and self-audit is a valuable learning tool as well as ultimately being beneficial to patient care. Further guidance on how individuals can monitor their performance will be formulated by the ESR. Audit has an important role in continuing professional development and education, for both individuals and departments. Improvements in the quality of the delivery of radiological services should be focused on self-improvement, aided by identifying areas where further investment in services is required. The results of audit should be used within a positive, constructive and forward-looking framework, and not in statistically invalid ways to judge individual performance. Working within a department which invests time and effort in clinically relevant audit, and which looks at team and individual performance in the context of the overall improvement of services to patients, is the most valuable form of individual audit.

Scope of audit

It is possible to audit every aspect of a radiology department and how it functions, and every stage of the patient journey from receipt of request to the radiology report reaching the referrer. Audits can be comprehensive and look at a large number of factors or processes simultaneously or can be tailored to very specific areas of service delivery.

Types of audit

All aspects of a radiological organisation and its performance are amenable to audit. Audit can be divided into three categories.

  1. Structure audit. Examination of the systems within which we work, for example the management structure, accommodation, equipment, staffing and training.

  2. Process audit. Examination of the processes involved in the delivery of care, from initial referral to delivery of a radiological report, including for example quality management of the processes, justification, waiting times and examination practices and protocols [3, 4].

  3. Outcome audit. Examination of the outcome or results of the delivery of care, which may include medical outcome and patient satisfaction [5, 6].

Standards

Audit cannot be carried out without a preset standard against which performance can be assessed. Such standards are not necessarily widely available; there is a particular lack of validated patient outcome or accuracy standards. This, together with the general difficulty of measuring patient outcomes, makes outcome audit the most difficult type to carry out.

Sources of standards

Standards against which local performance can be measured can be drawn from a variety of sources:

  1. Local, European or international legislation. Compliance with these standards is compulsory [7].

  2. Peer-reviewed research. Published studies provide benchmark standards but may have to be interpreted in the light of local facilities and expertise.

  3. Recommendations or consensus statements from learned or national societies and organisations. These will usually have been developed to be applicable in routine practice [8, 9].

  4. Where no published or recommended standards are available, these may have to be established by local agreement or consensus before the relevant audit is undertaken. Under these circumstances, locally sourced data from comparative investigations, pathology, surgical findings, peer group review or clinical follow-up may allow the setting of local standards for outcome audits.

High or low standards?

Standards, other than those governed by legislation, are not necessarily pass or fail. A very high or aspirational standard may only be achieved by the very few but could serve to encourage maximum improvement. If the selected target standard is based on the average expected performance, then initially, 50% will be expected to fall below it. A low or minimum target standard may be regarded as the minimum acceptable level of performance. The level of the standard selected should be taken into account in interpretation of results.

Indicators

The indicator or indicators are measurable variables related to the standard. An indicator or series of indicators should be identified at the start of the project, to determine what data need to be collected in order to calculate the indicator and hence to decide whether the chosen standard has been met. Examples of indicators in radiology include the examination volume per modality as a productivity indicator, the report turnaround time as a reporting efficiency indicator, access to an imaging modality as an access indicator and the expenditure on contrast media as a financial indicator.
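As an illustration, one of these indicators can be computed directly from departmental records. The sketch below calculates a report-turnaround indicator from hypothetical per-examination records; the field names and the 24-hour target are assumptions for the example, not ESR-defined values.

```python
from datetime import datetime, timedelta

# Illustrative sketch: a reporting-efficiency indicator computed from
# hypothetical examination records. Field names and the 24-hour target
# are assumptions for this example, not ESR-defined values.

def turnaround_indicator(exams, target=timedelta(hours=24)):
    """Percentage of examinations reported within the target turnaround time."""
    within = sum(
        1 for e in exams if e["report_issued"] - e["exam_completed"] <= target
    )
    return 100.0 * within / len(exams)

sample = [
    {"exam_completed": datetime(2010, 1, 4, 9, 0),
     "report_issued": datetime(2010, 1, 4, 15, 30)},   # same-day report
    {"exam_completed": datetime(2010, 1, 4, 11, 0),
     "report_issued": datetime(2010, 1, 6, 10, 0)},    # reported 2 days later
]
print(f"{turnaround_indicator(sample):.0f}% reported within 24 h")  # prints 50%
```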

Data collection

Data to be collected may include items such as observations or measurements. Collection should aim to ensure that the data are complete, accurate and representative so that valid conclusions can be reached. The ease of data collection may be affected by the locally available data storage methods.

Prospective versus retrospective audit

Data may be collected prospectively over a period of time, for a predetermined number of cases, or retrospectively from existing information sources. Prospective collection is more likely to ensure completeness of information, but the process of collection may influence the behaviour of participants and therefore the outcome of audit. It may also take longer to gather the data. Retrospective collection from records may however result in incomplete data being available. A not uncommon scenario would be a retrospective audit which shows areas for improvement, followed by prospective re-audit after the appropriate changes have been made.

Examples of audits

  1. Structure audit

    • Type: staff training.

    • Standard: 100% of department staff should have completed training in cardiopulmonary resuscitation.

    • Indicator to be measured: percentage of staff who have completed training within the time frame specified in local rules.

    • Data to be collected: the total number of staff and the number who have undergone training.

    • Suggested number of staff to be sampled: all staff.

  2. Process audit

    • Type: patient consent [10].

    • Standard: for 100% of interventional vascular radiology procedures, there is documented evidence that a discussion of the procedure by a suitably qualified member of staff has taken place and there is a written record of patient consent.

    • Indicator to be measured: the percentage of patients for whom there is evidence that consent procedures have been completed.

    • Data to be collected: consecutive patient records examined for written evidence of pre-procedure discussion, the name of the doctor and the patient’s written consent.

    • Suggested number of patient records to be sampled: 30 consecutive interventional procedures.

  3. Outcome audit

    • Type: procedure complication rate [1].

    • Standard: fewer than 20% of lung biopsies should result in pneumothorax and fewer than 8% of patients should require chest drain insertion.

    • Indicator to be measured: the percentage of patients who suffered a pneumothorax and the percentage requiring a chest drain.

    • Data to be collected: consecutive lung biopsies, patient identifier, name of doctor, needle size used, presence or absence of pneumothorax, chest drain required or not.

    • Suggested number of procedures to be sampled: all lung biopsies carried out in 1 year (a worked evaluation of this example is sketched below).
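The outcome audit above lends itself to a short worked evaluation. In the sketch below the counts are invented for illustration; only the 20% and 8% targets come from the example standard.

```python
# Worked evaluation of the lung biopsy outcome audit above.
# The counts passed in are invented for illustration; the 20% and 8%
# targets are taken from the example standard in the text.

def evaluate_biopsy_audit(n_biopsies, n_pneumothorax, n_drains):
    """Return complication rates and whether both targets were met."""
    pneumo_rate = 100.0 * n_pneumothorax / n_biopsies
    drain_rate = 100.0 * n_drains / n_biopsies
    met = pneumo_rate < 20.0 and drain_rate < 8.0   # both standards must hold
    return pneumo_rate, drain_rate, met

p, d, met = evaluate_biopsy_audit(n_biopsies=120, n_pneumothorax=21, n_drains=7)
print(f"pneumothorax {p:.1f}%, chest drains {d:.1f}%: "
      f"standard {'met' if met else 'not met'}")
# pneumothorax 17.5%, chest drains 5.8%: standard met
```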

Accuracy of audit

Audit is a sampling process and, unlike research, is not primarily designed to be statistically robust, since it is carried out for the purpose of improving local quality of care rather than influencing others’ practice. When audit data are interpreted, this potential, although not inevitable, statistical weakness should be taken into account. Various statistical methods can be employed to increase confidence in the statistical validity of the results [11].
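As one illustration of attaching a measure of statistical confidence to an audit result, a confidence interval around an audited proportion shows how far a small sample limits what can be concluded about the true rate. The method below (a Wilson score interval) is our choice for illustration; reference [11] describes control charts, a different technique.

```python
from math import sqrt

# Illustrative sketch: a 95% Wilson score confidence interval for an
# audited proportion. This method is our choice for illustration;
# reference [11] in the text uses control charts instead.

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 27 of 30 sampled records met a standard: the observed rate is 90%,
# but the interval shows a much lower true rate is still compatible.
lo, hi = wilson_interval(27, 30)
print(f"observed 90%, 95% CI {lo:.1%} to {hi:.1%}")   # roughly 74% to 97%
```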

Analysis of audit results

When the chosen standard is attained, this can be taken as affirmation of the quality of the service and reassurance that no change is necessary. Audit is primarily a quality improvement tool, and in those cases where the chosen standard is not reached, the results should be interpreted in a culture which does not seek to blame individuals. Analysis of the results should examine all the possible reasons for the standard not being met, including the target level chosen and system, process and technical factors. Only then can system changes be introduced to address any measured shortcomings. Consideration should also be given to the possibility that sampling bias accounts for the underperformance. A checklist of suggested changes to improve performance should then be drawn up and implemented.

Re-audit

When change has been implemented, it is mandatory to repeat the same audit process to ensure that the changes introduced have led to the expected improvement. This ‘closes the loop’.

Time/resources required

Professional input into the design and standards chosen for audit is mandatory, but data collection and analysis can be delegated to suitably trained staff. Audit is potentially time consuming and needs to be allocated sufficient time and financial resources.

Conclusion

As part of clinical governance, healthcare organisations are accountable for continually improving the quality of their services [12]. Clinical audit, correctly and professionally conducted, is a powerful tool to improve patient care, experience and outcome.

References

  1. Goodwin R, de Lacey G, Manhire A (eds) (1996) Clinical audit in radiology: 100+ recipes. www.rcr.ac.uk

  2. European Society of Radiology (2004) Good practice guide for European radiologists. http://www.myesr.org/html/img/pool/ESR_2006_II_GoodPractice_Web.pdf

  3. American College of Radiology (ACR) (2008) Appropriateness criteria. www.acr.org

  4. European Commission (2001) Radiation protection 118: referral guidelines for imaging. www.ec.europa.eu

  5. Fitzgerald R, Mehra R (2000) How accurate is cancer scan reporting? Hosp Med 61:637–642


  6. Kundel HL (1989) Perception errors in chest radiography. Semin Respir Med 10:203–210


  7. International Atomic Energy Agency (IAEA) (2006) Applying radiation safety standards in diagnostic radiology and interventional procedures using X rays. Safety reports series no. 39. www.pub.iaea.org

  8. European Commission (1999) European guidelines on quality criteria for computed tomography. Report EUR 16262 EN. www.drs.dk

  9. American College of Radiology (2006) Practice guidelines for performing and interpreting diagnostic computed tomography (CT). www.acr.org

  10. The Royal College of Radiologists (2005) Standards for patient consent particular to radiology. www.rcr.ac.uk

  11. Tawn DJ, Squire CJ, Mohammed MA, Adam EJ (2005) National audit of the sensitivity of double-contrast barium enema for colorectal carcinoma, using control charts. For the Royal College of Radiologists Clinical Radiology Audit Sub-Committee. Clin Radiol 60:558–564


  12. Scally G, Donaldson LJ (1998) The NHS’s 50th anniversary. Clinical governance and the drive for quality improvement in the new NHS in England. BMJ 317:61–65


  13. The Royal College of Radiologists (2009) AuditLive. www.rcr.ac.uk


Acknowledgements

Paper prepared by the ESR Subcommittee on Audit and Standards. Chairperson: Jane Adam. Members: Hudaver Alper, Éamann Breatnach, Maurizio Centonze, Elisabeth Dion, Birgit Ertl-Wagner, Robert Manns. Approved by the ESR Executive Council, June 2009.


Appendices

Appendix 1

Example of audit pro forma

Topic to be audited

  • Type of audit: structure/process/outcome

Standard selected including target performance

  • Source of standard: legislation/publication/learned society guidance or consensus/locally generated standard/other

  • Indicator: quantifiable variable(s) to be calculated

Data to be collected

Number of patients or volume of information to be collected

Results

Standard met? Yes/no

If standard not met, analysis of potential causes

Action plan for implementation of change

Re-audit date

Re-audit outcome
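Departments that keep audit records electronically might capture the pro forma above as a simple record type. The sketch below is illustrative; the field names are ours, not prescribed by the ESR.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch: the audit pro forma above as a record type.
# Field names are ours, not prescribed by the ESR document.

@dataclass
class AuditRecord:
    topic: str
    audit_type: str                  # "structure", "process" or "outcome"
    standard: str                    # standard selected, including target performance
    standard_source: str             # legislation/publication/society/local/other
    indicator: str                   # quantifiable variable(s) to be calculated
    data_to_collect: str
    sample_size: int                 # number of patients or volume of information
    standard_met: Optional[bool] = None
    cause_analysis: str = ""         # completed only if the standard was not met
    action_plan: str = ""
    re_audit_date: str = ""
    re_audit_outcome: str = ""
```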

Appendix 2

Glossary of terms [13]

Audit cycle: The basic framework upon which all audit projects are based. An audit topic is chosen and a standard to be met is defined. Data are collected to identify what is really happening and these are compared with the standard. If the required standard is not achieved, changes are introduced to improve performance. The cycle should then be repeated to assess whether changes have led to the standard now being met.

Clinical guidelines: Statements of principle and good practice developed in order to assist practitioner and patient decisions about appropriate health care in specific clinical circumstances. Guidelines are usually produced and agreed upon by a national body.

Closing the loop: Completion of the full audit cycle. Practice is changed following the initial audit and the audit is repeated to ensure that the changes introduced have been effective.

Data to be collected: Specifies what data need to be collected so that the indicator can be calculated.

Effectiveness: The extent to which application of a technology or intervention brings about a desired effect, e.g. change in diagnosis, altered management plan, improvement in health. It is a measure of the degree of conformity between the actual result and the desired outcome. Effectiveness is not synonymous with efficacy.

Efficiency: Assessment of efficiency determines whether acceptable levels of efficacy and effectiveness are achieved when using a prudent or optimal set of resources.

Evaluation: A systematic and ideally scientific process determining the extent to which planned intervention(s) achieve predetermined objectives.

Indicator: A measurable variable related to the standard. An indicator, or series of indicators, should be identified within an audit project which will clarify what data need to be collected.

Local guidelines: Guidelines may be developed and introduced locally. They are commonly adaptations of national guidelines designed to meet local conditions and constraints. The process of developing a local guideline involves consensus of all relevant clinicians.

Outcome (patient health): An alteration in the health status of an individual patient directly attributable to clinical action (or inaction). It is customarily abbreviated to “outcome”, although this may lead to confusion by blurring the distinction between patient-based measures and other metrics. The WHO defines “health” as a complete state of physical, mental and social well-being. Outcome measures may be classified under four headings:

  • Quantity of life (e.g. 5 year survival)

  • Process-based measures (e.g. complication and readmission rates)

  • Quality of life (e.g. measures of pain, handicap, depression)

  • Satisfaction, including entitlement to privacy, courtesy, etc. (e.g. score on a satisfaction survey)

Outcome audits look at what is done as a whole from the patient's point of view. Problems that such an audit may reveal (e.g. 25% chance that diagnosis is not correct) may prompt audits of each link in the whole diagnostic chain. These would be process audits.

Performance: The quality of care achieved, judged by both the process and outcome of that care.

Process: The activity undertaken (what was done? how well was it done? what should have been done?).

Protocol: A system of rules about the correct way to act in formal situations or an adaptation of a clinical guideline designed to meet local conditions and constraints. The latter is the same as a local guideline [1].

Quality: The level of excellence. Many attempts have been made to define the quality of medical and health care. In general, six aspects are usually emphasised: access to services, relevance to need, effectiveness, equity, social acceptability, efficiency/economy.

Quality assurance: The managed process whereby the comparison of care against predetermined standards is guaranteed to lead to action to implement changes, and to ensure that these have produced the desired improvements.

Research: A systematic investigation to establish facts or principles, and collect valid information on a subject. Research explores new ideas with the aim of defining and setting the standards of care for best clinical practice. This can be contrasted with audit, which aims to establish whether the actual care given to patients meets set standards. Research identifies what can and should be done, whilst audit identifies whether it is actually being done. For example, a study to determine whether endoscopic stent insertion or open surgical bypass provides the better palliation for malignant biliary obstruction is research. However, a study to determine whether the palliation of malignant biliary obstruction at a given hospital is carried out in accordance with the Association of Hepato-biliary Surgeons' guidelines would be audit.

Sample: A subgroup of a population selected for audit in such a way as to allow inferences to be made about the whole population, i.e. a representative subgroup. The method of choosing the sample is crucial to the validity of the audit.

Standard: A conceptual model against which the quality or excellence of a particular activity may be assessed. It is the specification of process and/or outcome against which performance can be measured. In the context of health care, a standard indicates the best practice of clinical care to which all patients should be entitled. This may be determined by research, consensus statements, local agreement or recommendations from learned societies. The standard incorporates a target performance which specifies the expected level of achievement that performance should meet or exceed. An example of a standard would be the following: the risk of pregnancy should be established in women of childbearing age undergoing planned or inadvertent computed tomography (CT) of the pelvis in 100% of cases.

Structure: The availability and organisation of resources (human and material) required for the delivery of a service.

Target (see standard): Specification of the expected level of achievement which performance should meet or exceed.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

European Society of Radiology, ESR Subcommittee on Audit and Standards. Clinical audit—ESR perspective. Insights Imaging 1, 21–26 (2010). https://doi.org/10.1007/s13244-009-0002-2
