A clinician referring a patient for an imaging study is usually looking for a number of specific things in the radiologist’s report: complete and accurate identification of relevant findings, a coherent opinion about the likely underlying cause of any abnormalities and, where appropriate, guidance on further investigations that may add information or certainty. Reporting radiologists respond to these implied requests in ways that span a wide spectrum: some believe it best to produce a long list of positive and negative findings together with an exhaustive differential diagnosis; others aim for a brief but accurate report giving only those findings and differential diagnoses they believe likely; still others produce lists of findings without context or filtration. Given mutual understanding between referrer and radiologist, many of the report variants along this spectrum can achieve the necessary result, but this depends on experience and trust between referrer and reporter [1].
In the current model of radiology service provision, contact between referrer and radiologist is diminishing [24, 25], and the increased use of off-site reporting may mean the two are completely unknown to one another, so it is often impossible for this trust and experience to develop. Referrer A may have no idea what subtle point Radiologist B is trying to convey by the use of particular language. In an increasingly globalised environment, standardisation of language and reporting acquires greater importance.
Clarity of reports is key to accurate communication of meaning. Report readers are usually in a hurry [3]. Referrers often complain about the failure of a radiologist to commit to a conclusion. Rambling descriptions without a useful conclusion add little to patient care and often suggest the radiologist wishes to remain remote from the clinical problem. If we compose unnecessarily vague or ambiguous reports, we do a disservice to our patients and to ourselves [26].
Attempts have been made to identify precisely what types of reports referrers want. In 1995, McLoughlin et al. surveyed 100 referring doctors regarding their preferences among three different styles of report for each of six clinical scenarios. For a normal CXR, a report simply stating “normal” was most popular if the patient had no chest symptoms, but if symptoms were present, reports giving descriptive detail were preferred. For abnormal CXRs, most wanted reports describing the findings and suggesting the diagnosis, as opposed to giving a diagnosis alone. For abdominal ultrasound studies, most preferred reports giving detailed findings, even when those findings were normal. Thus, the descriptive detail expected by clinicians depended on the clinical circumstances, but was independent of the specialty, experience or academic status of the referrer. The authors speculated that the preference of a substantial proportion of physicians for detailed descriptions, even when this meant listing negative findings that added no further information, might indicate that referrers interpreted such reports as evidence that a thorough examination had been performed [27].
In 2005, Sistrom and Honeyman-Buck attempted to identify whether the format of a radiology report (independent of its content) had any effect on the ability of a reader to extract information of relevance to patient care. The working hypothesis was that consistently formatted (structured) reports would be easier to read and understand, allowing content-specific questions to be answered more efficiently. Sixteen senior medical students were given radiology reports to read in free-text or structured format and asked to answer multiple-choice questions about specific medical content. Although the subjects all strongly preferred structured reports to free text, no significant differences were found between the two formats in speed of reading, accuracy of understanding or efficiency of assimilation of the contained information. The authors suggested that overly structured data can cause clinicians to lose cognitive focus, sacrificing overview when dealing with data spread across many fields, and that this can affect the reporting radiologist as well as the report reader: the act of composing a report in narrative form may be an inherent part of the radiologist’s cognitive processing of the case. Nonetheless, they concluded in favour of organising and formatting reports like a ‘laboratory report’, principally to meet the wishes of referrers [28].
More recently, at ECR 2017, findings were presented that contrasted with these results. The authors created unstructured reports for CT angiographic and abdominal CT studies, and structured reports for brain MRI and thoracic CT studies. An online survey of almost 150 clinicians asked subjects to read each report and then answer multiple-choice questions based on it, without being able to refer back to the report. Critical findings were missed in 34.9% of unstructured reports and 17.3% of structured reports; subjects selected an incorrect diagnosis in 18.1% of unstructured reports and 6.2% of structured reports. Overall, structured reports led to better recall of all (and of critical) findings and fewer incorrect diagnoses. The worst-performing report was an unstructured CT coronary angiogram; the authors stated: “It seems the more complex the study, the greater benefit that you can yield from having a structured report” [29].
In a 2009 survey of hospital clinicians’ preferences, Plumb et al. found that only 31% of (non-radiologist) consultants believed “normal examination” was a sufficient report (a 2010 survey of GPs made a similar finding: such a report did not tell respondents which organs had been examined [30]). A majority felt it appropriate to include some information on examination technique and quality. Recommendations for further investigations were welcomed, and 63% agreed that if a radiologist recommended further imaging, it should be arranged automatically. Strong preferences were expressed for more detailed (as opposed to simpler) reports and for tabular reports rather than prose; the most preferred format was a detailed tabular report accompanied by a radiologist’s comment [31].
In 2011, Bosmans and colleagues surveyed clinical specialists and GPs (COVER) and radiologists (ROVER) regarding their views of and preferences for radiology reports. Eighty-seven per cent of referrers considered the radiology report an indispensable tool; 63% did not think they were better able than a radiologist to interpret an imaging study in their own specialty. Almost all agreed that they need to provide adequate clinical information and to state clearly the question they want answered. For complex examinations, 84.5% of referring clinicians and 55.3% of radiologists preferred itemised reports; 56% of clinicians and 72.9% of radiologists rejected the idea that a radiology report should consist of prose. Half of referring clinicians thought the radiologist might not have looked at a particular feature if it was not explicitly mentioned in the report; slightly over half of radiologists agreed that clinicians would assume this. The authors concluded that there is no universal consensus on what constitutes a good report, and that the literature on the topic is based primarily on the insights and lifetime experience of individual authors rather than on formal assessment of the views and needs of referrers or radiologists. Commenting that “one size does not fit all”, they recommended tailoring the report to the profile of the referring physician. Finally, they encapsulated the dilemma: “Medicine certainly needs talented and competent radiologists. But do we want them to be data entry clerks rather than journalists, poets or essayists?” [32].
Clinicians based in hospitals can attend in-house meetings and conferences, interact face to face with radiologists, view images and discuss them with the reporting radiologist. Primary care practitioners usually lack most or all of these opportunities and must rely more completely on the content and recommendations of radiologists’ reports [2, 30]. Furthermore, terminology or concepts familiar to a specialist may be unfamiliar to a primary care physician [2]. For example, the normal range of renal size measurements on ultrasound is likely to be known to a nephrologist, whereas a family practitioner may need to be told whether a specific measurement is normal. GPs have also been shown not to value inclusion of examination technique or details of contrast media used [30]. Reporting radiologists should therefore take the likely reader into account when framing a report: we should put ourselves in the reader’s position when deciding what to dictate and how to structure that dictation, ensuring that what we mean to convey will be clearly understood.
The list of differential diagnoses we offer may also need to be tailored to the referrer. GPs may prefer a longer differential list than hospital-based specialists (though not one so long that it rambles or suggests uncertainty) and have been shown to prefer more recommendations about further investigation, radiological and otherwise [2]. Balance is required: a long list of (perhaps irrelevant) possibilities dilutes the significance of important observations. “Irrelevant observations have a cost, paid by the distraction they cause from the salient information” [3]. In addition, recommendations for further investigation must be pertinent and proportionate, and must not force the referrer’s hand into initiating unnecessary investigation or follow-up [3, 9].
One final point worth bearing in mind is the stated preference of GPs for a high level of detail in reports, in part so that the reports can be shown to patients [30]. In the current era of easy sharing of information, we must not lose sight of the likelihood that patients will read the reports we generate; frivolity and unguarded or inappropriate language must be avoided.