Interfaces for 2D images
Imaging devices have historically produced two-dimensional images: planar X-ray imaging at the beginning of the twentieth century, then computed tomography (CT) and magnetic resonance (MR) slice imaging in the 1970s. Initially, the image was printed like a photograph, and there was no machine interaction. Today, in almost any interpretation room around the world, we are likely to find the same setup: a chair, a desk and one or more computers with a keyboard, a mouse and a screen. Modern radiology can therefore be understood through the evolution of the computer UI, as the image has become digital and is displayed on computers.
In the 1960s, the command-line interface (CLI) was the only way to communicate with computers. The keyboard was the only input device, and a strict computer language had to be mastered to operate the system. In 1964, Douglas Engelbart invented the computer mouse. Building on work at Xerox, and later on Apple's Macintosh and Microsoft Windows, developers created the graphical user interface (GUI) known as “WIMP” (windows, icons, menus and pointing device) [5], which vastly improved the user experience. This system made computers accessible to everyone, with minimal skill required. Today, the WIMP UI remains nearly unchanged, and it is the most commonly used UI for personal computers. In 2007, the post-WIMP era exploded with the introduction of the “natural user interface” (NUI), using touchscreens and speech recognition, introduced by Apple iOS and followed by Google Android, and used mainly on tablet personal computers (PCs) and smartphones (Fig. 3) [6].
Digital radiology and the current workstation were introduced during the WIMP era. Their UI was designed around a keyboard and a mouse, and this setup has remained in use for approximately 30 years. Resistance to change and the “chasm”, or delay, in the technology adoption curve explain this global stagnation of the UI in the field of radiology [7].
However, is there really a better alternative to a mouse and a keyboard? Weiss et al. tried to answer this question, comparing five different setups of UI devices for six different PACS users over a 2-week period [8]. The study did not include post-WIMP UIs. The authors concluded that no single device was able to replace the keyboard-and-mouse pairing. The study also suggested that engaging both hands was an effective combination. However, the evaluation focused on image manipulation and did not consider the single-handed control needed when one hand holds a microphone for reporting.
The authors proposed an interesting marker of efficacy for a radiologic UI: the highest ratio of “eyes-to-image” versus “eyes-to-interface” device time.
Some solutions for improving the WIMP UI have been tested, combining eye-tracking technology with manual pointing [9]. The objective is to eliminate a large portion of cursor movement by warping the cursor to the area of the eye gaze [10]. Manual pointing is still used for fine image manipulation and selection. Manual and gaze input cascaded (MAGIC) pointing can be adapted to computer operating systems using a single device (Fig. 4).
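The cascade can be sketched in a few lines. Below is a minimal sketch, assuming hypothetical screen coordinates from an eye tracker, in which the cursor warps to the gaze point only when the two are far apart, leaving fine positioning to the mouse; the 120-pixel threshold is an illustrative assumption, not a published value:

```python
def magic_warp(cursor, gaze, warp_radius=120.0):
    """MAGIC pointing cascade (sketch): warp the cursor to the gaze
    point when manual pointing would require a large movement;
    otherwise leave the cursor alone for fine manual control.

    cursor, gaze: (x, y) screen coordinates in pixels.
    warp_radius: hypothetical distance below which no warp occurs.
    """
    dx = gaze[0] - cursor[0]
    dy = gaze[1] - cursor[1]
    distance = (dx * dx + dy * dy) ** 0.5
    if distance > warp_radius:
        return gaze    # large jump: let gaze do the coarse travel
    return cursor      # small distance: manual fine pointing only


# Example: cursor at top left, gaze lands on a distant icon.
print(magic_warp((0, 0), (800, 450)))      # → (800, 450): warped
print(magic_warp((795, 452), (800, 450)))  # → (795, 452): stays put
```

The design point is that the eye supplies only coarse targeting, which tolerates tracker noise, while the mouse keeps the precision needed to select small UI elements.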
Regarding post-WIMP UIs, and especially touchscreens, there is abundant literature, most of which deals with emergency settings involving non-radiologist readers as well [11]. Indeed, the tablet PC offers greater portability and teleradiology possibilities. Tewes et al. showed no diagnostic difference between high-resolution tablets and PACS reading for interpreting emergency CT scans [12]. However, touchscreen adoption is not yet widespread, even though full high-definition screens fulfil quality assurance guidelines. Users have found the windowing function less efficient than the mouse, and have also noted screen degradation caused by repeated manipulation. Technically, portable tablets' size and hardware specifications are not powerful enough for image post-processing; however, cloud computing and streaming can provide processing power similar to that of a stand-alone workstation (Fig. 5). Their portability makes tablets more suitable for teleradiology and non-radiology departments. One solution discussed recently is a hybrid professional tablet PC for imaging professionals [13]. Its interface is designed for direct interaction on the screen using a stylus and a wheel-type input device. Microsoft and Dell currently propose such designs for specific use with photo and painting software. These desktops could be used as radiology workstations with a few UX-specific design modifications (Fig. 6).
Interventional radiology is a specific process with specific needs, the most important of which is maintaining the sterility of the operating site while manipulating the images. Ideally, the operator has to be autonomous for at least basic features such as selecting series, reformatting, slicing, and pan and zoom manipulation. Some have proposed taking a mouse or trackpad inside the sterile protected area, or even using a tablet PC to visualize images. However, the most efficient setup in these conditions is touchless interaction [14], which minimizes the risk of contamination.
Iannessi et al. developed and tested a touchless UI for interventional radiology [15]. Unlike previous efforts, the authors redesigned a specific UI adapted to a motion (kinetic) recognition sensor without a pointer (Fig. 7). The user experience was clearly improved compared with simple emulation of the mouse pointer [16]. This is also a good example of environmental constraints and user-centred design: the amplitude of the arm movements had to be reduced to a minimum, considering the high risk of contamination inside a narrow operating room.
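A touchless UI of this kind ultimately maps small hand displacements onto discrete image commands. The sketch below, with entirely hypothetical thresholds and command names (not those of Iannessi et al.), shows how a dead zone can keep the required arm amplitude minimal, which matters in a sterile field:

```python
def classify_gesture(dx, dy, dead_zone=0.05):
    """Map a normalized hand displacement (dx, dy) to an image command.

    dx, dy: hand displacement as a fraction of the sensor's field of
    view. The dead zone ignores tiny involuntary movements, so only
    small deliberate gestures are needed (a sterility constraint).
    Thresholds and command names are illustrative assumptions.
    """
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "idle"                       # no intentional gesture
    if abs(dx) >= abs(dy):                  # dominant horizontal motion
        return "next_slice" if dx > 0 else "previous_slice"
    return "zoom_in" if dy > 0 else "zoom_out"   # dominant vertical motion


print(classify_gesture(0.01, -0.02))   # → idle (inside the dead zone)
print(classify_gesture(0.20, 0.03))    # → next_slice
print(classify_gesture(-0.02, -0.15))  # → zoom_out
```

Choosing discrete commands over continuous pointer emulation is precisely what distinguishes such a design from simply replacing the mouse with a gesture sensor.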
Interfaces for 3D images
Three-dimensional imaging volumes began to be routinely produced in the 1990s. Originally acquired on MRI or reconstructed from multi-slice helical CT acquisitions, volumes later became available from rotational angiography and ultrasound as well [17]. With the exception of basic X-ray studies, medical imaging examinations now rarely lack 3D images.
Acquired volumes can now be printed in three dimensions, much as 2D images were once printed on medical film [18]. Obviously, this option can be considered only for selected cases such as preoperative planning, prosthesis design or education; it is expensive and not at all conducive to a productive workflow [19].
Some authors dispute the added value of 3D representations. Indeed, radiology explores the inside of organs, and except in rare situations, 2D slices convey more information than a 3D rendering of surfaces.
However, mental transformation from 2D to 3D can be difficult. For example, when scoliosis needs to be understood and measured, 3D UI appears to be more efficient [20]. Some orthopedic visualization, cardiovascular diagnoses and virtual colonoscopic evaluations are also improved by 3D UI [21,22,23]. For the same reasons, 3D volume representations are appreciated by surgeons and interventional radiologists, as they help to guide complex surgery or endovascular procedures [24,25,26]. Preoperative images improve surgical success [27]. Moreover, advanced volume rendering provides more realistic representations, transforming medical images into a powerful communication tool with patients (Fig. 8) [28, 29].
However, both the use and the usability of such acquisition volumes remain poor. There are many reasons for the non-use of 3D images, including the lack of fully automated post-processing. Exploitation of 3D volumes is also hindered by the lack of adapted display and command UIs [30]. By displaying 3D images on 2D screens, we lose part of the added information the 3D volume provides [31].
With regard to inputs, touchless interfaces have been demonstrated as one interesting option. A motion (kinetic) sensor placed in front of a screen detects 3D directional movements, allowing the user to manipulate the virtual object with almost natural gestures [14, 32].
For displays, some authors have explored the use of holographic imaging in radiology, especially in the field of orthopedic diagnostic imaging [23, 33, 34]. In 2015, the first holographic medical display received FDA approval; it includes 3D glasses and a stylus for manipulation (Fig. 9).
Another possibility for displaying a 3D volume is augmented reality. The principle is to adjust the displayed images in real time to follow the user's head movements. This can be done using a head-mounted device such as Google Glass, a handheld device such as a smartphone, or a fixed device. Nakata et al. studied the latest developments in 3D medical imaging manipulation and demonstrated improved efficiency of such a UI compared with two-button mouse interaction [35]. Augmented reality and 3D images have also been used in surgical practice for image navigation [36, 37]. Conventional registration requires a specific acquisition, and the process is time-consuming [38]. Sugimoto et al. proposed marker-less surface registration, which may improve the user experience and encourage the use of 3D medical images (Fig. 10) [39]. The recent promotion of the HoloLens (Microsoft, Redmond, WA, USA), a mixed-reality headset with an efficient UI controlled by voice, eye and gesture, may help accelerate radiological applications of augmented reality, especially for surgery (Fig. 10) [40].
Another UI for displaying 3D medical images is virtual reality, a completely immersive experience: the operator wears a headset and the environment is artificially created around them. Some authors have proposed placing a 3D imaging volume inside this environment so that the user can interact with it (Fig. 11).
Outlook for the future
We believe that UX and UI specifically designed for radiology are the key to future use and adoption of new computer interface devices. A recent survey of 336 radiologists revealed that almost one-third were dissatisfied with their computing workflow and setup [41]. In addition to innovative hardware devices, efforts should focus on an efficient software interface; we are mainly concerned with PACS software in this discussion. Indeed, a powerful, specific UI has to meet the radiologist's needs, and those needs are demanding (Fig. 12).
Regarding image manipulation, Digital Imaging and Communications in Medicine (DICOM) viewers are typically built from two blocks: the browser, for study and series selection, and the image viewer, with its manipulation tools. The key elements of the PACS UI architecture are hanging protocols and icons, image manipulation, computer-aided diagnosis and visualization features [42]. The goal of a hanging protocol is to present specific types of studies in a consistent manner and to reduce the number of manual image-ordering adjustments performed by the radiologist [43]. Automated scenarios should be promoted in order to present the maximum information by default at initial presentation [44]. In addition, hanging protocols and icons should be user-friendly, intuitive and customizable. Visualization features can be provided as a stand-alone facility or integrated into the workstation. The software requires expert functionality that goes beyond simple scrolling, magnification and windowing.
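A hanging protocol is essentially declarative: a set of matching rules over study metadata, each paired with a default viewport layout. Below is a minimal sketch with hypothetical rule fields and layout names (real PACS products match on richer DICOM attributes):

```python
# Hypothetical hanging protocols: each rule matches study metadata
# and yields a default viewport layout. Field names and layouts are
# illustrative, not taken from any specific PACS.
PROTOCOLS = [
    {"match": {"modality": "CT", "body_part": "CHEST"},
     "layout": ["axial lung window", "axial mediastinal window", "coronal"]},
    {"match": {"modality": "MR", "body_part": "BRAIN"},
     "layout": ["axial T1", "axial T2", "axial FLAIR", "sagittal T1"]},
]

DEFAULT_LAYOUT = ["single viewport"]


def hang(study, protocols=PROTOCOLS):
    """Return the layout of the first protocol whose rule matches the
    study metadata, giving the radiologist a consistent presentation
    with no manual rearranging; fall back to a generic layout."""
    for protocol in protocols:
        if all(study.get(key) == value
               for key, value in protocol["match"].items()):
            return protocol["layout"]
    return DEFAULT_LAYOUT


print(hang({"modality": "CT", "body_part": "CHEST"}))
print(hang({"modality": "US", "body_part": "ABDOMEN"}))  # falls back
```

Keeping the rules as data rather than code is what makes such protocols customizable per user, one of the usability requirements noted above.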
For diagnostic imaging, in addition to the UI for image manipulation, radiologists need a UI for workflow management that includes medical records and worklists [42]. As teleradiology evolves, the concept of the “SuperPACS” will probably drive the next UI toward an integrated imaging viewer [45]. Indeed, consulting medical information is tedious and labor-intensive when it is not integrated in a single interface and/or computer; the interface should aggregate all the information needed for the reporting task. The same applies to the reporting and scheduling systems. Improving automated voice recognition should enable real-time dictation in which we can fully interact with the images [41].
As explained above, 3D manipulation and display must be promoted for the added value they provide. Even though the technology may be ready for robust use, there is a stubborn delay in radiologist adoption [7]. Radiologists, like any customers, are resistant to change, and radiology-specific UI design will hasten the revolution [14, 30].