AI in Radiology

Context

I worked as a part-time design research assistant at Purkayastha Lab for Health Innovation to uncover the benefits of augmenting humans with AI in clinical settings.

My Role

UX Researcher and Interaction Designer

Project Duration
Dec 2019 - May 2020 (6 months)
Tools Used
Figma, Adobe Photoshop
Result

The study was approved as an experiment on the benefits of augmenting humans with AI in clinical settings. The proposed interaction has been implemented and is now available in the open-source clinical suite LibreHealth.

Introduction

Critical problem

“A 4% error rate across roughly 1 billion radiographic examinations per year leads to 40 million diagnostic errors annually.” (National Library of Medicine)

Radiologists are often overworked as they make their way through the study lists they receive each day, and constant multitasking makes their workdays exhausting. How might AI help radiologists? Can it augment their abilities to produce better results? I worked with a team of experts to explore these possibilities.

Responsibilities

  • Led the end-to-end research and design process, collaborating with teams from various universities to conduct a feasibility analysis and narrow down the scope.
  • Served as a Functional Mentor, along with a technical mentor, in the Google Summer of Code challenge to provide clarifications on Minimum Viable Product (MVP) requirements.
  • Contacted 15 Food and Drug Administration (FDA)-approved medical device organizations and conducted 8 interviews to analyze the standards and validation methods applied to AI-based devices. As a result, a research paper titled 'Current Clinical Applications of AI in Radiology' was published in the Journal of the American College of Radiology (JACR).
Research

Highlights from secondary research

I delved into research articles to understand radiologists' widespread pain points. This step revealed the magnitude of diagnostic error rates and taught me that confronting our mistakes and finding solutions is the best way to offer patients better care. Here are some highlights from the research.

Understanding Integrated Health Enterprise Workflow

I collaborated with the principal investigator of this study to lay out this user flow.

Familiarizing with DICOM viewers

I familiarized myself with the terminology radiologists use by observing interventional radiologists during a typical workday and by reading related research articles. I believe that understanding the domain is essential for asking the right questions and empathizing with the user.

To understand the current tools radiologists use, I took a close look at different open-source DICOM viewers and their existing capabilities for assisting radiologists. Some common features I observed include the following (a small sketch of reading the underlying DICOM metadata follows the list):

  • The export and import options for study lists are useful for transferring studies across hospitals.
  • The UI/presentation layer provides the flexibility to annotate and edit radiographs, allowing adjustments to size, shape, color, and transformation.
  • There are protocols available to customize the layout and appearance of image series in the viewport.
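To ground these observations, here is a minimal sketch of reading the study-level metadata these viewer features build on. It uses the pydicom library, which is my choice for illustration; the file path is hypothetical and not from the project.

```python
import pydicom

# Hypothetical path to a DICOM file exported from a study list.
ds = pydicom.dcmread("sample_study/image_001.dcm")

# Study-level attributes a viewer uses to group images and build layouts.
print(ds.get("PatientID", "?"))           # links the image to a patient record
print(ds.get("Modality", "?"))            # e.g. "CT", "MR", "CR"
print(ds.get("StudyDate", "?"))           # used when ordering study lists
print(ds.get("SeriesDescription", "?"))   # informs hanging-protocol layout choices
```

Attributes such as these are what the export/import and layout-protocol features above read and carry across systems.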
Research

Highlights from user research

Below are my findings from interviews with and observations of radiologists at two hospitals. Because taking pictures in the radiology reading room was prohibited, there are no photos from these sessions.

Insights from Semi-Structured Interview
  • Use DICOM viewers to diagnose study lists (images)
  • Work collaboratively most of the time but seek solitude while dictating
  • Prefer a reference for terminology; currently look up terms in Radiopaedia and DORIS
  • Find time-sensitive emergency studies highly stressful, as they need extra attention
  • Wish to have patient history loaded in priority order
Insights from Observation
  • Work in a high-contrast (darkened) environment
  • Multitask constantly (phone calls with physicians, reading images, taking notes, answering trainees)
  • Switch between multiple screens to complete the diagnostic process
  • Frequently repeat a sentence to the dictation device and end up correcting it manually
  • Focus intently on zooming in to work through series of images

Breakdowns in user flow

I wanted to visualize a typical day in a radiologist's life and identify the pain points. So I chose the sequence model as a design tool, because it illustrates the detailed steps required to accomplish each task crucial to their work, along with the obstacles they encounter.

Synthesis

Putting together all my learnings

From the research insights, I identified five areas where AI could serve as augmented intelligence for radiologists.

  • Augmenting the radiologist's final decision with autodetected abnormalities using AI-ML algorithms
  • Auto-importing and exporting studies with key object annotations
  • Auto-fill in the report template and allow radiologists to bind only key information
  • Filtering study lists by modality and priority for a particular user's expertise/role (see the sketch after this list)
  • Integrating Radiopaedia (an encyclopedia of radiology terminology) as a built-in reference feature in DICOM viewers
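To make the filtering idea concrete, here is a minimal sketch in Python of a priority-ordered worklist. The Study record and its fields are hypothetical illustrations, not DICOM attributes or project code.

```python
from dataclasses import dataclass

# Hypothetical study record; field names are illustrative only.
@dataclass
class Study:
    patient_id: str
    modality: str        # e.g. "CT", "MR", "CR"
    priority: int        # 0 = emergency, higher values = more routine
    has_history: bool = False

def build_worklist(studies, user_modalities):
    """Keep only studies matching the user's modalities, then order by
    priority so time-sensitive emergency studies surface first."""
    relevant = [s for s in studies if s.modality in user_modalities]
    return sorted(relevant, key=lambda s: s.priority)

studies = [
    Study("P1", "CT", priority=2),
    Study("P2", "CR", priority=0),   # emergency chest X-ray
    Study("P3", "MR", priority=1),
]
for s in build_worklist(studies, {"CT", "CR"}):
    print(s.patient_id, s.modality, s.priority)
```

The same ordering could also drive the "patient history loaded in priority order" wish from the interviews.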

As a regular Photoshop user, I have long known that I can vectorize image selections and modify specific areas. I wondered whether a similar interaction could help radiologists accurately refine the bounding box drawn by AI around detected anomalies into a segmentation.

Bounding box - A rectangle overlaid on an image, outlining the object of interest, defined by X and Y coordinates.
Segmentation - Pixel-level delineation of areas of interest in an image.
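To make the distinction concrete, here is a minimal sketch in Python with NumPy, which is my own illustration rather than the project's implementation, of turning an AI-drawn bounding box into a pixel mask that a radiologist could then refine toward a true segmentation.

```python
import numpy as np

def bbox_to_mask(shape, x_min, y_min, x_max, y_max):
    """Convert a bounding box (X/Y corner coordinates) into a boolean
    pixel mask the same size as the radiograph."""
    mask = np.zeros(shape, dtype=bool)
    mask[y_min:y_max, x_min:x_max] = True
    return mask

def refine_mask(mask, corrections):
    """Apply radiologist edits: each correction is (row, col, keep_flag).
    Removing pixels carves the coarse box into a true segmentation."""
    refined = mask.copy()
    for row, col, keep in corrections:
        refined[row, col] = keep
    return refined

# A 512x512 radiograph with an AI-detected anomaly at (120,100)-(200,180).
mask = bbox_to_mask((512, 512), x_min=120, y_min=100, x_max=200, y_max=180)
# The radiologist removes one pixel from the box during review.
mask = refine_mask(mask, [(100, 120, False)])
print(mask.sum())  # number of pixels currently marked as anomalous
```

The bounding box is cheap for the AI to produce; the pixel-level refinement step is where the radiologist's expertise enters.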

Iterations

Exploring interactions through co-design session

During co-design sessions with my design team, I sketched various interactions for AI intervention to enable seamless collaboration among radiologists. In our discussions, we treated AI as another entity capable of learning patterns from radiologists and assisting in identifying abnormalities in radiographs. As a result, we decided to create a persona for AI.

Iteration 2

Based on the current DICOM standards, our radiologists approved Interaction 1 (above). After this session, I defined the interaction flow and sketched out the complete integrated workflow in the Open Health Imaging Foundation (OHIF) Viewer.

Final design

Proposed workflow

Implementation

Mentorship & Development

I collaborated with a technical mentor and developers during Google Summer of Code '20 as a functional mentor. I presented the prototype and discussed the next steps for building the chosen feature. After a few iterations to align with the existing branding and to use available components, a developer built the standalone workflow that summer. You can watch the interaction at the GitLab link here.

Reflection

Next steps

The next step is to integrate the standalone application with a DICOM viewer; I graduated before this project was merged into the Open Health Imaging Foundation (OHIF) Viewer. If I were to measure key performance metrics, I would conduct A/B testing to quantitatively assess AI-human collaborative learning gains, reductions in cognitive load and effort, and improvement in error rates over time.

Key learnings

Leading a design project from scratch in a small cross-functional team taught me both technical and interpersonal skills. Radiology was new territory for me, but with preparation I learned to communicate ideas and functional requirements effectively to both technical experts and non-technical users. Wearing my developer hat allowed me to convey technical terms with confidence. Working within constraints and navigating the complex radiology ecosystem posed challenges, but every domain expert was supportive in helping me understand the technical intricacies.