AI in Radiology


I worked as a part-time design research assistant at the Purkayastha Lab for Health Innovation to uncover the benefits of augmenting humans with AI in clinical settings.

My Role

UX Researcher and Interaction Designer

Project Duration
Dec 2019 - May 2020 (6 months)
Tools Used
Figma, Adobe Photoshop

The study was approved to test the benefits of augmenting humans with AI in clinical settings. The proposed interaction has been implemented and is currently available in LibreHealth, an open-source clinical suite.


Critical problem

“A 4% error rate per year across 1 billion radiographic examinations leads to 40 million diagnostic errors.” — National Library of Medicine

Radiologists often overwork to get through the study lists they receive. Due to constant multi-tasking, their workdays are tiring. How might AI help radiologists? Can humans and AI augment each other to produce better results? I worked with a team of experts to explore the possibilities.


  • Led the end-to-end research and design process. Collaborated with teams from different universities to conduct a feasibility analysis and narrow down the scope.
  • Served as a Functional Mentor, alongside a technical mentor, in the Google Summer of Code challenge, providing clarifications around the requirements of the Minimum Viable Product (MVP).
  • Contacted 15 Food and Drug Administration (FDA)-approved medical device organizations and conducted 8 interviews to analyze the standards and validation methods applied to those AI-based devices. As a result, a research paper, "Current Clinical Applications of AI in Radiology," was published in the Journal of the American College of Radiology (JACR).
Research Paper

Highlights from secondary research

I dove deep into research articles to understand radiologists' widespread pain points. This step uncovered the magnitude of diagnostic error rates and taught me that confronting our mistakes and finding solutions is the best way to equip patients with better care. Some highlights from the research:

Understanding Integrated Health Enterprise Workflow

I collaborated with the principal investigator of this study to lay out this user flow.

Familiarizing with DICOM viewers

I familiarized myself with the terminology radiologists use by observing interventional radiologists' typical workdays and reading related research articles. I felt that understanding the domain was essential for asking the right questions and empathizing with the user.

To understand the tools radiologists currently use, I took a close look at different open-source DICOM viewers and their existing capabilities for assisting radiologists. Common features I observed:

  • Export and import options for study lists (useful for transferring studies across hospitals)
  • Flexibility to annotate and edit radiographs in the UI / presentation layer (size, shape, color, transform)
  • Protocols to customize the layout / appearance of image series in the viewport

Highlights from user research

Below are my findings from interviews with radiologists and in-person observation. (Taking pictures in the radiology reading room was prohibited.) The two hospitals I researched with are shown below.

Insights from Semi-Structured Interview
  • Uses DICOM viewers to diagnose the study list (images)
  • Works collaboratively most of the time but seeks solitude while dictating
  • Prefers to have a reference for terminology; currently uses Radiopaedia and DORIS to look up terms
  • Time-sensitive emergency studies need extra attention and are highly stressful
  • Wishes to have patient history loaded in priority order
Insights from Observation
  • Works in a high-contrast environment
  • Multi-tasks constantly (phone calls with physicians, reading images, taking notes, answering trainees)
  • Switches between multiple screens to complete the diagnostic process
  • Frequently repeats a sentence to the dictation device and ends up correcting it manually
  • Keeps a keen focus while zooming in to go through a series of images

Breakdowns in user flow

I wanted to visualize a typical day in a radiologist's life and identify the pain points. I chose the sequence model because it shows the detailed steps performed to accomplish each task important to the work, and the problems getting in the way.


Putting together all my learnings

From the research insights, I identified five areas where AI could serve as augmented intelligence for radiologists.

  • Augmenting radiologists' final decisions with abnormalities auto-detected by AI/ML algorithms
  • Auto-importing and exporting studies with key object annotations
  • Auto-filling the report template, allowing radiologists to bind only key information
  • Filtering based on modality and priority for a particular user expertise / role
  • Integrating Radiopaedia (an encyclopedia of radiology terminology) as a built-in documentation feature in DICOM viewers
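Of these areas, filtering a worklist by modality and priority is the easiest to make concrete. The sketch below is purely illustrative: the study fields and priority ordering are my own assumptions, not the actual LibreHealth or DICOM worklist schema.

```python
from dataclasses import dataclass

# Hypothetical study record; field names are illustrative,
# not the real LibreHealth / DICOM data model.
@dataclass
class Study:
    patient_id: str
    modality: str   # e.g. "CT", "MR", "CR"
    priority: int   # lower number = more urgent (0 = emergency)

def filter_worklist(studies, modality=None):
    """Return studies for one modality, most urgent first."""
    matched = [s for s in studies if modality is None or s.modality == modality]
    return sorted(matched, key=lambda s: s.priority)

worklist = [
    Study("P1", "CT", 2),
    Study("P2", "CT", 0),   # time-sensitive emergency CT
    Study("P3", "MR", 1),
]
ct_first = filter_worklist(worklist, modality="CT")
print([s.patient_id for s in ct_first])  # emergency study surfaces first
```

A role-aware version would simply swap the `modality` filter for the expertise mapping of the logged-in radiologist.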

As a regular Photoshop user, I had always had in mind that I could vectorize image selections and adjust the selected area. I imagined this interaction could help radiologists precisely segment the region inside the bounding box drawn by AI over a detected anomaly.

Bounding box - A rectangle over an image, outlining the object of interest, defined by X & Y coordinates
Segmentation - Delineation of areas of interest in an image at the pixel level
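To make the relationship between the two concrete, here is a small sketch (using NumPy purely as an illustration) that derives the tightest bounding box from a pixel-level segmentation mask; moving between these two representations is what the proposed interaction relies on:

```python
import numpy as np

def mask_to_bbox(mask):
    """Given a boolean segmentation mask, return the tightest
    bounding box as (x_min, y_min, x_max, y_max)."""
    ys, xs = np.nonzero(mask)          # pixel coordinates of the segmented region
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy 6x6 "radiograph" mask with an anomaly marked at pixel level
mask = np.zeros((6, 6), dtype=bool)
mask[2:5, 1:4] = True                  # segmented region: rows 2-4, cols 1-3
print(mask_to_bbox(mask))              # (1, 2, 3, 4)
```

Going the other way, from an AI-drawn box down to exact pixels, is the manual refinement step the interaction hands to the radiologist.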


Exploring interactions through co-design session

Through a co-design session with my design squad, I sketched different interactions for AI intervention that enable seamless collaboration with radiologists. In our discussion, we treated AI as another person who could learn patterns from radiologists and help identify abnormalities in radiographs, so we decided to create a persona for the AI.

Iteration 2

Based on visibility and current DICOM standards, our radiologists approved Interaction 1. After this session, I defined the interaction flow and sketched how the idea would work in the OHIF Viewer.

Final design

Workflow I proposed



I collaborated with a technical mentor and developers at Google Summer of Code '20 as a functional mentor. I presented the prototype and the next steps for building the chosen feature. After a few iterations to match the existing branding and use available components, the standalone workflow was built by a developer last summer. Watch the interaction at the GitLab link here.


Next steps

The next step is to integrate the standalone application with the DICOM viewer. I graduated before this project was integrated with the Open Health Imaging Foundation (OHIF) Viewer. If I were to measure key performance metrics, I would conduct A/B testing to quantitatively measure AI-human assemblage learning gains, reduction in cognitive load and effort, and improvement in error rate over time.
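As a sketch of what that A/B analysis could look like, a two-proportion z-test could compare diagnostic error rates between an unassisted arm and an AI-assisted arm. The error counts below are made up for illustration; any real study would need proper power analysis and IRB review.

```python
from math import sqrt, erf

def two_proportion_z(err_a, n_a, err_b, n_b):
    """Two-sided z-test comparing two error proportions.
    Returns (z statistic, p-value via the normal approximation)."""
    p_a, p_b = err_a / n_a, err_b / n_b
    pooled = (err_a + err_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up numbers: 40 errors in 1000 unassisted reads
# vs. 22 errors in 1000 AI-assisted reads
z, p = two_proportion_z(40, 1000, 22, 1000)
print(round(z, 2), round(p, 4))  # z ≈ 2.32, p ≈ 0.02
```

With these illustrative counts the difference would be statistically significant at the 5% level, which is the kind of evidence the integrated viewer would need to justify the AI-assisted workflow.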

Key learnings

Leading a design project from scratch in a small cross-functional team taught me both technical and interpersonal skills. Radiology was new to me, but I learned to communicate ideas and functional requirements to both technical experts and non-technical users by preparing well in advance. My developer background helped me communicate technical terms with confidence. Though working within constraints and making sense of the existing radiology ecosystem seemed challenging, every field expert was supportive in helping me understand the technical intricacies.