A research study to uncover the benefits of augmenting humans with AI in clinical settings


Design Research Assistant
Purkayastha Lab for Health Innovation


Dec 2019 - May 2021
(6 months)

Tools Used

Figma, Adobe Illustrator, Adobe Photoshop, Whiteboard


Reduced effort & cognitive load for radiologists
Implemented and available in an open-source clinical suite
Simplified workflow procedures


My Role

Project Brief

r-ADAI will assess the gains of AI–radiologist co-learning. The results from this research will demonstrate the benefits of a symbiotic relationship between AI and human radiologists, focusing on critical key performance indicators (KPIs).

How Might We understand and confront our mistakes?

A 4% annual error rate across 1 billion radiographic examinations leads to 40 million diagnostic errors per year. Link

Constraints that I had

The solution had to fit within existing DICOM viewers (the tools radiologists use to study images). A simple metaphor: developers use IDEs to write code, designers use pencils to sketch, radiologists use DICOM viewers to study images.

Terminologies that might help you

To ask the right questions of radiologists and conceptualize designs, I spent a good amount of time learning these terms. They might help you follow the case study as well.
Bounding box – A rectangle over an image outlining the object of interest, defined by X & Y coordinates
Segmentation – Delineation of areas of interest in imaging, in terms of pixels or voxels
CheXpert X-ray data set – The largest chest radiograph data set (used in this study to test accuracy)
Study list – The list of patients' X-rays available for study / assigned to a radiologist
Object detection – Detecting and classifying a region of an image, with a probability for a given condition
Template binding – Adding data/input to an existing template (forms)
PACS – Picture Archiving and Communication System (an economical storage system for medical images)
OHIF Viewer – An open-source, web-based medical image viewer (also known as a DICOM viewer), plus basic machine-learning terminology
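For readers from a software background, the first and fifth terms can be sketched as a small data structure. This is only an illustration; the class and field names below are hypothetical and not part of the actual OHIF or LibreHealth code.

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """Rectangle over an image: top-left corner (x, y) plus width and height."""
    x: int
    y: int
    width: int
    height: int

    def area(self) -> int:
        # Pixel area of the outlined region of interest
        return self.width * self.height

@dataclass
class Detection:
    """One AI finding: a condition label, its probability, and where it was found."""
    label: str          # e.g. "pneumothorax" (illustrative)
    probability: float  # model confidence for the condition, 0.0-1.0
    box: BoundingBox

# A hypothetical finding on a chest radiograph
finding = Detection("pneumothorax", 0.87, BoundingBox(x=120, y=80, width=64, height=48))
```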

What I came up with

How does this solution help radiologists

Radiologists often overwork to get through the study lists they receive, and the constant multitasking makes their workdays tiring. Having an AI coordinate with radiologists and learn the patterns they use to detect and annotate images removes several steps of scrolling through series to spot abnormalities. In the long run, the AI learns from the radiologists and can support their decision-making.

How did I arrive at this interaction?

Highlights from User Research

I familiarized myself with the radiology department's terminology by observing an interventional radiologist's typical workday and reading related research articles. Understanding the domain is a must for asking the right questions and empathizing with the user.

Taking pictures in the radiology reading room was prohibited. The findings below are based on interviews with radiologists and observation at the two hospitals I researched with.

Insights from Semi-Structured Interview
  • Uses DICOM viewers to diagnose the study list (images)
  • Works collaboratively most of the time but seeks solitude while dictating
  • Prefers to have a reference for terminology; currently looks up terms on Radiopaedia and DORIS
  • Time-sensitive emergency studies need extra attention and are highly stressful
  • Wishes patient history were loaded in priority order
Insights from Observation
  • Works in a high-contrast environment
  • Multitasks (phone calls with physicians, reading images, taking notes, answering trainees)
  • Switches between multiple screens to complete the diagnostic process
  • Frequently repeats a sentence to the dictation device and ends up correcting it manually
  • Focuses keenly on zooming in to go through series of images

Visualizing the breakdowns via Sequence Model

Empathizing with the User

I jotted down my learnings from the user research in an empathy map and pictured the thoughts of a fictional radiologist, Judy, using a persona.

Mapping the Pain Points

To organize the insights and lay a foundation for exploring ideas, I ran an affinity-mapping exercise and identified the radiologists' pain points.

Highlights from Secondary Research

I dove deep into research articles to understand radiologists' widespread pain points. This step uncovered the magnitude of the diagnostic error rate. I learned that confronting our mistakes and finding solutions could give patients better care.

Understanding IHE (Integrated Healthcare Enterprise) Workflow

Familiarizing with DICOM Viewers

To understand the tools radiologists currently use, I took a close look at different open-source DICOM viewers and their existing capabilities for assisting radiologists. Common features I observed:
  • Export and import options for study lists (useful for transferring studies across hospitals)
  • Flexibility to annotate and edit radiographs in the UI / presentation layer (size, shape, color, transform)
  • Protocols to customize the layout / appearance of image series in the viewport
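As a rough sketch of the first feature, annotations attached to a study can round-trip through a serialized form for transfer between systems. The field names below are hypothetical, not the actual DICOM key-object format.

```python
import json

# Hypothetical study-list entry with one annotation, in the spirit of the
# export/import feature the surveyed viewers share. Field names are illustrative.
study = {
    "study_id": "CXR-0001",
    "modality": "CR",
    "annotations": [
        {"type": "bounding_box", "x": 120, "y": 80,
         "width": 64, "height": 48, "label": "area of interest"},
    ],
}

exported = json.dumps(study)     # serialize for transfer across hospitals
restored = json.loads(exported)  # re-import on the receiving system
```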

Different Ideas I came up with

From the research insights, I identified five areas where AI could act as augmented intelligence for radiologists.
  • Augmenting radiologists' final decisions with abnormalities auto-detected by AI/ML algorithms
  • Auto-importing and exporting studies with key object annotations
  • Auto-filling the report template, so radiologists bind only the key information
  • Filtering by modality and priority for a particular user's expertise / role
  • Integrating Radiopaedia (an encyclopedia of radiology terminology) as a built-in documentation feature in DICOM viewers
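The third idea, template binding, can be sketched with a plain string template: the AI pre-fills detected values, and the radiologist confirms or edits only the bound fields. The report wording and values here are hypothetical.

```python
from string import Template

# Hypothetical report template: the AI pre-fills detected values, and the
# radiologist binds (confirms or edits) only the key fields.
report_template = Template(
    "Findings: $finding in the $region ($confidence% model confidence)."
)

draft = report_template.substitute(
    finding="pneumothorax", region="right upper lobe", confidence=87
)
# The radiologist then reviews and signs off on the drafted sentence.
```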

Gaining Different Perspectives

I wanted feedback from experts in each field, so we gathered an interventional radiologist, an informatics expert, and the ML engineers from our team and presented the ideas to them. They welcomed the ideas supported by the research, and the engineers assessed their feasibility. Based on this internal discussion, we chose to proceed with Idea 1 and deferred the other ideas to future work. With this scope, we posted the idea on the LibreHealth Google Summer of Code forum and recruited open-source student developers internationally to work on the project.

Pre-visualizing the selected idea in a scenario

Exploring interactions through Co-design Session

In a co-design session with fellow designers, I sketched different interactions for an AI intervention that enables seamless collaboration with radiologists. In our discussion, we treated the AI as another person who could learn patterns from radiologists and help identify abnormalities in radiographs, so we decided to create a persona for the AI.
Iteration 2
Based on visibility and current DICOM standards, our radiologists approved Interaction 1. After this session, I defined the interaction flow and sketched how the idea would work in the OHIF Viewer.
Iteration 3 - Final Wireframes

Developer Handoff & Mentoring

I collaborated with a technical mentor and developers at Google Summer of Code '20 as a functional mentor. I presented the prototype and the next steps to build the chosen feature. Below is the standalone workflow built by a developer last summer. More info can be found in the GitHub link here.

Next Steps & Ongoing Research


Leading a design project from scratch in a small cross-functional team taught me both technical and interpersonal skills. Radiology was new to me, but by preparing well in advance I learned to communicate ideas and functional requirements to both technical experts and non-technical users. My developer hat helped me communicate technical terms with confidence. Though working within constraints and making sense of the existing ecosystem seemed challenging, every field expert was supportive in helping me understand the technical intricacies, which helped me translate ideas into a working product.