LibreHealth r-ADAI will assess the gains of AI and radiologist co-learning. The results of this research will demonstrate the benefits of a symbiotic relationship between AI and human radiologists by focusing on three critical key performance indicators (KPIs):
Reduced error rate due to fatigue & cognitive load
Improved turnaround time while reading images
Reduced effort & improved communication with the medical team
UX Researcher, Design Lead, Functional Mentor & Research Assistant
Contacted 15 Food and Drug Administration (FDA)-approved medical device organizations and conducted 8 interviews to analyze the standards and validation methods applied to those AI-based devices. As a result, a research paper, "Current Clinical Applications of AI in Radiology," was published in the Journal of the American College of Radiology (JACR).
Conducted user research, ideated solutions and handed off the design to developers. I will walk you through my contributions throughout the design process below. Let's get started with the story!
Highlights from User Research
I familiarized myself with the radiology department's terminology by observing an interventional radiologist's typical workday and reading related research articles. I felt that understanding the domain was a must for asking the right questions and empathizing with the user.
Taking pictures in the radiology reading room was prohibited, so the findings below are based on my interviews with radiologists and my observations. I conducted research with the following two hospitals.
Insights from Semi-Structured Interview
Uses DICOM viewers to diagnose the study list (images)
Works collaboratively most of the time but seeks solitude while dictating
Prefers to have a reference for terminology; currently uses Radiopaedia and DORIS to look up terms
Time-sensitive emergency studies need extra attention and are highly stressful
Wishes to have patient history loaded in priority order
Switches between multiple screens to complete the diagnosing process
Frequently repeats a sentence to the dictation device and ends up correcting it manually
Zooms in intently to go through a series of images
Visualizing the breakdowns via Sequence Model
Empathizing with the user
I jotted down my learnings from the user research in an empathy map and pictured the thoughts of a fictional radiologist, Judy, using a persona.
To organize the insights and lay a foundation for exploring ideas, I did an affinity mapping exercise and identified radiologists' pain points.
Highlights from Secondary Research
I dove into research articles to understand the widespread pain points of radiologists. This step uncovered the magnitude of the diagnostic error rate. I learned that confronting our mistakes and finding solutions could give patients better care.
A 4% error rate per year across 1 billion radiographic examinations leads to 40 million diagnostic errors.
Understanding IHE (Integrated Healthcare Enterprise) Workflow
Familiarizing with DICOM Viewers
To understand the tools radiologists currently use, I took a close look at different open-source DICOM viewers and their existing capabilities for assisting radiologists. Common features I observed:
Export and import options for study lists (useful for transferring studies across hospitals)
Flexibility to annotate and edit radiographs in the UI / presentation layer (Size, shape, color, transform)
Protocols to customize the layout / appearance of image series in the view port.
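Underlying all of these viewers is the DICOM Part 10 file format, which is easy to recognize on disk: a file begins with a 128-byte preamble followed by the magic bytes `DICM`. As a minimal sketch (my own illustration, not code from any of the viewers I reviewed):

```python
def is_dicom_part10(data: bytes) -> bool:
    """Check for the DICOM Part 10 signature: a 128-byte
    preamble followed by the magic bytes b"DICM"."""
    return len(data) >= 132 and data[128:132] == b"DICM"

# A minimal stand-in for real file contents: 128 zero bytes + "DICM".
sample = bytes(128) + b"DICM"
print(is_dicom_part10(sample))        # True
print(is_dicom_part10(b"not dicom"))  # False
```

In practice a library such as pydicom performs this check (and full parsing) for you; the point is that study-list import/export across hospitals works because every vendor agrees on this one on-disk format.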
For the early research, the solution should
Be integrated and consistent with the existing OHIF (Open Health Imaging Foundation) viewer and PACS (Picture Archiving and Communication System)
Focus on improving key performance metrics (error rate, effort, time)
From the research insights, I identified five areas where AI could act as augmented intelligence for radiologists.
Augmenting radiologist's final decision with autodetected abnormalities by AI-ML algorithms
Auto importing and exporting studies with key object annotations
Auto-filling the report template, letting radiologists add only the key information
Filtering based on modality and priority for a particular user expertise / role
Integration of Radiopaedia (an encyclopedia of radiology terminology) as a built-in documentation feature in DICOM viewers
Gaining Different Perspectives
I wanted to seek feedback from experts in each field. We gathered for a meeting with an interventional radiologist, an informatics expert, and ML engineers from our team, and presented the ideas to them. They welcomed the ideas supported by the research, and the engineers assessed each idea's feasibility. Based on this internal discussion, we chose to proceed with Idea 1 and decided to add the other ideas in the future. With this scope, we posted the idea on the LibreHealth Google Summer of Code forum and recruited open-source student developers internationally to work on the project.
Pre-visualizing the idea in scenarios
Exploring interactions through Co-design Session
Through a co-design session with fellow designers, I sketched different interactions for AI intervention that enable seamless collaboration with radiologists. In our discussion, we considered AI as another person who could learn patterns from radiologists and help identify abnormalities in radiographs, so we decided to create a persona for the AI.
Based on visibility and current DICOM standards, our radiologists approved Interaction 1. After this session, I defined the interaction flow and sketched how the idea would work in the OHIF Viewer.
Several AI models could identify abnormalities well. The option to choose an AI model based on modality (CT, MRI) will be available in the top action bar. A radiology technician can set this option as the default based on the facility and the type of diagnosis.
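The selection logic described above could be sketched as a small registry keyed by modality, with a per-facility default. All model names, function names, and the registry structure here are hypothetical illustrations, not the actual r-ADAI implementation:

```python
# Hypothetical registry of AI models available per modality.
MODEL_REGISTRY = {
    "CT": ["ct-lung-nodule-v1", "ct-fracture-v2"],
    "MRI": ["mri-brain-lesion-v1"],
}

facility_defaults = {}  # set once by a radiology technician

def set_default_model(facility: str, modality: str, model: str) -> None:
    """Record the technician-chosen default for a facility/modality pair."""
    if model not in MODEL_REGISTRY.get(modality, []):
        raise ValueError(f"{model} is not registered for {modality}")
    facility_defaults[(facility, modality)] = model

def pick_model(facility: str, modality: str) -> str:
    """Return the facility default if set, else the first registered model."""
    default = facility_defaults.get((facility, modality))
    return default or MODEL_REGISTRY[modality][0]
```

For example, after `set_default_model("General Hospital", "CT", "ct-fracture-v2")`, the viewer's top action bar would preselect that model for CT studies at that facility, while still letting the radiologist switch models per study.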
Auto Bounding Box
The AI model draws a bounding box to highlight abnormalities in the radiograph and measures them. Boundary-editing options let the radiologist make the final decision and further refine the AI's bounding box.
The radiologist can edit the boundary to update the abnormality measurements before saving and documenting.
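The measure-then-edit loop above can be sketched with a simple axis-aligned box whose physical size is derived from the DICOM PixelSpacing attribute (row spacing, column spacing, in mm per pixel). The class and field names are my own illustration, not the r-ADAI data model:

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """Axis-aligned box in pixel coordinates, as the AI might propose."""
    x0: float
    y0: float
    x1: float
    y1: float

    def measure_mm(self, pixel_spacing):
        """Physical (width, height) in mm. pixel_spacing is
        (row spacing, column spacing), as in DICOM PixelSpacing."""
        row_mm, col_mm = pixel_spacing
        width = abs(self.x1 - self.x0) * col_mm   # columns -> horizontal
        height = abs(self.y1 - self.y0) * row_mm  # rows -> vertical
        return width, height

# AI proposes a box; the radiologist tightens one edge before saving.
box = BoundingBox(100, 100, 120, 140)
print(box.measure_mm((0.5, 0.5)))  # (10.0, 20.0)
box.x1 = 118  # radiologist edit; measurements update on re-read
print(box.measure_mm((0.5, 0.5)))  # (9.0, 20.0)
```

Deriving measurements from the box on every read, rather than storing them, is what lets a radiologist's edit automatically update the documented abnormality size.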
Developer Handoff & Mentoring
I collaborated with a technical mentor and developers as a functional mentor at Google Summer of Code '20. I presented the prototype and the next steps to build the chosen feature. Below is the standalone workflow built by a developer last summer. More info can be found in the GitHub link here.
A/B testing, accompanied by a questionnaire, to measure AI-human assemblage learning gains, reduction in cognitive load and effort, and improvement in error rate over time.
Iterate to address feedback from user evaluation
Leading a design project from scratch in a small cross-functional team taught me both technical and interpersonal skills. Radiology was new to me, but I learned to communicate ideas and functional requirements to both technical experts and non-technical users by preparing well in advance. My developer hat helped me communicate technical terms with confidence. Though working within constraints and making sense of the existing ecosystem seemed challenging, every field expert was supportive in helping me understand the technical intricacies. This helped me translate ideas into a working product.