Design Research Assistant
Purkayastha Lab for Health Innovation
Dec 2019 - May 2021
(6 months)
Figma, Adobe Illustrator, Adobe Photoshop, Whiteboard
Reduced effort & cognitive load for radiologists
Implemented and released in an open-source clinical suite
Simplified workflow procedures
r-ADAI will assess the gains of AI and radiologist co-learning. The results from this research will demonstrate the benefits of a symbiotic relationship between AI and human radiologists by focusing on critical key performance indicators (KPIs).
How might we understand and confront our mistakes?
A 4% error rate per year across 1 billion radiographic examinations leads to roughly 40 million diagnostic errors annually. Link
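The scale of the problem follows from a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope estimate of annual diagnostic errors
exams_per_year = 1_000_000_000  # radiographic examinations per year
error_rate = 0.04               # 4% error rate

errors = int(exams_per_year * error_rate)
print(f"{errors:,} diagnostic errors per year")  # 40,000,000 diagnostic errors per year
```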
The solution should live within existing DICOM viewers (the software radiologists use to study images). A simple metaphor: developers use IDEs to write code, designers use pencils to sketch, and radiologists use DICOM viewers to study images.
To ask radiologists the right questions and conceptualize designs, I spent considerable time learning the terminology below. It may help you follow the case study as well.
Bounding box – Rectangle drawn over an image, outlining the object of interest, defined by X and Y coordinates
Segmentation – Delineation of areas of interest in an image, in terms of pixels or voxels
CheXpert X-ray dataset – The largest chest radiograph dataset (used in this study to test accuracy)
Study list – List of patients' X-rays available for study / assigned to radiologists
Object detection – Detecting and classifying an object in an image, with a probability score for a given condition
Template binding – Adding data/input to an existing template (forms)
PACS – Picture Archiving and Communication System (an economical storage system for medical images)
OHIF Viewer – Open-source, web-based medical image viewer (a DICOM viewer), plus basic machine learning terminology
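To make the bounding-box and object-detection terms concrete, here is a minimal sketch of how a single AI prediction on an X-ray might be represented. The field names and example values are illustrative only, not the actual r-ADAI or OHIF schema:

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """Rectangle over an image, defined by a top-left corner (x, y) plus width and height."""
    x: int
    y: int
    width: int
    height: int

@dataclass
class Detection:
    """One object-detection result: a bounding box, a classified condition, and its probability."""
    box: BoundingBox
    label: str          # e.g. a condition such as "Pneumothorax" (illustrative)
    probability: float  # model confidence in [0, 1]

# Hypothetical prediction on a chest X-ray
pred = Detection(BoundingBox(x=120, y=80, width=64, height=48),
                 label="Pneumothorax", probability=0.91)
print(f"{pred.label}: {pred.probability:.0%} at ({pred.box.x}, {pred.box.y})")
```

A segmentation, by contrast, would store a per-pixel (or per-voxel) mask rather than a single rectangle.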
Radiologists often overwork to get through the study lists they receive, and constant multitasking makes their workdays tiring. An AI that learns the patterns radiologists use to detect and annotate images can eliminate several steps of scrolling through series to spot abnormalities. In the long run, the AI would learn from radiologists and support their decision making.
I familiarized myself with the radiology department's terminology by shadowing an interventional radiologist's typical workday and reading related research articles. Understanding the domain felt essential for asking the right questions and empathizing with the user.
Taking pictures in the radiology reading room was prohibited. The findings below are based on interviews with radiologists and direct observation at the two hospitals I researched with.
I wanted feedback from experts in each field, so we gathered a meeting with an interventional radiologist, an informatics expert, and the ML engineers from our team, and presented our ideas. They welcomed the ideas supported by the research, and the engineers assessed their feasibility. Based on this internal discussion, we chose to proceed with Idea 1 and deferred the other ideas for the future. With this scope, we posted the project on the LibreHealth Google Summer of Code forum and recruited open-source student developers internationally to work on it.
Leading a design project from scratch in a small cross-functional team taught me both technical and interpersonal skills. Radiology was new to me, but by preparing well in advance I learned to communicate ideas and functional requirements to both technical experts and non-technical users. My developer background helped me discuss technical terms with confidence. Although working within constraints and making sense of the existing ecosystem was challenging, every domain expert was supportive in helping me understand the technical intricacies, which helped me translate ideas into a working product.