"Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less."
-Marie Curie (1867-1934)
The default position on any new medical discovery cannot be that it will inevitably be used for sinister purposes. This anti-science viewpoint stifles innovation and progress for humanity, stunting potential breakthroughs in medicine. The tools available to researchers and health professionals are just that: tools. How they are used depends on who wields them. It is not hard to imagine that the first Neanderthals to encounter fire were dubious about using it, once one of them tried to touch it.
Recently it was found that Artificial Intelligence (AI) used to screen medical images can detect the patient's racial identity with high accuracy, without any demographic information supplied as input. The ability covered a broad range of imaging modalities, from CT scans to chest X-rays, and persisted across all anatomical regions. The finding was replicated even with images that were cropped, corrupted, or degraded in quality. More surprising still, human experts trained to analyze these scans cannot determine the patient's race, yet AI models trained with deep learning developed this ability on their own.
These models use 'deep learning', a machine learning approach that imitates aspects of the human learning process by building up layers of statistical abstraction from data.
A good analogy describes the process:
“To understand deep learning, imagine a toddler whose first word is dog. The toddler learns what a dog is -- and is not -- by pointing to objects and saying the word dog. The parent says, "Yes, that is a dog," or, "No, that is not a dog." As the toddler continues to point to objects, he becomes more aware of the features that all dogs possess. What the toddler does, without knowing it, is clarify a complex abstraction -- the concept of dog -- by building a hierarchy in which each level of abstraction is created with knowledge that was gained from the preceding layer of the hierarchy.”
In this fashion, deep learning differs from traditional algorithms, whose logic is dictated step by step by the programmer. A deep learning model independently sifts through massive data sets, building complex statistical models from large amounts of unstructured data. This represents a comparatively new frontier in AI: before the 21st century, the big data sets and cloud computing power it requires were out of reach for most programmers.
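The toddler analogy can be made concrete with a toy example. The sketch below is purely illustrative and has nothing to do with the medical imaging study: it shows how a two-layer network represents a concept (XOR, "one or the other but not both") that no single linear rule can capture, by first detecting simple features and then combining them. In a real deep learning system these weights are learned from data rather than written by hand; here they are fixed so the layered structure is easy to see.

```python
# Illustrative sketch of layered abstraction in a neural network.
# Hidden layer detects two simple patterns (OR, AND); the output layer
# combines them into the higher-level concept XOR. Real deep learning
# LEARNS such weights from data; these are hand-set for clarity.

def step(x):
    """Threshold activation: the unit 'fires' (1) when its input exceeds 0."""
    return 1 if x > 0 else 0

def forward(x1, x2):
    # Hidden layer: two feature detectors built from the raw inputs.
    h_or  = step(1.0 * x1 + 1.0 * x2 - 0.5)   # fires if either input is on
    h_and = step(1.0 * x1 + 1.0 * x2 - 1.5)   # fires only if both are on
    # Output layer: combine hidden features into a new concept.
    return step(1.0 * h_or - 1.0 * h_and - 0.5)  # "OR but not AND" = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", forward(a, b))
```

Each layer only sees the outputs of the layer below it, which is the same hierarchy-of-abstractions idea in the toddler analogy: simple cues first, complex concepts built on top of them.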
Not everyone in the medical field sees potential in this capability. Coverage from MIT News emphasizes the dangers of the models' ability to detect race, while leaving room for careful use:
“The fact that algorithms 'see' race, as the authors convincingly document, can be dangerous. But an important and related fact is that, when used carefully, algorithms can also work to counter bias,” says Ziad Obermeyer, associate professor at the University of California at Berkeley, whose research focuses on AI applied to health.
This understandable caution about bias in AI is the prevailing theme across a significant part of the medical community. Numerous papers and articles sound the alarm over potential abuses of AI racial recognition. Yet not one study or opinion piece has explored the potential uses of AI's ability to discern race in medical imaging. Remember: no human expert can do this, regardless of experience or training. What if this capability could unlock new, highly effective therapies tailored to different populations depending on the disease? Perhaps it could reveal distinct correlations that allow earlier detection of cancer and other diseases. AI applying multivariate analysis could give every patient an edge in their treatment plan. An entirely new realm of possibilities could open up for science, but only if the courage is there to look for it.
This capability may prove to be of no value in medicine, and it may even be a feature that needs to be eliminated entirely. Yet to dismiss out of hand AI's ability to detect racial identity in medical imaging, without more research into its potential uses, would be folly and a discredit to the goal of furthering both medical science and artificial intelligence.
Links:
AI recognition of patient race in medical imaging: a modelling study
https://www.thelancet.com/journals/landig/article/PIIS2589-7500(22)00063-2/fulltext
Artificial intelligence predicts patients’ race from their medical images
https://news.mit.edu/2022/artificial-intelligence-predicts-patients-race-from-medical-images-0520
AI systems can detect patient race, creating new opportunities to perpetuate health disparities
https://news.emory.edu/stories/2022/05/hs_ai_systems_detect_patient_race_27-05-2022/story.html
Risks of AI Race Detection in the Medical System
https://hai.stanford.edu/policy-brief-risks-ai-race-detection-medical-system
AI Can Detect Race When Clinicians Cannot, Increasing Risk of Bias
https://healthitanalytics.com/news/ai-can-detect-race-when-clinicians-cannot-increasing-risk-of-bias
AI programs can tell race from X-rays, but scientists don’t know how. Here’s why that’s bad.
Hidden in Plain Sight: If AI Can Detect Race, What About Bias?
https://www.medscape.com/viewarticle/977619
Deep Learning
https://www.techtarget.com/searchenterpriseai/definition/deep-learning-deep-neural-network