Like it or not, AI has become a part of daily life, from scheduling our mornings to helping doctors read cancer scans. But new research shows these powerful tools may be picking up more than medical clues.
Some models can infer a patient’s race or other demographic traits from tissue slides alone, even when that information isn’t provided, introducing hidden biases that can lead to unequal performance for certain patient groups. Scientists now believe the problem lies in how these systems learn, not just in what’s missing from their data. The good news is that researchers have also found promising ways to curb these disparities, offering a clearer path toward safer medical AI in the years ahead.
Intrigued? Dive into this gallery to see what scientists found and why it matters for all of us.