
Unpacking the black box of AI
Would you trust AI with your cancer diagnosis? A USask graduate is making sure you can.
By Chris Putnam
Dr. Sakib Mostafa (PhD) went into artificial intelligence research because he’s fascinated by AI—and because he’s afraid of it.
As a child in Bangladesh, Mostafa was both captivated and disturbed by the depictions of technology in films like The Terminator and the writings of Jules Verne.
“I’m the kind of person who really likes to face the fear rather than running away from it. So throughout my whole life, if there was something that bothered me, or if there was something that I was afraid of, rather than staying away from it I preferred to solve that problem,” Mostafa (MSc’20, PhD’24) said.
The USask College of Arts and Science and College of Engineering graduate recently began a prestigious post-doctoral fellowship at Stanford University, where he is building AI models to detect cancer. The work grew out of his graduate studies at USask, which focused on the problem of explainable AI.
With today’s AI tools, the most likely danger isn’t the rise of killer robots, but the rise of systems that are ineffective or untrustworthy because they can’t be understood—even by their creators. Such a system is called a black box. Most deep learning-based AI systems, such as ChatGPT, are black boxes.
“Once you have a result (from an AI model), if you try to go backwards to figure out how the input data was used to get that result, it’s just not possible. Because once that data goes through the AI model, it is broken down into so many pieces that it’s not possible to keep track of the data flow,” Mostafa said.
Mostafa recalls a project from his USask computer science PhD studies, supervised by Dr. Debajyoti Mondal (PhD), involving an AI model that processed photos of plant leaves to classify diseases. The team assumed the system was analyzing many aspects of the images to inform its outputs, but after a painstaking study of the model, they discovered it was ignoring everything but the leaves' edges.
It was a blunt reminder that being accurate isn’t the only thing that matters for an AI system. If we are to trust AI, we also need to understand how and why it makes decisions.
“It’s really important to understand the tool that you are using. You cannot just go blindly using a tool, right? If I gave you a sword and you didn’t know how to use that sword, it might cut you,” Mostafa said.
This issue will become vitally important as AI tools are brought into high-stakes fields such as law enforcement and medicine. You wouldn’t trust a police detective who makes arrests based on eye colour, or a doctor who only looks at shoe sizes—no matter their success rate.
That’s why Mostafa’s roots in explainable AI are vital to his current work in Stanford’s Department of Radiation Oncology. His group is developing AI tools to detect cancer in individual patients. Like a human doctor, the system can weigh multiple types of data together, such as genomics and medical images, to arrive at a diagnosis.
Mostafa is working to understand exactly how the system is interpreting that onslaught of data—not only to ensure the system can be trusted with life-altering medical diagnoses, but to make it more effective at its job.
“What I found out (during my PhD studies) was that if we create an explanation of a model, we can improve the model. If we are giving it data and there is some portion of that data that is causing the model to make the wrong decisions, now we can fix that data and make the model better and better,” he said.
The goal of Mostafa’s team is to build a system able to detect not just the presence of cancer, but its stage and type. The system could also identify patterns and connections between data types that traditional methods overlook, improving accuracy and saving lives.
If successful, the system could be piloted at Stanford Hospital and eventually serve as a diagnostic tool.
“That’s the end goal for us,” said Mostafa.
Being where he is today “feels unreal” for Mostafa, who grew up in Bangladesh hearing of places like Stanford but hardly believing they were real. After completing his PhD at USask, he did a post-doc at the National Research Council of Canada, where he applied AI models to develop climate-resilient crops.
After years of working with plants, he realized his true interest lay in research that helps humans, and he pursued his current path in AI-assisted medicine.
“The University of Saskatchewan, it gave me so many opportunities and it helped me become who I am today, and I will be forever grateful to them for that. Because when I came here, I was just a crazy kid who had a lot of dreams and who wanted to try a lot of things. And at the University of Saskatchewan, they gave me that platform to make my dream come true. I owe everything to them,” Mostafa said.