
Holding AI accountable to benefit society

The BMO Responsible AI Awards Program supports students across McGill conducting interdisciplinary research into artificial intelligence

Jocelyn Wong shares her research at the BMO Responsible AI Awards Program research symposium.

The artificial intelligence boom is here. Whether it’s through generative AI, voice assistants and smart homes, online shopping and e-commerce, finance or manufacturing, AI is rapidly moving into all aspects of our lives. As a result, researchers are increasingly looking at AI systems and how they can serve the populations they target – and more broadly, human society.

Established four years ago with a gift from BMO Financial Group, the BMO Responsible AI Awards Program at McGill supports students from across the University’s faculties as they pursue research into responsible artificial intelligence.

Run by the McGill Collaborative for AI and Society (McCAIS) – the University’s hub for the responsible use of AI and its impact on society – the BMO program has two streams: one for undergraduates and one for graduate students.

Ting Wang, BSc’21, MSc’24, is in the first year of her PhD in Family Medicine and Primary Care at McGill and is a BMO Responsible AI Fellow. She conducts her research at the intersection of AI and healthcare, “focusing on the care trajectories of older adults with cardiovascular disease and dementia,” she explains. She uses machine learning methods to predict future resource needs and adverse outcomes, such as frequent hospitalizations. 

Ting Wang presents her research.

As required for all program participants, Wang has two supervisors from separate disciplines. “They exposed me to different perspectives, ideas, methods, and even research cultures,” she says.

One of those supervisors is Prof. Samira A. Rahimi, who helps oversee the BMO Responsible AI Awards Program as co-director of McCAIS, which is hosted by the Computational and Data Systems Institute (CDSI) at McGill. Rahimi’s own interdisciplinary research covers AI in primary healthcare.

The importance of an interdisciplinary approach for this program and AI research more generally is clear for Rahimi. “Too often, researchers and clinicians operate in their own bubbles,” Rahimi says. “AI experts develop systems without fully understanding the realities of clinical practice, while clinicians tend to rely on familiar routines and may resist engaging with new technologies. The result is theories and tools that never translate into real-world use.” 

Defining responsible AI

Rahimi points out that in these early days of AI, there is no standard definition of responsible AI.

What qualifies as “responsible” depends on the context or field. “I work in healthcare research, and we have to be really careful as lives are on the line,” Rahimi says. “The goal is to make sure there’s no risk, or as little as possible, for patients and care providers. In this area of research, we have to be cautious.” 

For her, responsible AI in healthcare means developing systems that don’t reinforce the systemic bias and inequity that already exist in the healthcare system.

“We need to make sure AI-enabled tools are actually accessible to clinicians across our healthcare system,” Rahimi says. “But beyond that, are they being explained clearly to patients? Are clinicians properly trained to use them? Right now, most have almost no education in AI, yet we’re giving clinicians tools they can’t fully analyze or interpret.” 

Dr. Samira Rahimi at the BMO Responsible AI Program’s Research Symposium.

“If we don’t include a particular stakeholder group,” adds Wang, “it feels like we haven’t done our job correctly, like we haven’t fully captured all of these different experiences, perspectives, and expertise.” 

Wang and Rahimi note that AI systems are more likely to be implemented when they are created in a responsible and interdisciplinary way.

“We can’t get human expertise, empathy, human clinical judgment from AI, but AI can definitely support clinicians, make redundant or repetitive tasks more efficient, and provide more time for meaningful patient-provider connection,” says Wang. 

The stakes are high. “I think if a lot of these different things are not kept in mind when designing, developing and implementing AI, it’s going to widen health disparities,” Wang adds.

How to audit an AI system

Jocelyn Wong, BASc’25, was inspired by her Gender, Sexuality, Feminist, and Social Justice Studies minor when she researched AI auditing as a BMO Responsible AI Award recipient last summer.

The Responsible Autonomous & Intelligent System Ethics (RAISE) lab she worked in is in the Faculty of Engineering and is led by one of Wong’s two Program supervisors, Prof. AJung Moon. RAISE is an interdisciplinary group that explores ways to maximize the value of robots and other intelligent machines while minimizing risk to society.

Jocelyn Wong.

Wong evaluated how stakeholders are considered as part of an AI audit, looking beyond the AI system itself to the identities of the people involved – not just their roles in the organization behind the system.

“We need to look at who is in the room,” says Wong. “To make sure it’s reflective of the society we live in and the population that is going to use the AI system.”

There were some surprising divides to bridge in Wong’s interdisciplinary experience. When she presented her research to the engineering students and researchers in the RAISE lab, she realized she needed to begin with the basics. “I needed to define feminism, intersectional feminism and Black feminism,” she explains.

In turn, her labmates provided her with invaluable support during what was her first time doing research of this scope. 

Her exposure to engineers who think in different ways and study different subject matter helped prepare her for the Program’s Research Symposium, where the undergraduate BMO Responsible AI Award recipients present their work to people from different backgrounds and disciplines.

“It was great to get the feedback that my research was meaningful,” says Wong. 

For Rahimi, this branch of AI research is having its moment. “Now is the time to conduct more research on how to make AI safe for different sectors, and how to responsibly use and develop these systems.”

Learn more about this year's recipients of the BMO Responsible AI Awards.
