Recently announced as a Canada Research Chair (CRC) in Technology and Social Change, Fan’s work is richly interdisciplinary, combining media studies, science and technology studies, interactive storytelling, critical design, and research-creation. Her CRC program examines technological design and bias in AI - "something that’s been on everybody’s mind these days."
By investigating the design of sexist, racist, and classist AI voice assistant software, racist facial recognition systems, and exploitative AI hardware production, she is working to identify how AI produces human experiences that reinforce social inequalities. And she wants to change that.
Fan’s goal is to encourage and enhance equity, diversity, and inclusion in AI design and to improve technological literacy.
Of course, the speed of AI advances is a challenge for everyone, she says, as OpenAI forges ahead, pushing other companies to move faster and find more ways to integrate AI into our everyday lives. "They’re developing day-to-day, and the CEO of OpenAI himself is addressing issues of governance and regulation. But this has always been an issue with the tech industry versus legislation and governance: it develops so fast that regulators cannot keep up."
Creating equitable human-AI experiences

Fan’s CRC research will be based in her Unseen-AI Lab (U&AI Lab) and will engage interdisciplinary collaborators to develop approaches that prevent inequitable AI at the design and production stage. Her research plan includes three case studies on software, hardware, and big data - all focused through an equity lens.
In the first, Fan is looking at how inequities and stereotypes found in human labour are transferred to our experiences interacting with an AI assistant. She gives an example: "What would it look like to have a feminist Siri where, if you used abusive language towards her, she just shuts off?" Working with colleagues, Fan aims to write more equitable software scripts and, in this way, design civility and fairness into the user experience.
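The "shuts off" behavior Fan describes could be sketched, very loosely, as a response rule that disengages rather than deflects politely. This is an illustrative toy only - the term list, function name, and replies are hypothetical, not Fan's actual software:

```python
# Illustrative sketch (not Fan's actual scripts): an assistant reply rule
# that ends the interaction on abusive input instead of responding deferentially.
ABUSIVE_TERMS = {"stupid", "shut up", "useless"}  # hypothetical placeholder list


def respond(utterance: str) -> str:
    """Return a reply, or end the interaction if the input is abusive."""
    lowered = utterance.lower()
    if any(term in lowered for term in ABUSIVE_TERMS):
        # The design choice from the article: shut off rather than deflect.
        return "This conversation has ended."
    return f"Processing request: {utterance}"
```

The point of the sketch is the design decision, not the detection logic: a production system would need far more robust abuse detection than keyword matching.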
For the hardware study, Fan will develop VR experiences to educate people about, and expose them to, the environmental consequences of the ever-growing demand for superpowered computers and for more, newer personal devices in our daily lives.
The third study examines the inherent racism of the big data used for facial recognition technologies, which are trained primarily on databases of white faces. Given that these databases simply don’t have enough diverse faces, Fan wants to rebuild and expand the data. "It’s ambitious, but why not build one?" Of course, to ensure the biodata collection and management is equitable, this work will involve collaborators with ethics expertise, she adds.
It has to be interdisciplinary

Now based in Waterloo’s Department of Sociology and Legal Studies, Fan has an academic career that has traversed English literature, media studies, technology studies, and research-creation. By her postdoctoral studies, she found "it didn’t make sense to be disciplinary anymore." Today she is an experienced practitioner of digital installations, digital storytelling, creative coding, and game design. To date she has completed 18 solo and collaborative research-creation projects, including collaborations with MIT, Georgia Tech, and Waterloo’s Institute for Quantum Computing.
"This work has to be interdisciplinary," says Fan, who is inviting researchers and students from a range of STEM, humanities, and social science disciplines to collaborate in the U&AI Lab. "Computer science and engineering students will be able to help us examine the risks and benefits of AI technologies that they’re currently learning to design and produce. Students from the social sciences and humanities trained in critical theories of gender, race, and class, as well as qualitative and quantitative methods, can tackle real-world evolving issues in AI industry and policy."
Combining this breadth of expertise and experience in her CRC research, Fan will contribute to AI design in research and industry, improve technological literacy, and, most importantly, strengthen equity, diversity, and inclusion in human-AI experiences.