University of Toronto researchers developing AI system to tackle harmful social media content
Hate speech and misinformation on social media can have a devastating impact, particularly on marginalized communities. But what if we used artificial intelligence to combat such harmful content?

That's the goal of a team of University of Toronto researchers who were awarded a Catalyst Grant by the Data Sciences Institute (DSI) to develop an AI system to address the marginalization of communities in data-centric systems, including social media platforms such as Twitter.

The collaborative research team consists of Syed Ishtiaque Ahmed, an assistant professor in the department of computer science in the Faculty of Arts & Science; Shohini Bhattasali, an assistant professor in the department of language studies at University of Toronto Scarborough; and Shion Guha, an assistant professor cross-appointed between the department of computer science and the Faculty of Information and the director of the Human-Centered Data Science Lab.

Their goal is to make content moderation more inclusive by involving the communities affected by harmful or hateful content on social media. The project is a collaboration with two Canadian non-profit organizations: the Chinese Canadian National Council for Social Justice (CCNC-SJ) and the Islam Unravelled Anti-Racism Initiative.

Historically marginalized groups are most affected by content moderation failings because they have lower representation among human moderators and their data is less available to algorithms, Ahmed explains.

(L-R) Syed Ishtiaque Ahmed, Shohini Bhattasali and Shion Guha (supplied photos)

"While most social media platforms have taken measures to moderate and identify harmful content and limit its spread, human moderators and AI algorithms often fail to identify it correctly and take proper actions," he says.