Western prof Luke Stark appointed CIFAR Azrieli Global Scholar

Faculty of Information and Media Studies professor Luke Stark is the first Western researcher to be named a CIFAR Azrieli Global Scholar. (Christopher Kindratsky/Western Communications)
Faculty of Information and Media Studies (FIMS) professor Luke Stark is the first Western researcher to be named a CIFAR Azrieli Global Scholar.

Ten scholars were chosen out of 285 applicants from nearly 40 countries.

Stark’s two-year term includes $100,000 in funding to advance research initiatives and build global connections. The Canadian Institute for Advanced Research (CIFAR) program, focused on building leadership capacity and international collaboration, is described as a community of researchers "addressing the most important questions facing science and humanity."

An expert in artificial intelligence (AI), Stark plans to use the appointment to further his research and enhance collaborative projects with Western colleagues. He will focus on generative AI systems and how they can be understood as animated characters, along with his work on AI predictive technology and its use as a tool in responding to the homelessness crisis.

Stark will also use the funding to engage students in research activities, professional development opportunities and leadership programs.

"It’s a huge bonus to be able to draw on the CIFAR Global Scholars program to bring folks to Western and connect Western to the outside world," he said.

Stark discussed his work, the appointment and how he plans to build on the collaborative culture in FIMS at Western.

Suzanne Elshorafa: How do you feel about becoming a CIFAR Azrieli Global Scholar?

Luke Stark: I’m pleased and honoured, of course. At the same time, I’m very conscious that with a privilege like this appointment comes a lot of responsibility. I’ll work to make the most of it not just for my own research, but also for FIMS and for the Western community more broadly - especially our graduate students.

How have the goals for AI systems with human interaction evolved over time?

LS: All AI systems have been developed with the idea that they’re going to interact with humans. How they do that, and why they do that, has always been an open question.

What are the main ethical concerns surrounding AI designed for human interaction?

LS: I think deliberately developing these systems to elicit a powerful emotional or personal response from a human being has a lot of ethical questions attached to it. We know that humans are very prone to projecting agency onto non-living objects or living things. The example I always give is Wilson the volleyball from the film Cast Away. Tom Hanks projects a personality onto it because he’s stuck on this desert island. When you’re deliberately designing systems to heighten that human tendency, I think that’s a huge area of ethical concern.

There are also concerns around labour and the way digital data is collected, cleaned, processed and moderated. The entire AI ecosystem relies on low-paid content moderators, often in the Global South. That’s a huge part of AI that isn’t discussed enough.

AI systems are designed to be interactive and compelling, and combined with the kind of data analysis and prediction these systems can produce, that has strong and worrying implications for our media, from what content you see, to the spaces in which we make decisions as societies, especially as a democratic society. Those systems are not always transparent, and it’s not always clear who is behind a chatbot or an automated system.

Part of your research looks at how AI is being used to predict and manage homelessness. How are those systems working?

LS: When I moved to London in 2020, there were news stories about how the City of London was developing a predictive AI model to predict who, or what kind of person, would become chronically homeless. My wonderful colleague Joanna Redden, who works on the way governments use AI and data systems, and I knew we needed to investigate this because it’s important, and a great example of how these themes are playing out at a local level. Last year, we were awarded a SSHRC Insight Grant to continue this work. We’re in the process of setting up interviews with all sorts of folks in London, including the city, nonprofit agencies and other stakeholders, to try and understand how digital data identification and prediction is playing out amidst the city’s attempts to deal with homelessness as a major problem. Prediction models around homelessness, and managing homelessness, are now becoming increasingly common in many American cities. One of the things we are really emphasizing in the project is that AI technologies and predictive technologies are just the latest in a series of datafication tools that have been applied to problems like homelessness.

How accurate are these predictive AI technologies?

LS: So, this is the messy problem. The short answer is, not nearly as accurate as their proponents claim. In some cases, they’re not accurate at all, and in other cases they’re accurate, but in a way that is not actually useful. It really depends on what you’re trying to predict. Phenomena that are regular, repeatable and stable across time are easier to predict. You can predict the movement of asteroids in space, or certain elements of biological cell processes, but then you get to human behaviour. Whether it’s conscious or even unconscious human behaviour, AI systems fall into the same challenge that some of the social sciences have been grappling with for 100 or 200 years, if not longer. AI systems are only as good or as accurate as the data being analyzed. If the data is partial, incomplete or biased, the system will produce a prediction reflecting all those inconsistencies and biases.

What impact do you think AI-human interaction technologies will have on social relationships and human communication in the long term?

LS: It’s complicated because humans have enough trouble communicating interpersonally as it is. When you create things that seem human and communicate in human language - with compelling speech - but are not sentient, and are instead developed for specific purposes, it seems like a recipe for a lot of potential manipulation and problems. Every communication technology, every media technology, reshapes human interaction through its affordances. That’s another way of saying what Marshall McLuhan said: the medium is the message. I’m not saying that mediated technologies are bad, but they do have effects, especially when the design of AI systems is so concentrated in a small number of corporations, and the development concentrated in the hands of a very small - and not very diverse - group of people.

How can interdisciplinary collaboration (e.g. between AI researchers, psychologists, computer scientists) advance the development of AI for human interaction?

LS: I do a lot of thinking about how human values get built into technologies. The first step in any interdisciplinary collaboration is to sit down and think about what you value and how you want that to be expressed in a technical project. I don’t think interdisciplinary collaboration is necessarily a panacea for positive social or societal outcomes, precisely because there’s more in common between psychology and computer science than either party realizes. I do think that a much broader, more equitable, interdisciplinary or multidisciplinary collaboration between computer scientists, social scientists, historians, philosophers and folks who deeply understand the reality of human experience could potentially be fruitful and valuable. But even that doesn’t necessarily guarantee positive social outcomes.