Research led by U of T computer science PhD candidate Jiannan Li explores how an interactive camera robot can assist instructors and others in making how-to videos (photo by Matt Hintsa)
A group of computer scientists from the University of Toronto wants to make it easier to film how-to videos.

The team of researchers has developed Stargazer, an interactive camera robot that helps university instructors and other content creators produce engaging tutorial videos demonstrating physical skills. For those without access to a cameraperson, Stargazer can capture dynamic instructional footage, overcoming the constraints of working with static cameras.

"The robot is there to help humans, but not to replace humans," explains lead researcher Jiannan Li, a PhD candidate in the University of Toronto's department of computer science in the Faculty of Arts & Science. "The instructors are here to teach. The robot's role is to help with filming - the heavy-lifting work."

The Stargazer work is outlined in a paper presented this year at the Association for Computing Machinery Conference on Human Factors in Computing Systems, a leading international conference in human-computer interaction.

Li's co-authors include fellow members of the University of Toronto's Dynamic Graphics Project (dgp) lab: postdoctoral researcher Mauricio Sousa, PhD students Karthik Mahadevan and Bryan Wang, Professor Ravin Balakrishnan and Associate Professor Tovi Grossman; as well as Associate Professor Anthony Tang (cross-appointed with the Faculty of Information); recent University of Toronto Faculty of Information graduates Paula Akemi Aoyaui and Nicole Yu; and third-year computer engineering student Angela Yang.