Oscar-winning animator and filmmaker Chris Landreth at U of T computer science
Traditional animators are masterful at capturing the expressions we see on screen, like Joy, Fear and Disgust in Disney-Pixar's animated film Inside Out.
Other filmmakers use tools like motion capture to track points on the muscles of a person's face and animate their speech.
But what if software could predict facial expression using only the sound of an actor’s voice?
“As an animator, I’m trying to animate realistically,” says Chris Landreth, Academy Award® winning animator and filmmaker – and now distinguished research artist-in-residence in the University of Toronto’s department of computer science.
“Most animators don’t like motion-capture because it imposes what really happened on what they want to do – which is not necessarily what a person would do – but in the mind of the animator, what they want their character to do.”
Landreth and computer science PhD student Pif Edwards are developing a breakthrough speech detection and animation approach that could easily generate virtual actors, or seamlessly animate crowd scenes in which hundreds of characters need to talk and emote.
“For example, a sales person on the phone sounds cheerful. You can’t say why, but a certain timbre comes to a person’s voice when you know they’re smiling, or have a pleasant demeanor. To quantitatively discern and translate those qualities procedurally to an animated face is a major goal of our research.”
To achieve this, the software listens to a voice, detects the phoneme sounds, builds a streaming list of phonemes, and applies it to an animated character.
While further work is needed in how the phoneme sounds are presented – particularly bilabial sounds where the lips are meant to touch – it’s currently the most advanced work in predictive animation the pair has seen. Later, they can move on to more complicated automation, like emotion.
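The phoneme-to-animation step described above can be imagined, very roughly, as mapping each detected phoneme to a mouth shape (a "viseme") at a point in time. The sketch below is purely illustrative and not the researchers' actual system: the phoneme labels, viseme names, and fixed phoneme duration are all simplifying assumptions, and real systems would use richer timing and co-articulation models.

```python
# Illustrative sketch: convert a stream of phonemes into timed viseme
# keyframes for an animated character. The table and timings below are
# hypothetical simplifications, not the actual research pipeline.

# Hypothetical phoneme-to-viseme lookup. Note the bilabial group
# (p, b, m), where the lips must fully touch -- the case the article
# notes as needing further work.
PHONEME_TO_VISEME = {
    "p": "bilabial_closed", "b": "bilabial_closed", "m": "bilabial_closed",
    "f": "labiodental", "v": "labiodental",
    "aa": "open_jaw", "ae": "open_jaw",
    "iy": "wide_lips", "uw": "rounded_lips",
}

def phonemes_to_keyframes(phonemes, frame_rate=24, phoneme_duration=0.1):
    """Map a list of phoneme labels to (frame_number, viseme) keyframes,
    assuming a fixed duration per phoneme."""
    keyframes = []
    t = 0.0
    for ph in phonemes:
        viseme = PHONEME_TO_VISEME.get(ph, "neutral")
        keyframes.append((round(t * frame_rate), viseme))
        t += phoneme_duration
    return keyframes
```

For example, the phoneme stream for a word like "map" (`["m", "aa", "p"]`) would yield a closed-lips keyframe, then an open-jaw keyframe, then another closed-lips keyframe, spaced along the timeline.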
“The emotional part is a tough nut,” said Landreth, who is scheduled to give a public lecture on his animation research on Jan. 27.
The New Face of CGI: Advances in Facial and Speech Animation
Earlier in his career, Landreth was a researcher at Alias|Wavefront (now Autodesk) when the company was developing Maya, now considered the industry standard software for computer-generated animation and special effects.
While at the company, he met Professor Eugene Fiume, who at the time was on leave from U of T, and Karan Singh, who subsequently joined U of T as a professor; both are part of U of T's Dynamic Graphics Project (DGP) laboratory, a leading facility in computer graphics and human-computer interaction research.
Nearly 10 years after meeting Fiume and Singh, Landreth began collaborating with the lab on some of the tools needed for his next animated work. The film, Ryan (2004), about Canadian animator Ryan Larkin, went on to win the Academy Award® for Best Animated Short.
The short achieved new technical heights, with Professor Singh credited as research and development director. Its “cord” algorithm for animated curves, which let hair, string, wire and other rope-like objects move independently of the characters, added further realism.
The film’s development also drew on essential expertise in the physical aspects of the characters from Professor Anne Agur of the musculoskeletal anatomy laboratory in the department of surgery, and Professor Nancy McKee, a hand surgeon in the division of plastic and reconstructive surgery.
Landreth, an adjunct professor, has taught a graduate course in computer science and given lectures on facial animation work in the Faculty of Medicine’s master of science in biomedical communications. He teaches independent classes at colleges and universities around the globe, including DreamWorks’ three campuses.
To advance the art and science of animated filmmaking, the Faculty of Arts & Science, the Faculty of Medicine and the department of computer science are supporting Landreth’s ongoing research collaborations at U of T for a two-year term as distinguished research artist in residence. The results of his work here are likely to feature in his next animated work, currently under development.
Landreth also foresees further uses for the predictive software. The ability to capture, display and modify a computer graphics model on demand could be a useful therapeutic tool, with applications ranging from psychological therapy research to helping those with autism.
“This isn’t just production. Or just animation. My experience being involved in facial research is that it spans many disciplines, which is what I find wonderful about it.”
Nina Haikara is a writer with the department of computer science at the University of Toronto