In Partial Fulfillment of the Requirements for the Degree of
Doctor of Philosophy
Will defend his dissertation
Most current facial animation approaches focus on the accuracy or efficiency of their algorithms. However, human perception, the ultimate measuring stick of the visual fidelity of facial animation, has not been effectively exploited in these approaches. In this dissertation, we investigate and incorporate human perception insights into four challenging research topics in the facial animation area. (1) Expressive facial animation: We present a perceptual metric that automatically measures the emotional expressiveness of facial animations by correlating expressive facial motion patterns with subjective perceptual evaluations. (2) Data-driven speech animation: We propose a statistical model that automatically predicts the quality of synthesized speech animations generated by various data-driven approaches, by learning the association between speech animation synthesis errors and perceptual quality. (3) Talking avatar head motion: We quantitatively analyze the impact of audio and head motion characteristics on human perception. Our results show correlations between perceptual evaluations and both audio features and head motion patterns. (4) Facial animation editing: We present a novel statistical learning method that learns facial editing style from a set of perceptually guided editing pairs. Our approach can dramatically reduce the manual effort required by most facial animation editing approaches.
Date: Monday, April 30, 2012
Time: 11:00 AM
Faculty, students, and the general public are invited.
Advisor: Prof. Zhigang Deng