
Real-time Facial Performance Capture and Manipulation

Wednesday, October 7, 2020

11:00 am - 12:00 pm

Location

Online/Virtual - Link TBA

Abstract

The creation of fine-scale, realistic 3D facial models and animations for many applications, including films, VR systems, and games, has attracted considerable attention in recent years. Popular social media and mobile applications are creating increasing demand for lightweight acquisition and manipulation of high-resolution facial performances on mobile devices. In this talk, I will present our recent work on monocular-video-based, real-time facial performance capture, manipulation, and generation. In the first work, I will present a novel real-time framework that reconstructs high-resolution facial geometry and appearance by capturing an individual-specific face model with fine-scale wrinkle details from a single monocular RGB video input. Building on this work, I will further present algorithms to manipulate the expression in an input monocular face video and to perform real-time face swapping between a given 2D portrait and an input monocular face video.

About the Speaker

Dr. Zhigang Deng is a Full Professor and the Director of Graduate Studies in the Computer Science Department at the University of Houston. He earned his Ph.D. in Computer Science from the University of Southern California in 2006. Prior to that, he completed a B.S. in Mathematics at Xiamen University, China, and an M.S. in Computer Science at Peking University, China. His current research interests lie in the broad areas of computer graphics/animation, virtual human modeling and animation, human-computer interaction, and humanoid robots. He has published more than 140 peer-reviewed research papers, including many at prestigious conferences such as SIGGRAPH, CHI, MICCAI, EG, ICCV, ECCV, AAAI, I3D, SCA, and ICRA. He is the recipient of many awards, including the CASA Best Paper Award, the ACM ICMI Ten Year Technical Impact Award, the UH Teaching Excellence Award, the ICRA Best Medical Robotics Paper Award Runner-up, and a Google Faculty Research Award. His research has been funded by NSF, NIH, NASA, DOD, QNRF, Texas NHARP, and various industry sources, including Electronic Arts, Honda, Google, and Nokia. His research on SSDR has been widely incorporated into mainstream graphics/animation packages, including Maya and Houdini, and has been used in practice at many game studios, including Electronic Arts. More information can be found on his webpage: http://graphics.cs.uh.edu/zdeng

CS Seminar, October 7, 2020.
Contact
csevents@uh.edu