Computer Science Seminar - University of Houston

Computer Science Seminar

Video-based Motion Capture: Challenges and Progress

When: Thursday, April 25, 2013
Where: PGH 563 (please note the room number)
Time: 11:00 AM

Speaker: Dr. Jinxiang Chai, Texas A&M

Host: Dr. Zhigang Deng

Motion capture technologies have made revolutionary progress in computer animation in the past decade. With detailed motion data and editing algorithms, we can directly transfer the expressive performance of a real person to a virtual character, interpolate existing data to produce new sequences, or compose simple motion clips to create a rich repertoire of motor skills. Beyond computer graphics applications, motion capture technologies have enabled tremendous advances in computer vision, robotics, biomechanics, and natural user interaction.

Current motion capture technologies are often restrictive, cumbersome, and expensive. Video-based motion capture offers an appealing alternative because it requires no markers, sensors, or special suits and therefore does not impede the subject's ability to perform the motion. Graphics and vision researchers have been actively exploring the problem of video-based motion capture for many years and have made great advances. However, these results are often vulnerable to ambiguities in video data (e.g., occlusions), degeneracies in camera motion, and a lack of discernible features on a human body or hand.

In this talk, I will describe our recent efforts on acquiring human motion using video/depth cameras. First, I will show how to capture physically realistic 3D full-body performances (e.g., gymnastics) from a monocular video sequence taken by an ordinary video camera. This is the first video-based motion capture technology that simultaneously captures full-body poses, joint torques, and contact forces from single-camera video streams. In the second part of my talk, I will describe a fast, robust, automated method that accurately captures full-body motion data using a single depth camera. Lastly, I will present a novel motion capture method for acquiring physically realistic hand grasping and manipulation data using multiple video cameras.

Bio:
Jinxiang Chai is currently an associate professor in the Department of Computer Science and Engineering at Texas A&M University. He received his Ph.D. in robotics from the School of Computer Science, Carnegie Mellon University, in 2006. His primary research is in the area of computer graphics and animation, with broad applications in other disciplines such as computer vision, robotics, human-computer interaction, and biomechanics. He is particularly interested in developing representations and efficient computational models that allow the acquisition, analysis, understanding, simulation, and control of natural human movements. He draws on ideas from graphics, vision, machine learning, robotics, biomechanics, psychology, and applied math. He received an NSF CAREER award for his work on the theory and practice of Bayesian human motion synthesis.