Dissertation Proposal - University of Houston
Dissertation Proposal

In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Aobo Jin

will defend his proposal

Contour-based 3D Modeling for Character Animation


Sketch-based modeling has been an active research area over the past two decades. To bridge the gap between 2D sketches and 3D models, previous work utilizes multi-view 2D input or contours with semantic meaning to infer the depth information needed to construct the corresponding 3D models. However, those techniques require carefully aligned input or complex interaction. The goal of this dissertation is to reduce the input complexity while improving the quality of the modeling results. With the techniques proposed in this dissertation, two applications are designed to tackle specific problems. In this proposal, I propose a framework that builds an embedding space containing both 2D contour and 3D modeling features. I first design a variational autoencoder (VAE) architecture to build an embedding space for 2D contours. I then propose a volumetric autoencoder to project 3D models onto the same embedding space. Given a 2D contour, we can encode it through the encoder of the VAE and decode it through the decoder of the volumetric autoencoder to generate a 3D model. The proposed method also enables a 3D model manipulation application: given a 3D model, we can encode it through the encoder of the volumetric autoencoder, manipulate its shape in the embedding space, and generate the preferred 3D model through the decoder of the volumetric autoencoder. This first work can generate simple 3D models with few surface details. To generate complex 3D models, such as a 3D human body, I will explore the 3D character modeling problem given a corresponding 2D sketch. I also plan to generate 2D character animations based on the projections of the generated 3D character models.
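The contour-to-model pipeline described above can be sketched in a few lines. The following is a minimal, illustrative NumPy mockup, not the author's implementation: the network weights are random stand-ins for trained parameters, and the dimensions (`LATENT_DIM`, `CONTOUR_POINTS`, `VOXEL_RES`) are hypothetical. It only shows the data flow: a 2D contour passes through the VAE encoder (with the standard reparameterization trick) into the shared embedding space, and the volumetric decoder maps that latent code to a voxel occupancy grid.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8          # hypothetical size of the shared embedding space
CONTOUR_POINTS = 64     # hypothetical number of 2D points per contour
VOXEL_RES = 16          # hypothetical voxel-grid resolution

def random_layer(in_dim, out_dim):
    """One random linear layer standing in for a trained network."""
    return rng.standard_normal((in_dim, out_dim)) * 0.01

# Encoder of the contour VAE: flattened 2D contour -> (mu, log_var).
W_mu = random_layer(CONTOUR_POINTS * 2, LATENT_DIM)
W_logvar = random_layer(CONTOUR_POINTS * 2, LATENT_DIM)

# Decoder of the volumetric autoencoder: latent code -> occupancy grid.
W_dec = random_layer(LATENT_DIM, VOXEL_RES ** 3)

def encode_contour(contour):
    """VAE encoding with the reparameterization trick: z = mu + sigma * eps."""
    x = contour.reshape(-1)
    mu, log_var = x @ W_mu, x @ W_logvar
    eps = rng.standard_normal(LATENT_DIM)
    return mu + np.exp(0.5 * log_var) * eps

def decode_volume(z):
    """Volumetric decoder: sigmoid occupancy probability per voxel."""
    logits = z @ W_dec
    occupancy = 1.0 / (1.0 + np.exp(-logits))
    return occupancy.reshape(VOXEL_RES, VOXEL_RES, VOXEL_RES)

# Cross-domain generation: 2D contour in, 3D voxel model out.
contour = rng.standard_normal((CONTOUR_POINTS, 2))
z = encode_contour(contour)
voxels = decode_volume(z)
print(voxels.shape)  # (16, 16, 16)
```

The 3D manipulation application follows the same path from the other side: encode an existing model with the volumetric encoder, edit the latent code `z`, then run `decode_volume` again.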

Date: Thursday, May 21, 2020
Time: 2:00 - 4:00 PM
Place: Online Presentation - MS Teams Meeting
The Teams code is: 4wclrvu
Advisor: Dr. Zhigang Deng

Faculty, students, and the general public are invited.