Computer Science Focus on Research - University of Houston

Computer Science Focus on Research

When: Monday, April 6, 2020
Where: Online Presentation - Google Meet
Time: 11:00 AM

Focus on Research (FoR) is an opportunity for any COSC Ph.D. student to present a research project (with or without preliminary results), a conference dry run, or any research topic of interest to an audience of peers and faculty. It is a great avenue for Ph.D. students to practice presentation skills in front of a larger and broader audience.

ViPER: Vehicle Pose Estimation using Ultra-WideBand Radios

Alireza Ansaripour, Ph.D. Student


With the growth of IoT applications, an accurate and robust location-tracking system has become essential for many industries. With Ultra WideBand (UWB) radios capable of measuring signal arrival time to approximately 15 ps accuracy, the implementation of accurate localization systems has become feasible. However, their centimeter-level ranging and localization accuracy can be compromised when these systems are deployed in real-world scenarios where objects cause Non-Line-of-Sight (NLoS) conditions. In this work, we present ViPER, an accurate and robust position-estimation system for harsh real-world environments.
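As a back-of-the-envelope illustration (not part of ViPER itself), the abstract's ~15 ps timestamp accuracy translates into ranging accuracy through the standard time-of-flight relation, distance = c × time:

```python
# Sketch: how UWB timestamp accuracy bounds ranging accuracy.
# The 15 ps figure comes from the abstract; the formula is the
# standard one-way time-of-flight relation.
C = 299_792_458.0  # speed of light, m/s

def tof_to_distance(tof_seconds: float) -> float:
    """Convert a one-way time of flight (seconds) into meters."""
    return C * tof_seconds

# A 15 ps timestamp error bounds the per-measurement ranging error:
ranging_error_m = tof_to_distance(15e-12)
print(f"{ranging_error_m * 100:.2f} cm")  # roughly 0.45 cm
```

This is why sub-centimeter ranging is plausible in line-of-sight conditions, and why NLoS delays of even a few hundred picoseconds dominate the error budget.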


Alireza Ansaripour is a first-year Ph.D. student who works under the supervision of Dr. Omprakash Gnawali in the Networked Systems Lab. His research focuses primarily, but not exclusively, on Internet of Things (IoT) technologies.

On the Usefulness of Personality Traits in Opinion-Oriented Tasks

Marjan Hosseinia, Ph.D. Student


We use a deep bidirectional transformer to extract Myers-Briggs personality types from user-generated data in a multi-label classification setting. Our dataset is large, drawing on three publicly available personality datasets from various social media platforms, including Reddit, Twitter, and a personality forum. We infer personality information from our transformer-based model and investigate whether it is useful for downstream opinion-oriented text classification tasks. Experimental evidence shows the effectiveness of the model pre-trained on personality data in stance detection, authorship verification, and sentiment analysis.
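A brief sketch of the multi-label framing the abstract mentions (the function names here are hypothetical, not from the paper): each Myers-Briggs type decomposes into four independent binary axes, so a classifier can emit four per-axis scores instead of choosing among 16 exclusive classes.

```python
# Hypothetical illustration of multi-label MBTI encoding:
# four binary axes (I/E, N/S, F/T, P/J) instead of 16 classes.
AXES = [("I", "E"), ("N", "S"), ("F", "T"), ("P", "J")]

def mbti_to_labels(mbti: str) -> list:
    """Map e.g. 'INTJ' to a 4-dim binary label vector."""
    return [1 if letter == pos else 0
            for letter, (neg, pos) in zip(mbti.upper(), AXES)]

def labels_to_mbti(labels: list) -> str:
    """Inverse mapping: binary vector back to a type string."""
    return "".join(pos if y else neg
                   for y, (neg, pos) in zip(labels, AXES))

print(mbti_to_labels("INTJ"))          # [0, 0, 1, 1]
print(labels_to_mbti([0, 0, 1, 1]))    # INTJ
```

In multi-label mode each axis gets its own sigmoid output, so the model can express uncertainty on one axis independently of the others.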


Marjan Hosseinia is a Ph.D. student at the University of Houston. She works under the supervision of Dr. Arjun Mukherjee. Her research interests are natural language processing and opinion mining.

Platform for Interactive Immersion into Imaging Data with an Augmented Reality Interface

Jose Daniel Velazco-Garcia, Ph.D. Student


Augmented reality (AR) visualization through head-mounted displays (HMDs), such as the Microsoft HoloLens, has shown tremendous potential in many fields. Since a wide range of medical applications can benefit from this technology, researchers and developers worldwide have focused major effort and innovation on developing tools that assist physicians with diagnosis and image-guided interventions, as well as image-guided surgery-planning simulations and post-operative assessments. However, the computational power many of these medical applications need to run in real time makes a single-HMD implementation technically impossible; a separate computational unit is therefore needed to offload data-processing and analysis algorithms. In this presentation, we present a modular platform we have been developing and experimenting with that uses ergonomic and efficient human-cyber interfaces for interactive immersion of the physician into the data, while enabling data analytics and processing.


Jose Daniel Velazco-Garcia is a 3rd-year Ph.D. student and NSF GRFP fellow in the Department of Computer Science at the University of Houston. He received a Bachelor’s degree in computer science from the University of Houston-Clear Lake (UHCL) in 2017. He has worked as a software developer at Tietronix and held an internship with Hamad Medical Corporation in Qatar, developing an interface for a prostate biopsy surgical robot. Currently, he is working with Dr. Nikolaos Tsekos and Dr. Ernst Leiss on real-time visualization of imaging data and interactive control of imaging devices through an augmented reality interface.

Claim Verification Under the Positive-Unlabeled Setting

Fan Yang, Ph.D. Student


We extend claim verification to the context of positive-unlabeled (PU) learning. Existing works assume the truth and the falsity of the claims are known for training, and frame the task as a supervised learning problem. However, this assumption underestimates the difficulty of collecting false claims; we argue that claim verification is more challenging in the absence of negative labels. We consider a more practical setting, where only a comparatively small number of true claims are labeled and the remaining claims are unlabeled. We therefore formulate claim verification as a PU learning problem. We decouple the learning of claim-evidence pair representations from PU learning, adopting a pre-trained universal language model to encode claim-evidence pairs. We further propose to use a generative adversarial network (GAN) to capture the latent alignment between encoded claim-evidence pairs and truthfulness, incorporating verification into the GAN by extending previous GAN-based PU learning. We show that the proposed model achieves the best performance with a small amount of labeled data and is robust to errors in estimating the truthfulness prior. We conduct a thorough analysis of model selection. The proposed approach performs best under two practical scenarios: 1) there is more unlabeled data than labeled data, and 2) there is more unlabeled positive data than unlabeled negative data.
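A minimal sketch of the PU data setting the abstract describes (the function name is hypothetical, not from the paper): starting from fully labeled claims, only a small fraction of the true (positive) claims keep their labels, and everything else, the remaining positives together with all negatives, becomes one unlabeled pool.

```python
import random

def make_pu_split(labels, labeled_frac=0.2, seed=0):
    """Simulate the PU setting from ground-truth labels.

    labels: list of 0/1 ground truth (1 = true claim).
    Returns PU labels: 1 = labeled positive,
    0 = unlabeled (a mixture of positives and negatives).
    """
    rng = random.Random(seed)
    positives = [i for i, y in enumerate(labels) if y == 1]
    keep = set(rng.sample(positives, int(labeled_frac * len(positives))))
    return [1 if i in keep else 0 for i in range(len(labels))]

truth = [1] * 50 + [0] * 50          # 50 true claims, 50 false claims
pu = make_pu_split(truth, labeled_frac=0.2)
print(sum(pu))                        # 10 labeled positives, 90 unlabeled
```

The sketch makes the abstract's two scenarios concrete: the unlabeled pool (90 examples) outnumbers the labeled set (10), and within the unlabeled pool the hidden positives (40) outnumber the negatives (50) or not, depending on the class balance and `labeled_frac`.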


Fan Yang is a 5th-year Ph.D. student, advised by Dr. Arjun Mukherjee. Fan is interested in deep learning and natural language understanding, with a particular focus on detecting misleading information.