
Computer Science Focus on Research

When: Monday, March 18, 2019
Where: PGH 563
Time: 11:00 AM


Adversarial Examples in NLP and Potential Solutions from Cybersecurity Principles

Daniel Lee, PhD Student

Adversarial machine learning (ML) has been gaining a lot of attention in the world of deep learning. Deep learning is an approach to supervised machine learning that stacks several layers of neural networks. Deep learning has garnered tremendous popularity because of its ability to outperform traditional machine learning approaches on what were once considered difficult tasks, e.g., image classification, machine translation, and automatic text summarization. Adversarial examples were first studied for spam detectors. When similar principles and newer techniques were applied, classifiers that were "superhuman" at image classification were easily fooled. With pinpoint perturbations of the input image, the adversary can even choose the new class assigned to the image. These perturbations are so small that, to the human eye, the image is unchanged. Natural language processing (NLP) also employs many different neural network architectures to achieve state-of-the-art results, yet adversarial ML has not yet entered the domain of NLP. We will investigate the significance of adversarial examples in NLP. We will also tie this to the security concerns of future deep learning systems and draw parallels between well-practiced cybersecurity principles and how they can be leveraged toward possible solutions to adversarial examples.
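For readers curious how such pinpoint perturbations are computed in practice, the sketch below shows a fast-gradient-sign-style (FGSM) attack on an image classifier. This is illustrative only: the talk does not name a specific attack, and `model`, `image`, and `label` are assumed PyTorch placeholders rather than anything from the speaker's work.

```python
# Minimal FGSM-style adversarial perturbation sketch (illustrative only).
# Assumes a PyTorch classifier `model`, a batched input tensor `image`,
# and an integer class-label tensor `label`.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarial copy of `image` nudged along the loss gradient sign."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A tiny step in the direction that increases the loss; to the human eye
    # the perturbed image looks unchanged, yet the prediction can flip.
    return (image + epsilon * image.grad.sign()).detach()
```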

Daniel Lee received his Bachelor of Science in Computer Science from the Jonsson School of Engineering and Computer Science at the University of Texas at Dallas. He is currently pursuing his PhD in Computer Science with the REDAS lab under the advising and mentoring of Dr. Verma. His research interests are in cybersecurity, natural language processing, and deep learning.

Application-Agnostic Learning for Realistic Network Traffic Generation

Oluwamayowa Adeleke, PhD Student

Research and testing in networking often require experimentation with real or representative network traffic. Privacy policies often limit access to production traffic by third parties, which include most academic researchers. Presently, most experimenters rely on synthetic traffic generators that send packets at fixed rates or at rates based on statistical distributions; others replay captured packet traces, which often have limited durations. In response to this problem, we propose to create ‘traffic models’ for the patterns of network traffic seen in production networks by using machine learning (ML) algorithms in conjunction with statistical distributions to model applications’ network behaviors from traffic traces, after removing all protocol-specific reactions to network impairments. In this talk, we discuss our current progress, describing how our proposed system processes real production traffic to create traffic models that can be taken to an entirely different testbed network to regenerate similar traffic. The outcomes and methods derived from this research promise to improve how experimentation on large-scale networks (datacenter, cloud, enterprise, and IoT) is done, especially in academia.
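As a rough illustration of what a ‘traffic model’ might capture, the sketch below fits statistical distributions to packet inter-arrival times and sizes taken from a trace and then samples from them to drive regeneration. The lognormal choice, function names, and dictionary layout are assumptions for illustration, not the authors' actual pipeline, which additionally incorporates ML algorithms and strips protocol-specific reactions to impairments.

```python
# Hedged sketch of a distribution-based "traffic model": fit the observed
# inter-arrival times and packet sizes, then sample to regenerate traffic
# on a different testbed. All names and the lognormal fit are assumptions.
import numpy as np
from scipy import stats

def build_traffic_model(inter_arrivals, packet_sizes):
    """Fit simple distributions to observed timing and size behavior."""
    iat_params = stats.lognorm.fit(inter_arrivals, floc=0)
    size_params = stats.lognorm.fit(packet_sizes, floc=0)
    return {"iat": iat_params, "size": size_params}

def generate_traffic(model, n_packets, rng=None):
    """Sample synthetic (inter-arrival time, packet size) pairs from the model."""
    rng = rng or np.random.default_rng()
    iats = stats.lognorm.rvs(*model["iat"], size=n_packets, random_state=rng)
    sizes = stats.lognorm.rvs(*model["size"], size=n_packets, random_state=rng)
    return list(zip(iats, sizes.astype(int)))
```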

Oluwamayowa Adeleke received his Bachelor's degree in Electrical Engineering from the Ladoke Akintola University of Technology, Nigeria, in 2010, and subsequently received a Master's degree from the University of Illinois at Chicago in 2015. He has worked in various positions with Alcatel-Lucent, Amazon Web Services, and Intel. His interests lie in computer networks, cloud computing, and applications of machine learning to computer networks.

Semi-Supervised Low Light Face Enhancement for Mobile Face Unlock

Ha Le Anh Vu, PhD Student

Facial recognition is becoming a standard feature on new smartphones. However, the face unlocking feature of devices using regular 2D camera sensors exhibits poor performance in low-light environments. In this paper, we propose a semi-supervised low-light face enhancement method to improve face verification performance on low-light face images. The proposed method is a network with two components: decomposition and reconstruction. The decomposition component splits an input low-light face image into face normals and face albedo, while the reconstruction component enhances and reconstructs the lighting condition of the input image using the spherical harmonic lighting coefficients of a direct ambient white light. The network is trained in a semi-supervised manner using both labeled synthetic data and unlabeled real data. Qualitative results demonstrate that the proposed method produces more realistic images than state-of-the-art low-light enhancement algorithms. Quantitative experiments confirm the effectiveness of our low-light face enhancement method for face verification. By applying the proposed method, the gap in verification accuracy between extreme low-light and neutral-light face images is reduced from approximately 3% to 0.5%.
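For background on the reconstruction step, the sketch below shows generic spherical harmonic (SH) relighting: each pixel's shading is the dot product of a 9-term SH basis evaluated at the face normal with a lighting-coefficient vector, and the result is multiplied by the albedo. The basis scaling and the example "ambient white" coefficients are assumptions for illustration, not the authors' exact formulation.

```python
# Hedged sketch of SH relighting: given per-pixel unit normals (H, W, 3) and
# albedo (H, W, 3) in [0, 1], shade with a 9-dimensional SH lighting vector.
import numpy as np

def sh_basis(normals):
    """Second-order SH basis (9 unnormalized terms) per pixel, shape (H, W, 9)."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    ones = np.ones_like(nx)
    return np.stack([ones, nx, ny, nz,
                     nx * ny, nx * nz, ny * nz,
                     nx ** 2 - ny ** 2, 3.0 * nz ** 2 - 1.0], axis=-1)

def relight(albedo, normals, sh_coeffs):
    """Reconstruct an image under new lighting: albedo * (SH basis . coefficients)."""
    shading = sh_basis(normals) @ sh_coeffs          # (H, W)
    return np.clip(albedo * shading[..., None], 0.0, 1.0)

# Example coefficients: an ambient white light is dominated by the constant term.
ambient_white = np.array([0.8, 0.0, 0.0, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0])
```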

Ha received his B.S. in Computer Science from Hanoi University of Science and Technology, Vietnam, in 2010. Two years later, he finished his M.S. in the Computer Science Department at Chonnam National University, South Korea. Since 2014, he has been a Ph.D. student in the Computational Biomedicine Lab in the Computer Science Department at the University of Houston. His main research interests include face relighting, 3D face reconstruction, and face recognition.