In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
will defend his proposal
Adversarial Text Attacking Deep Neural Network Models
Abstract

Many natural language processing (NLP) tasks once thought difficult, such as machine translation and sentiment analysis, are now solved with deep learning. In the last decade, however, vulnerabilities have been revealed in these deep learning models: they are susceptible to small perturbations of their inputs, a phenomenon first demonstrated on image datasets. The perturbations appear innocuous to humans but cause the models to misclassify. Although first explored in the continuous domain of image classification, the body of research on adversarial examples is garnering much interest in discrete domains like NLP and software. My research analyzes how the findings on continuous-space adversarial examples carry over to the discrete space. We examine different classes of models and how each is affected by the difference in input space. Specifically, we will analyze the fundamental architectures (recurrent and convolutional neural networks) as well as more complex models such as the recurrent convolutional neural network (RCNN).
Date: Tuesday, December 10, 2019
Time: 2:00 PM - 3:00 PM
Place: PGH 550
Advisor: Dr. Rakesh Verma
Faculty, students, and the general public are invited.