Daniel Obembe

ORCID Identifier(s)


Graduation Semester and Year




Document Type


Degree Name

Master of Science in Computer Science


Department

Computer Science and Engineering

First Advisor

Chengkai Li


Abstract

Determining which statements are factual, and therefore likely candidates for further verification, is a key value-add in any automated fact-checking system. For this task, LSTMs have been shown to outperform traditional machine learning models such as SVMs. However, the complexity of LSTMs can also result in overfitting (Gal and Ghahramani, 2016), leading to poorer performance as models fail to generalize. To address this issue, we set out to use adversarial training as a way to improve the performance of LSTMs for the task of classifying statements as factual or non-factual. In our experiment, we implement adversarial training of an LSTM using the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015; Miyato et al., 2017) to generate adversarial examples. To implement adversarial training, we normalized the model inputs, which in this case are vector representations of words called word embeddings. We also modified the loss function by adding a perturbation; in other words, we trained the neural network to correctly classify perturbed input values. We found that the adversarially trained LSTM outperforms the regularly trained LSTM on some performance metrics, but not all. Specifically, the adversarially trained LSTM shows increased precision on sentences classified as check-worthy, and increased recall on sentences classified as not check-worthy.
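The FGSM-based adversarial training described in the abstract can be sketched roughly as follows. This is a minimal illustration of the technique, not the thesis's actual implementation: the layer sizes, normalization details, and training step below are assumptions, written in PyTorch for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMClassifier(nn.Module):
    """Toy LSTM sentence classifier (all sizes are illustrative assumptions)."""
    def __init__(self, vocab_size=100, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 2)  # check-worthy vs. not check-worthy

    def forward_from_embeddings(self, emb):
        _, (h, _) = self.lstm(emb)
        return self.fc(h[-1])

def adversarial_step(model, optimizer, tokens, labels, eps=0.1):
    # Normalize the word embeddings so the perturbation magnitude is
    # comparable across words (in the spirit of Miyato et al., 2017).
    emb = model.embed(tokens)
    emb = (emb - emb.mean()) / (emb.std() + 1e-8)

    # Loss on the clean embeddings, and its gradient w.r.t. the embeddings.
    clean_loss = F.cross_entropy(model.forward_from_embeddings(emb), labels)
    (grad,) = torch.autograd.grad(clean_loss, emb, retain_graph=True)

    # FGSM perturbation: a step of size eps along the sign of the gradient.
    r = eps * grad.sign().detach()

    # Modified loss: also require correct classification of the
    # perturbed embeddings.
    adv_loss = F.cross_entropy(model.forward_from_embeddings(emb + r), labels)
    loss = clean_loss + adv_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A training loop would call `adversarial_step` once per batch; at evaluation time the model classifies unperturbed embeddings as usual.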


Keywords

Neural networks, LSTMs, Adversarial training, Recurrent neural networks, Fact-checking, Machine learning


Disciplines

Computer Sciences | Physical Sciences and Mathematics


Degree granted by The University of Texas at Arlington