Dissertations, Theses, and Capstone Projects

Kyle Gorman

Subject Categories

Computational Linguistics


Keywords

natural language processing, machine learning


This thesis presents experiments that use representation learning to explore how neural networks learn. Neural networks that take text as input build internal representations of that text during training. Recent work has shown that these representations can be used to perform downstream linguistic tasks, such as part-of-speech (POS) tagging, demonstrating that the networks learn linguistic information and store it in their representations. We focus on the representations created by neural machine translation (NMT) models and ask whether they can be used for POS tagging. We train five NMT models, including an auto-encoder, extract the encoder from each model, and use the representations the encoder produces to train a hand-crafted Encoder-Tagger (ET) model to perform POS tagging. We explore the impact of several features, including NMT target language, NMT BLEU score, encoder depth, sequence length, token frequency, and the percentage of out-of-vocabulary (OOV) tokens in a sequence. We find that NMT encoder representations contain sufficient linguistic information to perform POS tagging, and that several of these features are correlated, which helps us better understand the inner workings of neural networks.
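The probing setup described above can be sketched in miniature. In this hedged example, a fixed random embedding table stands in for a trained (and frozen) NMT encoder, and a single softmax layer plays the role of the Encoder-Tagger probe; all names, the toy vocabulary, and the tag set are illustrative assumptions, not the thesis's actual models or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and POS tag inventory (assumptions for illustration).
vocab = {"the": 0, "cat": 1, "sat": 2, "dog": 3, "ran": 4}
tags = {"DET": 0, "NOUN": 1, "VERB": 2}
d_model = 16

# Frozen "encoder": a fixed random embedding table. In the thesis this
# would be a trained NMT encoder whose weights are not updated further.
encoder = rng.normal(size=(len(vocab), d_model))

def encode(tokens):
    """Map a token sequence to its frozen encoder representations."""
    return encoder[[vocab[t] for t in tokens]]  # shape: (seq_len, d_model)

# Tiny token-level training set: each token carries a POS label.
sents = [(["the", "cat", "sat"], ["DET", "NOUN", "VERB"]),
         (["the", "dog", "ran"], ["DET", "NOUN", "VERB"])]

X = np.vstack([encode(toks) for toks, _ in sents])
y = np.array([tags[t] for _, labels in sents for t in labels])

# Encoder-Tagger probe: one softmax layer trained by gradient descent
# on top of the frozen representations.
W = np.zeros((d_model, len(tags)))
b = np.zeros(len(tags))
for _ in range(200):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0          # softmax cross-entropy gradient
    W -= 0.5 * (X.T @ p) / len(y)
    b -= 0.5 * p.mean(axis=0)

preds = (X @ W + b).argmax(axis=1)
accuracy = (preds == y).mean()
```

If the probe reaches high accuracy while the encoder stays frozen, the linguistic information must already be present in the representations themselves; that is the core logic of the thesis's experiments, here shown only under toy assumptions.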