Dissertations, Theses, and Capstone Projects

Date of Degree

9-2023

Document Type

Thesis

Degree Name

M.A.

Program

Linguistics

Advisor

William Sakas

Subject Categories

Computational Linguistics

Keywords

language modeling, statistics, AI

Abstract

We investigate the properties of natural language prompts that determine their difficulty in machine reading comprehension (MRC) tasks. While much work has been done benchmarking language model performance at the task level, there is considerably less literature focused on how individual task items can contribute to interpretable evaluations of natural language understanding. Such work is essential to deepening our understanding of language models and ensuring their responsible use as a key tool in human-machine communication. We perform an in-depth mixed-effects analysis of the behavior of three major generative language models, comparing their performance on a large reading comprehension dataset, and draw several counterintuitive conclusions about the relationship between prompt features and model accuracy and about how that relationship varies across models. First, we observe a divergence in model accuracy as the prompt’s token count grows, with overall stronger models increasing in accuracy and overall weaker models decreasing. Second, all models unexpectedly exhibit accuracy gains when faced with increasing syntactic complexity, a metric derived from a prompt’s constituency parse tree. Last, a post hoc analysis reveals that the overall most difficult prompts have the greatest ability to discriminate between different language models, suggesting their outsized usefulness in MRC evaluation methodologies. These findings raise fascinating questions about the nature of language model understanding and suggest new, more interpretable approaches to their evaluation.
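To make the analysis concrete, the sketch below illustrates one way the abstract's pipeline could look in Python: a syntactic-complexity score derived from a prompt's constituency parse (here, simply the parse tree's height via NLTK) and a mixed-effects model relating per-item accuracy to prompt features, fit with statsmodels. The metric choice, the toy data, and the linear mixed-model specification are illustrative assumptions, not the thesis's exact implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from nltk import Tree


def syntactic_complexity(parse_str: str) -> int:
    """One possible complexity metric: the height of the prompt's constituency parse tree."""
    return Tree.fromstring(parse_str).height()


# Example: a short constituency parse for a toy prompt.
toy_parse = "(S (NP (DT The) (NN model)) (VP (VBD answered) (NP (DT the) (NN question))))"
print(syntactic_complexity(toy_parse))  # -> 5

# Toy per-item data: one row per (model, prompt) pair, with made-up feature values.
rng = np.random.default_rng(0)
n_items = 50
items = pd.DataFrame({
    "item_id": np.arange(n_items),
    "tokens": rng.integers(10, 120, n_items),       # prompt length in tokens
    "complexity": rng.integers(4, 15, n_items),     # parse-tree-based complexity score
})
rows = []
for model in ["model_A", "model_B", "model_C"]:
    d = items.copy()
    d["model"] = model
    d["correct"] = rng.integers(0, 2, n_items)      # placeholder 0/1 accuracy outcomes
    rows.append(d)
df = pd.concat(rows, ignore_index=True)

# Mixed-effects analysis: fixed effects for prompt features and their interaction
# with model identity, plus a random intercept per prompt item. The thesis's exact
# specification may differ (e.g., a logistic link for the binary accuracy outcome).
fit = smf.mixedlm("correct ~ tokens * C(model) + complexity * C(model)",
                  data=df, groups=df["item_id"]).fit()
print(fit.summary())
```

On this random toy data the fixed-effect estimates are meaningless; the point is the structure of the analysis, where model-by-feature interaction terms capture how the effect of token count or syntactic complexity differs between stronger and weaker models.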
