Artificial Intelligence and Robotics | Cognitive Psychology | Computational Linguistics | Morphology
quasi-regularity, neural model, morphological inflection, grapheme-phoneme mapping
Many aspects of language are quasi-regular: the relationship between inputs and outputs is systematic but admits many exceptions. Common quasi-regular domains include morphological inflection and grapheme-phoneme mapping. How humans process quasi-regularity has been debated for decades. This thesis applies modern neural network models, specifically transformers, to two quasi-regularity tasks, English past tense inflection and Chinese character naming, and investigates to what extent the models' performance reflects human behavior. The results show that the transformers' performance closely resembles human behavior in many respects, such as accuracy and answer variability. However, some differences remain: humans are more likely than the models to produce irregular forms for nonce English verbs, and more likely to produce regular pinyin for unknown Chinese characters.
Ma, Xiaomeng, "Evaluating Neural Networks as Cognitive Models for Learning Quasi-regularities in Language" (2023). CUNY Academic Works.