Date of Degree
Industrial and Organizational Psychology
Fluid Intelligence, Mobile Assessment, Bilingual, Group Differences, Intelligence Testing
As fluid intelligence tests are an integral part of modern employee selection protocols, assessment designers are tasked with ensuring the construct is measured accurately for all test-takers regardless of their demographic traits. Disparities in bilingual and monolingual working memory capabilities, which are critical for successful fluid intelligence test performance, may make it challenging for test designers to accomplish this goal. Best design practice in such cases is to identify assessment conditions that allow for equitable expression of the test construct. Thus, the purpose of this study was to examine whether the content of items on a fluid intelligence test and the features of the medium through which the test is presented influence score differences between monolinguals and bilinguals. Drawing on research related to neurocognition, intelligence testing, and mobile assessment, I proposed a moderated moderation model in which item type (i.e., novel graphic or pseudoword) and assessment medium features (i.e., scrolling or no scrolling requirement) jointly moderate the relation between linguistic background and fluid intelligence test scores. Hypotheses were tested among 255 Prolific members who completed 16 fluid reasoning items adapted from the LSAT to accommodate the test factors of interest. Specifically, the content about which individuals needed to reason was either novel graphic or pseudoword stimuli. The items were designed such that participants either needed to scroll between the information pertinent to answering each item and the questions, or this information was frozen at the top of the page so that it remained in the participant's view for all questions, eliminating the need to scroll. Results demonstrated that item type and the need to scroll affected within-group rather than between-group fluid intelligence test performance.
Monolinguals performed significantly better on novel graphic items when scrolling was needed compared to when it was not. By contrast, bilinguals performed significantly better on fluid reasoning tests containing pseudoword items when scrolling was needed compared to when it was not. Based on these findings, I contribute to the literature in three theoretical and practical ways. First, I offer fluid intelligence test designers nuanced guidance for implementing language reduction strategies in their test items. Second, I examine cultural influences on employment test score differences, which are seldom studied in the intelligence test group difference literature. Finally, I extend the mobile assessment literature, which is just beginning to examine how device type interacts with test features to influence performance rather than psychometric indices. A full discussion of the findings, their implications, limitations of the current study, and directions for future research is included.
Alenick, Paige R., "Intelligence Testing in the New (Langu)age: Effects of Item-Type and Assessment Medium Features on Fluid Intelligence Test Linguistic Group Score Differences" (2022). CUNY Academic Works.