Dissertations, Theses, and Capstone Projects
Date of Degree
9-2025
Document Type
Doctoral Dissertation
Degree Name
Doctor of Philosophy
Program
Educational Psychology
Advisor
Bruce D. Homer
Committee Members
Jan L. Plass
Joan M. Lucariello
Jay Verkuilen
Bixi Zhang
Subject Categories
Cognitive Psychology | Cognitive Science | Developmental Psychology | Educational Assessment, Evaluation, and Research | Educational Psychology | Educational Technology
Keywords
Game-based measurements, machine learning, executive function, learning analytics, educational games, educational data mining
Abstract
Traditional performance-based measurements have faced criticism for inducing test fatigue and for their limited ecological validity. In contrast, game-based assessments have demonstrated the capacity to engage participants effectively and to enhance ecological validity by replicating real-world scenarios. Nevertheless, conventional statistical approaches may prove insufficient for capturing the intricate, non-linear associations between participants' gaming performance and their cognitive capabilities. Machine learning (ML) techniques, which can model such complex relationships, offer a promising avenue for estimating students' cognitive abilities from their performance in video games. This dissertation explores various ML methods for the validation of three game-based executive function (EF) assessments.
The study included 137 young adults (M = 21.84 years, SD = 2.71) with balanced gender representation (49.59% male, 48.78% female) and diverse ethnic backgrounds, primarily African American (36.59%), Latino/Hispanic (29.27%), and Asian/Pacific Islander (17.07%). Participants engaged with three distinct video games, each designed to measure one of the three EF components: inhibition, shifting, and updating. They also completed a series of traditional EF tasks, including the Flanker task, the Dimensional Change Card Sort task, and the N-back task, all within a single session. Three analyses were conducted for the three game-task pairs: (1) correlational analysis, examining the relationships between participants' game performance metrics and their EF task outcomes; (2) unsupervised learning, using Gaussian Mixture Model (GMM) clustering in exploratory analyses to group participants based on their EF task performance; and (3) supervised learning, training a series of models, including k-nearest neighbors and random forests, to classify or predict participants' task performance from their game performance.
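For illustration only, the sketch below shows what these three analytic stages might look like in Python with scipy and scikit-learn. Every variable and column name here (df, game_accuracy, game_rt, flanker_effect, dccs_switch_cost, nback_dprime) is a hypothetical placeholder operating on synthetic data, not the dissertation's actual features or pipeline.

import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the study data (n = 137); all column names are invented.
rng = np.random.default_rng(0)
n = 137
df = pd.DataFrame({
    "game_accuracy": rng.normal(0.80, 0.10, n),
    "game_rt": rng.normal(600, 80, n),
    "flanker_effect": rng.normal(50, 15, n),
    "dccs_switch_cost": rng.normal(120, 40, n),
    "nback_dprime": rng.normal(1.5, 0.5, n),
})
game_features = ["game_accuracy", "game_rt"]
ef_outcome = "flanker_effect"

# (1) Correlational analysis: each game metric vs. the EF task outcome.
for col in game_features:
    r, p = pearsonr(df[col], df[ef_outcome])
    print(f"{col}: r = {r:.2f}, p = {p:.3f}")

# (2) Unsupervised learning: GMM clustering of EF task performance,
# selecting the number of components by BIC.
X_ef = df[["flanker_effect", "dccs_switch_cost", "nback_dprime"]].to_numpy()
fits = [GaussianMixture(n_components=k, random_state=0).fit(X_ef) for k in range(1, 6)]
best = min(fits, key=lambda m: m.bic(X_ef))
df["profile"] = best.predict(X_ef)

# (3) Supervised learning: predict the EF outcome from game metrics,
# scored with cross-validated R^2 (negative when worse than predicting the mean).
X_game, y = df[game_features].to_numpy(), df[ef_outcome].to_numpy()
for model in (KNeighborsRegressor(n_neighbors=5), RandomForestRegressor(random_state=0)):
    scores = cross_val_score(model, X_game, y, cv=5, scoring="r2")
    print(type(model).__name__, round(scores.mean(), 3))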
The results highlight both the promise and the limitations of game-based assessments. Correlational analyses revealed weak to moderate relationships between game metrics and EF task outcomes, with inconsistencies across the three domains. Clustering analyses provided valuable insights into participant profiles, underscoring the context-dependent and multidimensional nature of executive functions. Supervised learning models achieved moderate prediction accuracy but struggled to capture the complex, non-linear relationships between game metrics and EF outcomes. Negative or near-zero R-squared values indicated that the models often underperformed relative to baseline predictions; because R-squared equals 1 minus the ratio of the model's squared error to that of a mean-only baseline, a negative value means the model predicted worse than simply guessing the mean outcome.
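As a concrete illustration of that baseline (synthetic numbers, not study data): scikit-learn's r2_score compares a model's squared error against always predicting the mean of the observed outcomes, so any model with larger errors than that baseline scores below zero.

from sklearn.metrics import r2_score

y_true = [1.0, 2.0, 3.0, 4.0]
y_mean = [2.5, 2.5, 2.5, 2.5]   # mean-only baseline: R^2 = 0.0
y_bad  = [4.0, 1.0, 4.0, 1.0]   # larger errors than the baseline: R^2 = -3.0
print(r2_score(y_true, y_mean))  # 0.0
print(r2_score(y_true, y_bad))   # -3.0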
The findings emphasize the complexity of mapping game performance onto EF metrics, indicating that the current game-derived features only partially reflect the cognitive demands of traditional EF tasks. Variability in participant behavior, driven by individual strategies, measurement noise, environmental factors, and cognitive differences, further complicated predictive modeling. These results underscore the need for enhanced feature engineering, refined modeling techniques, and expanded datasets to improve predictive accuracy and generalizability.
This research contributes to the growing field of digital game-based assessment by providing empirical evidence of its potential and limitations. While challenges remain, the findings lay the groundwork for future studies to develop more sophisticated game-based tools for cognitive assessment. By leveraging the engaging and scalable nature of digital games, this work paves the way for innovative approaches to understanding and supporting executive functions in diverse populations.
Recommended Citation
Chen, Ming, "Using Machine Learning Methods to Evaluate Undergraduates’ Executive Function in the Context of Game-Based Measurement" (2025). CUNY Academic Works.
https://academicworks.cuny.edu/gc_etds/6396
