Industrial and Organizational Psychology
mobile assessment, SJT, situational judgment test, predictor method factors, test-taker reactions
In recent years, job applicants have increasingly taken internet-based pre-employment tests on mobile devices (e.g., smartphones) in addition to nonmobile devices (e.g., computers). This mobile assessment phenomenon introduces new issues into the test design process, such as ensuring consistent assessment outcomes across device types. Mobile assessment research has focused on device attributes and predictor constructs as explanations for potential differences across device types but has given little attention to predictor methods. Examining the role of predictor methods is important for understanding how to design assessments that perform comparably across device types, particularly for highly modular methods like situational judgment tests (SJTs). Thus, the present study examined how two predictor method factors, contextualization and stimulus format, affect SJT scores and test-taker reactions across device types. Group score differences across device types were also explored. A quasi-experimental, 2 x 3 x 2, between-subjects design was used to examine these relationships. Two hundred participants, recruited from an undergraduate student research pool and through snowball sampling, took an SJT in one of twelve conditions that differed in level of contextualization (none vs. medium vs. high), stimulus format (text-only vs. pictures), and type of device used to complete the test (mobile vs. nonmobile). Contrary to expectations, neither SJT scores nor test-taker reactions differed across device types in any of the experimental conditions, suggesting that the experience of taking the assessment was comparable on a smartphone and a computer regardless of predictor method factors. The key findings were that both predictor method factors had a positive effect on SJT performance, such that adding context and pictures to the assessment was associated with higher scores. The two predictor method factors also had small and inconsistent positive effects on test-taker reactions. No significant group score differences were found for either device type, although limited sample sizes prevented extensive analysis. Overall, the findings provide little support for mobile assessment frameworks but suggest that contextualization and stimulus format can meaningfully affect SJT outcomes. Thus, the current study offers insights into the boundary conditions of device-based differences in assessment outcomes as well as the role of predictor method factors in SJTs.
Kato, Anne E., "Is Less More? Examining the Effects of Predictor Method Factors on Mobile SJT Scores and Test-Taker Reactions" (2022). CUNY Academic Works.