ORCID Identifier(s)

0009-0009-5875-7392

Graduation Semester and Year

Winter 2025

Language

English

Document Type

Thesis

Degree Name

Master of Science in Psychology

Department

Psychology

First Advisor

Michelle Martin-Raugh

Second Advisor

Larry Martinez

Third Advisor

Logan Watts

Abstract

As AI continues to evolve, tasks once thought impossible for AI have become a reality. Within test development, one promising application of AI is the creation of Situational Judgement Tests (SJTs). While SJTs are often presented as an effective assessment tool (Motowidlo et al., 1990) that can be easily automated, the time and resources required for their development are often considered prohibitive. Utilizing AI in SJT development has the potential to significantly reduce the resources such a process requires. To test the viability of using AI to assist in the development process, this study assesses and compares the validity evidence for two similar SJTs: one developed by human subject matter experts and the other with the assistance of AI. Specifically, I evaluate the reliability, convergent and divergent validity, predictive validity, and fairness of both SJTs. The AI-generated SJT was generally comparable to the human-developed SJT, demonstrating AI’s potential for use in SJT development. Limitations regarding convergent and criterion validity are discussed. Additionally, this research helps inform a process by which AI can potentially be used to create valid, psychometrically sound SJTs.

Keywords

Situational Judgement Tests, AI, LLM, Psychometrics

Disciplines

Industrial and Organizational Psychology

License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Available for download on Friday, December 10, 2027