Cavanagh, Thomas M., author
Kraiger, Kurt, advisor
Gibbons, Alyssa, committee member
Henry, Kim, committee member
Maynard, Travis, committee member
2007-01-03
2014
http://hdl.handle.net/10217/83718
Online tests are a relatively efficient way to assess large numbers of job candidates and are becoming increasingly popular with organizations. Because they are unproctored, however, online selection tests give candidates the opportunity to cheat, which may undermine the validity of these tests for selecting qualified candidates. The purpose of this study was to test the appropriateness of utility theory as a framework for understanding decision-making with regard to cheating on an online cognitive ability test (CAT) by manipulating the probability of passing the test by cheating, the probability of being caught cheating, and the value of being caught cheating in two samples: 518 adults recruited through Amazon mTurk and 384 undergraduate students. The probability of passing the test by cheating significantly affected performance on the CAT for the mTurk sample, but not for the student sample, and significantly moderated the relationship between CAT score in session one and CAT score in session two for the student sample. Neither the probability of being caught cheating nor the value of being caught cheating significantly affected CAT performance or validity in either sample. Findings regarding the prevalence and effectiveness of cheating are discussed.
born digital
doctoral dissertations
eng
Copyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright.
unproctored Internet testing
prevalence
validity
cheating
mTurk
Cheating on online assessment tests: prevalence and impact on validity
Text