Author: Sanchez, Diana R.
Advisors: Gibbons, Alyssa; Kraiger, Kurt
Committee members: Kiefer, Kate; Troup, Lucy
Date accessioned: 2007-01-03
Date available: 2007-01-03
Date issued: 2014
Handle: http://hdl.handle.net/10217/88593
Abstract: Automated scoring has promised benefits for personnel assessment, such as faster and cheaper simulations, but there is as yet little research evidence supporting these claims. This study explored the feasibility of automated scoring for complex assessments (e.g., assessment centers). Phase 1 examined the practicality of converting complex behavioral exercises into an automated scoring format. Using qualitative content analysis, participant behaviors were coded into sets of distinct categories. Results indicated that variations in behavior could be described by a reasonable number of categories, implying that automated scoring is feasible without drastically limiting the options available to participants. Phase 2 compared original scores (generated by human assessors) with automated scores (generated by an algorithm based on the Phase 1 data). Automated scores converged significantly with, and significantly predicted, original scores, although the effect size was modest at best and varied significantly across competencies. Further analyses revealed that strict inclusion criteria are important for filtering out contamination in automated scores. Despite these findings, we cannot confidently recommend implementing automated scoring methods without further research specifically examining the competencies for which automated scoring is most effective.
Description: born digital
Degree type: masters theses
Language: eng
Rights: Copyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright.
Subjects: assessment centers; technology; qualitative content analysis; automated scoring
Title: Automated scoring in assessment centers: evaluating the feasibility of quantifying constructed responses
Type: Text