Automated scoring in assessment centers: evaluating the feasibility of quantifying constructed responses
dc.contributor.author | Sanchez, Diana R.
dc.contributor.advisor | Gibbons, Alyssa
dc.contributor.advisor | Kraiger, Kurt
dc.contributor.committeemember | Kiefer, Kate
dc.contributor.committeemember | Troup, Lucy
dc.date.accessioned | 2007-01-03T06:23:25Z
dc.date.available | 2007-01-03T06:23:25Z
dc.date.issued | 2014
dc.description.abstract | Automated scoring promises benefits for personnel assessment, such as faster and cheaper simulations, but there is as yet little research evidence for these claims. This study explored the feasibility of automated scoring for complex assessments (e.g., assessment centers). Phase 1 examined the practicality of converting complex behavioral exercises into an automated scoring format. Using qualitative content analysis, participant behaviors were coded into sets of distinct categories. Results indicated that variation in behavior could be described by a manageable number of categories, implying that automated scoring is feasible without drastically limiting the options available to participants. Phase 2 compared original scores (generated by human assessors) with automated scores (generated by an algorithm based on the Phase 1 data). Automated scores converged significantly with, and significantly predicted, original scores, although effect sizes were modest at best and varied considerably across competencies. Further analyses revealed that strict inclusion criteria are important for filtering contamination out of automated scores. Given these mixed findings, we cannot confidently recommend implementing automated scoring without further research into the competencies for which it is most effective.
dc.format.medium | born digital
dc.format.medium | masters theses
dc.identifier | Sanchez_colostate_0053N_12706.pdf
dc.identifier.uri | http://hdl.handle.net/10217/88593
dc.language | English
dc.language.iso | eng
dc.publisher | Colorado State University. Libraries
dc.relation.ispartof | 2000-2019
dc.rights | Copyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright.
dc.subject | assessment centers | |
dc.subject | technology | |
dc.subject | qualitative content analysis | |
dc.subject | automated scoring | |
dc.title | Automated scoring in assessment centers: evaluating the feasibility of quantifying constructed responses | |
dc.type | Text | |
dcterms.rights.dpla | This Item is protected by copyright and/or related rights (https://rightsstatements.org/vocab/InC/1.0/). You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
thesis.degree.discipline | Psychology | |
thesis.degree.grantor | Colorado State University | |
thesis.degree.level | Masters | |
thesis.degree.name | Master of Science (M.S.) |
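
The abstract above outlines a two-phase method: participant behaviors are first coded into discrete categories (Phase 1), and an algorithm then converts those coded behaviors into competency scores whose convergence with human assessor scores is tested (Phase 2). The thesis's actual algorithm is not reproduced in this record; the following Python sketch is only a hypothetical illustration of that idea, with invented category names, weights, and scores standing in for the real data.

    # Hypothetical sketch only: the categories, weights, and scores below
    # are invented for illustration and are not the thesis's data or method.
    from statistics import correlation  # Pearson's r (Python 3.10+)

    # Stand-in for Phase 1 output: counts of coded behavior categories
    # observed for each participant in one exercise.
    coded_behaviors = {
        "P01": {"delegates": 3, "clarifies_goals": 2, "interrupts": 1},
        "P02": {"delegates": 0, "clarifies_goals": 4, "interrupts": 0},
        "P03": {"delegates": 2, "clarifies_goals": 1, "interrupts": 3},
    }

    # Hypothetical weights mapping categories to one competency;
    # positive behaviors raise the score, negative ones lower it.
    weights = {"delegates": 1.0, "clarifies_goals": 0.5, "interrupts": -0.75}

    def automated_score(counts):
        """Weighted sum of coded behavior frequencies for one participant."""
        return sum(weights.get(cat, 0.0) * n for cat, n in counts.items())

    auto_scores = [automated_score(c) for c in coded_behaviors.values()]

    # Stand-in for Phase 2 "original scores": human assessor ratings of
    # the same participants on the same competency.
    assessor_scores = [4.0, 3.5, 2.0]

    # Convergence check: correlate automated with original scores.
    # (The thesis reports modest, competency-dependent convergence;
    # these toy numbers are not its results.)
    print(f"convergence r = {correlation(auto_scores, assessor_scores):.2f}")

A per-competency version would repeat this correlation with a separate weight set for each competency, which is where the thesis found convergence to vary.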
Files
Original bundle
- Name: Sanchez_colostate_0053N_12706.pdf
- Size: 5.05 MB
- Format: Adobe Portable Document Format