Multimodal agents for cooperative interaction
dc.contributor.author | Strout, Joseph J., author | |
dc.contributor.author | Beveridge, Ross, advisor | |
dc.contributor.author | Ortega, Francisco, committee member | |
dc.contributor.author | Daunhauer, Lisa, committee member | |
dc.date.accessioned | 2021-01-11T11:20:06Z | |
dc.date.available | 2021-01-11T11:20:06Z | |
dc.date.issued | 2020 | |
dc.description.abstract | Embodied virtual agents offer the potential to interact with a computer in a more natural manner, similar to how we interact with other people. Reaching this potential requires multimodal interaction, including both speech and gesture. This project builds on earlier work at Colorado State University and Brandeis University on such a multimodal system, referred to as Diana. I designed and developed a new software architecture to directly address some of the difficulties of the earlier system, particularly with regard to asynchronous communication, e.g., interrupting the agent after it has begun to act. Various other enhancements were made to the agent systems, including the model itself, as well as speech recognition, speech synthesis, motor control, and gaze control. Further refactoring and new code were developed to achieve software engineering goals that are not outwardly visible but no less important: decoupling, testability, improved networking, and independence from a particular agent model. This work, combined with the efforts of others in the lab, has produced a "version 2" Diana system that is well positioned to serve the lab's research needs in the future. In addition, to pursue new research opportunities related to developmental and intervention science, a "Faelyn Fox" agent was developed. This agent uses a different model with a simplified cognitive architecture, and a system for defining an experimental protocol (for example, a toy-sorting task) based on Unity's visual state machine editor. This version, too, lays a solid foundation for future research. | |
dc.format.medium | born digital | |
dc.format.medium | masters theses | |
dc.identifier | Strout_colostate_0053N_16283.pdf | |
dc.identifier.uri | https://hdl.handle.net/10217/219514 | |
dc.language | English | |
dc.language.iso | eng | |
dc.publisher | Colorado State University. Libraries | |
dc.relation.ispartof | 2020- | |
dc.rights | Copyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright. | |
dc.subject | artificial intelligence | |
dc.subject | gesture | |
dc.subject | speech | |
dc.subject | communication | |
dc.subject | agents | |
dc.subject | multimodal | |
dc.title | Multimodal agents for cooperative interaction | |
dc.type | Text | |
dcterms.rights.dpla | This Item is protected by copyright and/or related rights (https://rightsstatements.org/vocab/InC/1.0/). You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s). | |
thesis.degree.discipline | Computer Science | |
thesis.degree.grantor | Colorado State University | |
thesis.degree.level | Masters | |
thesis.degree.name | Master of Science (M.S.) |
Files
Original bundle
- Name: Strout_colostate_0053N_16283.pdf
- Size: 911.64 KB
- Format: Adobe Portable Document Format