Show simple item record

dc.contributor.advisor: Beveridge, Ross
dc.contributor.advisor: Ortega, Francisco
dc.contributor.author: Wang, Heting
dc.contributor.committeemember: Sharp, Julia
dc.contributor.committeemember: Peterson, Christopher
dc.date.accessioned: 2020-09-07T10:08:55Z
dc.date.available: 2020-09-07T10:08:55Z
dc.date.issued: 2020
dc.description: 2020 Summer.
dc.description: Includes bibliographical references.
dc.description.abstract: In Human-Computer Interaction, it is difficult to give machines the emotional intelligence to resemble human affect, such as the capacity for empathy. This thesis presents our work on an emotionally expressive avatar named Diana that can recognize human affect and convey empathy through dynamic facial expressions. Diana's behaviors and facial expressions were modeled on human-human multimodal interactions to elicit human-like perceptions in users. Specifically, we designed her empathic facial expressions as a linear combination of action units in the Facial Action Coding System [1], using action units previously found to improve accuracy and judgments of human likeness. Our work studies the role of affect between a human and Diana working together in a blocks world. We first conducted an elicitation study to extract naturally occurring gestures from naive human pairs. Each pair collaborated remotely through video communication on a wooden-block building task. The video footage of their interactions composed a dataset named EGGNOG [2]. We provide descriptive and statistical analyses of the affective metrics between human signalers and builders in EGGNOG. The metrics included measures of valence (positive or negative experience) and the intensities of seven basic emotions (joy, sadness, fear, disgust, anger, surprise, and contempt). We found: 1) overall, signalers had a broader range of valence and showed more varied emotions than builders; 2) the intensity of signalers' joy was greater than that of builders, indicating that signalers were happier; 3) for individuals, a person was happier acting as a signaler in a task than as a builder. Additionally, valence was more associated with a person's role in a task than with personality traits. All other emotions were weak, with no significant difference between signalers and builders.
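The abstract describes building expressions as a linear combination of FACS action units. A minimal sketch of that idea follows; the AU numbers follow FACS conventions, but the specific weights and the `blend_expression` helper are illustrative assumptions, not the thesis's actual model.

```python
# Hypothetical sketch: compose a facial expression as a weighted linear
# combination of FACS action-unit (AU) activations. The weights here are
# illustrative, not the values used in the thesis.

def blend_expression(weights, au_activations):
    """Scale each AU activation (0..1) by its weight and clamp to 1.0,
    yielding a single blended expression as a dict of AU -> intensity."""
    expression = {}
    for au, base in au_activations.items():
        expression[au] = min(1.0, weights.get(au, 0.0) * base)
    return expression

# Example: a joy-like expression driven mainly by AU6 (cheek raiser)
# and AU12 (lip corner puller); AU1 receives no weight.
aus = {"AU6": 1.0, "AU12": 1.0, "AU1": 1.0}
weights = {"AU6": 0.8, "AU12": 0.9}
print(blend_expression(weights, aus))  # -> {'AU6': 0.8, 'AU12': 0.9, 'AU1': 0.0}
```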
To adapt to the user's affect in the subsequent Human-Avatar interaction, we modeled Diana's empathic behaviors on the findings from EGGNOG and on appraisal theory [3]. We created a Demo mode of Diana whose affective states, i.e., facial expressions simulating empathy, transitioned dynamically among five finite states (neutral, joy, sympathy, concentration, and confusion) in response to the user's affect and gestures. We also created a Mimicry mode of Diana that mimicked the user's instantaneous facial expressions. Human subject studies involving three modes of the avatar (Demo, Mimicry, and Emotionless) were conducted with 21 participants. Differences in responses on both a 5-point Likert-scale perception questionnaire and a NASA TLX perceived-workload survey were statistically insignificant. However, descriptive analysis indicated that users spent more time engaging with the empathic Diana than with the Mimicry or Emotionless Diana, and users preferred both the Demo and Mimicry modes over the Emotionless Diana. Some participants described Diana's facial expressions as natural and friendly, while three others reported uncomfortable feelings and mentioned the Uncanny Valley effect. The results indicate that our approach to adding affect to Diana was perceived differently by different people, receiving both positive and negative feedback. Our work provides another implementable direction for human-centered user interfaces with complex affective states. However, there was no evidence that participants preferred the empathic facial expressions over the mimicked ones. In the future, Diana's empathic facial expressions may be refined by modeling more human-like action-unit movements with the help of deep learning networks, which may improve user perception in subjective reports.
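The five-state affect machine described above can be sketched as a simple state-transition function. The transition rules below are illustrative assumptions based only on the abstract (e.g., that an unrecognized gesture triggers confusion and negative valence triggers sympathy), not the thesis's actual logic.

```python
# Hypothetical sketch of the five-state affect machine the abstract
# describes: neutral, joy, sympathy, concentration, confusion.
# Transition rules are illustrative assumptions, not the thesis's logic.

STATES = {"neutral", "joy", "sympathy", "concentration", "confusion"}

def next_state(current, user_valence, gesture_recognized):
    """Pick Diana's next affective state from the user's valence
    (-1..1) and whether the last gesture was understood."""
    if not gesture_recognized:
        return "confusion"
    if user_valence > 0.3:
        return "joy"
    if user_valence < -0.3:
        return "sympathy"        # empathize with a frustrated user
    if current == "confusion":
        return "neutral"         # recover once a gesture is understood
    return "concentration"       # mid-task, neutral valence
```

A real implementation would drive `user_valence` from a facial-expression recognizer and `gesture_recognized` from the gesture pipeline, updating the state each frame or each interaction turn.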
dc.format.medium: born digital
dc.format.medium: masters theses
dc.identifier: Wang_colostate_0053N_16252.pdf
dc.identifier.uri: https://hdl.handle.net/10217/212076
dc.language: English
dc.publisher: Colorado State University. Libraries
dc.relation.ispartof: 2020- CSU Theses and Dissertations
dc.rights: Copyright of the original work is retained by the author.
dc.subject: empathic avatar
dc.subject: human-computer interaction
dc.subject: perception
dc.subject: facial action coding system
dc.subject: affective computing
dc.subject: multimodal system
dc.title: Empathic avatar in task-driven human-computer interaction, An
dc.type: Text
dcterms.rights.dpla: The copyright and related rights status of this Item has not been evaluated (https://rightsstatements.org/vocab/CNE/1.0/). Please refer to the organization that has made the Item available for more information.
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Colorado State University
thesis.degree.level: Masters
thesis.degree.name: Master of Science (M.S.)

