Wang, Heting, author
Beveridge, Ross, advisor
Ortega, Francisco, advisor
Sharp, Julia, committee member
Peterson, Christopher, committee member
2020-09-07
2020-09-07
2020
https://hdl.handle.net/10217/212076
Zip file contains data CSVs.

In human-computer interaction, it is difficult to give machines the emotional intelligence to resemble human affect, such as the capacity for empathy. This thesis presents our work on an emotionally expressive avatar named Diana that can recognize human affect and show empathy through dynamic facial expressions. Diana's behaviors and facial expressions were modeled on human-human multimodal interactions so that users would perceive her as human-like. Specifically, we designed her empathic facial expressions as linear combinations of action units in the Facial Action Coding System [1], using action units previously found to improve accuracy and judgments of human likeness. Our work studies the role of affect between a human and Diana working together in a blocks world.

We first conducted an elicitation study to extract naturally occurring gestures from naive human pairs. Each pair collaborated remotely, through video communication, on a task of building structures from wooden blocks. The video footage of their interactions composes a dataset named EGGNOG [2]. We provide descriptive and statistical analyses of the affective metrics for human signalers and builders in EGGNOG. The metrics included valence (positive or negative experience) and the intensities of seven basic emotions (joy, sadness, fear, disgust, anger, surprise, and contempt). We found: 1) overall, signalers had a broader range of valence and showed more varied emotions than builders; 2) the intensity of signalers' joy was greater than that of builders, indicating that signalers were happier than builders; 3) for a given individual, the person was happier acting as a signaler than as a builder. Additionally, valence was more strongly associated with a person's role in a task than with personality traits. All other emotions were weak, and no significant difference was found between signalers and builders.

To adapt to the user's affect in the subsequent human-avatar interaction, we modeled Diana's empathic behaviors on the findings from EGGNOG and on appraisal theory [3]. We created a Demo mode of Diana whose affective state, i.e., a facial expression simulating empathy, transitioned dynamically among five finite states (neutral, joy, sympathy, concentration, and confusion) in response to the user's affect and gestures. We also created a Mimicry mode of Diana that mimicked the user's instantaneous facial expressions. Human subject studies involving three modes of the avatar (Demo, Mimicry, and Emotionless) were conducted with 21 participants. Differences in responses, both on a 5-point Likert-scale perception questionnaire and on a NASA TLX perceived-workload survey, were statistically insignificant. However, a descriptive analysis indicated that users spent more time engaging with the empathic Diana than with the Mimicry or Emotionless Diana, and that users preferred both the Demo and Mimicry modes over the Emotionless Diana. Some participants described Diana's facial expressions as natural and friendly, while three other participants reported feeling uncomfortable and mentioned the uncanny valley effect. The results indicate that our approach of adding affect to Diana was perceived differently by different people and received both positive and negative feedback.
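To illustrate the linear-combination design described above, the following Python sketch composes an expression from weighted FACS action unit (AU) intensities. The specific AU numbers and weights are illustrative assumptions, not the parameters actually used for Diana.

# A minimal sketch: an expression is a linear combination of FACS action
# units (AUs), each scaled by an overall intensity. The AU choices and
# weights below are illustrative assumptions, not Diana's actual values.
EXPRESSION_WEIGHTS = {
    "joy":      {6: 0.8, 12: 1.0},          # AU6 cheek raiser, AU12 lip corner puller
    "sympathy": {1: 0.6, 4: 0.3, 15: 0.4},  # AU1 inner brow raiser, AU4 brow lowerer, AU15 lip corner depressor
}

def blend_expression(name, intensity):
    """Scale each AU weight by an overall intensity clamped to [0, 1]."""
    intensity = max(0.0, min(1.0, intensity))
    return {au: w * intensity for au, w in EXPRESSION_WEIGHTS[name].items()}

# Example: a half-intensity joy expression -> {6: 0.4, 12: 0.5}
print(blend_expression("joy", 0.5))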
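The signaler-versus-builder valence comparison could be reproduced from the data CSVs noted in this record along the following lines. The column names "role" and "valence" are assumptions about the file layout, not the published EGGNOG schema.

import csv
from statistics import mean, pstdev

def valence_by_role(path):
    """Mean and spread of valence per role ("signaler"/"builder").

    Assumes one row per observation with "role" and "valence" columns;
    the real EGGNOG CSV layout may differ.
    """
    samples = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            samples.setdefault(row["role"], []).append(float(row["valence"]))
    return {role: (mean(v), pstdev(v)) for role, v in samples.items()}

Under this layout, a larger spread for the signaler role would correspond to finding 1 above, that signalers showed a broader range of valence than builders.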
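The Demo mode's dynamic transitions among the five named states can be sketched as a finite-state transition table. The cue vocabulary and the particular transitions below are assumptions for illustration; the thesis derives its actual rules from EGGNOG and appraisal theory.

# Finite-state sketch of the Demo mode. The five states come from the
# abstract; the cues and transition rules are assumed for illustration.
STATES = {"neutral", "joy", "sympathy", "concentration", "confusion"}

TRANSITIONS = {
    ("neutral", "user_smiles"): "joy",
    ("neutral", "user_frowns"): "sympathy",
    ("neutral", "gesture_detected"): "concentration",
    ("concentration", "gesture_ambiguous"): "confusion",
    ("confusion", "gesture_clarified"): "concentration",
    ("joy", "user_neutral"): "neutral",
}

def next_state(state, cue):
    """Return the next affective state; stay put when no rule matches."""
    return TRANSITIONS.get((state, cue), state)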
Our work provides an implementable direction for human-centered user interfaces with complex affective states. However, there was no evidence that participants preferred the empathic facial expressions over the mimicked facial expressions. In the future, Diana's empathic facial expressions may be refined by modeling more human-like action unit movements with the help of deep learning networks, which may improve user perception in subjective reports.

born digital
masters theses
ZIP
CSV
eng
Copyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright.
empathic avatar
human-computer interaction
perception
facial action coding system
affective computing
multimodal system
An empathic avatar in task-driven human-computer interaction
Text