Browsing by Author "Ortega, Francisco, advisor"
Now showing 1 - 3 of 3
Item Open Access
An empathic avatar in task-driven human-computer interaction (Colorado State University. Libraries, 2020)
Wang, Heting, author; Beveridge, Ross, advisor; Ortega, Francisco, advisor; Sharp, Julia, committee member; Peterson, Christopher, committee member

In Human-Computer Interaction, it is difficult to give machines the emotional intelligence to resemble human affect, such as the capacity for empathy. This thesis presents our work on an emotionally expressive avatar named Diana that can recognize human affect and show empathy through dynamic facial expressions. Diana's behaviors and facial expressions were modeled on human-human multimodal interactions to help give users human-like perceptions of her. Specifically, we designed her empathic facial expressions as linear combinations of action units from the Facial Action Coding System [1], using action units previously found to improve accuracy and judgments of human likeness. Our work studies the role of affect between a human and Diana working together in a blocks world. We first conducted an elicitation study to extract naturally occurring gestures from naive human pairs. Each pair collaborated remotely through video communication on a wooden-block building task. The video footage of their interactions forms a dataset named EGGNOG [2]. We provide descriptive and statistical analyses of the affective metrics between human signalers and builders in EGGNOG. The metrics include measures of valence (positive or negative experience) and the intensities of seven basic emotions, including joy, fear, disgust, anger, surprise, and contempt. We found: 1) overall, signalers had a broader range of valence and showed more varied emotions than builders; 2) the intensity of signalers' joy was greater than that of builders, indicating that signalers were happier than builders; 3) individuals were happier acting as a signaler in a task than as a builder. Additionally, valence was more strongly associated with a person's role in a task than with personality traits. The other emotions were all weak, and no significant differences were found between signalers and builders. To adapt to the user's affect in the subsequent human-avatar interaction, we modeled Diana's empathic behaviors on the findings in EGGNOG and Appraisal theory [3]. We created a Demo mode of Diana whose affective states, i.e., facial expressions simulating empathy, dynamically transitioned among five finite states (neutral, joy, sympathy, concentration, and confusion) in response to the user's affect and gestures. We also created a Mimicry mode of Diana that mimicked the user's instantaneous facial expressions. Human subject studies involving three modes of this avatar (Demo, Mimicry, and Emotionless) were conducted with 21 participants. Differences in responses to a 5-point Likert-scale perception questionnaire and to a NASA TLX perceived-workload survey were both statistically insignificant. However, a descriptive analysis indicated that users spent more time engaging with the empathic Diana than with the Mimicry or Emotionless Diana, and users preferred both the Demo and Mimicry modes over the Emotionless Diana. Some participants described Diana's facial expressions as natural and friendly, while three others reported uncomfortable feelings and mentioned the Uncanny Valley effect.
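The Demo mode described above is, in effect, a small finite-state machine: five affective states whose transitions are driven by the user's affect and gestures, each rendered as a linear combination of FACS action units. As a rough illustration only, such a controller might look like the sketch below; the state names come from the abstract, but the transition thresholds, event labels, and AU weights are assumptions, not the thesis's actual model.

```python
# Minimal sketch (NOT the thesis implementation) of a five-state affect
# controller like Diana's Demo mode. Transition rules, event labels, and
# AU weights below are illustrative placeholders.
from enum import Enum, auto

class Affect(Enum):
    NEUTRAL = auto()
    JOY = auto()
    SYMPATHY = auto()
    CONCENTRATION = auto()
    CONFUSION = auto()

# Hypothetical per-state Action Unit (AU) weights; the abstract describes
# expressions as linear combinations of FACS AUs, but these numbers are made up.
AU_WEIGHTS = {
    Affect.JOY:           {"AU6": 0.8, "AU12": 1.0},  # cheek raiser, lip corner puller
    Affect.SYMPATHY:      {"AU1": 0.6, "AU15": 0.4},
    Affect.CONCENTRATION: {"AU4": 0.5, "AU7": 0.3},
    Affect.CONFUSION:     {"AU4": 0.7, "AU2": 0.5},
    Affect.NEUTRAL:       {},
}

def next_state(current: Affect, user_valence: float, gesture: str) -> Affect:
    """Choose the avatar's next affective state from the user's estimated
    valence (-1..1) and latest gesture label (both assumed inputs)."""
    if gesture == "repeated_gesture":   # user repeating a command -> confusion
        return Affect.CONFUSION
    if user_valence > 0.3:
        return Affect.JOY
    if user_valence < -0.3:
        return Affect.SYMPATHY
    if gesture == "pointing":           # task-focused input -> concentration
        return Affect.CONCENTRATION
    return Affect.NEUTRAL

def expression_blend(state: Affect) -> dict:
    """Return the AU activation vector (the linear combination) for a state."""
    return AU_WEIGHTS[state]

if __name__ == "__main__":
    state = Affect.NEUTRAL
    for valence, gesture in [(0.5, "wave"), (0.0, "pointing"), (-0.6, "none")]:
        state = next_state(state, valence, gesture)
        print(state.name, expression_blend(state))
```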
Results indicated that our approach of adding affect to Diana was perceived differently by different people and received both positive and negative feedback. Our work provides another implementable direction for human-centered user interfaces with complex affective states. However, there was no evidence that participants preferred the empathic facial expressions over the mimicked ones. In the future, Diana's empathic facial expressions may be refined by modeling more human-like action unit movements with the help of deep learning networks, which may improve user perceptions in subjective reports.

Item Open Access
Collaborating with artists to design additional multimodal and unimodal interaction techniques for three-dimensional drawing in virtual reality (Colorado State University. Libraries, 2023)
Sullivan, Brian T., author; Ortega, Francisco, advisor; Ghosh, Sudipto, committee member; Tornatzky, Cyane, committee member; Barrera Machuca, Mayra, committee member; Batmaz, Anil Ufuk, committee member

Although drawing is an old and common mode of human creativity and expression, virtual reality (VR) presents an opportunity for a novel form of drawing. Instead of representing three-dimensional objects with marks on a two-dimensional surface, VR permits people to create three-dimensional (3D) drawings in midair. It remains unknown, however, what would constitute an optimal interface for 3D drawing in VR. This thesis helps to answer this question by describing a co-design study conducted with artists to identify desired multimodal and unimodal interaction techniques to incorporate into user interfaces for 3D VR drawing. Numerous modalities and interaction techniques were proposed in this study, which can inform future research into interaction techniques for this developing medium.

Item Embargo
Interaction and navigation in cross-reality analytics (Colorado State University. Libraries, 2024)
Zhou, Xiaoyan, author; Ortega, Francisco, advisor; Ray, Indrakshi, committee member; Moraes, Marcia, committee member; Batmaz, Anil Ufuk, committee member; Malinin, Laura, committee member

With the rapid evolution of immersive display technology, augmented reality (AR) and virtual reality (VR) are increasingly being studied as platforms for data analytics, a field known as Immersive Analytics. The ability to interact with data visualizations in the space around users not only lays the foundation for ubiquitous analytics but also assists users in making sense of the data. However, interaction and navigation during sensemaking of 3D data visualizations in different realities are still not well understood. For example, how does user interaction differ between augmented and virtual reality, and how can those differences best be exploited during analysis tasks? Moreover, based on existing work and our preliminary studies, improving interaction efficiency with immersive displays remains an open problem. This thesis therefore focuses on understanding interaction and navigation in augmented reality and virtual reality for immersive analytics. First, we explored how users interact with multiple objects in augmented reality using the "Wizard of Oz" study approach. We elicited multimodal interactions involving hand gestures and speech, with text prompts shown on the head-mounted display.
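Elicited multimodal commands like the ones gathered in this Wizard-of-Oz study are commonly analyzed by pairing gesture and speech events that occur close together in time. The following is a minimal, hypothetical sketch of such time-window fusion; the event format, labels, and window size are assumptions for illustration, not the thesis's actual pipeline.

```python
# Hypothetical time-window fusion of elicited gesture + speech events.
from dataclasses import dataclass

@dataclass
class Event:
    modality: str  # "gesture" or "speech"
    label: str     # e.g. "point_at_lamp", "turn that on"
    t: float       # seconds since session start

FUSION_WINDOW = 1.5  # assumed maximum gap between paired gesture and speech

def fuse(events: list[Event]) -> list[tuple[Event, Event]]:
    """Pair each gesture with the nearest speech event inside the window,
    yielding candidate multimodal commands for later analysis."""
    gestures = [e for e in events if e.modality == "gesture"]
    speech = [e for e in events if e.modality == "speech"]
    pairs = []
    for g in gestures:
        near = [s for s in speech if abs(s.t - g.t) <= FUSION_WINDOW]
        if near:
            pairs.append((g, min(near, key=lambda s: abs(s.t - g.t))))
    return pairs

if __name__ == "__main__":
    log = [
        Event("gesture", "point_at_lamp", 10.2),
        Event("speech", "turn that on", 10.9),
        Event("gesture", "swipe_left", 15.0),
    ]
    for g, s in fuse(log):
        print(f"{g.label} + '{s.label}' @ {g.t:.1f}s")
```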
Then, we compared the results with previous work on a single-object scenario, which helped us better understand how users prefer to interact in a more complex AR environment. Second, we built an immersive analytics platform in both AR and VR environments to simulate a realistic scenario and conducted a controlled study to evaluate user performance with the designed analysis tools and 3D data visualizations. From the results, we observed and analyzed interaction and navigation patterns to better understand user preferences during the sensemaking process. Lastly, drawing on the findings and insights from the prior studies, we developed a hybrid user interface in simulated cross-reality for situated analytics. An exploratory study was conducted in a smart-home setting to understand user interaction and navigation in a more familiar scenario with practical tasks. From the results, we performed a thorough qualitative analysis of feedback and video recordings to reveal user preferences for interaction and visualization in situated analytics in an everyday decision-making scenario. In conclusion, this thesis uncovered user-designed multimodal interactions, including mid-air hand gestures and speech, for AR; users' interaction and navigation strategies in immersive analytics in both AR and VR; and hybrid user interface usage in situated analytics for assisting decision-making. The findings and insights in this thesis provide guidelines and inspiration for future research on interaction and navigation design and for improving the user experience of analytics in mixed-reality environments.