Browsing by Author "Ortega, Francisco R., advisor"
Now showing 1 - 11 of 11
Item Open Access
Application of the neural data transformer to non-autonomous dynamical systems (Colorado State University. Libraries, 2023)
Mifsud, Domenick M., author; Ortega, Francisco R., advisor; Anderson, Charles, advisor; Thomas, Micheal, committee member; Barreto, Armando, committee member
The Neural Data Transformer (NDT) is a novel non-recurrent neural network designed to model neural population activity, offering faster inference times and the potential to advance real-time applications in neuroscience. In this study, we expand the applicability of the NDT to non-autonomous dynamical systems by investigating its performance in modeling data from the Chaotic Recurrent Neural Network (RNN) with delta pulse inputs. Through adjustments to the NDT architecture, we demonstrate its capability to accurately capture non-autonomous neural population dynamics, making it suitable for a broader range of Brain-Computer Interface (BCI) control applications. Additionally, we introduce a modification to the model that enables the extraction of interpretable inferred inputs, further enhancing the utility of the NDT as a powerful and versatile tool for real-time BCI applications.

Item Open Access
Assessing usability of full-body immersion in an interactive virtual reality environment (Colorado State University. Libraries, 2020)
Raikwar, Aditya R., author; Ortega, Francisco R., advisor; Beveridge, Ross, committee member; Stephens, Jaclyn, committee member; Smith, Charles, committee member
Improving immersion and playability has a direct impact on the effectiveness of certain Virtual Reality applications. This project looks at understanding how to develop an immersive soccer application with the intention of measuring skills, particularly for use in assessment and health promotion. This project will show the requirements to create a top-down immersive experience with commodity devices. The system simulates a soccer training environment in which players evade opponents, pass to teammates, and score goals, with the objective of measuring the difficulty of single, double, and triple tasks. It is expected that performance will decline as the number of tasks increases. This hypothesis is extremely relevant, as it provides a system that could serve as an assessment tool for people with concussions returning to play (with approval from a physician) or to promote exercise to non-athletes. This thesis provides all the necessary steps to explain the high-level details of highly immersive applications while providing a path forward for human-subject experiments.

Item Embargo
Comparing memorability of gesture sets in an extended reality application (Colorado State University. Libraries, 2024)
Holen, Ethan J., author; Ortega, Francisco R., advisor; Sreedharan, Sarath, committee member; Rhodes, Matthew, committee member
In free-form gesture sets, memorability is an important yet often under-explored metric, despite evidence that the usability of interfaces improves when designed with more memorable input gestures. This study examines the memorability of three free-form gesture sets in the HoloLens 2: user-defined, elicitation-defined, and expert-defined. In addition, we examine gestures selected by the participants using common techniques from previous elicitation studies. We found that the user-defined gesture set was the most memorable, with an 88.57% recall rate, and was significantly more memorable than the expert-defined (72.73% recall) and the elicitation-defined (59.87% recall) sets.
This study also analyzed the user-defined gestures from this experiment. Although this was not an elicitation study, many of the methods commonly used in elicitation studies were applied here. This analysis found a higher agreement rate when users were primed with a single gesture set before creating their own, and a decrease in agreement when users were shown two gesture sets beforehand. Given these results, we propose that designing systems with user-defined gestures will result in the most memorable sets; however, expert-defined gesture sets are also highly memorable and may better suit application design constraints.

Item Embargo
Cooking up a better AR experience: notification design and the liabilities of imperfect cues in augmented reality (Colorado State University. Libraries, 2024)
Raikwar, Aditya R., author; Ortega, Francisco R., advisor; Ray, Indrakshi, committee member; Moraes, Marcia, committee member; Soto, Hortensia, committee member
This dissertation investigates optimizing user experience in Augmented Reality (AR). A virtual cooking environment (ARtisan Bistro) serves as a testbed to explore factors influencing user interaction with AR interfaces. The research starts with notification design, examining strategically placed visual and audio notifications in ARtisan Bistro (Chapter 4). Building on this, Chapter 5 explores optimizing these designs for user awareness and delivering critical information, especially when audio is impractical. This involved exploring visual-only notifications, which revealed user performance and attention capture comparable to combined visual-audio notifications (no significant difference was found). The research demonstrates that well-designed notifications can significantly improve user experience, but it also raises a crucial question: can users always trust the information presented in AR environments? The possibility of imperfect information delivery underscores the importance of reliable information delivery. Chapter 6 explores the impact of imperfect cues generated by machine learning (ML) on user performance in AR visual search tasks. This research highlights the potential for automation bias when users rely heavily on unreliable cues. By investigating both notification design and the limitations of ML systems for reliable information delivery, this dissertation emphasizes the importance of creating a well-rounded user experience in AR environments. The findings underscore the need for further research on optimizing visual notifications, mitigating automation bias, and ensuring reliable information delivery in AR applications.

Item Open Access
Exploring the role of biomass design in virtual reality forest bathing (Colorado State University. Libraries, 2024)
Masters, Rachel A., author; Ortega, Francisco R., advisor; Interrante, Victoria, committee member; Lionelle, Albert, committee member; Moraes, Marcia, committee member; LoTemplio, Sara, committee member
Stress is an increasingly prevalent problem that has severe health consequences if not managed properly. Every day, people are surrounded by work, health, financial, economic, and a variety of other stressors that deplete cognitive resources and put their nervous systems on high alert. Forest bathing, or nature immersion therapy, has been shown to reduce stress while restoring attentional resources, but despite these benefits, many people lack access to nature for a variety of reasons, including distance and health.
VR has the potential to support access to virtual nature environments (VNEs) for people who cannot get into nature, yet the optimal design of biomass, or plant life, in VNEs is still an active area of research. Additionally, most of these VNEs require high-end headsets and computers to run, technology that is not accessible to the everyday consumer. Given the current limitations of popular VR technology such as the Meta Quest 3, it is important to understand the relationship between plant asset realism and a VNE's restorative potential so that a balance can be achieved between a VNE that is deployable on everyday consumer headsets and a VNE that offers restorative benefit. This study was an initial exploration of high- and low-realism VNE comparisons, using a mixed design that compared two groups of participants, high-realism and low-realism, against each other as well as against their own performance in a control condition in which they closed their eyes. Through psychological and physiological measures, stress reduction and perceived attention restoration were assessed at baseline, after a stressor test, and then after the experimental condition to observe potential decreases in stress and increases in attention following exposure to the environment. Overall, there was only a significant increase in General Restorativeness in the high-realism environment when compared against the control and the low-realism environment, but trends in the data call for future research on this topic.

Item Open Access
Guiding gaze, evaluating visual cue designs for augmented reality (Colorado State University. Libraries, 2024)
Kelley, Brendan, author; Ortega, Francisco R., advisor; Tornatzky, Cyane, committee member; Arefin, Mohammed Safayet, committee member
Visual cueing is an interdisciplinary and complex topic. It has garnered interest for implementation in extended reality (XR). Both augmented reality (AR) and virtual reality (VR) are often employed for visual search tasks. Visual search, a paradigm rooted in cognitive psychology (in particular, attention theory), can often benefit from cueing interventions. However, there are several potential pitfalls with using cueing techniques in AR; namely, automation bias, clutter, and cognitive overload. These factors are tied to design and implementation choices, such as modality, representation, dimensionality, reference frame, conveyed information, purpose, markedness, or the task domain. Design factors are subject to both cognitive factors and the technical specifications of the display technology. To address these factors, this work proposes a within-subject, four-factor design addressing the question: how do different cue designs affect visual search performance? Four cueing conditions are used: no cue (baseline), gaze line, 2D wedge, and 3D arrow. Results support the use of cues for visual search; however, the gaze line condition provided the fastest search times, the best accuracy, and the greatest reduction in head rotation. Additionally, the gaze line cue was preferred by participants and produced more favorable NASA TLX scores.

Item Embargo
Learning technical Spanish with virtual environments (Colorado State University. Libraries, 2024)
Siebert, Caspian, author; Ortega, Francisco R., advisor; Miller De Rutté, Alyssia, committee member; Krishnaswamy, Nikhil, committee member
As the world becomes increasingly interconnected through the internet and travel, foreign language learning is essential for accurate communication and a deeper appreciation of diverse cultures.
This study explores the effectiveness of a virtual learning environment employing Artificial Intelligence (AI), designed to facilitate Spanish language acquisition among veterinary students in the context of diagnosing a pet. Students' engagement with virtual scenarios that simulate real-life veterinary consultations in Spanish is examined using a qualitative thematic analysis. Participants have conversations with a virtual pet owner, discussing symptoms, diagnosing conditions, and recommending treatments, all in Spanish. Data were collected through recorded interactions with the application and a semi-structured interview. Findings suggest that immersive virtual environments enhance user engagement and interest, and several suggestions were made to improve the application's features. The study highlights the potential for virtual simulations to bridge the gap between language learning and professional training in specialized fields such as veterinary medicine. Finally, a set of design implications for future systems is provided.

Item Open Access
Practical aspects of designing and developing a multimodal embodied agent (Colorado State University. Libraries, 2021)
Bangar, Rahul, author; Beveridge, Ross, advisor; Ortega, Francisco R., advisor; Peterson, Christopher, committee member
This thesis reviews key elements that went into the design and construction of the CSU CwC Embodied agent, also known as the Diana System. The Diana System has been developed over five years by a joint team of researchers at three institutions: Colorado State University, Brandeis University, and the University of Florida. Over that time, I contributed to this overall effort, and in this thesis I present a practical review of key elements involved in designing and constructing the system. Particular attention is paid to Diana's multimodal capabilities, which engage asynchronously and concurrently to support realistic interactions with the user. Diana can communicate in visual as well as auditory modalities. She can understand a variety of hand gestures for object manipulation, deixis, etc., and can gesture in return. Diana can also hold a conversation with the user in spoken and/or written English. Gestures and speech are often at play simultaneously, supplementing and complementing each other. Diana conveys her attention through several non-verbal cues, such as slower blinking when inattentive and keeping her gaze on the subject of her attention. Finally, her ability to express emotions with facial expressions adds another crucial human element to any user interaction with the system. Central to Diana's capabilities is a blackboard architecture coordinating a hierarchy of modular components, each controlling a part of Diana's perceptual, cognitive, and motor abilities. The modular design facilitates contributions from multiple disciplines, namely VoxSim/VoxML with text-to-speech and automatic speech recognition systems for natural language understanding, deep neural networks for gesture recognition, 3D computer animation systems, etc., all integrated within the Unity game engine to create an embodied, intelligent agent that is Diana. The primary contribution of this thesis is to provide a detailed explanation of Diana's internal workings along with a thorough background of the research that supports these technologies.

Item Open Access
The impact of referent display on interaction proposals during multimodal elicitation studies (Colorado State University. Libraries, 2021)
Williams, Adam S., author; Ortega, Francisco R., advisor; Beveridge, Ross, committee member; Sharp, Julia, committee member
Elicitation studies have become a popular method of participatory design. While traditionally used for finding unimodal gesture-based inputs, elicitation has increasingly been used to derive multimodal interaction techniques. This is concerning, as no work has examined how well elicitation methods transfer from unimodal gesture use to multimodal combinations of inputs. This work details a comparison between two elicitation studies that were similar in design apart from the way participants were prompted for interaction proposals. Referents (e.g., commands to be executed) were shown as either text or animations. Interaction proposals for speech, gesture, and gesture+speech input modalities were elicited. Based on the comparison of these studies and other existing elicitation studies, the concern that referent display primes the interaction techniques participants propose is brought to light. The results from these elicitation studies were not reproduced across the two referent displays. Gesture proposals were the least impacted, with high similarity in the overall proposal space. Speech proposals were biased toward imitating the text as displayed an average of 69.36% of the time. The time between gesture and speech initiation in multimodal use was 166.51% longer when participants were prompted with text. The second contribution of this work is a consensus set of mid-air gesture inputs for use with generic object manipulations in augmented reality environments. This consensus set was derived from the elicitation study that used text-based referent displays, which were found to be less biasing on participant gesture production than the animated referents.

Item Open Access
Understanding user interactions in stereoscopic head-mounted displays (Colorado State University. Libraries, 2022)
Williams, Adam S., author; Ortega, Francisco R., advisor; Beveridge, Ross, committee member; Gersch, Joe, committee member; Sharp, Julia, committee member
Interacting in stereoscopic head-mounted displays can be difficult. There are not yet clear standards for how interactions in these environments should be performed. In virtual reality there are a number of well-designed interaction techniques; however, augmented reality interaction techniques still need to be improved before they can be easily used. This dissertation covers work done towards understanding how users navigate and interact with virtual environments that are displayed in stereoscopic head-mounted displays. With this understanding, existing techniques from virtual reality devices can be transferred to augmented reality where appropriate, and where that is not the case, new interaction techniques can be developed. This work begins by observing how participants interact with virtual content using gesture alone, speech alone, and the combination of gesture+speech during a basic object manipulation task in augmented reality. Later, a complex 3-dimensional data-exploration environment is developed and refined. That environment is capable of being used in both augmented reality (AR) and virtual reality (VR), either asynchronously or simultaneously. The process of iteratively designing that system and the design choices made during its implementation are provided for future researchers working on complex systems. This dissertation concludes with a comparison of user interactions and navigation in that complex environment when using either an augmented or virtual reality display.
That comparison contributes new knowledge on how people perform object manipulations across the two devices. When viewing 3D visualizations, users need to feel able to navigate the environment. Without careful attention to proper interaction technique design, people may struggle to use the developed system. These struggles may range from a system that is uncomfortable and unfit for long-term use to one in which new users are not able to interact at all. Getting the interactions right for AR and VR environments is a step towards facilitating their widespread acceptance. This dissertation provides the groundwork needed to start designing interaction techniques around how people utilize their personal space, virtual space, body, tools, and feedback systems.

Item Open Access
Using gender swap in virtual reality for increasing empathy against stereotype threats (Colorado State University. Libraries, 2020)
Borhani, Zahra, author; Ortega, Francisco R., advisor; Beveridge, J. Ross, committee member; Clegg, Benjamin A., committee member
The stereotypes associated with women in computer science are potential barriers that prevent female students from developing an interest in this field. This problem persists when they attempt to establish a career after graduating. This project presents a tool that potentially increases empathy by using avatar gender swapping in a virtual reality setting that simulates a job interview experience. The virtual environment includes two avatars, one for the interviewee and one for the interviewer. The objective is to understand the effects of virtual embodiment and the potential to increase empathy towards the opposite sex by participating in a job interview task simulated in virtual reality when the avatar gender is swapped. Participants will perform a job interview task under three different conditions: microaggression stereotype threat, direct stereotype threat, and no threat. This thesis will showcase all the necessary tools required to accomplish this goal and provide a path forward for a user experiment.