Browsing by Author "Clegg, Benjamin, advisor"
Now showing 1 - 5 of 5
Item (Open Access)
Engagement and not workload is implicated in automation-induced learning deficiencies for unmanned aerial system trainees (Colorado State University. Libraries, 2014)
Blitch, John G., author; Clegg, Benjamin, advisor; Delosh, Edward, committee member; Kraiger, Kurt, committee member; Robinson, Daniel, committee member

Automation is known to provide both costs and benefits to experienced humans engaged in a wide variety of operational endeavors. Its influence on skill acquisition for novice trainees, however, is poorly understood. Some previous research has identified impoverished learning as a potential cost of employing automation in training. One prospective mechanism for such deficits can be identified from related literature highlighting automation's role in reducing cognitive workload, in the form of perceived task difficulty and mental effort. However, three experiments using a combination of subjective self-report and EEG-based neurophysiological instruments to measure mental workload failed to find any evidence linking the presence of automation to workload, or to performance deficits resulting from its previous use. Rather, the results of this study implicate engagement as an underlying basis for the inadequate mental models associated with automation-induced training deficits. The conclusion from examining these various states of cognition is that the automation-induced training deficits observed in novice unmanned systems operators are primarily associated with distraction and disengagement effects, not an undesirable reduction in difficulty as previous research might suggest. These findings are consistent with automation's potential to push humans too far "out of the loop" in training. The implications of these findings are discussed.

Item (Open Access)
Examining the role of automation transparency in learning with intelligent tutoring systems (Colorado State University.
Libraries, 2023)
Pharmer, Rebecca L., author; Clegg, Benjamin, advisor; Wickens, Christopher, committee member; Martey, Rosa, committee member; Tompkins, Sara-Anne, committee member

In the present study, a training system that either assigned restudy of concepts based on learner performance (adaptive instruction) or provided a set amount of restudy (static instruction) was designed to investigate whether adding automation transparency to an intelligent tutoring system would improve learning outcomes in an assembly task. Participants received instruction on the assembly process of eight unique shapes. They were provided with error-sensitive feedback that served as the transparency manipulation: some participants received explanations of why they were receiving restudy, while others were given generic feedback. Findings indicate that adaptive instruction may be most beneficial to learning when automation transparency provides learners with an understanding of how the system is responding to their performance. Findings and implications are discussed.

Item (Open Access)
Impact of long-term visual representations on consolidation in visual working memory (Colorado State University. Libraries, 2010)
Blalock, Lisa Durrance, author; Clegg, Benjamin, advisor; McCabe, David, committee member; Kraiger, Kurt, committee member; Smith, Charles A. P., committee member

To view the abstract, please see the full text of the document.

Item (Open Access)
Switch choice in applied multi-task management (Colorado State University. Libraries, 2014)
Gutzwiller, Robert, author; Clegg, Benjamin, advisor; Wickens, Christopher, committee member; Kraiger, Kurt, committee member; Hayne, Stephen, committee member

Little is known to date concerning how operators make choices in environments where cognitive load is high and they are faced with multiple different tasks to choose from.
This dissertation reviewed a large body of voluntary task switching literature concerning basic research into choice in task switching, as well as the available literature on applied task switching. From this and a prior model, a revised model of task switching choice was developed that takes into account the specific task attributes of difficulty, priority, interest, and salience. The first experiment showed that task difficulty and priority influenced switching behavior. While task attributes were hypothesized to influence switching, a second major influence is time on task. The second experiment showed that tasks indeed vary in their interruptibility over time, and that this was related in part to which task was competing for attention, as well as to the cognitive processing required for ongoing task performance. In a third experiment, a new methodology was developed to experimentally assess the role of a diminishing rate of returns for performing a task. This declining rate was expected to produce, and did produce, a general increase in switching away from an ongoing task over time. In conclusion, while task attributes and time on task played a major role in task switching in the current studies, defining the time period for theorized effects appears to be the next major step toward understanding switching choice behavior. Additionally, although the experiments are novel and make a major contribution, to the extent that behavior is only represented in them, the methodology may miss some amount of 'other' task behavior, such as visual sampling.

Item (Open Access)
The influence of trust, self-confidence and task difficulty on automation use (Colorado State University. Libraries, 2023)
Patton, Colleen E., author; Clegg, Benjamin, advisor; Wickens, Christopher, committee member; Fisher, Gwen, committee member; Ortega, Francisco, committee member

Automation can be introduced statically or dynamically to help humans perform tasks.
Static automation includes always-present automation types, whereas in dynamic automation the presence of automation is controlled by another source, typically a human. In static automation, trust, automation accuracy, task difficulty, and prior experience with the automation all contribute to human dependence on the automation. In the dynamic literature, however, a small body of research suggests that accuracy and task difficulty do not impact the decision to use automation, but that a combination of trust and self-confidence does. The difference between the influence (or lack thereof) of task difficulty in static and dynamic automation is unusual, and prior literature does not make a strong case for why this difference exists. Through three experiments, the influences of task difficulty, prior experience, trust, self-confidence, and their interactions were investigated. Experiment 1 used a dual-task warehouse management paradigm with a lower-workload and a higher-workload version of the task. Results indicated that the trust-self-confidence difference was related to automation use, such that higher trust and lower self-confidence led to more use. Additionally, the difficulty manipulation did not have an impact on automation use, and self-confidence did not change across the two levels of difficulty. Experiment 2 investigated four levels of difficulty through a dynamic decision-making task in which participants detected hostile ships. There was a difference in automation use at the easiest and most difficult levels, indicating that if the task difficulty difference is salient enough, it may influence automation use. The trust-self-confidence relationship was also present here, but these measures were only collected at the end of the task, so their influence across the difficulty levels could not be measured. Experiment 3 used the same paradigm as Experiment 2 to investigate how perceived difficulty, as compared to objective difficulty, influences automation use.
Results indicated that perceived workload influenced automation use, as did the change in the trust-self-confidence difference. The findings of these experiments provide insight into how trust and self-confidence interact to influence the choice to use automation, and provide novel evidence for the importance of workload in discretionary automation use decisions. This suggests the importance of considering human operators' perceptions and beliefs about a system, and about themselves, when estimating how often automation will be used. These findings create a foundation for a model of influences on automation use.