Browsing by Author "Blanchard, Nathaniel, committee member"
Item Open Access A spiral design: redesigning CS 1 based on techniques for memory recall (Colorado State University. Libraries, 2021) Lionelle, Albert, author; Beveridge, J. Ross, advisor; Ghosh, Sudipto, committee member; Blanchard, Nathaniel, committee member; Folkestad, James, committee member
Computer Science (CS 1) offerings in most universities tend to be notoriously difficult. Over the past 60 years, about a third of students have either failed or dropped out of the course. Past research has focused on improving teaching methods through small changes without changing the overall course structure. The premise of this research is that restructuring the CS 1 course using a Spiral pedagogy based on principles for improving memory and recall can help students learn the information and retain it for future courses. Using the principles of Spacing, Interleaving, Elaboration, Practiced Retrieval, and Reflection, CS 1 was fundamentally redesigned with a complete reordering of topics. The new pedagogy was evaluated by comparing its students with those coming from a traditional offering in terms of (1) CS 1 performance, (2) CS 2 performance, and (3) retention of students between CS 1 and CS 2. Additionally, students' performance was tracked at the individual outcome/topic level, and students filled out surveys measuring learning motivation and self-regulation. The Spiral pedagogy helped students outperform those who learned via the traditional pedagogy by 9% on final exam scores in CS 1, a statistically significant difference. Furthermore, 23% of students taught using the Spiral pedagogy mastered greater than 90% of the outcomes, whereas those taught with the traditional method mastered only 5% of the learning outcomes. Students taught with the Spiral pedagogy showed a greatly increased interest in Computer Science by the end of the course, with women showing the greatest increase in interest. Retention between CS 1 and CS 2 also increased: 19.2% for women and 9.2% overall. Five weeks later, students were given the same final exam as a review exam in CS 2. Even with that gap in which to forget, those taught with the Spiral pedagogy scored 10-12.5% higher than their peers taught using the traditional method. The change in pedagogy showed an effect size of Cohen's d = 0.69. Furthermore, students continued to do better in CS 2, with increased grades across all assessments, including programming capabilities. By the end of CS 2, only 65% of students who learned by the traditional method passed with a C or above, while 80% of students who learned via the Spiral pedagogy did so. The framework for the Spiral Design is presented along with implementation suggestions for others who wish to duplicate the pedagogy in their own courses, together with directions for future research, including building a Spiral Curriculum to enhance performance across courses and interactive tools that act as a means of intervention using techniques proven to improve recall of past content. Overall, the Spiral Design shows promising results as the next generation in course design for supporting student achievement and provides additional pathways for future research.

Item Open Access Classification of P300 from non-invasive EEG signal using convolutional neural network (Colorado State University. Libraries, 2022) Farhat, Nazia, author; Anderson, Charles W., advisor; Kirby, Michael, committee member; Blanchard, Nathaniel, committee member
A Brain-Computer Interface (BCI) system is a communication tool for patients with neuromuscular diseases. The efficiency of such a system largely depends on the accurate and reliable detection of the brain signal employed in its operation. The P300 Speller, a well-known BCI system that helps the user select the desired letter in the communication process, uses an electroencephalography (EEG) signal called the P300 brain wave. The spatiotemporal nature, low signal-to-noise ratio, and high dimensionality of the P300 signal impose difficulties on its accurate recognition. Moreover, its inter- and intra-subject variability necessitates a case-specific experimental setup, requiring a considerable amount of time and resources before the system can be deployed for use. In this thesis, a Convolutional Neural Network (CNN) is applied to detect the P300 signal and to observe the distinguishing features of P300 and non-P300 signals extracted by the network. Three different filter shapes, namely 1-D CNN, 2-D CNN, and 3-D CNN, are examined separately to evaluate their ability to detect the target signals. Virtual channels created with three different weighting techniques are explored in the 3-D CNN analysis. Both within-subject and cross-subject examinations are performed. Higher single-trial accuracy is observed for all subjects with the CNN implementation compared to that achieved with Stepwise Linear Discriminant Analysis. Up to approximately 80% within-subject accuracy and 64% cross-subject accuracy are recorded in this research. The 1-D CNN outperforms all the other models in terms of classification accuracy.
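As a rough, hedged illustration of the kind of model the abstract above describes (a 1-D CNN over multi-channel EEG epochs), the following PyTorch sketch shows one plausible shape such a classifier could take. The channel count, window length, and layer sizes are assumptions for illustration, not values from the thesis.

    # Hypothetical sketch: a small 1-D CNN for P300 vs. non-P300 classification.
    # Input shape, channel count, and window length are assumptions, not the thesis's values.
    import torch
    import torch.nn as nn

    class P300Net1D(nn.Module):
        def __init__(self, n_channels=8, n_samples=240):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),  # temporal filters
                nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(4),
            )
            self.classifier = nn.Linear(32 * (n_samples // 16), 2)  # P300 vs. non-P300

        def forward(self, x):            # x: (batch, channels, samples)
            h = self.features(x)
            return self.classifier(h.flatten(1))

    logits = P300Net1D()(torch.randn(4, 8, 240))  # e.g., 4 epochs of 8-channel EEG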
Item Open Access Computer vision algorithm to extract color data of pixels in microfluidic paper based analytical devices (Colorado State University. Libraries, 2021) Deotale, Saurabh, author; Beveridge, James Ross, advisor; Blanchard, Nathaniel, committee member; Henry, Charles, committee member
Microfluidic paper-based devices are fast becoming an inexpensive and faster alternative to traditional methods for substance detection and chemical measurements. These devices are designed to be used in the field for quicker results. One hurdle toward that goal is a manual step of data extraction from images of these devices for further analysis and results. This involves identifying and extracting color data from specific regions of interest; the color data consists of the BGR and HSV color-channel values of the pixels lying in those regions. This manual step demands labor and time that can be avoided by automating the process using computer vision techniques. The goal of this thesis is to aid chemists by automating the data extraction process. This thesis presents a layered algorithm which uses simple techniques like region growing and thresholding, in conjunction with knowledge of the device design, to extract the required data. The data is then labeled and compiled into a CSV file for further analysis.
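A minimal sketch of the kind of data extraction the abstract above automates, assuming a fixed, hand-specified region of interest rather than the thesis's layered region-growing algorithm; the file name and coordinates are placeholders.

    # Illustrative sketch only (not the thesis's algorithm): pull BGR and HSV values
    # from a fixed region of interest and write them to a CSV file with OpenCV.
    import csv
    import cv2

    image = cv2.imread("device_scan.png")            # placeholder file name
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    x, y, w, h = 100, 150, 40, 40                    # assumed region of interest

    with open("color_data.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["row", "col", "B", "G", "R", "H", "S", "V"])
        for r in range(y, y + h):
            for c in range(x, x + w):
                writer.writerow([r, c, *image[r, c], *hsv[r, c]])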
Item Open Access Counting with convolutional neural networks (Colorado State University. Libraries, 2021) Shastri, Viraj, author; Beveridge, J. Ross, advisor; Blanchard, Nathaniel, committee member; Peterson, Christopher, committee member
In this work, we tackle the question: Can neural networks count? More precisely, given an input image with a certain number of objects, can a neural network tell how many there are? To study this, we create a synthetic dataset consisting of black-and-white images with variable numbers of white triangles on a black background, oriented right-side up, down, left, or right. We train a network to count the right-side-up triangles; specifically, we treat this as a closed-set classification problem where the class is the number of right-side-up triangles in the image. These evaluations show that our networks, even in their simplest designs, are able to count a particular object in an image to within a very small margin of error. We conclude that neural networks possess more complex learning capabilities than they are given credit for.

Item Open Access Deep learning for bioinformatics sequences: RNA basecalling and protein interactions (Colorado State University. Libraries, 2024) Neumann, Don, author; Ben-Hur, Asa, advisor; Beveridge, Ross, committee member; Blanchard, Nathaniel, committee member; Reddy, Anireddy, committee member
In the interdisciplinary field of bioinformatics, sequence data for biological problems comes in many different forms, ranging from proteins, to RNA, to the ionic current measured for a strand of nucleotides by an Oxford Nanopore Technologies sequencing device. These data can be used to elucidate the fundamentals of biological processes on many levels, which can help humanity with everything from drug design to curing disease. All of our research focuses on biological problems encoded as sequences. The main focus of our research involves Oxford Nanopore Technologies sequencing devices, which are capable of directly sequencing long-read RNA strands as is. We first concentrate on improving basecalling accuracy for RNA, and have published a paper with a novel architecture achieving state-of-the-art performance. The basecalling architecture uses convolutional blocks, each with progressively larger kernel sizes, which improves accuracy given the noisy nature of the data. We then describe ongoing research into the detection of post-transcriptional RNA modifications in nanopore sequencing data. Building on our basecalling research, we are able to discern modifications with read-level resolution. Our work will facilitate research into the detection of N6-methyladenosine (m6A) while also furthering progress in the detection of other post-transcriptional modifications. Finally, we recount our recently accepted paper regarding protein-protein and host-pathogen interaction prediction. We performed experiments demonstrating the faulty experimental designs for interaction prediction that have plagued the field, giving the false impression that the problem has been solved. We then provide reasoning and recommendations for future work.
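To make the architectural idea concrete, here is a hedged PyTorch sketch of convolutional blocks whose kernel sizes grow with depth, applied to a raw nanopore current trace. The channel counts, kernel sizes, and output alphabet are illustrative assumptions, not the published architecture.

    # Hedged sketch of stacking convolutional blocks with progressively larger kernels
    # over a raw nanopore current signal; layer sizes are assumptions.
    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out, kernel_size):
        return nn.Sequential(
            nn.Conv1d(c_in, c_out, kernel_size, padding=kernel_size // 2),
            nn.BatchNorm1d(c_out),
            nn.ReLU(),
        )

    class BasecallerBackbone(nn.Module):
        def __init__(self):
            super().__init__()
            # kernel size grows with depth so later blocks see wider signal context
            self.blocks = nn.Sequential(
                conv_block(1, 32, kernel_size=5),
                conv_block(32, 64, kernel_size=11),
                conv_block(64, 128, kernel_size=21),
            )
            self.head = nn.Conv1d(128, 5, kernel_size=1)  # blank + A, C, G, U for a CTC-style decoder

        def forward(self, current):          # current: (batch, 1, signal_length)
            return self.head(self.blocks(current))

    out = BasecallerBackbone()(torch.randn(2, 1, 4096))  # shape (2, 5, 4096)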
Item Open Access Embodied multimodal referring expressions generation (Colorado State University. Libraries, 2024) Alalyani, Nada H., author; Krishnaswamy, Nikhil, advisor; Ortega, Francisco, committee member; Blanchard, Nathaniel, committee member; Wang, Haonan, committee member
Using both verbal and non-verbal modalities in generating definite descriptions of objects and locations is a critical human capability in collaborative interactions. Despite advancements in AI, embodied interactive virtual agents (IVAs) are not equipped to intelligently mix modalities to communicate their intents as humans do, which hamstrings naturalistic multimodal interaction with IVAs. We introduce SCMRE, a situated corpus of multimodal referring expressions (MREs) intended for training generative AI systems for multimodal IVAs. Our contributions include: 1) Developing an IVA platform that interprets human multimodal instructions and responds with language and gestures; 2) Providing 24 participants with 10 scenes, each involving ten equally-sized blocks randomly placed on a table; these interactions generated a dataset of 10,408 samples; 3) Analyzing SCMRE, revealing that the use of pointing significantly reduces the ambiguity of prompts and increases the efficiency of the IVA's execution of humans' prompts; 4) Augmenting and synthesizing SCMRE, resulting in 22,159 samples, to generate more data for model training; 5) Fine-tuning LLaMA 2-chat-13B for generating contextually correct and situationally fluent multimodal referring expressions; 6) Integrating the fine-tuned model into the IVA to evaluate the success of the generative-model-enabled IVA in communication with humans; 7) Establishing an evaluation process that applies to both humans and IVAs and combines quantitative and qualitative metrics.

Item Open Access Exploring correspondences between Gibsonian and telic affordances for object grasping using 3D geometry (Colorado State University. Libraries, 2023) Tomar, Aniket, author; Krishnaswamy, Nikhil, advisor; Blanchard, Nathaniel, committee member; Clegg, Benjamin, committee member
Object affordance understanding is an important open problem in AI and robotics. Gibsonian affordances of an object are actions afforded due to its physical structure and can be directly perceived by agents. A telic affordance is an action that is conventionalized due to an object's typical use or purpose. This work explores the extent to which a 3D CNN analogue can infer grasp affordances from only 3D shape information. The experiment was designed as a grasp classification task for 3D meshes of common kitchen objects, with labels derived from human annotations. 3D shape information was found to be insufficient for current models to learn telic affordances, even though they are successful at shape classification and Gibsonian affordance learning. This was investigated further by training a classifier to predict the telic grasps directly from the human annotations; it achieved higher accuracy, indicating that the information required for successful classification existed in the dataset but was not effectively utilized. Finally, the embedding spaces of the two classifiers were compared and found to have no significant correspondence between them. This work hypothesizes that this is due to the two models capturing fundamentally different distributions of affordances with respect to objects, one representing Gibsonian affordances or shape information, and the other, telic affordances.

Item Open Access Familiarity-detection from different facial feature-types: is the whole greater than the sum of its parts? (Colorado State University. Libraries, 2023) Carlaw, Brooke N., author; Cleary, Anne, advisor; Rhodes, Matthew, committee member; Thomas, Michael, committee member; Blanchard, Nathaniel, committee member
Prior research indicates that perceived familiarity with a cue during cued recall failure can be systematically increased based on the amount of feature overlap between that cue and studied items in memory (Huebert et al., 2022; McNeely-White et al., 2021; Ryals & Cleary, 2012). However, these studies used word or musical stimuli.
Faces represent a special class of stimuli, as evidence suggests that, unlike other types of stimuli (such as word or musical stimuli), faces may be primarily processed in a holistic fashion. A recent study demonstrated that even when identification of a person was prevented by the presence of a facial occlusion like a surgical mask or sunglasses, familiarity-detection with the occluded face could still occur, suggesting that holistic processing was not a requirement for facial familiarity-detection (Carlaw et al., 2022). However, some researchers have suggested that although faces can be decomposed into component parts when partially occluded, when faces are presented unoccluded in their entirety, the holistic face processing system may then be obligatory (Manley et al., 2019). The present study suggests that this is not the case. Isolating specific feature types at encoding through partial occlusion of faces at study (via a surgical mask or sunglasses), then embedding those familiarized feature sets in otherwise novel whole faces at test, systematically and cumulatively increased the perceived familiarity of the otherwise novel whole faces. These results suggest that even whole faces are processed as sets of component parts.

Item Open Access Forest elephants modulate their behavior to adapt to sounds of danger (Colorado State University. Libraries, 2023) Verahrami, Anahita K., author; Bombaci, Sara, advisor; Blanchard, Nathaniel, committee member; Wittemyer, George, committee member
The African forest elephant (Loxodonta cyclotis) plays a critical role in upholding the structure and function of the Congo Basin, the world's second-largest tropical forest, which crucially contributes to global carbon sequestration. Research has demonstrated an 86% decline in forest elephant population numbers between 1990 and 2021, largely because of hunting for ivory. Due to the species' cryptic nature in their dense rainforest habitat, little is known about how they respond to human disturbances such as gun hunting. The studies that have been completed reveal that forest elephants may respond to disturbance with changes in their abundance, distribution, and nocturnal activity. Changes in forest elephant distribution and activity not only have ramifications for the species' activity budgets, which, when affected, may influence their foraging and reproductive behaviors and success, but may also impact the species' interspecific interactions with vegetation in certain areas, affecting forest growth and function. However, little is known about how a key population of this critically endangered species in the northern Republic of Congo is responding to disturbance such as hunting in the region. Using acoustic detection models in combination with a landscape-scale acoustic monitoring effort in and around Nouabalé-Ndoki National Park, Republic of Congo, I assess how forest elephant vocal activity is being influenced by gun hunting. Using these data, I examine (1) how forest elephant vocal activity changes across an eight-day period and (2) whether forest elephants shift to more nighttime vocal activity following a gun hunting event. Results show that, on average, forest elephants are present and vocal at sites without gun events 53% of the time, but at sites with gun events this value drops to 43%. Results also indicate that this change in activity following a gun hunting event is sustained over the eight-day period examined and does not vary from day to day.
Results from the analysis exploring how the proportion of nighttime calling activity changes in response to gun hunting show that the number of gunshots is an important predictor of nighttime vocal activity. Specifically, as the number of gunshots increases, there is a dramatic increase in the proportion of nighttime calling activity. Understanding the degree to which forest elephants are affected by gun hunting provides a convincing argument to focus limited conservation resources on developing more effective strategies to reduce indirect impacts of hunting on this critically endangered and ecologically important species.

Item Embargo Informing methane emissions inventories using facility aerial measurements at midstream natural gas facilities (Colorado State University. Libraries, 2023) Brown, Jenna A., author; Windom, Bret, advisor; Zimmerle, Daniel, advisor; Blanchard, Nathaniel, committee member
Increased interest in greenhouse gas (GHG) emissions, including recent legislative action and voluntary programs, has increased attention on quantifying, and ultimately reducing, methane emissions from the natural gas supply chain. While inventories used for public or corporate GHG policies have traditionally utilized bottom-up (BU) methods to estimate emissions, the validity of such inventories has been questioned. To align with climate initiatives, multiple reporting programs are transitioning away from BU methods toward full-facility measurements using airborne, satellite, or drone (top-down (TD)) techniques to inform, improve, or validate inventories. This study utilized full-facility estimates from two independent TD methods at 15 midstream natural gas facilities in the U.S.A., which were compared with a contemporaneous daily inventory assembled by the facility operator using comprehensive inventory methods. The TD methods produced multiple full-facility methane estimates at each facility, resulting in 801 individual paired estimates (same facility, same day) and robust mean estimates for each facility. Mean estimates for each facility, aggregated across all facilities, differed by 28% [10% to 43%] for the first deployment and by nearly 2:1 (49% [32% to 68%]) for the second deployment. Estimates from the two TD methods statistically agreed in 12% (97 of 801) of the paired measurements. These data suggest that one or both methods did not produce accurate facility-level estimates at a majority of facilities and in aggregate across all facilities. Operator inventories, which included extensions to capture sources beyond regular inventory requirements and to integrate local measurements, estimated significantly lower emissions than the TD estimates for 96% (1535 of 1589) of the paired comparisons. Significant disagreement is observed at most facilities, both between the two TD methods and between the TD estimates and the operator inventory. Overall results were complemented by two case studies in which TD estimates at two pre-selected facilities were paired with comprehensive onsite measurements to understand the factors driving the divergence between TD and BU inventory emissions estimates. In 3 of 4 paired comparisons between the intensive onsite estimates and one of the TD methods, the intensive onsite work did not conclusively diagnose the difference in estimates. In these cases, the preponderance of evidence suggests that the TD methods mis-estimate emissions an unknown fraction of the time, for unknown reasons. The results presented here have two implications.
First, these findings have important implications for the construction of voluntary and regulatory reporting programs that rely on emission estimates for reporting, fees, or penalties, and for studies using full-facility estimates to aggregate TD emissions into basin or regional estimates. Second, the TD full-facility measurement methods need to undergo further testing, characterization, and potential improvement specifically tailored to complex midstream facilities.

Item Open Access Learned perception systems for self-driving vehicles (Colorado State University. Libraries, 2022) Chaabane, Mohamed, author; Beveridge, Ross J., advisor; O'Hara, Stephen, committee member; Blanchard, Nathaniel, committee member; Anderson, Chuck, committee member; Atadero, Rebecca, committee member
Building self-driving vehicles is one of the most impactful technological challenges of modern artificial intelligence. Self-driving vehicles are widely anticipated to revolutionize the way people and freight move. In this dissertation, we present a collection of work that aims to improve the capability of the perception module, an essential module for safe and reliable autonomous driving. Specifically, it focuses on two perception topics: 1) geo-localization (mapping) of spatially-compact static objects, and 2) multi-target object detection and tracking of moving objects in the scene. Accurately estimating the position of static objects, such as traffic lights, from the moving camera of a self-driving car is a challenging problem. In this dissertation, we present a system that improves the localization of static objects by jointly optimizing the components of the system via learning. Our system comprises networks that perform: 1) 5DoF object pose estimation from a single image, 2) association of objects between pairs of frames, and 3) multi-object tracking to produce the final geo-localization of the static objects within the scene. We evaluate our approach using a publicly available data set, focusing on traffic lights due to data availability. For each component, we compare against contemporary alternatives and show significantly improved performance. We also show that the end-to-end system performance is further improved via joint training of the constituent models. Next, we propose an efficient joint detection and tracking model named DEFT, or "Detection Embeddings for Tracking." The proposed approach relies on an appearance-based object matching network jointly learned with an underlying object detection network. An LSTM is also added to capture motion constraints. DEFT has accuracy and speed comparable to the top methods on 2D online tracking leaderboards while having significant advantages in robustness when applied to more challenging tracking data. DEFT raises the bar on the nuScenes monocular 3D tracking challenge, more than doubling the performance of the previous top method (3.8x on AMOTA, 2.1x on MOTAR). We analyze the difference in performance between DEFT and the next-best published method on nuScenes and find that DEFT is more robust to occlusions and large inter-frame displacements, making it a superior choice for many use cases. Third, we present an end-to-end model that solves the tasks of detection, tracking, and sequence modeling from raw sensor data, called Attention-based DEFT.
Attention-based DEFT extends the original DEFT by adding an attentional encoder module that uses attention to compute a tracklet embedding that 1) jointly reasons about the tracklet's dependencies and interactions with other objects present in the scene and 2) captures the context and temporal information of the tracklet's past observations. The experimental results show that Attention-based DEFT performs favorably against, or comparably to, state-of-the-art trackers. Reasoning about the interactions between the actors in the scene allows Attention-based DEFT to boost tracking performance in heavily crowded and complex interactive scenes. We validate the sequence modeling effectiveness of the proposed approach by showing its superiority over other baseline methods on the velocity estimation task in both simple and complex scenes. The experiments demonstrate the effectiveness of Attention-based DEFT at capturing the spatio-temporal interactions of the crowd for velocity estimation, which helps it handle the complexities of densely crowded scenes more robustly. The experimental results show that all the joint models in this dissertation perform better than solving each problem independently.
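A simplified sketch of the appearance-matching step that a tracker of this kind relies on: detection embeddings from consecutive frames are compared by cosine similarity and assigned with the Hungarian algorithm. This illustrates the general technique, not the authors' implementation, and the similarity threshold is an assumption.

    # Simplified sketch of appearance matching for tracking-by-detection.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_detections(prev_emb, curr_emb, min_similarity=0.5):
        """prev_emb: (M, D) embeddings from frame t-1; curr_emb: (N, D) from frame t."""
        prev = prev_emb / np.linalg.norm(prev_emb, axis=1, keepdims=True)
        curr = curr_emb / np.linalg.norm(curr_emb, axis=1, keepdims=True)
        similarity = prev @ curr.T                      # (M, N) cosine similarities
        rows, cols = linear_sum_assignment(-similarity) # maximize total similarity
        return [(r, c) for r, c in zip(rows, cols) if similarity[r, c] >= min_similarity]

    matches = match_detections(np.random.rand(3, 128), np.random.rand(4, 128))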
Item Embargo Legalism reconsidered: Weberian problems and Confucian solutions in the Han Feizi (Colorado State University. Libraries, 2024) Smith, Jackson T., author; Harris, Eirik Lang, advisor; Archie, Andre, committee member; Blanchard, Nathaniel, committee member
After a thorough analysis of the political-philosophical climate of Warring States-era China, I argue that Han Feizian Legalism is ultimately untenable on account of its necessarily sprawling bureaucratic apparatus, which precludes adaptability and rapid response in the face of both internal and external crises. I show further that while Han Fei's criticisms of Confucianism are serious problems for the Confucian theorist, they are not vicious to cultivationist political theory generally. I go on to offer, through a synthesis of Confucian and Legalist doctrines, a solution which manages to patch the holes in both accounts and ultimately forge a broadly neo-classical approach to political organization, Legalism+, which relies on an epistemic naturalism à la Plato as the synthetic ground for Confucian and Legalist theory.

Item Open Access Linear mappings: semantic transfer from transformer models for cognate detection and coreference resolution (Colorado State University. Libraries, 2022) Nath, Abhijnan, author; Krishnaswamy, Nikhil, advisor; Blanchard, Nathaniel, committee member; King, Emily J., committee member
Embeddings, or vector representations of language, and their properties are useful for understanding how Natural Language Processing technology works. The usefulness of embeddings, however, depends on how contextualized or information-rich they are. In this work, I apply a novel affine (linear) mapping technique, first established in the field of computer vision, to embeddings generated from large Transformer-based language models. In particular, I study its use in two challenging linguistic tasks: cross-lingual cognate detection and cross-document coreference resolution. Cognate detection for two low-resource languages (LRLs), Assamese and Bengali, is framed as a binary classification problem using semantic (embedding-based), articulatory, and phonetic features. Linear maps for this task are extrinsically evaluated on the extent of transfer of semantic information between monolingual as well as multilingual models, including those specialized for low-resourced Indian languages. For cross-document coreference resolution, whole-document contextual representations are generated for event and entity mentions from cross-document language models like CDLM and other BERT variants and then linearly mapped to form coreferring clusters based on their cosine similarities. I evaluate my results against gold output using established coreference metrics like BCUB and MUC. My findings reveal that linearly transforming vectors from one model's embedding space to another carries certain semantic information with high fidelity, thereby revealing the existence of a canonical embedding space and its geometric properties for language models. Interestingly, even for a much more challenging task like coreference resolution, linear maps are able to transfer semantic information between "lighter" or less contextual models and "larger" models with near-equivalent performance, or even improved results in some cases.
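The core operation the abstract describes, an affine map fit between two models' embedding spaces, can be illustrated with a small least-squares sketch; the dimensions and the synthetic data below are assumptions for illustration, not the thesis's setup.

    # Hedged illustration: fit an affine map W, b from one embedding space to another
    # by least squares over paired embeddings of the same items, then apply it.
    import numpy as np

    def fit_affine_map(source, target):
        """source: (N, d_s) embeddings; target: (N, d_t) embeddings of the same items."""
        ones = np.ones((source.shape[0], 1))
        A = np.hstack([source, ones])              # append a bias column
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        W, b = coef[:-1], coef[-1]
        return W, b

    rng = np.random.default_rng(0)
    src, tgt = rng.normal(size=(200, 384)), rng.normal(size=(200, 768))
    W, b = fit_affine_map(src, tgt)
    mapped = src @ W + b                            # source vectors carried into target space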
Item Embargo Machine learning and deep learning applications in neuroimaging for brain age prediction (Colorado State University. Libraries, 2023) Vafaei, Fereydoon, author; Anderson, Charles, advisor; Kirby, Michael, committee member; Blanchard, Nathaniel, committee member; Burzynska, Agnieszka, committee member
Machine Learning (ML) and Deep Learning (DL) are now considered state-of-the-art assistive AI technologies that help neuroscientists, neurologists, and medical professionals with early diagnosis of neurodegenerative diseases and of cognitive decline as a consequence of unhealthy brain aging. Brain Age Prediction (BAP) is the process of estimating a person's biological age using neuroimaging data, and the difference between the predicted age and the subject's chronological age, known as Delta, is regarded as a biomarker for healthy versus unhealthy brain aging. Accurate and efficient BAP is an important research topic, and hence ML/DL methods have been developed for this task. Different neuroimaging modalities, such as Magnetic Resonance Imaging (MRI), have been used for BAP in the past. Diffusion Tensor Imaging (DTI) is an advanced quantitative neuroimaging technology that gives insight into the microstructure of the white matter tracts that connect different parts of the brain, allowing it to function properly. DTI data are high-dimensional, and age-related microstructural changes in white matter include non-linear patterns. In this study, we perform a series of analytical experiments using ML and DL methods to investigate the applicability of DTI data for BAP. We also investigate which diffusivity parameters (DTI metrics that reflect the direction and magnitude of the diffusion of water molecules in the brain) are relevant for BAP as a supervised learning task. Moreover, we propose, implement, and analyze a novel methodology that can detect age-related anomalies (high Deltas) and can overcome some of the major and fundamental limitations of the current supervised approach to BAP, such as "Chronological Age Label Inconsistency." Our proposed methodology, which combines Unsupervised Anomaly Detection (UAD) and supervised BAP, focuses on addressing a fundamental challenge in BAP: how to interpret a model's error. Should a researcher interpret a model's error as an indication of unhealthy brain aging or as poor model performance that should be eliminated? We argue that the underlying cause of this problem is the inconsistency of chronological age labels as the ground truth of the supervised learning task, which is the common basis for training ML/DL models. Our unsupervised learning methods and findings open a new possibility for detecting irregularities and abnormalities in the aging brain using DTI scans, independent of inconsistent chronological age labels. The results of our proposed methodology show that combining label-independent UAD and supervised BAP provides a more reliable and methodical way to analyze error than the current supervised BAP approach used in isolation. We also provide visualizations and explanations of how our ML/DL methods make their decisions for BAP. Explainability and generalization of our ML/DL models are two important aspects of our study.

Item Open Access Metacognitive states and feelings of curiosity: information-seeking behaviors during momentary retrieval-failure (Colorado State University. Libraries, 2022) McNeely-White, Katherine L., author; Cleary, Anne M., advisor; Seger, Carol A., committee member; Henry, Kimberly, committee member; Blanchard, Nathaniel, committee member
Curiosity during learning increases information-seeking behaviors and subsequent memory retrieval success, yet the mechanisms that drive curiosity and subsequent information-seeking behaviors are poorly understood from a theoretical perspective. Hints throughout the literature suggest that curiosity may be a metacognitive signal, encouraging the experiencer to seek out additional information that will resolve a knowledge gap. Furthermore, a recently demonstrated association between a retrieval-failure-based metacognitive state (the tip-of-the-tongue state) and increased feelings of curiosity points toward an adaptive function of these states. The current study examined the relationship between curiosity and the retrieval-failure-based metacognitive states déjà vu and déjà entendu. Participants received test lists containing novel visual environment cues (Experiment 1) or novel isolated tonal sequence cues (Experiment 2) for previously studied episodes. Across both experiments, participants gave higher curiosity ratings during target retrieval failure to cue stimuli that contained previously encountered features. Further, higher curiosity ratings were given during reported déjà vu or déjà entendu, and these states were associated with increased expenditure of limited resources to discover the answer. The full pattern suggests that déjà vu and déjà entendu may drive curiosity and serve adaptive roles in encouraging further search efforts, and that curiosity may emerge from feature-matching familiarity-detection processes.

Item Open Access Perception systems for robust autonomous navigation in natural environments (Colorado State University. Libraries, 2022) Trabelsi, Ameni, author; Beveridge, Ross J., advisor; Blanchard, Nathaniel, committee member; Anderson, Chuck, committee member; King, Emily, committee member
As assistive robotics continues to develop thanks to rapid advances in artificial intelligence, smart sensors, the Internet of Things, and robotics, industry has begun introducing robots that perform various functions to make humans' lives more comfortable and enjoyable.
While the principal purpose of deploying robots has been productivity enhancement, their uses have expanded widely. Examples include assisting people with disabilities (e.g., Toyota's Human Support Robot), providing driverless transportation (e.g., Waymo's driverless cars), and helping with tedious house chores (e.g., iRobot). The challenge in these applications is that the robots have to function appropriately in continuously changing environments and harsh real-world conditions, deal with significant amounts of noise and uncertainty, and operate autonomously without the intervention or supervision of an expert. To meet these challenges, a robust perception system is vital. This dissertation casts light on the perception component of autonomous mobile robots, highlights its major capabilities, and analyzes the factors that affect its performance. In short, the approaches developed in this dissertation cover the following four topics: (1) learning the detection and identification of objects in the environment in which the robot is operating, (2) estimating the 6D pose of objects of interest to the robot, (3) studying the importance of tracking information in the motion prediction module, and (4) analyzing the performance of three motion prediction methods, comparing their performances, and highlighting their strengths and weaknesses. All techniques developed in this dissertation have been implemented and evaluated on popular public benchmarks. Extensive experiments have been conducted to analyze and validate the properties of the developed methods and to demonstrate this dissertation's conclusions on the robustness, performance, and utility of the proposed approaches for intelligent mobile robots.

Item Embargo Performance of continuous emission monitoring systems at operating oil and gas facilities (Colorado State University. Libraries, 2024) Day, Rachel Elizabeth, author; Riddick, Stuart, advisor; Zimmerle, Daniel, advisor; Blanchard, Nathaniel, committee member; Marzolf, Greg, committee member
Globally, demand to reduce methane (CH4) emissions has become paramount, and the oil and natural gas (O&G) sector is highlighted as one of the main contributors, being the largest industrial emission source at ≈30%. In efforts to comply with legislation on CH4 emission reductions, O&G operators, emission measurement solution companies, and researchers have been testing various techniques and technologies to accurately measure and quantify CH4 emissions. As recent changes to U.S. legislative policies in the Greenhouse Gas Reporting Program (GHGRP) and the Inflation Reduction Act (IRA) impose a methane waste emission charge beginning in 2024, O&G operators are looking for more continuous and efficient methods to effectively measure emissions at their facilities. Prior to these policy updates, bottom-up measurement methods, which involve emission factors and emission source activity data, were the main technique used for reporting yearly emissions to the GHGRP. Top-down measurement methods, such as flyovers with airplanes, drones, or satellites, can provide snapshot-in-time surveys of overall site emissions. With prior research showing the variance between top-down and bottom-up emission estimates, O&G operators have become interested in continuous emissions monitoring systems (CEMs) for their sites to observe emission activity continuously over time.
A type of CEM, a continuous monitoring (CM) point sensor network (PSN), monitors methane emissions continuously with sensors mounted at the perimeter of O&G sites. CM PSN solutions have become appealing, as they could potentially offer a relatively cost-effective and autonomous method of identifying sporadic and fugitive leaks. This study evaluated multiple commercially available CM PSN solutions under single-blind controlled release testing conducted at operational upstream and midstream O&G sites. During releases, PSNs reported site-level emission rate estimates of 0 kg/h between 38% and 86% of the time. When non-zero site-level emission rate estimates were provided, no linear correlation between release rate and reported emission rate estimate was observed. On average, aggregated across all PSN solutions during releases, 5% of mixing ratio readings at downwind sensors were greater than the site's baseline plus two standard deviations. Four of the six PSN solutions tested during this field campaign provided site-level emission rate estimates, with site-average relative error ranging from -100% to 24% for solution D, -100% to -43% for solution E, -25% for solution F (solution F was deployed at only one site), and -99% to 430% for solution G, with an overall average of -29% across all sites and solutions. Of all the individual site-level emission rate estimates, only 11% were within ± 2.5 kg/h of the study team's best estimate of site-level emissions at the time of the releases.

Item Open Access Quality assessment of protein structures using graph convolutional networks (Colorado State University. Libraries, 2024) Roy, Soumyadip, author; Ben-Hur, Asa, advisor; Blanchard, Nathaniel, committee member; Zhou, Wen, committee member
The prediction of protein 3D structure is essential for understanding protein function, drug discovery, and disease mechanisms; with the advent of methods like AlphaFold that are capable of producing very high-quality decoys, ensuring the quality of those decoys can provide further confidence in the accuracy of their predictions. In this work, we describe Qε, a graph convolutional network that utilizes a minimal set of atom and residue features as input to predict the global distance test total score (GDTTS) and the local distance difference test score (lDDT) of a decoy. To improve the model's performance, we introduce a novel loss function based on the ε-insensitive loss function used for SVM regression. This loss function is specifically designed for the characteristics of the quality assessment problem and provides predictions with improved accuracy over standard loss functions used for this task. Despite using only a minimal set of features, Qε matches the performance of recent state-of-the-art methods like DeepUMQA. The code for Qε is available at https://github.com/soumyadip1997/qepsilon.
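For readers unfamiliar with the ε-insensitive loss borrowed from SVM regression, a minimal sketch follows; the tolerance value and example scores are illustrative assumptions, not the settings used by Qε.

    # Hedged sketch of an epsilon-insensitive regression loss: zero penalty inside
    # the +/- epsilon tube, linear penalty outside it. Tolerance is an assumption.
    import torch

    def epsilon_insensitive_loss(pred, target, epsilon=0.05):
        return torch.clamp(torch.abs(pred - target) - epsilon, min=0.0).mean()

    pred = torch.tensor([0.71, 0.40, 0.93])
    true = torch.tensor([0.70, 0.55, 0.90])
    loss = epsilon_insensitive_loss(pred, true)   # only the middle decoy is penalized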
Item Open Access Revealing and analyzing the shared structure of deep face embeddings (Colorado State University. Libraries, 2022) McNeely-White, David G., author; Beveridge, J. Ross, advisor; Blanchard, Nathaniel, committee member; Kirby, Michael, committee member; Peterson, Chris, committee member
Deep convolutional neural networks trained for face recognition are found to output face embeddings which share a fundamental structure. More specifically, one face verification model's embeddings (i.e., last-layer activations) can be compared directly to another model's embeddings after only a rotation or linear transformation, with little performance penalty. If only a rotation is required to convert the bulk of embeddings between models, there is a strong sense in which those models are learning the same thing. In the most recent experiments, the structural similarity (and dissimilarity) of face embeddings is analyzed as a means of understanding face recognition bias. Bias has been identified in many face recognition models and is often analyzed using distance measures between pairs of faces. By representing groups of faces as groups, and comparing them as groups, this shared embedding structure can be further understood. Specifically, demographic-specific subspaces are represented as points on a Grassmann manifold. Across 10 models, the geodesic distances between those points are expressive of demographic differences. By comparing how different groups of people are represented in the structure of embedding space, and how those structures vary with model designs, a new perspective on both representational similarity and face recognition bias is offered.
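A hedged sketch of the kind of alignment described above: an orthogonal Procrustes fit finds the rotation that best carries one model's embeddings onto another's over the same faces. The dimensions and synthetic data are assumptions for illustration only, not the authors' models or code.

    # Illustrative alignment of two embedding spaces by a rotation (orthogonal Procrustes).
    import numpy as np
    from scipy.linalg import orthogonal_procrustes

    rng = np.random.default_rng(0)
    emb_a = rng.normal(size=(500, 128))                    # model A embeddings of 500 faces
    Q, _ = np.linalg.qr(rng.normal(size=(128, 128)))       # a hidden "true" rotation
    emb_b = emb_a @ Q + 0.01 * rng.normal(size=(500, 128)) # stand-in for model B's embeddings

    R, _ = orthogonal_procrustes(emb_a, emb_b)             # rotation minimizing ||emb_a @ R - emb_b||
    residual = np.linalg.norm(emb_a @ R - emb_b) / np.linalg.norm(emb_b)
    print(f"relative alignment error: {residual:.3f}")     # small residual: a rotation suffices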
Item Open Access The role of landing foot orientation on linear traction in stop and stop-jump tasks (Colorado State University. Libraries, 2021) Taylor, Laura Thistle, author; Reiser, Raoul F., II, advisor; Fling, Brett W., committee member; Blanchard, Nathaniel, committee member
Introduction: The incidence of lower extremity injury has been shown to be greater on artificial turf (AT) than on natural grass across a variety of sports. Injury risk and performance are influenced by the traction characteristics of the foot-surface interface shortly after initial foot contact. The foot's orientation relative to the ground upon landing potentially contributes to these traction characteristics. Although landing foot orientation has been shown to be predictive of lower extremity injury risk on hardcourt surfaces, it remains unclear whether foot orientation influences landing ground reaction forces and traction on AT. This information could contribute to modifications in athlete technique, cleat design, and surface characteristics to optimize athlete performance and reduce injury risk. The primary purpose of this investigation was to examine how foot orientation upon landing on AT during stop and stop-jump tasks influences linear traction and foot loading characteristics. Secondary goals were to investigate differences in landing strategy between males and females and the effect of subsequent task demands between the two movements. Methods: Twenty-nine collegiate club-level or higher athletes (15 females) accustomed to competing on AT participated. A third-generation AT was prepared over a foam shock pad to manufacturer specifications with a sand base and crumb rubber performance infill. Isolated panels were secured over two side-by-side force platforms. Subject kinematics were measured using optical capture with reflective markers. Subjects performed six acceptable trials of a stop task and a stop-jump task. Each limb was analyzed separately from initial foot contact through the landing phase. The representative average trial of each subject was used to determine differences between the limbs and sexes within and between each movement. Individual trials were used to explore the relationships between the initial foot progression angle and traction. Due to the limited number of forefoot landings for the two analyzed movements, correlations were performed only on initial foot progression angles ranging from rearfoot to flatfoot. Results: This investigation is especially novel because most of the reported research on foot orientation has been conducted on hardcourt surfaces, not on AT. We found that the initial foot progression angle was strongly correlated with the horizontal displacement of the foot before the cleat fully engaged with the AT, but had limited influence on early ground reaction forces. We found no differences in initial foot progression angle between sexes or between movements, although horizontal ground reaction forces were greater for males than females and greater for the stop task than for the stop-jump task. Conclusion: Landing foot orientations ranging from rearfoot to flatfoot contribute to the horizontal movement across AT. The relationship between horizontal foot movement on AT and injury risk needs to be further analyzed, specifically by examining the joint loading mechanics at the ankle, knee, and hip.