Browsing by Author "Krishnaswamy, Nikhil, committee member"
Now showing 1 - 6 of 6
Item Open Access: Exploring remote sensing data with high temporal resolutions for wildfire spread prediction (Colorado State University. Libraries, 2024)
Fitzgerald, Jack, author; Blanchard, Nathaniel, advisor; Krishnaswamy, Nikhil, committee member; Zimmerle, Dan, committee member

The severity of wildfires in the United States has increased steadily over the past few decades, burning millions of acres and costing billions of dollars in suppression efforts each year. Over the same period, however, technological capabilities have advanced dramatically. Machine learning, which has seen spectacular improvements in areas such as computer vision and natural language processing, is now used extensively to model spatiotemporal phenomena such as wildfires via deep learning. Modeling how wildfires spread with deep learning can help facilitate evacuation efforts and assist wildland firefighters by highlighting key areas where containment and suppression efforts should be focused. Many recent works have examined the feasibility of using deep learning models to predict when and where wildfires will spread, enabled in part by the wealth of geospatial information now publicly available and easily accessible on platforms such as Google Earth Engine. This work introduces the First Week Wildfire Spread dataset, which seeks to address limitations of previously released datasets by focusing on geospatial data with high temporal resolutions. The new dataset contains weather, fuel, topography, and fire location data for the first 7 days of 56 megafires that occurred in the contiguous United States from 2020 to 2024. Fire location data is collected by the Advanced Baseline Imager aboard the GOES-16 satellite, which provides updates every 5 minutes. Baseline experiments with U-Net and ConvLSTM models demonstrate some of the ways the First Week Wildfire Spread dataset can be used and highlight its versatility.
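As a rough illustration of the kind of baseline the abstract mentions, the sketch below sets up a small U-Net that maps a stack of gridded input rasters (weather, fuel, topography, and a prior fire mask) to a per-pixel fire-presence logit. The channel count, layer widths, and grid size are illustrative assumptions and are not taken from the First Week Wildfire Spread dataset itself.

```python
# Minimal sketch of a U-Net-style wildfire spread baseline. The 8-channel
# input (weather + fuel + topography + prior fire mask) and layer widths are
# illustrative assumptions, not the dataset's actual specification.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the standard U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_channels=8, base=32):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)  # per-pixel fire/no-fire logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Example: predict the next fire mask from an 8-channel 128x128 input grid.
model = TinyUNet(in_channels=8)
x = torch.randn(4, 8, 128, 128)                     # batch of stacked rasters
logits = model(x)                                   # (4, 1, 128, 128) logits
target = torch.rand(4, 1, 128, 128).round()         # stand-in fire mask
loss = nn.BCEWithLogitsLoss()(logits, target)
```

A ConvLSTM baseline would replace the encoder with a recurrent convolutional cell over the 5-minute time steps, but the input/output framing above would stay the same.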
Item Open Access: From neuro-inspired attention methods to generative diffusion: applications to weather and climate (Colorado State University. Libraries, 2024)
Stock, Jason, author; Anderson, Chuck, advisor; Ebert-Uphoff, Imme, committee member; Krishnaswamy, Nikhil, committee member; Sreedharan, Sarath, committee member

Machine learning presents new opportunities for addressing the complexities of atmospheric science, where high-dimensional, sparse, and variable data challenge traditional methods. This dissertation introduces a range of algorithms motivated specifically by the intricacies of weather and climate applications. These challenges complement those that are fundamental to machine learning, such as extracting relevant features, generating high-quality imagery, and providing interpretable model predictions. To this end, we propose methods that integrate adaptive wavelets and spatial attention into neural networks, showing improvements on tasks with limited data. We design a memory-based model of sequential attention to expressively contextualize a subset of image regions. Additionally, we explore transformer models for image translation, with an emphasis on explainability, that overcome the limitations of convolutional networks. Lastly, we discover meaningful long-range dynamics in oscillatory data with an autoregressive generative diffusion model, a very different approach from current physics-based models. These methods collectively improve predictive performance and deepen our understanding of both the underlying algorithmic and physical processes. The generality of most of these methods is demonstrated on synthetic data and classical vision tasks, but we place particular emphasis on their impact in weather and climate modeling. Notable examples include estimating synthetic radar from satellite imagery, predicting the intensity of tropical cyclones, and modeling global climate variability from observational data for intraseasonal predictability. These approaches, however, are flexible and hold potential for adaptation across other application domains and data modalities.

Item Embargo: Learning technical Spanish with virtual environments (Colorado State University. Libraries, 2024)
Siebert, Caspian, author; Ortega, Francisco R., advisor; Miller De Rutté, Alyssia, committee member; Krishnaswamy, Nikhil, committee member

As the world becomes increasingly interconnected through the internet and travel, foreign language learning is essential for accurate communication and a deeper appreciation of diverse cultures. This study explores the effectiveness of a virtual learning environment employing artificial intelligence (AI), designed to facilitate Spanish language acquisition among veterinary students in the context of diagnosing a pet. Students' engagement with virtual scenarios that simulate real-life veterinary consultations in Spanish is examined using a qualitative thematic analysis. Participants converse with a virtual pet owner, discussing symptoms, diagnosing conditions, and recommending treatments, all in Spanish. Data were collected through recorded interactions with the application and a semi-structured interview. Findings suggest that immersive virtual environments enhance user engagement and interest, and several suggestions were made to improve the application's features. The study highlights the potential for virtual simulations to bridge the gap between language learning and professional training in specialized fields such as veterinary medicine. Finally, a set of design implications for future systems is provided.

Item Open Access: Machine learning prediction of deepwater slope-channel facies using core-analogous outcrop observations (Colorado State University. Libraries, 2024)
Ronnau, Patrick, author; Stright, Lisa, advisor; Ronayne, Michael, committee member; Gallen, Sean, committee member; Krishnaswamy, Nikhil, committee member

Sedimentological (SED) data is often qualitative, which makes it challenging to combine with machine learning (ML) workflows. SED data in subsurface exploration incorporates qualitative interpretations that remain valuable to subsequent exploration efforts. These exploration projects often have access to geologic core data that is spatially limited, making subsurface interpretation difficult and highly uncertain. Incorporating core-like data into ML workflows provides a framework for generating consistent interpretations over large datasets. ML, already employed in well-log interpretation, offers an advantage over manual interpretation methods, which are time intensive and introduce errors and bias. This research investigates methods to automate geologic interpretation, specifically of sedimentary facies, through ML techniques. Sedimentological observations (grain size, bed thickness) from outcrop measured sections in the deepwater slope strata of the Magallanes Basin provide training and testing features for ML predictions (classifications) of human-interpreted geologic facies. The study employs seven ML techniques (K-Means, Least Squares Regression, Logistic Regression, Linear Discriminant Analysis, Quadratic Discriminant Analysis, Random Forest, and Neural Networks) to investigate the facies prediction problem from multiple methodological angles. The results show that some ML methods are not suitable for this classification problem because of their architecture or the qualitative aspects of manually collected SED data. Supervised methods generally provide better results than unsupervised methods (PCA and K-Means): supervised ML both produces better raw performance metrics (accuracy, bed-thickness-normalized accuracy, recall) than K-Means and generates qualitatively better predictions of measured sections (Fig. 48; Fig. 49). Among the suitable methods, a random forest model delivers the best facies prediction performance.
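The sketch below illustrates, in very reduced form, the supervised setup this abstract describes: a random forest classifier trained on grain-size and bed-thickness features to predict facies labels. The column names, synthetic data, and labeling rule are illustrative stand-ins, not the study's measured-section data.

```python
# Minimal sketch of a random-forest facies classifier from per-bed
# observations. The feature columns, synthetic values, and facies labels
# are illustrative assumptions, not the Magallanes Basin dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "grain_size_phi": rng.normal(2.0, 1.0, n),       # stand-in grain size
    "bed_thickness_m": rng.lognormal(-1.0, 0.8, n),  # stand-in bed thickness
})
# Stand-in facies labels loosely tied to the features for demonstration only.
df["facies"] = np.where(df["bed_thickness_m"] > 0.5, "thick_sand",
               np.where(df["grain_size_phi"] > 2.5, "mud", "thin_sand"))

X = df[["grain_size_phi", "bed_thickness_m"]]
y = df["facies"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```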
Item Open Access: Optimizing text analytics and document automation with meta-algorithmic systems engineering (Colorado State University. Libraries, 2023)
Villanueva, Arturo N., Jr., author; Simske, Steven J., advisor; Hefner, Rick D., committee member; Krishnaswamy, Nikhil, committee member; Miller, Erika, committee member; Roberts, Nicholas, committee member

Natural language processing (NLP) has seen significant advances in recent years, but challenges remain in making algorithms both efficient and accurate. In this study, we examine three key areas of NLP, explore the potential of meta-algorithmics and functional analysis for improving analytic and machine learning performance, and conclude with expansions for future research. The first area focuses on text classification for requirements engineering, where stakeholder requirements must be classified into appropriate categories for further processing. We investigate multiple combinations of algorithms and meta-algorithms to optimize the classification process, confirming the optimality of Naïve Bayes and highlighting a certain sensitivity to the Global Vectors (GloVe) word embedding algorithm. The second area of focus is extractive summarization, which offers advantages over abstractive summarization due to its lossless nature. We propose a second-order meta-algorithm that draws on existing algorithms and selects appropriate combinations of them to generate more effective summaries than any individual algorithm. The third area covers document ordering, where we propose techniques for generating an optimal reading order for use in learning, training, and content sequencing. We propose two main methods: one using document similarities and the other using entropy over topics generated through Latent Dirichlet Allocation (LDA).
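As a rough sketch of the second document-ordering method named above, the code below fits an LDA topic model, scores each document by the Shannon entropy of its topic distribution, and sorts by that score. The toy documents and the choice to order by ascending entropy are illustrative assumptions; the dissertation's actual ordering criteria are not reproduced here.

```python
# Minimal sketch of LDA-based document ordering: fit LDA, compute the
# Shannon entropy of each document's topic distribution, and sort by it.
# Ordering by ascending entropy (more topically focused documents first)
# is an illustrative assumption, not the dissertation's actual criterion.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "requirements shall be classified by the stakeholder and the system",
    "the summary extracts key sentences from the source document",
    "topic models assign a distribution over latent topics to each document",
    "reading order matters when sequencing training content for learners",
]

counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(counts)        # per-document topic distributions

entropy = -(theta * np.log(theta + 1e-12)).sum(axis=1)
reading_order = np.argsort(entropy)      # ascending topic entropy
for rank, idx in enumerate(reading_order, 1):
    print(f"{rank}. doc {idx} (entropy={entropy[idx]:.3f})")
```

The similarity-based alternative mentioned in the abstract would instead order documents by pairwise distances between their vector representations.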
Item Open Access: Subnetwork ensembles (Colorado State University. Libraries, 2023)
Whitaker, Timothy J., author; Whitley, Darrell, advisor; Anderson, Charles, committee member; Krishnaswamy, Nikhil, committee member; Kirby, Michael, committee member

Neural network ensembles have been used effectively to improve generalization by combining the predictions of multiple independently trained models. However, the growing scale and complexity of deep neural networks have made these methods prohibitively expensive and time consuming to implement. Low-cost ensemble methods have become increasingly important because they alleviate the need to train multiple models from scratch while retaining the generalization benefits that traditional ensemble learning affords. This dissertation introduces and formalizes a low-cost framework for constructing Subnetwork Ensembles, in which a collection of child networks is formed by sampling, perturbing, and optimizing subnetworks from a trained parent model. We explore several distinct methodologies for generating child networks and evaluate their efficacy through a variety of ablation studies and established benchmarks. Our findings reveal that this approach can greatly improve training efficiency, parametric utilization, and generalization performance while minimizing computational cost. Subnetwork Ensembles offer a compelling framework for exploring how we can build better systems by leveraging the unrealized potential of deep neural networks.
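The sketch below shows one plausible way to form child networks from a trained parent in the spirit of this framework: sample a random weight mask, zero the masked weights, and lightly perturb the survivors before fine-tuning. The masking rate, noise scale, and toy parent architecture are illustrative assumptions; the dissertation explores several distinct child-generation strategies beyond this.

```python
# Minimal sketch of forming child networks from a trained parent by sampling
# a random weight mask and perturbing the surviving weights. Keep probability,
# noise scale, and the toy parent architecture are illustrative assumptions.
import copy
import torch
import torch.nn as nn

def make_child(parent: nn.Module, keep_prob=0.5, noise_std=0.01, seed=0):
    torch.manual_seed(seed)
    child = copy.deepcopy(parent)
    with torch.no_grad():
        for p in child.parameters():
            mask = (torch.rand_like(p) < keep_prob).float()  # sample subnetwork
            p.mul_(mask)                                      # zero masked weights
            p.add_(noise_std * torch.randn_like(p) * mask)    # perturb survivors
    return child  # typically fine-tuned briefly before ensembling

# Example: build a small ensemble of children from one "trained" parent and
# average their predictions.
parent = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
children = [make_child(parent, seed=s) for s in range(4)]
x = torch.randn(8, 16)
ensemble_logits = torch.stack([c(x) for c in children]).mean(dim=0)
```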