Browsing by Author "Seefried, Ethan, author"
Item 1 (Open Access): Paying attention to wildfire: using U-Net with attention blocks on multimodal data for next day prediction (Colorado State University. Libraries, 2023-10-09)
Fitzgerald, Jack, author; Seefried, Ethan, author; Yost, James, author; Pallickara, Sangmi, author; Blanchard, Nathaniel, author; ACM, publisher

Abstract: Predicting where wildfires will spread provides invaluable information to firefighters and scientists, which can save lives and homes. However, doing so requires a large amount of multimodal data, e.g., accurate weather predictions, real-time satellite data, and environmental descriptors. In this work, we use 12 distinct features from multiple modalities to predict where wildfires will spread over the next 24 hours. We created a custom U-Net architecture designed to train as efficiently as possible while still maximizing accuracy, so that the model can be deployed quickly when a wildfire is detected. Our custom architecture demonstrates state-of-the-art performance and trains an order of magnitude faster than prior work while using fewer computational resources. We further evaluated the architecture with an ablation study to identify which features were key for prediction and which had negligible impact on performance.

Item 2 (Open Access): SMOKE+: a video dataset for automated fine-grained assessment of smoke opacity (Colorado State University. Libraries, 2024)
Seefried, Ethan, author; Blanchard, Nathaniel, advisor; Sreedharan, Sarath, committee member; Roberts, Jacob, committee member

Abstract: Computer vision has traditionally struggled with amorphous objects like smoke, owing to their ever-changing shape, texture, and dependence on background conditions. While recent advances have enabled simple tasks such as smoke detection and basic classification (black or white), quantitative opacity estimation in line with the assessments made by certified professionals remains unexplored.
To address this gap, I introduce the SMOKE+ dataset, which features opacity labels verified by three certified experts. The dataset encompasses five distinct testing days, two data collection sites in different regions, and a total of 13,632 labeled clips. Leveraging this data, I develop a state-of-the-art smoke opacity estimation method that employs a small number of Residual 3D blocks for efficient opacity estimation. Additionally, I explore the use of Mamba blocks in a video-based architecture, exploiting their ability to handle spatial and temporal data in linear time. Techniques developed during the creation of the SMOKE+ dataset were then refined and applied to a new dataset titled CSU101, designed for educational use in computer vision. In the future, I intend to expand further into synthetic data, incorporating techniques into Unreal Engine or Unity to generate accurate opacity labels.
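The linear-time property mentioned above can be illustrated with a toy scalar state-space recurrence. This is an illustrative sketch only, not the architecture from the thesis: Mamba additionally makes the recurrence parameters input-dependent ("selective") and uses a hardware-aware parallel scan. The function name `linear_state_scan` and the constants are invented for this example.

```python
# Toy sketch (NOT the actual Mamba implementation): state-space layers reduce
# to a linear recurrence h_t = a*h_{t-1} + b*x_t with readout y_t = c*h_t,
# so a length-T sequence is processed in O(T) time, unlike self-attention's
# O(T^2) pairwise comparisons.
def linear_state_scan(xs, a=0.5, b=1.0, c=1.0):
    """Scalar linear recurrence over a sequence; one pass, O(len(xs))."""
    h = 0.0
    ys = []
    for x in xs:           # single left-to-right scan over the sequence
        h = a * h + b * x  # state update (Mamba makes a, b input-dependent)
        ys.append(c * h)   # readout at each step
    return ys

print(linear_state_scan([1.0, 0.0, 0.0]))  # prints [1.0, 0.5, 0.25]
```

Because each step touches the state exactly once, memory and compute grow linearly with clip length, which is what makes this family of blocks attractive for long video sequences.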