Publications
Browsing Publications by Subject "classification"
Now showing 1 - 2 of 2
Item (Open Access): A framework for profiling spatial variability in the performance of classification models
Colorado State University. Libraries, 2024-04-03
Authors: Warushavithana, Menuka; Barram, Kassidy; Carlson, Caleb; Mitra, Saptashwa; Ghosh, Sudipto; Breidt, Jay; Pallickara, Sangmi Lee; Pallickara, Shrideep. Publisher: ACM.
Abstract: Scientists use models to further their understanding of phenomena and inform decision-making. A confluence of factors has contributed to an exponential increase in spatial data volumes. In this study, we describe our methodology for identifying spatial variation in the performance of classification models. Our methodology tracks a host of performance measures across different thresholds within the larger, encapsulating spatial area under consideration, and it ensures frugal utilization of resources via a novel validation budgeting scheme that preferentially allocates observations for validation. We complement these efforts with a browser-based, GPU-accelerated visualization scheme that also supports streaming, assimilating validation results as they become available.

Item (Open Access): Sparse binary transformers for multivariate time series modeling
Colorado State University. Libraries, 2023-08-04
Authors: Gorbett, Matt; Shirazi, Hossein; Ray, Indrakshi. Publisher: ACM.
Abstract: Compressed neural networks have the potential to enable deep learning across new applications and smaller computational environments. However, the range of learning tasks in which such models can succeed is not well understood. In this work, we apply sparse and binary-weighted Transformers to multivariate time series problems, showing that these lightweight models achieve accuracy comparable to that of dense floating-point Transformers of the same structure. Our model achieves favorable results across three time series learning tasks: classification, anomaly detection, and single-step forecasting. Additionally, to reduce the computational complexity of the attention mechanism, we apply two modifications, which show little to no decline in model performance: 1) in the classification task, we apply a fixed mask to the query, key, and value activations, and 2) for forecasting and anomaly detection, which rely on predicting outputs at a single point in time, we propose an attention mask that allows computation only at the current time step. Together, each compression technique and attention modification substantially reduces the number of non-zero operations required by the Transformer. We measure the computational savings of our approach over a range of metrics including parameter count, bit size, and floating-point operation (FLOP) count, showing up to a 53x reduction in storage size and up to a 10.5x reduction in FLOPs.
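The first item's framework is not reproduced here, but the core idea it describes, tracking classification performance measures per spatial region and per decision threshold, can be illustrated with a minimal sketch. The grid-cell grouping, the column names (cell_id, y_true, score), and the example thresholds below are illustrative assumptions, not the authors' implementation; the paper's validation budgeting scheme and GPU-accelerated visualization are not shown.

```python
# Minimal sketch (not the authors' framework): per-region, per-threshold
# classification metrics, assuming each prediction carries a spatial
# grid-cell identifier. Column names and thresholds are hypothetical.
import numpy as np
import pandas as pd

def profile_spatial_performance(df, thresholds=(0.3, 0.5, 0.7)):
    """df columns: cell_id (region id), y_true (0/1 label), score (model probability)."""
    rows = []
    for cell, group in df.groupby("cell_id"):
        y = group["y_true"].to_numpy()
        s = group["score"].to_numpy()
        for t in thresholds:
            pred = (s >= t).astype(int)
            tp = int(((pred == 1) & (y == 1)).sum())
            fp = int(((pred == 1) & (y == 0)).sum())
            fn = int(((pred == 0) & (y == 1)).sum())
            tn = int(((pred == 0) & (y == 0)).sum())
            rows.append({
                "cell_id": cell,
                "threshold": t,
                "accuracy": (tp + tn) / len(y),
                "precision": tp / (tp + fp) if tp + fp else float("nan"),
                "recall": tp / (tp + fn) if tp + fn else float("nan"),
            })
    return pd.DataFrame(rows)
```

The resulting table of per-cell, per-threshold metrics is the kind of output that could then be mapped or visualized region by region.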
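The second item's single-time-step attention mask can be sketched as restricting the query side of scaled dot-product attention to the final position, so only one row of attention scores is computed instead of a full T x T matrix. This is a minimal single-head sketch under that assumption; the paper's sparse binary weights, exact mask construction, and full Transformer architecture are not reproduced.

```python
# Minimal sketch (not the paper's implementation): scaled dot-product
# attention computed only for the current (final) time step, as when a
# single-step forecast or anomaly score is all that is needed.
import numpy as np

def last_step_attention(q, k, v):
    """q, k, v: (T, d) activations for one head; only the last query row is used."""
    d = q.shape[-1]
    q_last = q[-1:]                         # (1, d): query for the current time step only
    scores = q_last @ k.T / np.sqrt(d)      # (1, T) instead of a full (T, T) score matrix
    weights = np.exp(scores - scores.max()) # softmax over the single row
    weights /= weights.sum()
    return weights @ v                      # (1, d) output for the current step

# Usage: T = 16 time steps, d = 8 dims; full attention would score all 16x16 pairs.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((16, 8)) for _ in range(3))
out = last_step_attention(q, k, v)          # shape (1, 8)
```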