Sparse binary transformers for multivariate time series modeling

dc.contributor.author: Gorbett, Matt, author
dc.contributor.author: Shirazi, Hossein, author
dc.contributor.author: Ray, Indrakshi, author
dc.contributor.author: ACM, publisher
dc.date.accessioned: 2024-11-11T19:34:33Z
dc.date.available: 2024-11-11T19:34:33Z
dc.date.issued: 2023-08-04
dc.description.abstract: Compressed Neural Networks have the potential to enable deep learning across new applications and smaller computational environments. However, understanding the range of learning tasks in which such models can succeed is not well studied. In this work, we apply sparse and binary-weighted Transformers to multivariate time series problems, showing that the lightweight models achieve accuracy comparable to that of dense floating-point Transformers of the same structure. Our model achieves favorable results across three time series learning tasks: classification, anomaly detection, and single-step forecasting. Additionally, to reduce the computational complexity of the attention mechanism, we apply two modifications, which show little to no decline in model performance: 1) in the classification task, we apply a fixed mask to the query, key, and value activations, and 2) for forecasting and anomaly detection, which rely on predicting outputs at a single point in time, we propose an attention mask to allow computation only at the current time step. Together, each compression technique and attention modification substantially reduces the number of non-zero operations necessary in the Transformer. We measure the computational savings of our approach over a range of metrics including parameter count, bit size, and floating point operation (FLOPs) count, showing up to a 53x reduction in storage size and up to 10.5x reduction in FLOPs.
dc.format.medium: born digital
dc.format.medium: articles
dc.identifier.bibliographicCitation: Matt Gorbett, Hossein Shirazi, and Indrakshi Ray. 2023. Sparse Binary Transformers for Multivariate Time Series Modeling. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '23), August 6–10, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3580305.3599508
dc.identifier.doi: https://doi.org/10.1145/3580305.3599508
dc.identifier.uri: https://hdl.handle.net/10217/239532
dc.language: English
dc.language.iso: eng
dc.publisher: Colorado State University. Libraries
dc.relation.ispartof: Publications
dc.relation.ispartof: ACM DL Digital Library
dc.rights: © Matt Gorbett, et al. ACM 2023. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in KDD '23, https://dx.doi.org/10.1145/3580305.3599508.
dc.subject: transformer
dc.subject: sparse
dc.subject: pruned
dc.subject: binary
dc.subject: deep learning
dc.subject: multivariate time series
dc.subject: anomaly detection
dc.subject: classification
dc.subject: forecasting
dc.subject: lottery ticket hypothesis
dc.title: Sparse binary transformers for multivariate time series modeling
dc.type: Text
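
For readers skimming the record, the sketch below gives a rough, hypothetical illustration of two of the ideas summarized in the abstract: binary, pruned weight matrices, and attention computed only at the current time step. The names (SparseBinaryLinear, last_step_attention), the magnitude-based fixed mask, and the straight-through estimator are assumptions chosen for illustration; they are not taken from the authors' implementation, whose exact procedures are defined in the paper itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseBinaryLinear(nn.Module):
    """Linear layer with binary (+1/-1) weights and a fixed sparsity mask.

    Illustrative sketch only: latent float weights are kept for training,
    and the forward pass uses sign(w) * mask with a straight-through
    estimator so gradients flow to the latent weights.
    """

    def __init__(self, in_features, out_features, sparsity=0.9):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight)
        # Fixed mask keeping the largest-magnitude (1 - sparsity) fraction
        # of weights (a lottery-ticket-style subnetwork; an assumption here).
        n = self.weight.numel()
        keep = int(n * (1.0 - sparsity))
        threshold = self.weight.abs().flatten().kthvalue(n - keep).values
        self.register_buffer("mask", (self.weight.abs() > threshold).float())

    def forward(self, x):
        w_bin = torch.sign(self.weight) * self.mask
        # Straight-through estimator: forward uses the binary sparse
        # weights, backward flows through the latent float weights.
        w = self.weight + (w_bin - self.weight).detach()
        return F.linear(x, w)


def last_step_attention(q, k, v):
    """Attention computed only for the final (current) time step.

    q, k, v: (batch, seq_len, dim). Returns (batch, 1, dim), avoiding
    the full (seq_len x seq_len) score matrix.
    """
    q_last = q[:, -1:, :]                      # (B, 1, D)
    scores = q_last @ k.transpose(-2, -1)      # (B, 1, T)
    scores = scores / k.size(-1) ** 0.5
    return torch.softmax(scores, dim=-1) @ v   # (B, 1, D)
```

For a length-T window, last_step_attention forms a 1 x T score row instead of a T x T matrix, one plausible reading of how the single-time-step mask yields the FLOP reductions the abstract reports for forecasting and anomaly detection.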

Files

Original bundle
Name: FACF_ACMOA_3580305.3599508.pdf
Size: 1.24 MB
Format: Adobe Portable Document Format
