Tiled bit networks: sub-bit neural network compression through reuse of learnable binary vectors

dc.contributor.author: Gorbett, Matt, author
dc.contributor.author: Shirazi, Hossein, author
dc.contributor.author: Ray, Indrakshi, author
dc.contributor.author: ACM, publisher
dc.date.accessioned: 2024-11-11T19:34:34Z
dc.date.available: 2024-11-11T19:34:34Z
dc.date.issued: 2024-10-21
dc.description.abstract: Binary Neural Networks (BNNs) enable efficient deep learning by saving on storage and computational costs. However, as the size of neural networks continues to grow, meeting computational requirements remains a challenge. In this work, we propose a new form of quantization to tile neural network layers with sequences of bits to achieve sub-bit compression of binary-weighted neural networks. The method learns binary vectors (i.e. tiles) to populate each layer of a model via aggregation and reshaping operations. During inference, the method reuses a single tile per layer to represent the full tensor. We apply the approach to both fully-connected and convolutional layers, which account for most of the parameters in common neural architectures. Empirically, the approach achieves near full-precision performance on a diverse range of architectures (CNNs, Transformers, MLPs) and tasks (classification, segmentation, and time series forecasting) with up to an 8x reduction in size compared to binary-weighted models. We provide two implementations for Tiled Bit Networks: 1) we deploy the model to a microcontroller to assess its feasibility in resource-constrained environments, and 2) a GPU-compatible inference kernel to facilitate the reuse of a single tile per layer in memory.
dc.format.medium: born digital
dc.format.medium: articles
dc.identifier.bibliographicCitation: Matt Gorbett, Hossein Shirazi, and Indrakshi Ray. 2024. Tiled Bit Networks: Sub-Bit Neural Network Compression Through Reuse of Learnable Binary Vectors. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (CIKM '24), October 21–25, 2024, Boise, ID, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3627673.3679603
dc.identifier.doi: https://doi.org/10.1145/3627673.3679603
dc.identifier.uri: https://hdl.handle.net/10217/239540
dc.language: English
dc.language.iso: eng
dc.publisher: Colorado State University. Libraries
dc.relation.ispartof: Publications
dc.relation.ispartof: ACM DL Digital Library
dc.rights: © Matt Gorbett, et al. ACM 2024. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in CIKM '24, https://dx.doi.org/10.1145/3627673.3679603.
dc.subject: neural network quantization
dc.subject: compression
dc.subject: efficiency
dc.subject: on-device machine learning
dc.subject: edge machine learning
dc.subject: IoT
dc.title: Tiled bit networks: sub-bit neural network compression through reuse of learnable binary vectors
dc.type: Text
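The abstract describes populating each layer's weight tensor by repeating and reshaping a single learned binary tile. As a rough illustration of that storage idea (not the paper's actual implementation; function and variable names here are hypothetical), a minimal NumPy sketch of reconstructing a layer from one tile:

```python
import numpy as np

def tile_layer(tile, layer_shape):
    """Rebuild a full binary weight tensor from one stored tile.

    A minimal sketch of the tiling idea from the abstract: a short
    binary vector (the "tile") is repeated and reshaped to fill an
    entire layer, so only the tile itself needs to be stored.
    Names are illustrative, not taken from the paper.
    """
    n = int(np.prod(layer_shape))
    reps = -(-n // tile.size)            # ceiling division: copies needed
    flat = np.tile(tile, reps)[:n]       # repeat the tile, trim to layer size
    return flat.reshape(layer_shape)

# A 16-element {-1, +1} tile populating a 64x8 fully-connected layer:
rng = np.random.default_rng(0)
tile = np.where(rng.random(16) < 0.5, -1.0, 1.0)
weights = tile_layer(tile, (64, 8))      # 512 weights from 16 stored values
compression = weights.size / tile.size   # 32x fewer values stored per layer
```

At inference time the paper's GPU kernel avoids even this materialization by reading the single tile from memory repeatedly; the sketch above only shows why the storage cost drops below one bit per weight.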

Files

Original bundle
Name: FACF_ACMOA_3627673.3679603.pdf
Size: 1.53 MB
Format: Adobe Portable Document Format