Cracking open the black box: a geometric and topological analysis of neural networks
dc.contributor.author | Cole, Christina, author
dc.contributor.author | Kirby, Michael, advisor
dc.contributor.author | Peterson, Chris, advisor
dc.contributor.author | Cheney, Margaret, committee member
dc.contributor.author | Draper, Bruce, committee member
dc.date.accessioned | 2024-09-09T20:52:02Z
dc.date.available | 2024-09-09T20:52:02Z
dc.date.issued | 2024
dc.description.abstract | Deep learning is a subfield of machine learning that has exploded in recent years in terms of publications and commercial consumption. Despite their increasing prevalence in performing high-risk tasks, deep learning algorithms have outpaced our understanding of them. In this work, we home in on neural networks, the backbone of deep learning, and reduce them to their scaffolding defined by polyhedral decompositions. With these decompositions explicitly defined for low-dimensional examples, we utilize novel visualization techniques to build a geometric and topological understanding of them. From there, we develop methods of implicitly accessing neural networks' polyhedral skeletons, which provide substantial computational and memory savings compared to those requiring explicit access. While much of the related work using neural network polyhedral decompositions is limited to toy models and datasets, the savings provided by our method allow us to use state-of-the-art neural networks and datasets in our analyses. Our experiments alone demonstrate the viability of a polyhedral view of neural networks, and our results show its usefulness. More specifically, we show that the geometry that a polyhedral decomposition imposes on its neural network's domain contains signals that distinguish between original and adversarial images. We conclude our work with suggested future directions. In sum, we (1) contribute toward closing the gap between our use of neural networks and our understanding of them through geometric and topological analyses and (2) outline avenues for extensions upon this work.
dc.format.medium | born digital
dc.format.medium | doctoral dissertations
dc.identifier | Cole_colostate_0053A_18393.pdf
dc.identifier.uri | https://hdl.handle.net/10217/239205
dc.language | English
dc.language.iso | eng
dc.publisher | Colorado State University. Libraries
dc.relation.ispartof | 2020-
dc.rights | Copyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright.
dc.subject | deep learning
dc.subject | machine learning
dc.subject | polyhedral decomposition
dc.subject | learning representation
dc.subject | data science
dc.subject | neural networks
dc.title | Cracking open the black box: a geometric and topological analysis of neural networks
dc.type | Text
dcterms.rights.dpla | This Item is protected by copyright and/or related rights (https://rightsstatements.org/vocab/InC/1.0/). You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
thesis.degree.discipline | Mathematics | |
thesis.degree.grantor | Colorado State University | |
thesis.degree.level | Doctoral | |
thesis.degree.name | Doctor of Philosophy (Ph.D.) |
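The polyhedral decomposition the abstract refers to is the partition of a ReLU network's input space into linear regions, one per activation pattern. The sketch below is illustrative only and is not taken from the dissertation: it assumes a toy two-layer fully connected ReLU network with random NumPy weights and labels the polyhedral region an input falls in by its on/off activation pattern.

```python
import numpy as np

# Hypothetical toy setup (not from the dissertation): a 2-layer ReLU network
# on 2-D inputs with random weights. Inputs sharing an activation pattern lie
# in the same polyhedral region, on which the network restricts to one affine map.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # layer 1: 2 -> 8
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)   # layer 2: 8 -> 8

def activation_pattern(x):
    """Return the binary on/off pattern of every ReLU unit for input x."""
    z1 = W1 @ x + b1
    a1 = np.maximum(z1, 0.0)
    z2 = W2 @ a1 + b2
    return tuple((z1 > 0).astype(int)) + tuple((z2 > 0).astype(int))

# Count the distinct regions hit by a coarse grid over [-1, 1]^2.
grid = [np.array([u, v])
        for u in np.linspace(-1, 1, 50)
        for v in np.linspace(-1, 1, 50)]
regions = {activation_pattern(x) for x in grid}
print(f"{len(regions)} distinct polyhedral regions sampled")
```

Because only the sign pattern is stored, regions can be compared or counted without constructing their defining inequalities, which is in the spirit of the implicit access to the polyhedral skeleton that the abstract describes.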
Files
Original bundle
- Name: Cole_colostate_0053A_18393.pdf
- Size: 35.47 MB
- Format: Adobe Portable Document Format