Title: Redundant complexity in deep learning: an efficacy analysis of NeXtVLAD in NLP
Author: Mahdipour Saravani, Sina
Advisors: Ray, Indrakshi; Banerjee, Ritwik
Committee member: Simske, Steven
Date accessioned/available: 2022-08-29
Date issued: 2022
URI: https://hdl.handle.net/10217/235603

Abstract: Deep learning is prevalent and successful, owing in part to its expressive power and its low demand for human intervention, but this very convenience can encourage naively simplistic use, creating problems in sustainability, reproducibility, and design: larger, more compute-intensive models entail costs in all three areas. In this thesis, we probe the effect of one neural component, an architecture called NeXtVLAD, on predictive accuracy for two downstream natural language processing tasks: context-dependent sarcasm detection and deepfake text detection. Investigating the extent to which this architecture contributes to the reported results, we find it ineffective and redundant: it provides no statistically significant benefit. This is only one of several directions in efficiency-aware deep learning research, but it is especially important because it introduces an aspect of interpretability aimed at design and efficiency. It thereby encourages the study of architectures and topologies in deep learning, both to ablate redundant components for better sustainability and to gain further insight into the information flow in deep neural architectures and the role of each component. We hope that our findings, which highlight the lack of benefit from introducing a resource-intensive component, will help future research distill the effective elements from long and complex pipelines, thereby providing a boost to the wider research community.

Format: born digital
Type: masters theses; Text
Language: eng
Rights: Copyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright.
Subjects: NeXtVLAD; redundancy; NLP; deep learning
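The abstract's central claim, that the NeXtVLAD component yields no statistically significant accuracy benefit, is the kind of result typically established by comparing a model with and without the component under a paired significance test. Below is a minimal, hypothetical sketch of a paired bootstrap test on per-example correctness; all names and data are illustrative assumptions, not the thesis's actual evaluation code.

```python
import random

def paired_bootstrap_pvalue(correct_a, correct_b, n_resamples=10000, seed=0):
    """Paired bootstrap: estimate how often model A's accuracy advantage
    over model B disappears (or reverses) when the test set is resampled
    with replacement. correct_a / correct_b are parallel 0/1 lists marking
    whether each model got each test example right."""
    rng = random.Random(seed)
    n = len(correct_a)
    vanished = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample example indices
        diff = sum(correct_a[i] - correct_b[i] for i in idx) / n
        if diff <= 0:  # A's advantage gone in this resample
            vanished += 1
    return vanished / n_resamples

# Hypothetical per-example correctness for two model variants:
with_nextvlad    = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
without_nextvlad = [1, 0, 0, 1, 1, 1, 1, 1, 1, 0]
p = paired_bootstrap_pvalue(with_nextvlad, without_nextvlad)
# A large p here would mean the component's apparent advantage is not
# statistically reliable, mirroring the redundancy finding above.
```

A McNemar test on the paired correctness table is a common alternative; both operate on per-example predictions rather than aggregate accuracy alone.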