
GAN you train your network

Date

2022

Authors

Pamulapati, Venkata Sai Sudeep, author
Blanchard, Nathaniel, advisor
Beveridge, Ross, advisor
King, Emily, committee member

Abstract

Zero-shot classifiers identify unseen classes, that is, classes not seen during training. Specifically, zero-shot models classify attribute information associated with classes (e.g., a zebra has stripes but a lion does not). Recently, the use of generative adversarial networks (GANs) for zero-shot learning has significantly improved recognition accuracy on unseen classes by synthesizing visual features for any class. Here, I investigate how similar the visual features extracted from images of a class are to the visual features generated by a GAN. I find that, regardless of the metric used, the two sets of visual features are disjoint. I also fine-tune a ResNet so that it produces visual features similar to those generated by a GAN; this is novel because standard approaches do the opposite: they train the GAN to match the output of the model. I conclude that these experiments emphasize the need for a standard input pipeline in zero-shot learning, given the mismatch between generated and real features, as well as the variation in features (and subsequent GAN performance) across different implementations of models such as ResNet-101.
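The comparison described above, measuring how close GAN-generated features are to real extracted features, can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: the feature arrays are random stand-ins for 2048-dimensional ResNet-101 features, and mean pairwise cosine similarity is used as one example metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for 2048-dim ResNet-101 visual features:
# features extracted from real images of a class, and features a
# conditional GAN synthesized for that same class.
real_feats = rng.normal(loc=0.5, scale=0.1, size=(100, 2048))
gan_feats = rng.normal(loc=0.0, scale=0.1, size=(100, 2048))

def mean_cosine_similarity(a, b):
    """Average pairwise cosine similarity between two feature sets."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float((a @ b.T).mean())

# If generated and real features were well matched, the cross-set
# similarity would approach the within-set similarity of real features.
within_real = mean_cosine_similarity(real_feats, real_feats)
real_vs_gan = mean_cosine_similarity(real_feats, gan_feats)
print(f"real vs real: {within_real:.3f}")
print(f"real vs GAN : {real_vs_gan:.3f}")
```

A large gap between the two numbers, as in this synthetic setup, is the kind of disjointness the abstract reports between real and generated feature distributions.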

Subject

ResNet
generative adversarial networks
zero-shot learning
