Supervised and unsupervised training of deep autoencoder
dc.contributor.author | Ghosh, Tomojit, author | |
dc.contributor.author | Anderson, Charles, advisor | |
dc.contributor.author | Kirby, Michael, committee member | |
dc.contributor.author | Rojas, Don, committee member | |
dc.date.accessioned | 2018-01-17T16:45:44Z | |
dc.date.available | 2018-01-17T16:45:44Z | |
dc.date.issued | 2017 | |
dc.description.abstract | Deep learning has proven to be a very useful approach for learning complex data. Recent research in speech recognition, visual object recognition, and natural language processing shows that deep generative models, which contain many layers of latent features, can learn complex data very efficiently. An autoencoder neural network with multiple layers can be used as a deep network to learn complex patterns in data. Because training a multi-layer neural network is time consuming, a pre-training step is employed to initialize the weights of the deep network and speed up training. In the pre-training step, each layer is trained individually, and the output of each layer is wired to the input of the successive layer. After pre-training, all the layers are stacked together to form the deep network, and post-training, also known as fine-tuning, is then performed on the whole network to further improve the solution. This way of training a deep network is known as stacked autoencoding, and the resulting architecture is known as a stacked autoencoder. It is a very useful tool for classification as well as dimensionality reduction. In this research we propose two new approaches to pre-train a deep autoencoder. We also propose a new supervised learning algorithm, called Centroid-encoding, which shows promising results in low-dimensional embedding and classification. We use EEG data, gene expression data, and MNIST handwritten digit data to demonstrate the usefulness of our proposed methods. | |
dc.format.medium | born digital | |
dc.format.medium | masters theses | |
dc.identifier | Ghosh_colostate_0053N_14496.pdf | |
dc.identifier.uri | https://hdl.handle.net/10217/185680 | |
dc.language | English | |
dc.language.iso | eng | |
dc.publisher | Colorado State University. Libraries | |
dc.relation.ispartof | 2000-2019 | |
dc.rights | Copyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright. | |
dc.title | Supervised and unsupervised training of deep autoencoder | |
dc.type | Text | |
dcterms.rights.dpla | This Item is protected by copyright and/or related rights (https://rightsstatements.org/vocab/InC/1.0/). You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s). | |
thesis.degree.discipline | Computer Science | |
thesis.degree.grantor | Colorado State University | |
thesis.degree.level | Masters | |
thesis.degree.name | Master of Science (M.S.) |
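The greedy layer-wise pre-training described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the thesis code: the function names (`train_autoencoder_layer`, `pretrain_stack`), the tied-weight tanh/linear architecture, and the learning-rate and epoch settings are all assumptions chosen for brevity.

```python
import numpy as np

def train_autoencoder_layer(X, n_hidden, lr=0.1, epochs=300, seed=0):
    # One tied-weight autoencoder layer: tanh encoder, linear decoder,
    # trained by full-batch gradient descent on squared reconstruction error.
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_in, n_hidden))
    b = np.zeros(n_hidden)   # encoder bias
    c = np.zeros(n_in)       # decoder bias
    n = len(X)
    for _ in range(epochs):
        H = np.tanh(X @ W + b)       # encode
        R = H @ W.T + c              # decode with tied weights
        err = R - X                  # reconstruction error
        dpre = (err @ W) * (1.0 - H ** 2)       # backprop into encoder
        W -= lr * (X.T @ dpre + err.T @ H) / n  # shared-weight gradient
        b -= lr * dpre.sum(axis=0) / n
        c -= lr * err.sum(axis=0) / n
    # Return the hidden representation (input to the next layer) and weights.
    return np.tanh(X @ W + b), (W, b)

def pretrain_stack(X, layer_sizes):
    # Greedy layer-wise pre-training: each layer is trained on the hidden
    # output of the previous one, and the collected weights would then
    # initialize the stacked autoencoder before fine-tuning.
    weights, H = [], X
    for n_hidden in layer_sizes:
        H, params = train_autoencoder_layer(H, n_hidden)
        weights.append(params)
    return weights, H
```

After pre-training, the returned weight pairs initialize the encoder layers of the full deep network, which is then fine-tuned end to end as the abstract describes.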
Files
Original bundle
1 - 1 of 1
- Name: Ghosh_colostate_0053N_14496.pdf
- Size: 5.32 MB
- Format: Adobe Portable Document Format