Stacked denoising autoencoders (DAEs) are well known to learn useful deep representations, which can be used to improve supervised training by initializing a deep network. We investigate a training scheme for a deep DAE, where DAE layers are gradually added and keep adapting as additional layers are added. We show that in the regime of mid-sized datasets, this gradual training provides a small but consistent improvement over stacked training in both reconstruction quality and classification error on the MNIST and CIFAR datasets.
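The scheme lends itself to a short illustration. Below is a minimal sketch, assuming a fully connected PyTorch implementation with Gaussian input corruption; the layer sizes, noise level, optimizer settings, and the `DAEStack`/`gradual_train` names are illustrative assumptions, not the paper's configuration. The key contrast with greedy stacked training is that after each new layer is added, all existing layers continue to adapt rather than being frozen.

```python
import torch
import torch.nn as nn

def add_noise(x, std=0.3):
    # Corrupt inputs with Gaussian noise for the denoising objective
    # (noise level is an illustrative assumption).
    return x + std * torch.randn_like(x)

class DAEStack(nn.Module):
    """A stack of encoder/decoder pairs that can grow one layer at a time."""
    def __init__(self, in_dim):
        super().__init__()
        self.encoders = nn.ModuleList()
        self.decoders = nn.ModuleList()
        self.dims = [in_dim]

    def add_layer(self, hidden_dim):
        # Append a new encoder on top and its mirror decoder at the
        # front of the decoding path.
        prev = self.dims[-1]
        self.encoders.append(nn.Sequential(nn.Linear(prev, hidden_dim), nn.Sigmoid()))
        self.decoders.insert(0, nn.Sequential(nn.Linear(hidden_dim, prev), nn.Sigmoid()))
        self.dims.append(hidden_dim)

    def forward(self, x):
        h = x
        for enc in self.encoders:
            h = enc(h)
        for dec in self.decoders:
            h = dec(h)
        return h

def gradual_train(model, data_loader, layer_dims, epochs_per_stage=5):
    loss_fn = nn.MSELoss()
    for dim in layer_dims:
        model.add_layer(dim)
        # Unlike greedy stacked training, ALL parameters (old and new
        # layers) keep adapting after each layer is added; the optimizer
        # is rebuilt so it covers the newly added parameters.
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs_per_stage):
            for x, _ in data_loader:  # assumes (input, label) batches, e.g. torchvision MNIST
                x = x.view(x.size(0), -1)
                recon = model(add_noise(x))
                loss = loss_fn(recon, x)
                opt.zero_grad()
                loss.backward()
                opt.step()

# Hypothetical usage for 28x28 MNIST images:
#   model = DAEStack(in_dim=784)
#   gradual_train(model, train_loader, layer_dims=[512, 256, 128])
```

In the stacked baseline, each stage would instead freeze the previously trained layers and optimize only the newest encoder/decoder pair; here a single optimizer over `model.parameters()` keeps the whole stack adapting at every stage.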
State: Published - 2015
Event: 3rd International Conference on Learning Representations, ICLR 2015 - San Diego, United States
Duration: 7 May 2015 → 9 May 2015
Bibliographical note: Publisher Copyright © 2015 International Conference on Learning Representations, ICLR. All rights reserved.