Learning discrete weights using the local reparameterization trick

Oran Shayer, Dan Levi, Ethan Fetaya

Research output: Contribution to conference › Paper › peer-review


Abstract

Recent breakthroughs in computer vision make use of large deep neural networks, utilizing the substantial speedup offered by GPUs. For applications running on limited hardware, however, high-precision real-time processing can still be a challenge. One approach to solving this problem is training networks with binary or ternary weights, thus removing the need to compute multiplications and significantly reducing memory size. In this work, we introduce LR-nets (Local reparameterization networks), a new method for training neural networks with discrete weights using stochastic parameters. We show how a simple modification to the local reparameterization trick, previously used to train Gaussian-distributed weights, enables the training of discrete weights. Using the proposed training scheme, we test both binary and ternary models on the MNIST, CIFAR-10, and ImageNet benchmarks and reach state-of-the-art results on most experiments.
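To make the core idea concrete: each discrete weight is given a distribution (e.g., P(w = +1) = sigmoid(theta) for binary weights), and since a pre-activation is a sum of many roughly independent terms, the central limit theorem makes it approximately Gaussian. The local reparameterization trick then samples this Gaussian pre-activation directly from its mean and variance, so gradients flow to the distribution parameters theta. Below is a minimal PyTorch sketch of this scheme for a binary-weight linear layer; the class and parameter names (LRBinaryLinear, theta) are illustrative and not taken from the paper or its released code.

```python
import torch
import torch.nn as nn


class LRBinaryLinear(nn.Module):
    """Sketch of a linear layer with stochastic binary weights in {-1, +1},
    trained via the local reparameterization trick (names are hypothetical)."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # theta parameterizes P(w = +1) = sigmoid(theta), one per weight.
        self.theta = nn.Parameter(torch.zeros(out_features, in_features))

    def forward(self, x, eps=1e-8):
        p = torch.sigmoid(self.theta)      # P(w = +1)
        w_mean = 2.0 * p - 1.0             # E[w] in [-1, 1]
        w_var = 1.0 - w_mean ** 2          # Var[w] = 1 - E[w]^2 since w^2 = 1
        # Local reparameterization: sample the pre-activation, not the weights.
        # By the CLT the pre-activation is approximately Gaussian.
        mu = x @ w_mean.t()                # mean of the pre-activation
        sigma2 = (x ** 2) @ w_var.t()      # variance of the pre-activation
        return mu + torch.sqrt(sigma2 + eps) * torch.randn_like(mu)


# Illustrative usage: a forward pass draws one Gaussian sample per pre-activation.
layer = LRBinaryLinear(784, 256)
z = layer(torch.randn(32, 784))            # shape: (32, 256)
```

At test time the stochastic weights would be fixed to discrete values (e.g., their most probable sign), so inference needs no multiplications; the ternary case adds a probability for w = 0 per weight but follows the same mean/variance computation.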

Original language: English
State: Published - 2018
Externally published: Yes
Event: 6th International Conference on Learning Representations, ICLR 2018 - Vancouver, Canada
Duration: 30 Apr 2018 - 3 May 2018

Conference

Conference: 6th International Conference on Learning Representations, ICLR 2018
Country/Territory: Canada
City: Vancouver
Period: 30/04/18 - 03/05/18

Bibliographical note

Publisher Copyright:
© 6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings. All rights reserved.
