RandNet: deep learning with compressed measurements of images


Thomas Chang, Bahareh Tolooshams, and Demba Ba. 10/13/2019. “RandNet: deep learning with compressed measurements of images.” In IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP 2019). Pittsburgh, PA. arXiv Version


Principal component analysis, dictionary learning, and auto-encoders are all unsupervised methods for learning representations from large amounts of training data. In all of these methods, the higher the dimension of the input data, the longer training takes. We introduce a class of neural networks, termed RandNet, for learning representations from compressed random measurements of data of interest, such as images. RandNet extends the convolutional recurrent sparse auto-encoder architecture to dense networks and, more importantly, to the case when the input data are compressed random measurements of the original data. Compressing the input data makes it possible to fit a larger number of batches in memory during training. Moreover, in the case of sparse measurements, training is more computationally efficient. We demonstrate that, in unsupervised settings, RandNet performs dictionary learning using compressed data. In supervised settings, we show that RandNet can classify MNIST images with minimal loss in accuracy, despite being trained with random projections of the images that result in a 50% reduction in size. Overall, our results provide a general, principled framework for training neural networks using compressed data.
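To make the setup concrete, a minimal sketch of the kind of compressed measurement described above: a flattened image is multiplied by a random Gaussian sensing matrix with half as many rows as pixels, yielding the 50% size reduction the abstract mentions. The 1/√m scaling of the matrix entries is a common compressed-sensing normalization and an assumption here; the paper's exact construction may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Flattened 28x28 MNIST-like image (n = 784 pixels).
n = 28 * 28
x = rng.random(n)

# Random Gaussian sensing matrix with m = n/2 rows, giving a 50%
# reduction in size. The 1/sqrt(m) scaling is an assumed
# normalization, not necessarily the paper's.
m = n // 2
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

# Compressed measurement: the network is trained on y instead of x.
y = A @ x
print(y.shape)
```

In this scheme the network never sees the original pixels; each training example is the 392-dimensional vector `y`, which is what allows more batches to fit in memory.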


The first two authors contributed equally to this work.
Last updated on 08/29/2019