12/07/2020

On the Power of Compressed Sensing with Generative Models

Akshay Kamath, Eric Price, Sushrut Karmalkar

Keywords: Optimization - General

Abstract: The goal of compressed sensing is to learn a structured signal $x$ from a limited number of noisy linear measurements $y \approx Ax$. In traditional compressed sensing, ``structure'' is represented by sparsity in some known basis. Inspired by the success of deep learning in modeling images, recent work starting with Bora et al. has instead considered structure to come from a generative model $G: \R^k \to \R^n$. In this paper, we prove results that (i) establish the difficulty of this task and show that existing bounds are tight, and (ii) demonstrate that the latter task is a generalization of the former. First, we provide a lower bound matching the upper bound of Bora et al. for compressed sensing from $L$-Lipschitz generative models $G$. In particular, there exists such a function that requires $\Omega(k \log L)$ linear measurements for recovery to be possible. This holds even for the more relaxed goal of \emph{nonuniform} recovery. Second, we show that generative models generalize sparsity as a representation of structure. In particular, we construct a ReLU-based neural network $G: \R^{k} \to \R^n$ with $O(1)$ layers and $O(n)$ activations per layer, such that the range of $G$ contains all $k$-sparse vectors.
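
To make the setup concrete, below is a minimal numpy sketch of the framework the abstract describes: recovering a signal $x = G(z^*)$ from noisy linear measurements $y \approx Ax$ by minimizing $\|A G(z) - y\|^2$ over the latent code $z$, in the spirit of Bora et al. The one-layer random ReLU generator, the dimensions, and the step size are illustrative assumptions, not the construction or parameters from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, k, m = 100, 5, 40   # signal dimension, latent dimension, number of measurements (illustrative)

# Toy generator G: R^k -> R^n, a single fixed random ReLU layer (a stand-in, not the paper's network).
W = rng.normal(size=(n, k)) / np.sqrt(k)

def G(z):
    return np.maximum(W @ z, 0.0)

def G_jacobian(z):
    # Subgradient of the ReLU: keep the rows of W whose pre-activation is positive.
    return (W @ z > 0).astype(float)[:, None] * W

# Ground-truth signal in the range of G, Gaussian measurement matrix, small noise.
z_star = rng.normal(size=k)
x_star = G(z_star)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_star + 0.01 * rng.normal(size=m)

# Gradient descent on f(z) = ||A G(z) - y||^2 over the latent code z.
z = rng.normal(size=k)
step = 1e-3
for _ in range(5000):
    residual = A @ G(z) - y
    grad = 2 * G_jacobian(z).T @ (A.T @ residual)
    z -= step * grad

print("relative error ||G(z) - x*|| / ||x*||:",
      np.linalg.norm(G(z) - x_star) / np.linalg.norm(x_star))

The sketch only illustrates the recovery objective; the paper's contribution is the lower bound showing that, for $L$-Lipschitz generators, on the order of $k \log L$ such measurements are necessary, together with the ReLU construction whose range contains all $k$-sparse vectors.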
