TY - GEN
T1 - Approximate sparse recovery: optimizing time and measurements
AU - Gilbert, A. C.
AU - Li, Y.
AU - Porat, E.
AU - Strauss, M. J.
N1 - Place of conference: USA
PY - 2010
Y1 - 2010
AB - A Euclidean approximate sparse recovery system consists of parameters $k, N$, an $m \times N$ measurement matrix $\bm{\Phi}$, and a decoding algorithm $\mathcal{D}$. Given a vector ${\mathbf x}$, the system approximates ${\mathbf x}$ by $\widehat{\mathbf x} = \mathcal{D}(\bm{\Phi} {\mathbf x})$, which must satisfy $\|\widehat{\mathbf x} - {\mathbf x}\|_2 \le C \|{\mathbf x} - {\mathbf x}_k\|_2$, where ${\mathbf x}_k$ denotes the optimal $k$-term approximation to ${\mathbf x}$. (The output $\widehat{\mathbf x}$ may have more than $k$ terms.) For each vector ${\mathbf x}$, the system must succeed with probability at least 3/4. Among the goals in designing such systems are minimizing the number $m$ of measurements and the runtime of the decoding algorithm $\mathcal{D}$. In this paper, we give a system with $m = O(k \log(N/k))$ measurements, matching a lower bound up to a constant factor, and decoding time $k \log^{O(1)} N$, matching a lower bound up to a $\operatorname{polylog}(N)$ factor. We also consider the encode time (i.e., the time to multiply $\bm{\Phi}$ by ${\mathbf x}$), the time to update measurements (i.e., the time to multiply $\bm{\Phi}$ by a 1-sparse ${\mathbf x}$), and the robustness and stability of the algorithm (resilience to noise before and after the measurements). Our encode and update times are optimal up to $\log(k)$ factors. The columns of $\bm{\Phi}$ have at most $O(\log^2(k) \log(N/k))$ nonzeros, each of which can be found in constant time. Our full result, a fully polynomial randomized approximation scheme, is as follows. If ${\mathbf x} = {\mathbf x}_k + \nu_1$, where $\nu_1$ and $\nu_2$ are arbitrary noise vectors, then setting $\widehat{\mathbf x} = \mathcal{D}(\bm{\Phi} {\mathbf x} + \nu_2)$, and for properly normalized $\bm{\Phi}$, we get $\|{\mathbf x} - \widehat{\mathbf x}\|_2^2 \le (1+\epsilon) \|\nu_1\|_2^2 + \epsilon \|\nu_2\|_2^2$ using $O((k/\epsilon) \log(N/k))$ measurements and $(k/\epsilon) \log^{O(1)}(N)$ decoding time.
UR - https://scholar.google.co.il/scholar?q=Anna+C.+Gilbert%2C+Yi+Li%2C+Ely+Porat%2C+Martin+J.+Strauss%3A+Approximate+sparse+recovery%3A+optimizing+time+and+measurements&btnG=&hl=en&as_sdt=0%2C5
M3 - Conference contribution
BT - STOC 2010
ER -