Abstract
An approximate sparse recovery system in the ℓ₁ norm consists of parameters k, ε, N; an m × N measurement matrix Φ; and a recovery algorithm R. Given a vector x, the system approximates x by x̂ = R(Φx), which must satisfy ||x̂ − x||_1 ≤ (1 + ε)||x − x_k||_1, where x_k denotes the best k-term approximation of x. We consider the "for all" model, in which a single matrix Φ, possibly "constructed" non-explicitly using the probabilistic method, is used for all signals x. The best existing sublinear algorithm, by Porat and Strauss [2012], uses O(ε^{-3} k log(N/k)) measurements and runs in time O(k^{1-α} N^{α}) for any constant α > 0. In this article, we improve the number of measurements to O(ε^{-2} k log(N/k)), matching the best existing upper bound (attained by super-linear algorithms), and the running time to O(k^{1+β} poly(log N, 1/ε)), with a modest restriction that k ≤ N^{1-α} and ε ≤ (log k / log N)^{γ} for any constants α, β, γ > 0. When k ≤ log^{c} N for some c > 0, the running time is reduced to O(k poly(log N, 1/ε)). With no restriction on ε, we have an approximate recovery system with m = O((k/ε) log(N/k) ((log N / log k)^{γ} + 1/ε)) measurements. The overall architecture of this algorithm is similar to that of Porat and Strauss [2012] in that we repeatedly use a weak recovery system (with varying parameters) to obtain a top-level recovery algorithm. The weak recovery system consists of a two-layer hashing procedure (or two unbalanced expanders for a deterministic algorithm). The algorithmic innovation is a novel encoding procedure that is reminiscent of network coding and reflects the structure of the hashing stages. The idea is to encode the signal position index i by associating it with a unique message m_i, which is then encoded to a longer message m_i′ (in contrast to Porat and Strauss [2012], where the encoding is simply the identity). Portions of the message m_i′ correspond to repetitions of the hashing, and we use a regular expander graph to encode the linkages among these portions.
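The ℓ₁/ℓ₁ guarantee above is easy to evaluate numerically. The following sketch (using NumPy, with a hypothetical candidate recovery x̂ supplied by the caller) computes the best k-term approximation error and checks whether a recovery meets the (1 + ε) bound:

```python
import numpy as np

def best_k_term_error(x, k):
    # l1 error of the best k-term approximation: keep the k
    # largest-magnitude entries of x and measure what remains.
    idx = np.argsort(np.abs(x))[::-1][:k]
    xk = np.zeros_like(x)
    xk[idx] = x[idx]
    return np.sum(np.abs(x - xk))

def satisfies_l1_guarantee(x, x_hat, k, eps):
    # Check ||x_hat - x||_1 <= (1 + eps) * ||x - x_k||_1.
    return np.sum(np.abs(x_hat - x)) <= (1 + eps) * best_k_term_error(x, k)

# Toy example: a nearly 2-sparse signal and a candidate recovery
# that nails the two dominant entries.
x = np.array([5.0, -3.0, 0.01, 0.02, -0.01])
x_hat = np.array([5.0, -3.0, 0.0, 0.0, 0.0])
print(satisfies_l1_guarantee(x, x_hat, k=2, eps=0.1))
```

Here the residual ||x̂ − x||_1 equals the best 2-term error itself, so the guarantee holds for any ε ≥ 0.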
The decoding, or recovery, algorithm consists of recovering the portions of the longer messages m_i′ and then decoding them to the original messages m_i, all the while ensuring that corruptions can be detected and/or corrected. The recovery algorithm is similar to the list recovery introduced in Indyk et al. [2010] and used in Gilbert et al. [2013]. In our algorithm, the messages {m_i} are independent of the hashing, which enables us to obtain a better result.
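The encoding idea can be illustrated with a deliberately simplified sketch: an index i is mapped to its bit-string message m_i, which is split into portions, one per hash repetition, and redundancy is added to tie neighboring portions together. The parity links below are a toy stand-in for the regular-expander linkage in the actual construction, and the parameters (num_bits, reps) are illustrative, not the paper's:

```python
def encode_index(i, num_bits=12, reps=4):
    # m_i: the index written as a fixed-length bit-string.
    m = [(i >> b) & 1 for b in range(num_bits)]
    chunk = num_bits // reps
    # Split m_i into one portion per hashing repetition.
    portions = [m[r * chunk:(r + 1) * chunk] for r in range(reps)]
    # Append to each portion a parity bit over itself and the next
    # portion (cyclically), mimicking how expander edges link the
    # repetitions so corrupted portions can be detected.
    linked = []
    for r, p in enumerate(portions):
        nxt = portions[(r + 1) % reps]
        parity = sum(p + nxt) % 2
        linked.append(p + [parity])  # one portion of the longer m_i'
    return linked

portions = encode_index(1337)
print(len(portions), len(portions[0]))
```

A decoder that recovers most portions from the hash buckets can then use the cross-portion redundancy to reject inconsistent candidates before mapping back to m_i, which is the role the expander linkage plays in the recovery algorithm above.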
| Original language | English |
|---|---|
| Article number | 32 |
| Journal | ACM Transactions on Algorithms |
| Volume | 13 |
| Issue number | 3 |
| DOIs | |
| State | Published - Mar 2017 |
Bibliographical note
Publisher Copyright: © 2017 ACM.
Funding
A. C. Gilbert was supported in part by DARPA/ONR N66001-08-1-2065. Y. Li was supported by NSF CCF 0743372 while he was at the University of Michigan. M. J. Strauss was supported in part by NSF CCF 0743372 and DARPA/ONR N66001-08-1-2065. We thank the anonymous reviewer for the valuable comments and suggestions that greatly contributed to improving this article.
| Funders | Funder number |
|---|---|
| DARPA/ONR | N66001-08-1-2065 |
| NSF CCF | |
| Directorate for Computer and Information Science and Engineering | 0743372 |
| University of Michigan | |
Keywords
- Compressive sensing
- List decoding
- Sparse recovery