## Abstract

An approximate sparse recovery system in the ℓ_{1} norm consists of parameters k, ε, N; an m × N measurement matrix Φ; and a recovery algorithm R. Given a vector x, the system approximates x by x̂ = R(Φx), which must satisfy ||x̂-x||_{1} ≤ (1+ε)||x-x_{k}||_{1}. We consider the "for all" model, in which a single matrix Φ, possibly "constructed" non-explicitly using the probabilistic method, is used for all signals x. The best existing sublinear algorithm by Porat and Strauss [2012] uses O(ε^{-3}k log(N/k)) measurements and runs in time O(k^{1-α}N^{α}) for any constant α > 0. In this article, we improve the number of measurements to O(ε^{-2}k log(N/k)), matching the best existing upper bound (attained by super-linear algorithms), and the runtime to O(k^{1+β} poly(log N, 1/ε)), under the modest restrictions that k ≤ N^{1-α} and ε ≤ (log k/log N)^{γ} for any constants α, β, γ > 0. When k ≤ log^{c} N for some c > 0, the runtime is reduced to O(k poly(log N, 1/ε)). With no restriction on ε, we obtain an approximate recovery system with m = O((k/ε) log(N/k)((log N/log k)^{γ} + 1/ε)) measurements.

The overall architecture of this algorithm is similar to that of Porat and Strauss [2012]: we repeatedly apply a weak recovery system (with varying parameters) to obtain a top-level recovery algorithm. The weak recovery system consists of a two-layer hashing procedure (or of two unbalanced expanders, for a deterministic algorithm). The algorithmic innovation is a novel encoding procedure, reminiscent of network coding, that reflects the structure of the hashing stages: the signal position index i is associated with a unique message m_{i}, which is encoded into a longer message m_{i}′ (in contrast to Porat and Strauss [2012], where the encoding is simply the identity). Portions of the message m_{i}′ correspond to repetitions of the hashing, and we use a regular expander graph to encode the linkages among these portions.
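The ℓ_{1}/ℓ_{1} guarantee above can be made concrete with a small sketch. The helper names (`best_k_term`, `satisfies_guarantee`) are illustrative, not from the paper; the code only checks whether a candidate approximation x̂ meets the stated error bound relative to the best k-term approximation x_{k}.

```python
# Illustrative check of the l1/l1 recovery guarantee
# ||x_hat - x||_1 <= (1 + eps) * ||x - x_k||_1,
# where x_k keeps the k largest-magnitude entries of x and zeroes the rest.
# (Helper names are hypothetical, not from the paper.)

def best_k_term(x, k):
    """Return x_k: x with all but the k largest-magnitude entries zeroed."""
    keep = set(sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:k])
    return [v if i in keep else 0.0 for i, v in enumerate(x)]

def l1(x):
    return sum(abs(v) for v in x)

def satisfies_guarantee(x, x_hat, k, eps):
    """True if x_hat meets the approximate sparse-recovery guarantee."""
    tail = l1([a - b for a, b in zip(x, best_k_term(x, k))])
    err = l1([a - b for a, b in zip(x, x_hat)])
    return err <= (1 + eps) * tail

# Toy example: a signal with 2 dominant entries plus small noise.
x = [10.0, -7.0, 0.3, -0.2, 0.1, 0.0]
x_hat = best_k_term(x, 2)  # an ideal recovery: x_hat = x_k exactly
print(satisfies_guarantee(x, x_hat, k=2, eps=0.1))  # True
```

Note that when x̂ = x_{k}, the error equals the tail ||x - x_{k}||_{1}, so the bound holds for any ε ≥ 0.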
The decoding, or recovery, algorithm recovers the portions of the longer messages m_{i}′ and then decodes them to the original messages m_{i}, all the while ensuring that corruptions can be detected and/or corrected. The recovery algorithm is similar to the list recovery introduced in Indyk et al. [2010] and used in Gilbert et al. [2013]. In our algorithm, the messages {m_{i}} are independent of the hashing, which enables us to obtain a better result.
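To give a feel for the kind of bucketed, repeated hashing the abstract refers to, here is a generic one-layer Count-Sketch-style toy (hash each index to a bucket with a random sign, repeat, estimate an entry by a median over repetitions). This is *not* the paper's two-layer construction with expander-based message encoding; all function names are hypothetical.

```python
import random

# Generic one-layer hashing sketch with median recovery (Count-Sketch style).
# Shown only to illustrate bucketed measurements with repetitions; the paper's
# algorithm uses two hashing layers plus an expander-encoded message per index.

def make_sketch(x, num_buckets, reps, seed=0):
    rng = random.Random(seed)
    tables = []
    for _ in range(reps):
        h = [rng.randrange(num_buckets) for _ in range(len(x))]  # bucket per index
        s = [rng.choice((-1, 1)) for _ in range(len(x))]         # random sign per index
        row = [0.0] * num_buckets
        for i, v in enumerate(x):
            row[h[i]] += s[i] * v                                # linear measurement
        tables.append((h, s, row))
    return tables

def estimate(tables, i):
    """Median-of-repetitions estimate of x[i] from its bucket contents."""
    vals = sorted(s[i] * row[h[i]] for h, s, row in tables)
    return vals[len(vals) // 2]

# Toy example: two dominant entries in a length-64 signal.
x = [0.0] * 64
x[3], x[17] = 5.0, -2.0
sk = make_sketch(x, num_buckets=16, reps=5)
est = estimate(sk, 3)  # close to 5.0 with high probability (collisions are rare)
```

The median over independent repetitions is what makes an occasional hash collision harmless; the paper's weak recovery system plays an analogous role at each stage of the top-level algorithm.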

Original language | English
---|---
Article number | 32
Journal | ACM Transactions on Algorithms
Volume | 13
Issue number | 3
DOIs |
State | Published - Mar 2017

### Bibliographical note

Publisher Copyright: © 2017 ACM.

## Keywords

- Compressive sensing
- List decoding
- Sparse recovery