TY - GEN
T1 - Sublinear time, measurement-optimal, sparse recovery for all
AU - Porat, Ely
AU - Strauss, Martin J.
PY - 2012
Y1 - 2012
N2 - An approximate sparse recovery system in the ℓ₁ norm makes a small number of measurements of a noisy vector with at most k large entries and recovers those heavy hitters approximately. Formally, it consists of parameters N, k, ε, an m-by-N measurement matrix Φ, and a decoding algorithm D. Given a vector x, where xₖ denotes the optimal k-term approximation to x, the system approximates x by x̂ = D(Φx), which must satisfy ∥x̂ - x∥₁ ≤ (1 + ε) ∥x - xₖ∥₁. Among the goals in designing such systems are minimizing the number m of measurements and the runtime of the decoding algorithm D. We consider the "forall" model, in which a single matrix Φ, possibly "constructed" non-explicitly using the probabilistic method, is used for all signals x. Many previous papers have provided algorithms for this problem, but all such algorithms that use the optimal number m = O(k log(N/k)) of measurements require superlinear time Ω(N log(N/k)). In this paper, we give the first algorithm for this problem that uses the optimal number of measurements (up to constant factors) and runs in sublinear time o(N) when k is sufficiently smaller than N. Specifically, for any positive integer ℓ, our approach uses time O(ℓ⁵ ε⁻³ k (N/k)^(1/ℓ)) and m = O(ℓ⁸ ε⁻³ k log(N/k)) measurements, with access to a data structure requiring space and preprocessing time O(ℓ N k^0.2 / ε).
AB - An approximate sparse recovery system in the ℓ₁ norm makes a small number of measurements of a noisy vector with at most k large entries and recovers those heavy hitters approximately. Formally, it consists of parameters N, k, ε, an m-by-N measurement matrix Φ, and a decoding algorithm D. Given a vector x, where xₖ denotes the optimal k-term approximation to x, the system approximates x by x̂ = D(Φx), which must satisfy ∥x̂ - x∥₁ ≤ (1 + ε) ∥x - xₖ∥₁. Among the goals in designing such systems are minimizing the number m of measurements and the runtime of the decoding algorithm D. We consider the "forall" model, in which a single matrix Φ, possibly "constructed" non-explicitly using the probabilistic method, is used for all signals x. Many previous papers have provided algorithms for this problem, but all such algorithms that use the optimal number m = O(k log(N/k)) of measurements require superlinear time Ω(N log(N/k)). In this paper, we give the first algorithm for this problem that uses the optimal number of measurements (up to constant factors) and runs in sublinear time o(N) when k is sufficiently smaller than N. Specifically, for any positive integer ℓ, our approach uses time O(ℓ⁵ ε⁻³ k (N/k)^(1/ℓ)) and m = O(ℓ⁸ ε⁻³ k log(N/k)) measurements, with access to a data structure requiring space and preprocessing time O(ℓ N k^0.2 / ε).
UR - http://www.scopus.com/inward/record.url?scp=84860176453&partnerID=8YFLogxK
U2 - 10.1137/1.9781611973099.96
DO - 10.1137/1.9781611973099.96
M3 - Conference contribution
AN - SCOPUS:84860176453
SN - 9781611972108
T3 - Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms
SP - 1215
EP - 1227
BT - Proceedings of the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012
PB - Society for Industrial and Applied Mathematics (SIAM)
T2 - 23rd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012
Y2 - 17 January 2012 through 19 January 2012
ER -
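
Note (added after the record for illustration; not part of the cited paper or the bibliographic entry): the ℓ₁/ℓ₁ guarantee stated in the abstract, ∥x̂ - x∥₁ ≤ (1 + ε) ∥x - xₖ∥₁, can be spelled out with a minimal Python sketch. The sketch only computes the optimal k-term approximation xₖ and checks the bound for a candidate estimate x̂; it is not the authors' sublinear-time decoder, and the function names and toy data are hypothetical.

# Minimal sketch (not the authors' algorithm): checks the l1/l1 sparse-recovery
# guarantee from the abstract for a candidate approximation x_hat, i.e.
#     ||x_hat - x||_1 <= (1 + eps) * ||x - x_k||_1,
# where x_k keeps only the k largest-magnitude entries of x.
# Function names (best_k_term, meets_l1_guarantee) and the toy data are illustrative.
import numpy as np

def best_k_term(x: np.ndarray, k: int) -> np.ndarray:
    """Optimal k-term approximation: zero out all but the k largest-magnitude entries."""
    xk = np.zeros_like(x)
    if k > 0:
        idx = np.argsort(np.abs(x))[-k:]      # indices of the k heaviest entries
        xk[idx] = x[idx]
    return xk

def meets_l1_guarantee(x: np.ndarray, x_hat: np.ndarray, k: int, eps: float) -> bool:
    """True iff x_hat satisfies the (1 + eps)-approximate l1/l1 recovery bound."""
    tail = np.linalg.norm(x - best_k_term(x, k), ord=1)   # ||x - x_k||_1
    err = np.linalg.norm(x_hat - x, ord=1)                # ||x_hat - x||_1
    return err <= (1.0 + eps) * tail

if __name__ == "__main__":
    # Toy usage: a noisy vector with k = 2 heavy hitters and a crude candidate estimate.
    rng = np.random.default_rng(0)
    x = np.concatenate([[10.0, -8.0], 0.01 * rng.standard_normal(98)])
    x_hat = best_k_term(x, 2)                 # any decoder's output could be tested here
    print(meets_l1_guarantee(x, x_hat, k=2, eps=0.1))     # True: only the tail is missed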