TY - GEN
T1 - Approximate sparse recovery
T2 - 42nd ACM Symposium on Theory of Computing, STOC 2010
AU - Gilbert, Anna C.
AU - Li, Yi
AU - Porat, Ely
AU - Strauss, Martin J.
PY - 2010
Y1 - 2010
N2 - A Euclidean approximate sparse recovery system consists of parameters k, N, an m-by-N measurement matrix Φ, and a decoding algorithm D. Given a vector x, the system approximates x by x̂ = D(Φx), which must satisfy ∥x̂ − x∥₂ ≤ C∥x − x_k∥₂, where x_k denotes the optimal k-term approximation to x. (The output x̂ may have more than k terms.) For each vector x, the system must succeed with probability at least 3/4. Among the goals in designing such systems are minimizing the number m of measurements and the runtime of the decoding algorithm D. In this paper, we give a system with m = O(k log(N/k)) measurements, matching a lower bound up to a constant factor, and decoding time k log^{O(1)}(N), matching a lower bound up to log(N) factors. We also consider the encode time (i.e., the time to multiply Φ by x), the time to update measurements (i.e., the time to multiply Φ by a 1-sparse x), and the robustness and stability of the algorithm (adding noise before and after the measurements). Our encode and update times are optimal up to log(k) factors. The columns of Φ have at most O(log²(k) log(N/k)) non-zeros, each of which can be found in constant time. Our full result, an FPRAS, is as follows. If x = x_k + ν₁, where ν₁ and ν₂ (below) are arbitrary vectors (regarded as noise), then, setting x̂ = D(Φx + ν₂), and for properly normalized ν, we get ∥x̂ − x∥₂² ≤ (1 + ε)∥ν₁∥₂² + ε∥ν₂∥₂², using O((k/ε) log(N/k)) measurements and (k/ε) log^{O(1)}(N) time for decoding.
KW - approximation
KW - embedding
KW - sketching
KW - sparse approximation
KW - sublinear algorithms
UR - http://www.scopus.com/inward/record.url?scp=77954693745&partnerID=8YFLogxK
U2 - 10.1145/1806689.1806755
DO - 10.1145/1806689.1806755
M3 - Conference contribution
AN - SCOPUS:77954693745
SN - 9781605588179
T3 - Proceedings of the Annual ACM Symposium on Theory of Computing
SP - 475
EP - 484
BT - STOC'10 - Proceedings of the 2010 ACM International Symposium on Theory of Computing
Y2 - 5 June 2010 through 8 June 2010
ER -