## Abstract

Consider two parties, Alice and Bob, who hold private inputs x and y, and wish to compute a function f(x, y) privately in the information-theoretic sense; that is, each party should learn nothing beyond f(x, y). However, the communication channel available to them is noisy, meaning that it can introduce errors into the transmission between the two parties. Moreover, the channel is adversarial: it knows the protocol that Alice and Bob are running and maliciously introduces errors to disrupt the communication, subject to some bound on the total number of errors. A fundamental question in this setting is to design a protocol that remains private in the presence of a large number of errors. If Alice and Bob are only interested in computing f(x, y) correctly, and not privately, then quite robust protocols are known that can tolerate a constant fraction of errors. However, none of these solutions is applicable in the setting of privacy, as they inherently leak information about the parties' inputs. This leads to the question of whether we can simultaneously achieve privacy and error-resilience against a constant fraction of errors. We show that privacy and error-resilience are contradictory goals. In particular, we show that for every constant c > 0, there exists a function f which is privately computable in the error-less setting, but for which no private and correct protocol is resilient against a c-fraction of errors.
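The main impossibility result can be phrased informally as follows. This is a paraphrase of the abstract above, not the paper's exact theorem statement:

```latex
% Informal restatement of the main result (paraphrased from the abstract).
\begin{theorem}[Informal]
For every constant $c > 0$, there exists a function $f(x, y)$ that admits a
perfectly private two-party protocol over a noiseless channel, yet no
protocol that computes $f$ both correctly and privately is resilient to
adversarial corruption of a $c$-fraction of the transmitted symbols.
\end{theorem}
```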

Original language | English |
---|---|
Article number | 7279150 |
Pages (from-to) | 6860-6875 |
Number of pages | 16 |
Journal | IEEE Transactions on Information Theory |
Volume | 61 |
Issue number | 12 |
DOIs | |
State | Published - 1 Dec 2015 |
Externally published | Yes |

### Bibliographical note

Publisher Copyright: © 1963-2012 IEEE.

## Keywords

- Coding for interactive communication
- information-theoretic security
- privacy
- adversarial noise