TY - GEN
T1 - Security Against Covert Adversaries: Efficient Protocols for Realistic Adversaries
AU - Aumann, Y.
AU - Lindell, Y.
N1 - Place of conference: Amsterdam, The Netherlands
PY - 2007
Y1 - 2007
AB - In the setting of secure multiparty computation, a set of mutually distrustful parties wish to securely compute some joint function of their private inputs. The computation should be carried out in a secure way, meaning that no coalition of corrupted parties should be able to learn more than specified or somehow cause the result to be “incorrect”. Typically, corrupted parties are either assumed to be semi-honest (meaning that they follow the protocol specification) or malicious (meaning that they may deviate arbitrarily from the protocol). However, in many settings, the assumption regarding semi-honest behavior does not suffice and security in the presence of malicious adversaries is excessive and expensive to achieve.
In this paper, we introduce the notion of covert adversaries, which we believe faithfully models the adversarial behavior in many commercial, political, and social settings. Covert adversaries have the property that they may deviate arbitrarily from the protocol specification in an attempt to cheat, but do not wish to be “caught” doing so. We provide a definition of security for covert adversaries and show that it is possible to obtain highly efficient protocols that are secure against such adversaries. We stress that in our definition, we quantify over all (possibly malicious) adversaries and do not assume that the adversary behaves in any particular way. Rather, we guarantee that if an adversary deviates from the protocol in a way that would enable it to “cheat” (meaning that it can achieve something that is impossible in an ideal model where a trusted party is used to compute the function), then the honest parties are guaranteed to detect this cheating with good probability. We argue that this level of security is sufficient in many settings.
UR - http://u.cs.biu.ac.il/~lindell/PAPERS/covert.pdf
UR - http://link.springer.com/chapter/10.1007/978-3-540-70936-7_8
M3 - Conference contribution
BT - 4th Theory of Cryptography Conference (TCC 2007)
ER -