Malicious Security: A New Take on Dual Execution with Privacy-Correctness Tradeoffs. While Yao's garbled circuits are naturally secure against a malicious evaluator, they have the drawback of being insecure against a malicious garbler. A garbler can "mis-garble" the function, either replacing it with a different function entirely or causing an error to occur in an informative way (the latter is known as "selective failure").

Typically, malicious security is introduced to Yao's garbled circuits by using the cut-and-choose transformation [LP15, Lin13, HKE13]. To achieve a $2^{-\lambda}$ probability of cheating without detection, the parties need to exchange $\lambda$ garbled circuits [Lin13]. (There are techniques [LR14] that improve this number in the amortized case, when many computations are done; this does not fit our setting.) Some of the garbled circuits are "checked", and the rest are evaluated, their outputs checked against one another for consistency. Because of the factor-of-$\lambda$ computational overhead, though, cut-and-choose is expensive, and too heavy a tool for fPAKE. Other, more efficient transformations such as LEGO [NO09] and authenticated garbling [WRK17] exist as well, but they rely heavily on pre-processing, which cannot be used in fPAKE because it requires interaction between the parties in advance.

Mohassel et al. [MF06] and Huang et al. [HKE12] suggest an efficient transformation known as "dual execution": each party plays each role (garbler and evaluator) once, and then the two perform a comparison step on their outputs in a secure fashion. Dual execution incurs only a factor-of-2 overhead over semi-honest garbled circuits. However, it does not achieve fully malicious security. It guarantees correctness, but weakens the privacy guarantee by allowing a malicious garbler to learn one bit of information of her choice. Specifically, if a malicious garbler garbles a wrong circuit, she can use the comparison step to learn one bit about the output of this wrong circuit on the other party's input. This one extra bit of information could be crucially important, violating the privacy of the evaluator's input in a significant way.

We introduce a tradeoff between correctness and privacy for boolean functions. For one of the two possible outputs (without loss of generality, '0'), we restore full privacy at the cost of correctness. The new privacy guarantee is that if the correct output is '0', then a malicious adversary cannot learn anything beyond this output, but if the correct output is '1', then she can learn a single bit of her choice. The new correctness guarantee is that a malicious adversary can cause a computation that should output '1' to output '0' instead, but not the other way around. Our privacy–correctness tradeoff is summarized in Figure 3.

[Figure 3: privacy and correctness guarantees of classic dual execution ([MF06], [HKE12]) and of our variant, compared in terms of correct output, computed output, and privacy.]

The main idea of dual execution is to have the two parties independently evaluate one another's circuits, learn the output values, and compare the output labels using a secure comparison protocol. This comparison step is simply a check for malicious behavior; if the comparison fails, then honest party $P_i$ learns that $P_{1-i}$ cheated. If the comparison step succeeds, $P_{1-i}$ might still have cheated (and gleaned an extra bit of information), but $P_i$ is assured that she has the correct output.
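To make the check concrete, here is a minimal sketch of the classic dual-execution comparison. The function name and label encodings are illustrative, not from [MF06] or [HKE12], and the secure two-party equality test of those protocols is modeled as a plain local comparison.

```python
# Illustrative model: d0/d1 map each output bit to the corresponding
# output-wire label of P0's/P1's garbled circuit; Y1 is the label P0
# obtained by evaluating P1's circuit, Y0 the label P1 obtained by
# evaluating P0's circuit; v is the plaintext output both parties have
# already decoded before the check runs.

def classic_dual_execution_check(v: int, d0: dict, d1: dict,
                                 Y0: bytes, Y1: bytes) -> bool:
    # P0 contributes (d0[v], Y1) and P1 contributes (Y0, d1[v]) to the
    # secure equality test; equality means both circuits produced v with
    # consistent labels. A garbler who mis-garbled her circuit still
    # learns one bit (pass/fail of this test) about the wrong circuit's
    # output on the honest party's input.
    return (d0[v], Y1) == (Y0, d1[v])
```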
In our construction, however, the parties need not learn the output values before the comparison. Instead, the parties can compare output labels assuming an output of '1', and if the comparison fails, the output is determined to be '0'. More formally, let $d_0[0], d_0[1]$ be the two output labels corresponding to $P_0$'s garbled circuit, and $d_1[0], d_1[1]$ the two output labels corresponding to $P_1$'s circuit. Let $Y_1 \in \{d_1[0], d_1[1]\}$ be the output label learned by $P_0$ as a result of evaluation, and $Y_0 \in \{d_0[0], d_0[1]\}$ the label learned by $P_1$. The two parties securely compare $(d_0[1], Y_1)$ to $(Y_0, d_1[1])$; if the comparison succeeds, the output is '1'.
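Under the same illustrative model (a plain equality test standing in for the secure comparison protocol), the modified step looks as follows; note that neither party decodes an output value beforehand.

```python
def modified_comparison(d0: dict, d1: dict, Y0: bytes, Y1: bytes) -> int:
    p0_pair = (d0[1], Y1)   # P0's input: its own '1' label, evaluated label
    p1_pair = (Y0, d1[1])   # P1's input: evaluated label, its own '1' label
    # If the true output is 1, then Y1 = d1[1] and Y0 = d0[1], so the pairs
    # match; if it is 0 (or a party cheated), they do not, and the failed
    # comparison reveals nothing beyond the output '0'.
    return 1 if p0_pair == p1_pair else 0

# Toy labels for a quick check of both cases.
d0 = {0: b"a0", 1: b"a1"}
d1 = {0: b"b0", 1: b"b1"}
assert modified_comparison(d0, d1, Y0=d0[1], Y1=d1[1]) == 1  # true output 1
assert modified_comparison(d0, d1, Y0=d0[0], Y1=d1[0]) == 0  # true output 0
```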
Our privacy–correctness tradeoff is perfect for fPAKE. If the parties' inputs are similar, learning a bit of information about each other's inputs is not problematic, since arguably the small amount of noise in the inputs is a bug, not a feature. If the parties' inputs are not similar, however, we are guaranteed to have no leakage at all. We pay for the lack of leakage by allowing a malicious party to force an authentication failure even when authentication should succeed; however, either party can do so anyway by providing an incorrect input.

In Section 3.2.2, we describe our fPAKE protocol based on Yao's garbled circuits. Note that in this protocol, we omit the final comparison step; instead, we use the output labels ($(d_0[1], Y_1)$ and $(Y_0, d_1[1])$) to compute the agreed-upon key directly.
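As a rough sketch of that last step: the text above only says the labels are used to compute the key directly, so the hash function and byte encoding below are illustrative assumptions of ours, not the protocol's specification.

```python
import hashlib

def derive_key(pair: tuple) -> bytes:
    """Hash a comparison pair of output labels into a session key."""
    return hashlib.sha256(pair[0] + pair[1]).digest()

# P0 computes derive_key((d0[1], Y1)); P1 computes derive_key((Y0, d1[1])).
# The two keys agree exactly when the omitted comparison would have output
# '1'; otherwise each party holds an unrelated-looking value, and key
# agreement fails implicitly with no extra leakage.
```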