A procedure for deciding symbolic equivalence between sets of constraint systems

Vincent Cheval, Hubert Comon-Lundh, Stéphanie Delaune
Information and Computation, 255 (Part 1), pp. 94-125, 2017. ISSN 0890-5401.

Abstract. We consider security properties of cryptographic protocols that can be modeled using the notion of trace equivalence. The notion of equivalence is crucial when specifying privacy-type properties, like anonymity, vote-privacy, and unlinkability. Infinite sets of possible traces are symbolically represented using deducibility constraints.
We describe an algorithm that decides trace equivalence for protocols that use standard primitives (e.g., signatures, symmetric and asymmetric encryption) and that can be represented using such constraints. More precisely, we consider symbolic equivalence between sets of constraint systems, and we also consider disequations. Considering sets and disequations is actually crucial to decide trace equivalence for a general class of processes that may involve else branches and/or private channels (for a bounded number of sessions). Our algorithm for deciding symbolic equivalence between sets of constraint systems is implemented and performs well in practice. Unfortunately, it does not scale up well for deciding trace equivalence between processes. It is, however, the first implemented algorithm that decides trace equivalence on such a large class of processes.

Our contribution. Our aim was to design a procedure that is general enough and efficient enough to automatically verify the security of some simple protocols, such as the private authentication protocol (see Example 1) or the e-passport protocol analysed e.g. in [17]. Both protocols are beyond the scope of any of the above-mentioned results. Recently, an extension of ProVerif has been developed that allows one to analyse the private authentication protocol [18]. However, ProVerif is still unable, for instance, to deal with the e-passport protocol.

Example 1. We consider the protocol given in [19], designed for transmitting a secret while not disclosing the identity of the sender. In this protocol, a is willing to engage in a communication with b. However, a does not want to disclose her identity (nor the identity of b) to the outside world. Consider for instance the following protocol:

  A -> B: aenc(⟨n_a, pub(ska)⟩, pub(skb))
  B -> A: aenc(⟨n_a, ⟨n_b, pub(skb)⟩⟩, pub(ska))

In words, the agent a (playing the role A) generates a new name n_a and sends it, together with her identity (here her public key), encrypted with the public key of b. The agent b (playing the role B) replies by generating a new name n_b and sending it, together with n_a and his identity pub(skb), encrypted with the public key of a. More formally, using pattern-matching, and assuming that each agent a holds a private key ska and a public key pub(ska), which is publicly available, the protocol could be written as follows:

  A(a, b) := ν n_a. out(aenc(⟨n_a, pub(ska)⟩, pub(skb)))
  B(b, a) := in(aenc(⟨x, pub(ska)⟩, pub(skb))). ν n_b. out(aenc(⟨x, ⟨n_b, pub(skb)⟩⟩, pub(ska)))

This is fine as long as only mutual authentication is concerned. Now, if we want to ensure privacy in addition, an attacker should not get any information on who is trying to set up the agreement: B(b, a) and B(b, c) must be indistinguishable. This is not the case in the above protocol. Indeed, an attacker can forge e.g. the message aenc(⟨pub(ska), pub(ska)⟩, pub(skb)) and find out whether c = a or not by observing whether b replies or not. The solution proposed in [19] consists in modifying the process B in such a way that a "decoy" message aenc(n_b, pub(skb)) is sent when the received message is not as expected. This message should look like B's other message from the point of view of an outsider. More formally, this can be modelled using the following processes:

  A(a, b) := ν n_a. out(aenc(⟨n_a, pub(ska)⟩, pub(skb)))
  B′(b, a) := in(x). ν n_b. if proj_2(adec(x, skb)) = pub(ska)
              then out(aenc(⟨proj_1(adec(x, skb)), ⟨n_b, pub(skb)⟩⟩, pub(ska)))
              else out(aenc(n_b, pub(skb)))

This example shows that the conditional branching in the process B′ is necessary. However, such a conditional branching is beyond the scope of any method that we mentioned so far.
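To make the attack and the fix concrete, the following Python sketch plays both roles B and B′ against the forged message of Example 1. It is our own illustrative encoding, not the paper's formalism: the term representation and the helper names (aenc, adec, pair, proj1, proj2, role_B, role_Bprime) are assumptions made for this sketch. Without the decoy, b's silence reveals whether the expected partner is a; with the decoy, both branches output a ciphertext under b's key.

  # A small, self-contained sketch of Example 1 (private authentication).
  # Terms are nested tuples; aenc/adec, pairing and projections follow the
  # perfect-cryptography rules recalled in Example 2. All helper names are
  # our own encoding, not the paper's syntax.

  def pub(sk):     return ('pub', sk)
  def pair(u, v):  return ('pair', u, v)
  def proj1(p):    return p[1] if isinstance(p, tuple) and p[0] == 'pair' else None
  def proj2(p):    return p[2] if isinstance(p, tuple) and p[0] == 'pair' else None
  def aenc(m, pk): return ('aenc', m, pk)

  def adec(c, sk):
      """Decryption succeeds only when c was encrypted under pub(sk)."""
      if isinstance(c, tuple) and c[0] == 'aenc' and c[2] == pub(sk):
          return c[1]
      return None                      # explicit decryption failure

  def role_B(skb, pk_expected, x):
      """Original role B: answers only when the test succeeds."""
      m = adec(x, skb)
      if m is not None and proj2(m) == pk_expected:
          return aenc(pair(proj1(m), pair('nb', pub(skb))), pk_expected)
      return None                      # no output at all -> observable silence

  def role_Bprime(skb, pk_expected, x):
      """Repaired role B': sends a decoy when the test fails."""
      m = adec(x, skb)
      if m is not None and proj2(m) == pk_expected:
          return aenc(pair(proj1(m), pair('nb', pub(skb))), pk_expected)
      return aenc('nb', pub(skb))      # decoy, same shape for an outsider

  # The attack of Example 1: the attacker forges a message "from a".
  ska, skb, skc = 'ska', 'skb', 'skc'
  forged = aenc(pair(pub(ska), pub(ska)), pub(skb))

  # Without the decoy, B(b, a) answers and B(b, c) stays silent:
  print(role_B(skb, pub(ska), forged) is not None)   # True
  print(role_B(skb, pub(skc), forged) is not None)   # False -> distinguishable

  # With the decoy, both branches produce an encryption under b's key:
  print(role_Bprime(skb, pub(ska), forged)[0])       # 'aenc'
  print(role_Bprime(skb, pub(skc), forged)[0])       # 'aenc'

Of course, indistinguishability concerns the messages themselves, not merely whether some message is sent; this is captured formally by static equivalence, discussed below.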
Another example is the e-passport protocol analysed in [17], for which conditional branching is also essential for privacy purposes. Another limitation of the existing works is the determinacy condition: for each attacker's message, there is at most one possible move of the protocol. This condition forces each message to contain the recipient's name, which is a natural restriction, but it also prevents the use of private channels (which occur in some natural formalisations). The results presented in the current paper yield a decision procedure for bounded processes with conditional branching and non-determinism. It has been implemented, and the above examples were analysed automatically.

Some difficulties. One of the main difficulties in the automated analysis of cryptographic protocols is the unbounded set of possible attacker actions: the transition system defined by a protocol is infinitely branching (and also infinite in depth when the protocols under study contain replication, which is not the case here). One of the solutions consists in representing this infinite set of possible transitions symbolically, using constraint systems. More precisely, deducibility constraints [20, 21, 22] allow one to split the possible attacker's actions into finitely many sets of actions yielding the same output of the protocol. Each of these sets is represented by a set of deducibility constraints. In this framework, the attacker's inputs are represented by variables that must be deducible from the messages available at the time the input is generated and must satisfy the conditions that trigger a new message output.

Example 2. Consider the protocol given in Example 1 and assume that a has sent her message. The message aenc(⟨n_a, ⟨n_b, pub(skb)⟩⟩, pub(ska)) is output only if the attacker's input x can be computed from the messages available and satisfies the test. Formally, x is a solution of the constraint system:

  pub(ska), pub(skb), aenc(⟨n_a, pub(ska)⟩, pub(skb)) ⊢? x
  proj_2(adec(x, skb)) =? pub(ska)

The symbol ⊢? is interpreted as the attacker's computing capability. In our case (perfect cryptography), the attacker may only apply function symbols to known messages. This is followed by a normalisation step in which, for instance, the second projection of a pair gives back the second component, according to the rule proj_2(⟨x, y⟩) → y. Similarly, the decoy message aenc(n_b, pub(skb)) is output if x is a solution of the constraint system:

  pub(ska), pub(skb), aenc(⟨n_a, pub(ska)⟩, pub(skb)) ⊢? x
  proj_2(adec(x, skb)) ≠? pub(ska)

Hence, though the variable x may take infinitely many values, only two relevant sets of messages have to be considered, namely the solutions of the first and of the second constraint system.
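The following Python sketch illustrates this splitting on Example 2. It is again our own encoding (the helper names deducible and test are illustrative assumptions): it enumerates the messages the attacker can build from the frame, up to a small depth, by applying function symbols, and then classifies each candidate input x according to whether it satisfies the equation or the disequation. Every deducible input is a solution of exactly one of the two constraint systems.

  # A tiny illustration of the splitting in Example 2: attacker inputs x are
  # messages deducible from the frame, and each deducible x falls into exactly
  # one of the two constraint systems (test succeeds or test fails).
  from itertools import product

  def pub(sk):     return ('pub', sk)
  def pair(u, v):  return ('pair', u, v)
  def aenc(m, pk): return ('aenc', m, pk)

  def adec(c, sk):
      if isinstance(c, tuple) and c[0] == 'aenc' and c[2] == pub(sk):
          return c[1]
      return None

  def proj2(p):
      # normalisation rule proj_2(<x, y>) -> y
      return p[2] if isinstance(p, tuple) and p[0] == 'pair' else None

  # Frame of Example 2: what the attacker knows after a's first message.
  frame = [pub('ska'), pub('skb'), aenc(pair('na', pub('ska')), pub('skb'))]

  def deducible(frame, depth):
      """All messages obtained by applying pair/aenc to known messages, up to `depth`."""
      known = set(frame)
      for _ in range(depth):
          new = set()
          for u, v in product(known, repeat=2):
              new.add(pair(u, v))
              new.add(aenc(u, v))
          known |= new
      return known

  def test(x):
      """The conditional of B': proj_2(adec(x, skb)) = pub(ska)."""
      m = adec(x, 'skb')
      return m is not None and proj2(m) == pub('ska')

  # Every deducible input is a solution of exactly one of the two systems.
  inputs = deducible(frame, depth=2)
  first  = [x for x in inputs if test(x)]      # equation:    proj_2(adec(x, skb)) =  pub(ska)
  second = [x for x in inputs if not test(x)]  # disequation: proj_2(adec(x, skb)) != pub(ska)
  print(len(first), len(second), len(first) + len(second) == len(inputs))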
Now, let us consider the trace equivalence problem. Given two processes P and Q, we have to decide whether or not, for every sequence of attacker actions, the sequences of outputs of P and Q respectively are indistinguishable. Again, since there are infinitely many possible attacker actions, we split them into sets that are symbolically represented using constraint systems, in such a way that the operations performed by, say, the process P are the same for any two solutions of the same constraint system C_P. Assume first that there is a constraint system C_Q that represents the same set of attacker actions and for which Q performs the same operations. Then P and Q are trace equivalent if and only if, at each output step, C_P and C_Q are equivalent constraint systems: C_P and C_Q have the same solutions and, for each such solution, the output messages of P are statically equivalent to the output messages of Q. We define static equivalence in a way similar to [3], making explicit the success (or the failure) of decrypting or checking a signature: two closed frames Φ and Φ′ (i.e., the sequences of messages output so far) are statically equivalent, written Φ ∼ Φ′, when they have the same size and no attacker computation allows one to tell them apart.
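To give a concrete, if naive, reading of this notion, the sketch below approximates static equivalence by brute force; the encoding and helper names are assumptions of this sketch, and the paper's procedure is symbolic rather than an enumeration of recipes. It builds attacker recipes over the frame handles up to a fixed depth and requires that the same recipes fail and the same equalities hold in both frames. On b's decoy message and b's genuine reply, this bounded check cannot tell the two frames apart, which is exactly what the decoy of Example 1 is designed to achieve.

  # A naive, bounded check in the spirit of static equivalence: enumerate
  # attacker recipes over the frame handles, evaluate them in both frames,
  # and require that the same recipes fail and the same equalities hold.
  # This is only an illustration, not the decision procedure of the paper.
  from itertools import product

  def pub(sk):     return ('pub', sk)
  def pair(u, v):  return ('pair', u, v)
  def aenc(m, pk): return ('aenc', m, pk)

  FAIL = ('fail',)   # explicit failure of a projection (or, more generally, of a destructor)

  def proj1(p): return p[1] if isinstance(p, tuple) and p[0] == 'pair' else FAIL
  def proj2(p): return p[2] if isinstance(p, tuple) and p[0] == 'pair' else FAIL

  def recipes(handles, depth):
      """Attacker recipes: handles w1..wm closed under pairing and projections."""
      rs = set(handles)
      for _ in range(depth):
          rs |= {('proj1', r) for r in rs} | {('proj2', r) for r in rs}
          rs |= {('pair', r1, r2) for r1, r2 in product(rs, repeat=2)}
      return rs

  def evaluate(recipe, frame):
      if recipe in frame:                       # a handle w_i
          return frame[recipe]
      tag = recipe[0]
      if tag == 'proj1': return proj1(evaluate(recipe[1], frame))
      if tag == 'proj2': return proj2(evaluate(recipe[1], frame))
      if tag == 'pair':
          u, v = evaluate(recipe[1], frame), evaluate(recipe[2], frame)
          return FAIL if FAIL in (u, v) else pair(u, v)

  def statically_equivalent(frame1, frame2, depth=1):
      """Bounded check; assumes both frames have the same handles (same size)."""
      rs = list(recipes(frame1.keys(), depth))
      for r1, r2 in product(rs, repeat=2):
          a1, b1 = evaluate(r1, frame1), evaluate(r2, frame1)
          a2, b2 = evaluate(r1, frame2), evaluate(r2, frame2)
          if (a1 == FAIL) != (a2 == FAIL) or (a1 == b1) != (a2 == b2):
              return False
      return True

  # b's decoy vs. b's genuine reply: both look like ciphertexts to an outsider.
  phi1 = {'w1': aenc('nb', pub('skb'))}
  phi2 = {'w1': aenc(pair('na', pair('nb', pub('skb'))), pub('ska'))}
  print(statically_equivalent(phi1, phi2))      # True for this bounded check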
doi:10.1016/j.ic.2017.05.004