MDPC-McEliece: New McEliece variants from Moderate Density Parity-Check codes

Rafael Misoczki, Jean-Pierre Tillich, Nicolas Sendrier, Paulo S. L. M. Barreto
2013 IEEE International Symposium on Information Theory (ISIT)
Recently, several variants of the McEliece cryptosystem [1] based on low-density parity-check (LDPC) codes have been proposed [2] [3] [4] [5] [6]. When combined with a quasi-cyclic structure, these proposals provide very small key sizes. LDPC codes are characterized by the existence of low-weight dual codewords, which are used to perform efficient iterative decoding. In order to avoid attacks aimed at recovering such codewords, the latest of these proposals suggested replacing the permutation matrix used in the original McEliece cryptosystem by an n × n matrix Q of small constant row and column weight m, in order to increase the dual codeword weight.

In summary, the last QC-LDPC proposal [6] can be described as follows. Let H be an r × n sparse parity-check matrix of an (n, r, w)-LDPC code C of length n, codimension r, dimension k = n − r and row weight w, able to correct t errors using LDPC decoding techniques; let S be a k × k invertible matrix and let Q be as described above. The private key is the triple (S, H, Q) and the public key is the k × n matrix G′ = S⁻¹GQ⁻¹, where G is a generator matrix of C. All matrices are block-circulant. The encryption of a message m of length k is x = mG′ + e, where e is a randomly chosen error vector of length n and weight t. Decryption starts with x′ = xQ = mS⁻¹G + eQ, followed by the correction of up to mt errors in x′ and finally by the computation of m through a right multiplication by S. The crucial step of decryption is the product eQ, in which the number of errors is increased by a factor of m; therefore the private code must be able to correct up to mt errors, as the sketch below illustrates.
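The following toy sketch (Python; sizes far too small to be secure, all names illustrative, the LDPC decoder itself omitted) shows only this weight amplification: each of the t nonzero positions of e selects a row of Q of weight m, so wt(eQ) ≤ mt.

    import numpy as np

    rng = np.random.default_rng(0)
    n, t, m = 24, 2, 3                   # toy parameters, not secure

    # Build Q as a sum of m random permutation matrices: row and column
    # weight m (slightly less when entries collide mod 2, ignored here).
    Q = np.zeros((n, n), dtype=int)
    for _ in range(m):
        Q = (Q + np.eye(n, dtype=int)[rng.permutation(n)]) % 2

    # Error vector of weight t, as added during encryption.
    e = np.zeros(n, dtype=int)
    e[rng.choice(n, size=t, replace=False)] = 1

    # e selects t rows of Q, each of weight at most m, so wt(eQ) <= m*t:
    # this is the error pattern the private LDPC decoder must handle.
    eQ = e @ Q % 2
    print("wt(e)  =", int(e.sum()))
    print("wt(eQ) =", int(eQ.sum()), "<= m*t =", m * t)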
Our proposal stems from two simple but nontrivial observations about this cryptosystem:

1. The private code has an error correction capability very close to mt, whilst the public code has an error correction capability much higher than t.

2. When t is increased by λ errors, the scheme above must provide a private code able to correct mλ additional errors, due to the propagation of errors caused by the matrix Q.

These two facts imply that we have much more freedom to increase t (the number of errors added during encryption) by using a code of higher density, as exemplified by the public code of this scheme, than the aforementioned variant allows. More precisely, this moderate density must be high enough to avoid key recovery attacks, while still allowing the decoding of a secure number of errors. In this sense, we introduce moderate density parity-check codes (MDPC, for short), which respect all of these constraints, resulting in a simpler decoding process and allowing the private and public codes to be permutation equivalent. We therefore present two new McEliece variants: one using MDPC codes and the other employing quasi-cyclic MDPC codes. The private key is the code representation that allows efficient decoding (i.e., the sparse parity-check matrix) and the public key is simply a dense generator matrix of the code. Encryption and decryption are as in the original McEliece cryptosystem.
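A minimal end-to-end sketch of the MDPC variant is given below (Python; toy parameters, a plain rather than quasi-cyclic construction, and a basic bit-flipping decoder; all names and sizes are illustrative assumptions, not the paper's concrete choices). Since MDPC bit-flipping decoders have a small decoding failure probability, the sketch reports success rather than asserting it.

    import numpy as np

    rng = np.random.default_rng(2)
    n, r, w, t = 48, 24, 6, 2       # toy (n, r, w)-MDPC parameters, not secure
    k = n - r

    def gf2_inv(M):
        """Invert a square matrix over GF(2) by Gauss-Jordan elimination."""
        d = len(M)
        A = np.concatenate([M % 2, np.eye(d, dtype=int)], axis=1)
        for c in range(d):
            piv = c + np.nonzero(A[c:, c])[0][0]   # IndexError if singular
            A[[c, piv]] = A[[piv, c]]
            for row in range(d):
                if row != c and A[row, c]:
                    A[row] = (A[row] + A[c]) % 2
        return A[:, d:]

    # Private key: a sparse parity-check matrix H = [A | B] of row weight w,
    # resampled until its rightmost r x r block B is invertible.
    while True:
        H = np.zeros((r, n), dtype=int)
        for row in H:
            row[rng.choice(n, size=w, replace=False)] = 1
        try:
            Binv = gf2_inv(H[:, k:])
            break
        except IndexError:
            continue

    # Public key: the dense systematic generator matrix G = [I_k | (B^-1 A)^T].
    G = np.concatenate([np.eye(k, dtype=int), (Binv @ H[:, :k] % 2).T], axis=1)
    assert not (G @ H.T % 2).any()  # G and H describe the same code

    def bit_flip(H, x, max_iter=30):
        """Gallager-style bit flipping: repeatedly flip the bits involved
        in the largest number of unsatisfied parity checks."""
        x = x.copy()
        for _ in range(max_iter):
            s = H @ x % 2
            if not s.any():
                return x                   # syndrome is zero: done
            upc = H.T @ s                  # unsatisfied-check counts per bit
            x[upc == upc.max()] ^= 1
        return x                           # decoding failure is possible

    # Encryption: x = uG + e with wt(e) = t, exactly as in plain McEliece.
    u = rng.integers(0, 2, k)
    e = np.zeros(n, dtype=int)
    e[rng.choice(n, size=t, replace=False)] = 1
    x = (u @ G + e) % 2

    # Decryption: bit-flip using the sparse H, then read u off the
    # systematic part of the corrected codeword.
    c = bit_flip(H, x)
    ok = not (H @ c % 2).any() and bool((c[:k] == u).all())
    print("decoded correctly:", ok)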
One of the main benefits of our work is its security reduction. The security of the McEliece cryptosystem rests on two assumptions: the pseudo-randomness of the employed code family and the hardness of decoding a generic linear code. It is proved in [7], and further discussed in [8], that if an attacker is unable to distinguish the underlying code from a random code, then he is reduced to decoding a generic linear code, a problem proved to be NP-complete [9]. The pseudo-randomness (in fact, the key security) is often the weak spot, even for Goppa codes: a distinguisher for high-rate Goppa codes is presented in [10]. Regarding our proposal, however, it is quite natural to assume that the only way of distinguishing MDPC codes from random codes is to find dual codewords of moderate weight (these are precisely the rows of the secret parity-check matrix used in the decoding process). If we make such a hypothesis, then, since algorithms for decoding a random linear code and for finding low-weight codewords are essentially of the same nature and complexity, both the key security and the message security of our scheme reduce to the same well-studied problem.
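To make the object of this hypothesis concrete, the small self-contained check below (Python, on a toy systematic (7,4) code; purely illustrative) verifies the fact the reduction rests on: GHᵀ = 0, so every row of the secret parity-check matrix H is a codeword of the dual of the public code, and its weight is exactly what a distinguishing attacker must search for.

    import numpy as np

    # Toy systematic (7,4) code: public generator G, secret parity-check H.
    G = np.array([[1,0,0,0,1,1,0],
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])
    H = np.array([[1,1,0,1,1,0,0],
                  [1,0,1,1,0,1,0],
                  [0,1,1,1,0,0,1]])

    # Every row of H is orthogonal to the public code, i.e. a dual codeword;
    # for an MDPC code these rows have moderate weight w.
    assert not (G @ H.T % 2).any()
    print("row weights of H:", H.sum(axis=1).tolist())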
doi:10.1109/isit.2013.6620590 dblp:conf/isit/MisoczkiTSB13