## The Magic of ELFs

Mark Zhandry

2016, Lecture Notes in Computer Science

We introduce the notion of an Extremely Lossy Function (ELF). An ELF is a family of functions whose image size is tunable anywhere from injective to polynomial. Moreover, for any efficient adversary and any sufficiently large polynomial r (necessarily chosen to be larger than the running time of the adversary), the adversary cannot distinguish the injective case from the case of image size r. We develop a handful of techniques for using ELFs, and show that such
lossiness is useful for instantiating random oracles in several settings. In particular, we show how to use ELFs to build secure point function obfuscation with auxiliary input, as well as polynomially many hardcore bits for any one-way function. Such applications were previously known only from strong knowledge assumptions; for example, polynomially many hardcore bits were only known from differing-inputs obfuscation, a notion whose plausibility has been seriously challenged. We also use ELFs to build a simple hash function with output intractability, a new notion we define that may be useful for generating common reference strings. Next, we give a construction of ELFs relying on the exponential hardness of the decisional Diffie-Hellman problem, which is plausible in elliptic curve groups. Combining with the applications above, our work gives several practical constructions relying on qualitatively different (and arguably better) assumptions than prior works.

Some attempts have been made to answer this question; however, many such attempts have serious limitations. For example, Canetti, Goldreich, and Halevi [CGH98] propose the notion of correlation intractability as a specific feature of random oracles that could potentially have a standard-model instantiation. However, they show that for some parameter settings such standard-model hash functions cannot exist. The only known positive example [CCR16] relies on extremely strong cryptographic assumptions such as general-purpose program obfuscation. For another example, Bellare, Hoang, and Keelveedhi [BHK13] define a security property for hash functions called Universal Computational Extractors (UCE), and show that hash functions with UCE security suffice for several uses of the random oracle model. While UCEs present an important step toward understanding which hash function properties might be achievable and which are not, UCEs have several limitations.
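To make the tunable-image-size functionality from the abstract concrete, here is a toy sketch of an ELF-style interface in Python. The function name `elf_gen` and the table-based sampling are illustrative assumptions, not the paper's construction; in particular, this toy version provides no indistinguishability between the two modes and only demonstrates what "image size tunable from injective down to polynomial" means.

```python
import secrets

def elf_gen(M, r):
    """Toy ELF sampler over domain {0, ..., M-1} (illustration only).

    If r >= M, sample an injective function (a random permutation).
    Otherwise, sample a function whose image has size at most r.
    A real ELF would hide which mode was sampled from any adversary
    running in time much less than sqrt(r); this toy makes no such claim.
    """
    if r >= M:
        # Injective mode: Fisher-Yates shuffle gives a random permutation.
        table = list(range(M))
        for i in range(M - 1, 0, -1):
            j = secrets.randbelow(i + 1)
            table[i], table[j] = table[j], table[i]
    else:
        # Lossy mode: pick at most r image points, then map every
        # domain element to one of them.
        image = [secrets.randbelow(M) for _ in range(r)]
        table = [image[secrets.randbelow(r)] for _ in range(M)]
    return lambda x: table[x]

M = 1 << 10
f = elf_gen(M, M)   # injective mode
g = elf_gen(M, 4)   # image size at most 4
print(len({f(x) for x in range(M)}))   # M distinct outputs
print(len({g(x) for x in range(M)}))   # at most 4 distinct outputs
```

The point of the interface is that the same syntax covers both extremes; all of the security content of an ELF lies in the (absent here) guarantee that the two modes are indistinguishable to adversaries whose running time is small relative to sqrt(r).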
For example, the formal definition of a UCE is somewhat complicated to even state. Moreover, UCE is not a single property, but a family or "framework" of assumptions. The most general form of UCE is trivially unattainable, and some of the natural restricted classes of UCE have been challenged [BFM14, BST16]. Therefore, it is unclear which versions of UCE should be trusted and which should not. Similar weaknesses have been shown for other strong assumptions that can be cast as families of assumptions or as knowledge/extracting assumptions, such as extractable one-way functions (eOWFs) [BCPR14] and differing-inputs obfuscation (diO) [BCP14, ABG+13, GGHW14]. These weaknesses are part of a general pattern for strong assumptions such as UCE, eOWFs, and diO that are not specified by a cryptographic game. In particular, these assumptions do not meet standard notions of falsifiability [Nao03, GW11], and are not complexity assumptions in the sense of Goldwasser and Kalai [GK16]. (We stress that such knowledge/extracting/framework assumptions are desirable as security properties. However, in order to trust that the property actually holds, it should be derived from a "nice" and trusted assumption.) Therefore, an important question in this space is the following: are there primitives with "nice" (e.g., simple, well-established, game-based, falsifiable, complexity-assumption) security properties that can be used to build hash functions suitable for instantiating random oracles in many protocols?

One prior approach shows that obfuscating a (puncturable) pseudorandom function composed with the trapdoor permutation is sufficient for FDH signatures. However, that proof has two important limitations. First, the resulting signature scheme is only selectively secure. Second, the instantiation depends on the particular trapdoor permutation used, as well as the public key of the signer. Thus, each signer needs a separate hash function, which must be appended to the signer's public key.
To use this protocol, everyone would therefore need to publish new keys, even if they have already published keys for the trapdoor permutation.

**Our approach.** We take a novel approach to addressing the questions above. We isolate a (generally ignored) property of random oracles, namely that random oracles are indistinguishable from functions that are extremely lossy. More precisely, the following is possible in the random oracle model. Given any polynomial-time oracle adversary A and an inverse polynomial δ, we can choose the oracle such that (1) the image size of the oracle is a polynomial r (even for domain/range sizes where a truly random oracle will have exponential image size with high probability), and (2) A cannot tell the difference between such a lossy oracle and a truly random oracle, except with advantage smaller than δ. Note that the tuning of the image size must be done with knowledge of the adversary's running time: an adversary running in time O(√r) can with high probability find a collision, thereby distinguishing the lossy function from a truly random oracle. However, by setting √r to be much larger than the adversary's running time, the probability of finding a collision diminishes. We stress that any protocol would still use a truly random oracle and hence not depend on the adversary; the image-size tuning would appear only in the security proof. Our observation of this property is inspired by prior works of Boneh and Zhandry [Zha12, BZ13], who use it for the entirely different goal of giving security proofs in the so-called quantum random oracle model (random oracle instantiation was neither a goal nor an accomplishment of those prior works). We next propose the notion of an Extremely Lossy Function (ELF) as a standard-model primitive that captures this tunable image-size property.
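The birthday-bound intuition above, that an adversary making q random queries into an image of size r finds a collision with probability roughly 1 - exp(-q²/2r), can be checked numerically. The simulation below is a quick sanity check of that arithmetic, not anything from the paper itself:

```python
import secrets

def collision_prob(q, r, trials=2000):
    """Empirical probability that q uniform samples from an image of
    size r contain a repeat (the birthday bound)."""
    hits = 0
    for _ in range(trials):
        seen = set()
        for _ in range(q):
            y = secrets.randbelow(r)
            if y in seen:
                hits += 1
                break
            seen.add(y)
    return hits / trials

r = 10_000  # image size; sqrt(r) = 100
# q well above sqrt(r): collisions are nearly certain,
# so the lossy function is easily distinguished from a random oracle.
print(collision_prob(500, r))
# q well below sqrt(r): collisions are rare, and the lossy function
# looks like a random oracle to this adversary.
print(collision_prob(10, r))
```

This is exactly why the image size r must be chosen after fixing (a bound on) the adversary's running time: for any fixed polynomial r, a slightly longer-running adversary can cross the √r threshold and distinguish.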
The definition is related to the notion of a lossy trapdoor function due to Peikert and Waters [PW08], with two important differences. On the one hand, we do not need any trapdoor, giving hope that ELFs could be constructed from symmetric primitives. On the other hand, we need the functions to be much, much more lossy, as standard lossy functions still have exponential image size. On the surface, extreme lossiness without a trapdoor does not appear incredibly useful, since many interesting applications of standard lossy functions (e.g., (CCA-secure) public-key encryption) require a trapdoor. Indeed, using an ELF as a hash function directly does not solve most of the tasks outlined above. Perhaps surprisingly, we show that this extremely lossy property, in conjunction with other tools (usually pairwise independence), can in fact be quite powerful, and we use this power to give new solutions to each of the tasks above. Our results are as follows:

- (Section 3) We give a practical construction of ELFs assuming the exponential hardness of the decisional Diffie-Hellman (DDH) problem: roughly, that the best attack on DDH for groups of order p takes time O(p^c) for some constant c. Our construction is based on the lossy trapdoor functions due to Peikert and Waters [PW08] and Freeman et al. [FGK+10], though we do not need the trapdoor from those works. Our construction starts from a trapdoor-less version of the DDH-based construction of [FGK+10], and iterates it many times at different security levels, together with pairwise-independent hashing to keep the output length from growing too large. Having many different security levels allows us to do the following: when switching the function to be lossy, we can do so at a security level that is just high enough to prevent

doi:10.1007/978-3-662-53018-4_18