IA Scholar Query: On the Connection Between Quantum Pseudorandomness and Quantum Hardware Assumptions.
https://scholar.archive.org/
Internet Archive Scholar query results feed · en · info@archive.org · Thu, 29 Sep 2022 00:00:00 GMT · fatcat-scholar · https://scholar.archive.org/help · ttl 1440

Global Time Distribution via Satellite-Based Sources of Entangled Photons
https://scholar.archive.org/work/nujgt6ihyfe65f264r2stxaqte
We propose a satellite-based scheme to perform clock synchronization between ground stations spread across the globe using quantum resources. We refer to this as a quantum clock synchronization (QCS) network. Through detailed numerical simulations, we assess the feasibility and capabilities of a near-term implementation of this scheme. We consider a small constellation of nanosatellites equipped only with modest resources. These include quantum devices such as spontaneous parametric down-conversion (SPDC) sources, avalanche photo-detectors (APDs), and moderately stable on-board clocks such as chip-scale atomic clocks (CSACs). In our simulations, the various performance parameters describing the hardware have been chosen such that they are either already commercially available or require only moderate advances. We conclude that such a scheme could feasibly establish a global network of ground-based clocks synchronized to sub-nanosecond (down to a few picoseconds) precision. Such QCS satellite constellations would form the infrastructure for a future quantum network, able to serve as a globally accessible entanglement resource. At the same time, our clock synchronization protocol provides the sub-nanosecond synchronization required for many quantum networking protocols, and can thus be seen as adding an extra layer of utility to quantum technologies in the space domain designed for other purposes.
Stav Haldar, Ivan Agullo, Anthony J. Brady, Antía Lamas-Linares, W. Cyrus Proctor, James E. Troupe · work_nujgt6ihyfe65f264r2stxaqte · Thu, 29 Sep 2022

QSLT: A Quantum-Based Lightweight Transmission Mechanism against Eavesdropping for IoT Networks
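At its core, the clock synchronization task in the QCS scheme above reduces to estimating the offset between two timestamp streams recorded against independent clocks. A minimal numerical sketch, with all parameter values hypothetical (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
true_offset = 37.0   # ns; the unknown clock offset to recover (hypothetical value)
n_pairs = 5000
jitter = 0.5         # ns; detector timing jitter (hypothetical value)

# Entangled pairs share one emission time; each ground station timestamps
# its photon of the pair against its own local clock.
t_emit = np.sort(rng.uniform(0.0, 1e6, n_pairs))
t_a = t_emit + rng.normal(0.0, jitter, n_pairs)                # station A clock
t_b = t_emit + true_offset + rng.normal(0.0, jitter, n_pairs)  # station B clock

# With paired detections, each timestamp difference estimates the offset; a
# robust average recovers it far below the single-shot jitter.
est_offset = float(np.median(t_b - t_a))
print(f"estimated offset: {est_offset:.3f} ns")
```

In practice the stations do not know a priori which detections are paired, so the offset is located as the peak of the cross-correlation of the two streams; the matched-pair median above is a simplification of that step.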
https://scholar.archive.org/work/tdbhfn5crzfjthn5dsvc2tearq
Quantum Key Distribution (QKD) is a promising paradigm for protecting Internet of Things (IoT) networks against eavesdropping attacks. However, classical quantum-based mechanisms are too heavyweight and expensive for resource-constrained IoT devices: the devices need to exchange frequently with the QKD controller via an out-of-band quantum channel. In this paper, we propose a novel Quantum-based Secure and Lightweight Transmission (QSLT) mechanism to ease this burden for IoT devices against eavesdropping. Specifically, the mechanism predistributes quantum keys into IoT devices with SIM cards. Using one of the keys, QSLT encrypts or decrypts IoT sensitive data. Notably, an in-band key-selection method is used to negotiate the session key between two different devices. For example, on one IoT device, the in-band method inserts a key-selection field at the end of the encrypted data to indicate the key's sequence number. After another device receives the data, QSLT extracts the key-selection field and decrypts the data with the selected quantum key stored locally. We implement the proposed mechanism and evaluate its security and transmission performance. Experimental results show that QSLT can transmit IoT data with a lower delay while guaranteeing security. Moreover, QSLT also decreases power usage by approximately 58.77% compared with state-of-the-art mechanisms.
Gang Liu, Jingyuan Han, Yi Zhou, Tao Liu, Jian Chen, Rüdiger Pryss · work_tdbhfn5crzfjthn5dsvc2tearq · Tue, 27 Sep 2022

Shuffle-QUDIO: accelerate distributed VQE with trainability enhancement and measurement reduction
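The in-band key-selection idea in QSLT — append the key's sequence number to the ciphertext so the receiver can pick the matching predistributed key — can be sketched as follows. The XOR cipher, 16-byte keys, and pool size here are illustrative placeholders, not QSLT's actual primitives:

```python
import os
import struct

# Hypothetical predistributed quantum key pool (e.g. loaded from a SIM card).
KEY_POOL = [os.urandom(16) for _ in range(8)]

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher standing in for the real symmetric encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def send(plaintext: bytes, key_index: int) -> bytes:
    ct = xor_bytes(plaintext, KEY_POOL[key_index])
    # In-band key selection: append the key's sequence number after the data,
    # so no out-of-band negotiation with a QKD controller is needed.
    return ct + struct.pack(">H", key_index)

def receive(frame: bytes) -> bytes:
    ct, key_index = frame[:-2], struct.unpack(">H", frame[-2:])[0]
    return xor_bytes(ct, KEY_POOL[key_index])

msg = b"sensor reading: 21.5 C"
assert receive(send(msg, key_index=3)) == msg
```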
https://scholar.archive.org/work/vf7vmnkbkrefzjoz4cnp5rms4i
The variational quantum eigensolver (VQE) is a leading strategy that exploits noisy intermediate-scale quantum (NISQ) machines to tackle chemical problems, potentially outperforming classical approaches. To gain such computational advantages on large-scale problems, a feasible solution is the QUantum DIstributed Optimization (QUDIO) scheme, which partitions the original problem into K subproblems and allocates them to K quantum machines, followed by parallel optimization. Despite the provable acceleration ratio, the efficiency of QUDIO may be heavily degraded by the synchronization operation. To overcome this issue, here we propose Shuffle-QUDIO, which introduces shuffle operations over local Hamiltonians during the quantum distributed optimization. Compared with QUDIO, Shuffle-QUDIO significantly reduces the communication frequency among quantum processors and simultaneously achieves better trainability. In particular, we prove that Shuffle-QUDIO enables a faster convergence rate than QUDIO. Extensive numerical experiments verify that Shuffle-QUDIO achieves both a wall-clock time speedup and low approximation error in the task of estimating the ground-state energy of molecules. We empirically demonstrate that our proposal can be seamlessly integrated with other acceleration techniques, such as operator grouping, to further improve the efficacy of VQE.
Yang Qian, Yuxuan Du, Dacheng Tao · work_vf7vmnkbkrefzjoz4cnp5rms4i · Mon, 26 Sep 2022

Observability of fidelity decay at the Lyapunov rate in few-qubit quantum simulations
https://scholar.archive.org/work/qv3uxdw6ing3nnehm74xetxq4e
In certain regimes, the fidelity of quantum states will decay at a rate set by the classical Lyapunov exponent. This serves both as one of the most important examples of the quantum-classical correspondence principle and as an accurate test for the presence of chaos. While detecting this phenomenon is one of the first useful calculations that noisy quantum computers without error correction can perform [G. Benenti et al., Phys. Rev. E 65, 066205 (2001)], a thorough study of the quantum sawtooth map reveals that observing the Lyapunov regime is just beyond the reach of present-day devices. We prove that there are three bounds on the ability of any device to observe the Lyapunov regime and give the first quantitatively accurate description of these bounds: (1) the Fermi golden rule decay rate must be larger than the Lyapunov rate, (2) the quantum dynamics must be diffusive rather than localized, and (3) the initial decay rate must be slow enough for Lyapunov decay to be observable. This last bound, which has not been recognized previously, places a limit on the maximum amount of noise that can be tolerated. The theory implies that an absolute minimum of 6 qubits is required. Recent experiments on IBM-Q and IonQ imply that some combination of a noise reduction by up to 100× per gate and large increases in connectivity and gate parallelization are also necessary. Finally, scaling arguments are given that quantify the ability of future devices to observe the Lyapunov regime based on trade-offs between hardware architecture and performance.
Max D. Porter, Ilon Joseph · work_qv3uxdw6ing3nnehm74xetxq4e · Thu, 08 Sep 2022

Observability of fidelity decay at the Lyapunov rate in few-qubit quantum simulations
https://scholar.archive.org/work/m5dzmwog2jdq7cmge75j5ng4f4
In certain regimes, the fidelity of quantum states will decay at a rate set by the classical Lyapunov exponent. This serves both as one of the most important examples of the quantum-classical correspondence principle and as an accurate test for the presence of chaos. While detecting this phenomenon is one of the first useful calculations that noisy quantum computers without error correction can perform [G. Benenti et al., Phys. Rev. E 65, 066205 (2001)], a thorough study of the quantum sawtooth map reveals that observing the Lyapunov regime is just beyond the reach of present-day devices. We prove that there are three bounds on the ability of any device to observe the Lyapunov regime and give the first quantitatively accurate description of these bounds: (1) the Fermi golden rule decay rate must be larger than the Lyapunov rate, (2) the quantum dynamics must be diffusive rather than localized, and (3) the initial decay rate must be slow enough for Lyapunov decay to be observable. This last bound, which has not been recognized previously, places a limit on the maximum amount of noise that can be tolerated. The theory implies that an absolute minimum of 6 qubits is required. Recent experiments on IBM-Q and IonQ imply that some combination of a noise reduction by up to 100× per gate and large increases in connectivity and gate parallelization are also necessary. Finally, scaling arguments are given that quantify the ability of future devices to observe the Lyapunov regime based on trade-offs between hardware architecture and performance.
Max D. Porter, Ilon Joseph · work_m5dzmwog2jdq7cmge75j5ng4f4 · Wed, 24 Aug 2022

The Value and Use of Data in Chemical Engineering Practice
https://scholar.archive.org/work/4375wkaprncitmmmqm37gh7w64
The ability to generate, organize, analyze, understand and leverage data for sound decision making is a central activity of chemical engineers. Chemical engineers are responsible for the safe, profitable and environmentally friendly operation of chemical facilities; thus, they are expected to be good at designing and operating chemical processes. To this end, they make use of models, which involves planning and conducting experiments in the laboratory or a pilot plant, analyzing the generated data and using it to design large-scale industrial systems. In an operating plant, they need to analyze data so as to achieve maximum efficiency, reduce the use of precious natural resources, minimize environmental degradation, keep the plant safe, and help generate value for customers and stakeholders. This chapter provides a non-technical view of how chemical engineers translate data into the best possible decisions, resulting in reliable, safe and profitable process design and operations.
Suparna Samavedham, S. Lakshminarayanan · work_4375wkaprncitmmmqm37gh7w64 · Fri, 05 Aug 2022

Quantum Key Distribution with a Hand-held Sender Unit
https://scholar.archive.org/work/z6qhmtgc6fb5rnnp3rknoecgnu
Quantum key distribution (QKD) is a crucial component for truly secure communication, enabling analysis of information leakage due to eavesdropper attacks. While impressive progress has been made in the field of long-distance implementations, user-oriented applications involving short-distance links have mostly remained overlooked. Recent technological advances in integrated photonics now enable development towards QKD for existing hand-held communication platforms as well. In this work we report on the design and evaluation of a hand-held free-space QKD system including a micro-optics-based sender unit. The system implements the BB84 protocol employing polarization-encoded faint laser pulses at a rate of 100 MHz. Unidirectional beam tracking and live reference-frame alignment at the receiver side enable stable operation over tens of seconds when the portable transmitter is aimed at the receiver input by hand from a distance of about half a meter. The user-friendliness of our system was confirmed by successful key exchanges performed by different untrained users, with an average link efficiency of about 20% relative to the case of the transmitter being stationarily mounted and aligned. In these tests we achieve an average quantum bit error ratio (QBER) of 2.4% and asymptotic secret key rates ranging from 4.0 kbps to 15.3 kbps. Given its compactness, the versatile sender optics is also well suited for integration into other free-space communication systems, enabling QKD over any distance.
Gwenaelle Vest, Peter Freiwang, Jannik Luhn, Tobias Vogl, Markus Rau, Lukas Knips, Wenjamin Rosenfeld, Harald Weinfurter · work_z6qhmtgc6fb5rnnp3rknoecgnu · Mon, 25 Jul 2022

A review of cryptosystems based on multi layer chaotic mappings
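The BB84 sifting and QBER estimation performed by the hand-held system above can be simulated classically. The basis choices and noise model below are simplified, and the 2.4% channel error probability is chosen to mirror the reported QBER, not derived from the paper's data:

```python
import random

random.seed(1)
n = 2000
p_err = 0.024   # assumed channel error probability (mirrors the reported QBER)

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal);
# Bob measures in his own random bases.
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]
bob_bases   = [random.randint(0, 1) for _ in range(n)]

bob_bits = []
for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases):
    if ab == bb:        # matching bases: Bob gets Alice's bit up to channel noise
        bob_bits.append(bit if random.random() > p_err else 1 - bit)
    else:               # mismatched bases: uniformly random outcome
        bob_bits.append(random.randint(0, 1))

# Sifting: keep only rounds where both parties used the same basis (~half).
sifted = [(a, b) for a, b, x, y in zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
qber = sum(a != b for a, b in sifted) / len(sifted)
print(f"sifted bits: {len(sifted)}, QBER: {qber:.3f}")
```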
https://scholar.archive.org/work/ass5ln7plnd5fogmnmen65jvgy
In recent years, a lot of research has gone into creating multi-layer chaotic-mapping-based cryptosystems. Random-like behavior, a continuous broadband power spectrum, and sensitive dependence on initial conditions are all characteristics of chaotic systems. Chaos could be helpful in the three functional components of compression, encryption, and modulation in a digital communication system. To successfully use chaos theory in cryptography, chaotic maps must be built in such a way that the entropy they produce can provide the necessary confusion and diffusion. In such cryptosystems, a chaotic map is used in the first layer to create confusion, and a second chaotic map is used in the second layer to create diffusion, producing a ciphertext from a plaintext. A secret-key generation mechanism and a key exchange method are frequently left out, with many researchers simply assuming that these essential components of any effective cryptosystem are always available. We review such cryptosystems by means of a cryptosystem of our own design, in which confusion in the plaintext is created using Arnold's cat map, and logistic mapping is employed to create sufficient diffusion and ultimately obtain the corresponding ciphertext. We also address the development of key exchange protocols and secret-key schemes for these cryptosystems, as well as the possible outcomes of applying cryptanalysis techniques to such a system.
Awnon Bhowmik, Emon Hossain, Mahmudul Hasan · work_ass5ln7plnd5fogmnmen65jvgy · Sun, 17 Jul 2022

Privacy-Preserving Aggregation in Federated Learning: A Survey
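A minimal sketch of this two-layer design — Arnold's cat map as the confusion layer and a logistic-map keystream as the diffusion layer — assuming a square 8-bit image and treating (x0, r) as the secret key. This illustrates the construction style the review discusses, not the exact cryptosystem of the paper:

```python
import numpy as np

def cat_map(img, iterations=1):
    # Arnold's cat map (x, y) -> (x + y, x + 2y) mod n: permutes pixel
    # positions, providing the confusion layer.
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def inv_cat_map(img, iterations=1):
    # Exact inverse: read each pixel back from its mapped position.
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[x, y] = out[(x + y) % n, (x + 2 * y) % n]
        out = nxt
    return out

def logistic_keystream(x0, r, length):
    # Logistic map x <- r*x*(1-x): keystream bytes for the diffusion layer;
    # (x0, r) act as the secret key.
    ks, x = [], x0
    for _ in range(length):
        x = r * x * (1 - x)
        ks.append(int(x * 256) % 256)
    return np.array(ks, dtype=np.uint8)

def encrypt(img, x0=0.3141, r=3.99, rounds=3):
    ks = logistic_keystream(x0, r, img.size).reshape(img.shape)
    return cat_map(img, rounds) ^ ks

def decrypt(ct, x0=0.3141, r=3.99, rounds=3):
    ks = logistic_keystream(x0, r, ct.size).reshape(ct.shape)
    return inv_cat_map(ct ^ ks, rounds)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
assert np.array_equal(decrypt(encrypt(img)), img)
```

As the abstract notes, a construction like this still needs a secret-key generation mechanism and a key exchange method before it is a complete cryptosystem.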
https://scholar.archive.org/work/rel2vcv2y5g2jfrqy3xzzvh4oe
Over the recent years, with the increasing adoption of Federated Learning (FL) algorithms and growing concerns over personal data privacy, Privacy-Preserving Federated Learning (PPFL) has attracted tremendous attention from both academia and industry. Practical PPFL typically allows multiple participants to individually train their machine learning models, which are then aggregated to construct a global model in a privacy-preserving manner. As such, Privacy-Preserving Aggregation (PPAgg) as the key protocol in PPFL has received substantial research interest. This survey aims to fill the gap between a large number of studies on PPFL, where PPAgg is adopted to provide a privacy guarantee, and the lack of a comprehensive survey on the PPAgg protocols applied in FL systems. In this survey, we review the PPAgg protocols proposed to address privacy and security issues in FL systems. The focus is placed on the construction of PPAgg protocols with an extensive analysis of the advantages and disadvantages of these selected PPAgg protocols and solutions. Additionally, we discuss the open-source FL frameworks that support PPAgg. Finally, we highlight important challenges and future research directions for applying PPAgg to FL systems and the combination of PPAgg with other technologies for further security improvement.
Ziyao Liu, Jiale Guo, Wenzhuo Yang, Jiani Fan, Kwok-Yan Lam, Jun Zhao · work_rel2vcv2y5g2jfrqy3xzzvh4oe · Wed, 13 Jul 2022

LIPIcs, Volume 230, ITC 2022, Complete Volume
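One canonical PPAgg building block is pairwise additive masking: each client pair derives a shared mask from a common seed, one adds it and the other subtracts it, so all masks cancel in the server's sum and individual updates stay hidden. A toy sketch (no dropout handling or authenticated key agreement, which practical protocols must add):

```python
import numpy as np

n_clients, dim, mod = 4, 5, 2 ** 16
rng = np.random.default_rng(42)
updates = rng.integers(0, 100, size=(n_clients, dim))  # each client's private update

# Hypothetical pre-agreed pairwise seeds (in practice from a key agreement).
pair_seeds = {(i, j): int(rng.integers(0, 2 ** 31))
              for i in range(n_clients) for j in range(i + 1, n_clients)}

def prg(seed, dim, mod):
    # Deterministic mask expansion: both parties of a pair derive the same mask.
    return np.random.default_rng(seed).integers(0, mod, size=dim)

def masked_update(i):
    masked = updates[i].copy()
    for (a, b), seed in pair_seeds.items():
        if a == i:
            masked = (masked + prg(seed, dim, mod)) % mod  # lower index adds
        elif b == i:
            masked = (masked - prg(seed, dim, mod)) % mod  # higher index subtracts
    return masked

# The server only ever sees masked vectors, yet their sum is the true aggregate.
server_sum = sum(masked_update(i) for i in range(n_clients)) % mod
assert np.array_equal(server_sum, updates.sum(axis=0) % mod)
```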
https://scholar.archive.org/work/x5cobg6anzbgjazexwg7mkanie
LIPIcs, Volume 230, ITC 2022, Complete Volume
Dana Dachman-Soled · work_x5cobg6anzbgjazexwg7mkanie · Thu, 30 Jun 2022

Impact of dynamics, entanglement, and Markovian noise on the fidelity of few-qubit digital quantum simulation
https://scholar.archive.org/work/eb2dzkkkvvc5lct3ewwrhclrei
For quantum computations without error correction, the dynamics of a simulation can strongly influence the overall fidelity decay rate as well as the relative impact of different noise processes on the fidelity. Theoretical models of Markovian noise that include incoherent Lindblad noise, gate-based errors, and stochastic Hamiltonian noise qualitatively agree that greater diffusion and entanglement throughout Hilbert space typically increase the fidelity decay rate. Simulations of the gate-efficient quantum sawtooth map support these predictions, and experiments performed on three qubits on the IBM-Q quantum hardware platform at fixed gate count qualitatively confirm the predictions. A pure depolarizing noise model, often used within randomized benchmarking (RB) theory, cannot explain the observed effect, but gate-based Lindblad models can provide an explanation. They can also estimate the effective Lindblad coherence times during gates, finding a consistently 2-3× shorter effective T_2 dephasing time than reported for idle qubits. Additionally, the observed error per CNOT gate exceeds IBM-Q's reported value from RB by 3.0× during localized dynamics and 4.5× during diffusive dynamics. This demonstrates the magnitude by which RB understates error in a complex quantum simulation due to dynamics and crosstalk.
Max D. Porter, Ilon Joseph · work_eb2dzkkkvvc5lct3ewwrhclrei · Fri, 10 Jun 2022

SoK: Decentralized Randomness Beacon Protocols
https://scholar.archive.org/work/jnt6wey46beq3bjesjrlp2usiu
The scientific interest in the area of Decentralized Randomness Beacon (DRB) protocols has been thriving recently. That interest is partly due to the success of the disruptive technologies introduced by modern cryptography, such as cryptocurrencies, blockchain technologies, and decentralized finance, where there is an enormous need for a public, reliable, trusted, verifiable, and distributed source of randomness. On the other hand, recent advancements in the development of new cryptographic primitives have spurred the construction of a plethora of DRB protocols differing in design and underlying primitives. To the best of our knowledge, no systematic and comprehensive work has yet systematized and analyzed the existing DRB protocols. Therefore, we present a Systematization of Knowledge (SoK) intended to structure the multi-faceted body of research on DRB protocols. In this SoK, we delineate the DRB protocols along the following axes: their underlying primitives, properties, and security. We fill the gap by providing basic standard definitions and requirements for DRB protocols, such as unpredictability, bias-resistance, availability (or liveness), and public verifiability. We classify DRB protocols according to the nature of interactivity among protocol participants. We also highlight the most significant features of DRB protocols, such as scalability, complexity, and performance, along with a brief discussion of their improvement. Finally, we present future research directions along with a few interesting open problems.
Mayank Raikwar, Danilo Gligoroski · work_jnt6wey46beq3bjesjrlp2usiu · Thu, 26 May 2022

Ensuring the security for cloud storage data using a novel ADVP protocol by multiple auditing
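The simplest DRB construction, commit-reveal, illustrates the requirements listed above — and why bias-resistance is hard: a last revealer can withhold their seed after seeing everyone else's. A sketch:

```python
import hashlib
import os

# Commit phase: each participant publishes H(seed) before any seed is revealed,
# fixing their contribution (unpredictability).
seeds = [os.urandom(32) for _ in range(5)]
commitments = [hashlib.sha256(s).hexdigest() for s in seeds]

# Reveal phase: seeds are published and anyone can check them against the
# commitments (public verifiability).
for seed, c in zip(seeds, commitments):
    assert hashlib.sha256(seed).hexdigest() == c

# Output: combine all revealed seeds so no single party controls the result.
acc = bytes(32)
for s in seeds:
    acc = bytes(x ^ y for x, y in zip(acc, s))
beacon = hashlib.sha256(acc).hexdigest()
print(beacon)
```

The scheme is biasable by withholding (the last revealer can abort if the output displeases them), which is exactly why the surveyed protocols turn to primitives such as VDFs, VRFs, and threshold cryptography.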
https://scholar.archive.org/work/kgpc6wryzjdy5bmkuby2hbkqh4
The increasing growth of data stored in the cloud and virtual environments makes it a significant challenge to maintain the security of data outsourced by data owners. The existing protocols for auditing cloud storage normally use post-quantum cryptography to monitor data integrity. However, these protocols use strong cryptography to create data tags, which reduces their reliability and extensibility. In this research work, we propose a novel protocol named the Advanced Distribution Verification Protocol (ADVP) to design secure cloud storage that tracks the quality of cloud-saved data with the help of Multiple Third-Party Auditors (mTPAs) rather than a Single TPA (sTPA). The protocol employs several Sub-TPAs operating under the single TPA, with the auditing workload spread equally across the Sub-TPAs to guarantee that every section is fully checked. It provides rigorous security proofs of resistance against a malicious cloud and a guarantee of privacy. The proposed protocol is also extended to additional implementation scenarios to support data dynamics and batch auditing. As a further contribution, it summarizes a structured process for establishing security.
Libin M Joseph, E. J. Thomson Fredrik · work_kgpc6wryzjdy5bmkuby2hbkqh4 · Wed, 18 May 2022

Experimental evaluation of digitally-verifiable photonic computing for blockchain and cryptocurrency
https://scholar.archive.org/work/n2b3bxs5nzc7pd2iyhwvsjrtyu
As blockchain technology and cryptocurrency become increasingly mainstream, ever-increasing energy costs required to maintain the computational power running these decentralized platforms create a market for more energy-efficient hardware. Photonic cryptographic hash functions, which use photonic integrated circuits to accelerate computation, promise energy efficiency for verifying transactions and mining in a cryptonetwork. Like many analog computing approaches, however, current proposals for photonic cryptographic hash functions that promise similar security guarantees as Bitcoin are susceptible to systematic error, so multiple devices may not reach a consensus on computation despite high numerical precision (associated with low photodetector noise). In this paper, we theoretically and experimentally demonstrate that a more general family of robust discrete analog cryptographic hash functions, which we introduce as LightHash, leverages integer matrix-vector operations on photonic mesh networks of interferometers. The difficulty of LightHash can be adjusted to be sufficiently tolerant to systematic error (calibration error, loss error, coupling error, and phase error) and preserve inherent security guarantees present in the Bitcoin protocol. Finally, going beyond our proof-of-concept, we define a "photonic advantage" criterion and justify how recent developments in CMOS optoelectronics (including analog-digital conversion) provably achieve such advantage for robust and digitally-verifiable photonic computing and ultimately generate a new market for decentralized photonic technology.
Sunil Pai, Taewon Park, Marshall Ball, Bogdan Penkovsky, Maziyar Milanizadeh, Michael Dubrovsky, Nathnael Abebe, Francesco Morichetti, Andrea Melloni, Shanhui Fan, Olav Solgaard, David A.B. Miller · work_n2b3bxs5nzc7pd2iyhwvsjrtyu · Tue, 17 May 2022

Efficient and Secure ECDSA Algorithm and its Applications: A Survey
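The core structural idea — sandwiching an integer matrix-vector product between digital hash stages, so the analog step can be offloaded to optics while its output stays exactly verifiable — can be mimicked digitally. The toy below is a stand-in for intuition only; the matrix size, weight range, and bit-discretization are invented and do not reproduce the actual LightHash construction:

```python
import hashlib
import numpy as np

# Public "mesh" weights derived from a fixed seed (hypothetical parameters).
rng = np.random.default_rng(7)
W = rng.integers(-3, 4, size=(64, 64))  # small integers, analog-friendly

def matvec_hash(data: bytes) -> str:
    pre = hashlib.sha256(data).digest()                 # digital pre-hash
    x = np.unpackbits(np.frombuffer(pre, np.uint8))[:64].astype(np.int64)
    y = W @ x                                           # linear step: the part a
                                                        # photonic mesh would do
    bits = (y % 2).astype(np.uint8)                     # discretize to bits, so any
                                                        # device computing y exactly
                                                        # agrees on the output
    return hashlib.sha256(np.packbits(bits).tobytes()).hexdigest()  # post-hash

d1 = matvec_hash(b"block header 1")
assert d1 == matvec_hash(b"block header 1")             # deterministic / verifiable
assert d1 != matvec_hash(b"block header 2")
```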
https://scholar.archive.org/work/y4it5pzrobhu7ahkzanmzy3bwq
Public-key cryptography algorithms, especially elliptic curve cryptography (ECC) and the elliptic curve digital signature algorithm (ECDSA), have been attracting attention from many researchers in different institutions because these algorithms provide security and high performance when used in many areas such as electronic healthcare, electronic banking, electronic commerce, electronic vehicular systems, and electronic governance. These algorithms heighten security against various attacks and at the same time improve performance, obtaining efficiencies (time, memory, reduced computational complexity, and energy saving) in environments with constrained resources and in large systems. This paper presents a detailed and comprehensive survey of updates to the ECDSA algorithm in terms of performance, security, and applications.
Mishall Al-Zubaidie, Zhongwei Zhang, Ji Zhang · work_y4it5pzrobhu7ahkzanmzy3bwq · Sun, 17 Apr 2022

Dealing with quantum computer readout noise through high energy physics unfolding methods
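For readers unfamiliar with ECDSA's mechanics, here is a self-contained sketch over a tiny textbook curve (y² = x³ + 2x + 2 over GF(17), group order 19). The curve is for illustration only; real systems use standardized curves such as P-256 or secp256k1 with ~256-bit parameters:

```python
import hashlib
import random

P, A, B = 17, 2, 2       # toy curve y^2 = x^3 + A*x + B over GF(P)
G = (5, 1)               # generator point
N = 19                   # order of G (prime)

def point_add(p1, p2):
    # Affine-coordinate group law; None represents the point at infinity.
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    # Double-and-add scalar multiplication.
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

def h(msg):
    # Message hash reduced modulo the group order.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(d, msg):
    while True:
        k = random.randrange(1, N)                # per-signature nonce
        r = scalar_mult(k, G)[0] % N
        if r == 0:
            continue
        s = pow(k, -1, N) * (h(msg) + r * d) % N
        if s:
            return (r, s)

def verify(Q, msg, sig):
    r, s = sig
    w = pow(s, -1, N)
    u1, u2 = h(msg) * w % N, r * w % N
    pt = point_add(scalar_mult(u1, G), scalar_mult(u2, Q))
    return pt is not None and pt[0] % N == r

d = 7                    # private key
Q = scalar_mult(d, G)    # public key Q = d*G
assert verify(Q, b"hello", sign(d, b"hello"))
```

Verification works because u1·G + u2·Q = (u1 + u2·d)·G = k·G, so the recovered x-coordinate reproduces r.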
https://scholar.archive.org/work/7muxykeeubgfxl4mozau44kikm
Quantum computers have the potential to solve problems that are intractable for classical computers; nevertheless, they have high error rates. One significant kind of error is known as readout error. Current methods, such as matrix inversion and least-squares, are used to unfold (correct) readout errors, but these methods exhibit problems such as oscillatory behavior and unphysical outcomes. In 2020, Benjamin Nachman et al. suggested a technique currently used in high-energy physics (HEP) to correct detector effects. This method, known as Iterative Bayesian Unfolding (IBU), has proven effective in mitigating readout errors while avoiding the problems of the aforementioned methods. The main objective of our thesis is therefore to mitigate readout noise of quantum computers using this powerful unfolding method. For this purpose, we generated a uniform distribution on the Yorktown IBM Q machine for 5 qubits, in order to unfold it by IBU after it was distorted by noise. We then repeated the same experiment with a Gaussian distribution. Very satisfactory results, consistent with those of Nachman et al., were obtained. After that, we set a second goal of exploring unfolding in a larger qubit system, where we succeeded in unfolding a uniform distribution for 7 qubits distorted by noise on the Melbourne IBM Q machine. In this case, the IBU method showed much better results than the other techniques.
Imene Ouadah, Hacene Rabah Benaissa · work_7muxykeeubgfxl4mozau44kikm · Fri, 08 Apr 2022

An Introduction to Quantum Computing for Statisticians and Data Scientists
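The IBU update is a fixed-point iteration: given a response matrix R with R[j, i] = P(measure j | true i), the truth estimate t is repeatedly reweighted by how well the folded spectrum R·t matches the measured counts. A sketch with a hypothetical two-qubit readout-error model (independent 5% bit flips), not the thesis's actual device data:

```python
import numpy as np

def ibu(measured, response, n_iter=50):
    # response[j, i] = P(measure bitstring j | true bitstring i); columns sum to 1.
    n = response.shape[1]
    t = np.full(n, measured.sum() / n)               # start from a uniform prior
    for _ in range(n_iter):
        folded = response @ t                        # spectrum we'd expect to see
        t = t * (response.T @ (measured / folded))   # Bayes-rule reweighting
    return t

# Toy readout-error model: each qubit flips independently with probability eps.
eps = 0.05
single = np.array([[1 - eps, eps], [eps, 1 - eps]])
R = np.kron(single, single)                  # 4x4 response over bitstrings 00..11

true = np.array([400.0, 100.0, 100.0, 400.0])   # ideal counts (toy example)
measured = R @ true                             # counts distorted by readout noise
unfolded = ibu(measured, R)
print(np.round(unfolded, 1))
```

Unlike plain matrix inversion, the iteration keeps every entry non-negative, which is exactly the "unphysical outcomes" problem it avoids.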
https://scholar.archive.org/work/sm2v5mh6pnc6tgepm7ing2ljg4
Quantum computers promise to surpass the most powerful classical supercomputers when it comes to solving many critically important practical problems, such as pharmaceutical and fertilizer design, supply chain and traffic optimization, or optimization for machine learning tasks. Because quantum computers function fundamentally differently from classical computers, the emergence of quantum computing technology will lead to a new evolutionary branch of statistical and data analytics methodologies. This review provides an introduction to quantum computing designed to be accessible to statisticians and data scientists, aiming to equip them with an overarching framework of quantum computing, the basic language and building blocks of quantum algorithms, and an overview of existing quantum applications in statistics and data analysis. Our goal is to enable statisticians and data scientists to follow quantum computing literature relevant to their fields, to collaborate with quantum algorithm designers, and, ultimately, to bring forth the next generation of statistical and data analytics tools.
Anna Lopatnikova, Minh-Ngoc Tran, Scott A. Sisson · work_sm2v5mh6pnc6tgepm7ing2ljg4 · Sun, 03 Apr 2022

Ising machines as hardware solvers of combinatorial optimization problems
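The "basic language and building blocks" such a review covers — states as complex vectors, gates as unitary matrices, the Born rule for measurement probabilities — fit in a few lines of linear algebra:

```python
import numpy as np

# A qubit state is a length-2 complex vector; gates are 2x2 unitaries.
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

plus = H @ ket0                 # equal superposition (|0> + |1>)/sqrt(2)
print(np.abs(plus) ** 2)        # Born rule: 0.5 / 0.5 measurement probabilities

# Multi-qubit systems combine via Kronecker products; CNOT entangles.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, ket0)   # Bell state (|00> + |11>)/sqrt(2)
print(np.abs(bell) ** 2)            # all probability on |00> and |11>
```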
https://scholar.archive.org/work/wklr3hx36vej5bphvjvea3ryte
Ising machines are hardware solvers which aim to find the absolute or approximate ground states of the Ising model. The Ising model is of fundamental computational interest because it is possible to formulate any problem in the complexity class NP as an Ising problem with only polynomial overhead. A scalable Ising machine that outperforms existing standard digital computers could have a huge impact for practical applications for a wide variety of optimization problems. In this review, we survey the current status of various approaches to constructing Ising machines and explain their underlying operational principles. The types of Ising machines considered here include classical thermal annealers based on technologies such as spintronics, optics, memristors, and digital hardware accelerators; dynamical-systems solvers implemented with optics and electronics; and superconducting-circuit quantum annealers. We compare and contrast their performance using standard metrics such as the ground-state success probability and time-to-solution, give their scaling relations with problem size, and discuss their strengths and weaknesses.
Naeimeh Mohseni, Peter L. McMahon, Tim Byrnes · work_wklr3hx36vej5bphvjvea3ryte · Fri, 01 Apr 2022

On the Connection Between Quantum Pseudorandomness and Quantum Hardware Assumptions
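As a concrete instance of the NP-to-Ising mapping mentioned above, MaxCut on a small graph becomes minimization of H(s) = Σ over edges (i,j) of s_i·s_j with spins s_i ∈ {−1, +1}. Plain simulated annealing — a software analogue of the classical thermal annealers surveyed — can attack it (graph and schedule are invented for illustration):

```python
import math
import random

# 4-cycle with a chord; cutting an edge contributes -1 to H, so minimizing H
# maximizes the cut.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
neighbors = {i: [] for i in range(n)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

def energy(s):
    return sum(s[i] * s[j] for i, j in edges)

random.seed(0)
s = [random.choice([-1, 1]) for _ in range(n)]
T, best = 2.0, energy(s)
for _ in range(2000):
    i = random.randrange(n)
    dE = -2 * s[i] * sum(s[j] for j in neighbors[i])   # energy change if s[i] flips
    if dE < 0 or random.random() < math.exp(-dE / T):  # Metropolis acceptance
        s[i] = -s[i]
        best = min(best, energy(s))
    T *= 0.999                                         # annealing schedule

print(best)   # ground-state energy -3, i.e. a maximum cut of 4 of the 5 edges
```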
https://scholar.archive.org/work/qrtxa4reifbklhecrevs36tofq
This paper, for the first time, addresses questions related to the connections between quantum pseudorandomness and quantum hardware assumptions, specifically quantum physical unclonable functions (qPUFs). Our results show that efficient pseudorandom quantum states (PRS) are sufficient to construct the challenge set for a universally unforgeable qPUF, improving on the previous constructions, which are based on Haar-random states. We also show that qPUFs and quantum pseudorandom unitaries (PRUs) can be constructed from each other, providing new ways to obtain PRS from hardware assumptions. Moreover, we provide a sufficient condition (in terms of the diamond norm) that a set of unitaries must satisfy to be a PRU suitable for constructing a universally unforgeable qPUF, giving yet another novel insight into the properties of PRUs. Finally, as an application of our results, we show that the efficiency of an existing qPUF-based client-server identification protocol can be improved without losing the security requirements of the protocol.
Mina Doosti, Niraj Kumar, Elham Kashefi, Kaushik Chakraborty · work_qrtxa4reifbklhecrevs36tofq · Wed, 30 Mar 2022

Contributions to cryptanalysis: design and analysis of cryptographic hash functions
https://scholar.archive.org/work/aaeikqqwobampjdbdbyow2jwzi
A cryptographic hash function is a mechanism producing a fixed-length output from a message of arbitrary length. It fulfils a collection of security requirements guaranteeing that a hash function does not introduce any weakness into the system to which it is applied. Example applications of cryptographic hash functions include digital signatures and message authentication codes. This thesis analyzes cryptographic hash functions and studies the design principles in the construction of secure cryptographic hash functions. We investigate the problem of building hash functions from block ciphers and the security properties of different structures used to design compression functions. We show that we can build open-key differential distinguishers for Crypton, Hierocrypt-3, SAFER++ and Square. To our knowledge, our attack on SAFER++ is the first rebound attack with standard differentials. To demonstrate the efficiency of the proposed distinguishers, we provide a formal proof of a lower bound for finding a differential pair that follows a truncated differential in the case of a random permutation. Our analysis shows that block ciphers used as underlying primitives should also be analyzed in the open-key model to prevent possible collision attacks. We analyze IDEA-based hash functions in a variety of cipher modes. We present collision-search and preimage attacks of practical complexity, in which we exploit a null weak key and a new non-trivial property of IDEA. We prove that even if a cipher is considered secure in the secret-key model, one has to be very careful when using it as a building block in hashing modes. Finally, we investigate the recent rotational analysis. We show how to extend rotational analysis to subtractions, shifts, bit-wise Boolean functions, multi-additions and multi-subtractions. In particular, we develop formulae for calculating the probabilities of preserving the rotation property for multiple modular additions and subtractions. We examine S-functions and their application to the rotational [...]
Przemysław Szczepan Sokołowski · work_aaeikqqwobampjdbdbyow2jwzi · Mon, 28 Mar 2022
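The kind of formula the thesis develops — the probability that rotation commutes with a modular addition — can be checked empirically. For a single modular addition, the known result (due to Khovratovich and Nikolić) is Pr[((x + y) ⋘ r) = ((x ⋘ r) + (y ⋘ r))] = (1 + 2^(r−n) + 2^(−r) + 2^(−n)) / 4 for n-bit words; a Monte Carlo check:

```python
import random

def rotl(x, r, n=32):
    # Rotate an n-bit word left by r positions.
    mask = (1 << n) - 1
    return ((x << r) | (x >> (n - r))) & mask

random.seed(0)
n, r, trials = 32, 1, 200_000
mask = (1 << n) - 1

hits = 0
for _ in range(trials):
    x, y = random.getrandbits(n), random.getrandbits(n)
    # Does rotation commute with the modular addition for this (x, y)?
    if rotl((x + y) & mask, r) == (rotl(x, r) + rotl(y, r)) & mask:
        hits += 1

predicted = (1 + 2.0 ** (r - n) + 2.0 ** (-r) + 2.0 ** (-n)) / 4
print(hits / trials, predicted)   # both close to 0.375 for n=32, r=1
```

The thesis's contribution extends such formulae to chains of multiple modular additions and subtractions, where the events are no longer independent.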