High speed and adaptable error correction for megabit/s rate quantum key distribution

A. R. Dixon, H. Sato
Scientific Reports (2014), doi:10.1038/srep07275
Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real-world installations. A significant part of this move is the orders-of-magnitude increase in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily.
Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

Quantum Key Distribution (QKD) aims to create and provide secure key data to users for cryptographic tasks [1]. With its security based upon physical principles, it can provide a theoretical guarantee that the keys are unknown to any third party with a high and quantifiable probability [2], something not possible with other existing types of key distribution. From these theoretical beginnings rapid experimental progress has been made, with recent experiments demonstrating high rates of key distribution [3-5] combined with wavelength division multiplexing [6-9] and network operation [10].

The procedure for generating a key using QKD divides into two distinct parts. In the first stage hardware is used for the transmission and detection of quantum states, while in the second stage the information recorded from those quantum states is post-processed in software into a final secure key. The first stage is typically where most of the research effort in QKD has been concentrated, with systems now able to operate stably and continuously at Mbit/s hardware key rates [11, 12]. However, the second stage is often neglected, especially in high key rate experiments, with secure key rates instead estimated. For a complete high speed QKD system the second stage, software post-processing, must also operate at Mbit/s rates to avoid limiting the final secure key rate. A notable exception is Continuous Variable (CV) QKD, where the particularly challenging post-processing required has seen significant research, resulting in increases in the secure rate and distance [13, 14]. The focus of this paper, however, is the more commonly implemented Discrete Variable (DV) QKD.

The post-processing divides into three main steps: sifting, error correction and privacy amplification. Sifting is computationally straightforward, consisting mainly of simple bit comparison operations, and can generally be performed at high speed without much difficulty. Privacy amplification is in principle a relatively straightforward matrix multiplication; however, to reduce finite data size statistical effects, very large matrices (approximately 1 to 100 Mbit per dimension) should be used. Computing such a large multiplication quickly is a challenge, and approaches using more complex algorithms such as number theoretic transforms have been suggested to enable operation at both large block sizes and with short computation times [15].
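The paper does not spell out its privacy amplification implementation here; as a minimal sketch, assuming the common choice of Toeplitz-matrix (universal) hashing, the Python below replaces the explicit m x n matrix-vector product with an FFT-based convolution, the same idea that transform-based methods such as the number theoretic transform exploit at scale. The function name toeplitz_hash and all block sizes are illustrative, not from the paper.

import numpy as np

def toeplitz_hash(key_bits, m, seed_bits):
    # Compress an n-bit reconciled key into m secure bits using a random
    # m x n Toeplitz matrix defined by the (n + m - 1)-bit shared seed.
    # A Toeplitz matrix-vector product is a linear convolution, so it can
    # be computed in O(N log N) with an FFT instead of O(m * n) directly.
    n = len(key_bits)
    assert len(seed_bits) == n + m - 1
    L = (n + m - 1) + n - 1                  # full linear-convolution length
    N = 1 << (L - 1).bit_length()            # power-of-two FFT size, N >= L
    conv = np.fft.irfft(np.fft.rfft(seed_bits, N) * np.fft.rfft(key_bits, N), N)
    # Entries are small integer counts; round away float error, reduce mod 2.
    # (At the Mbit block sizes quoted above, an exact number theoretic
    # transform would avoid floating-point rounding altogether.)
    return np.rint(conv[n - 1 : n - 1 + m]).astype(np.int64) & 1

rng = np.random.default_rng(1)
n, m = 4096, 3072                            # toy sizes; the paper's are ~Mbit
key = rng.integers(0, 2, n)                  # reconciled key held by both sides
seed = rng.integers(0, 2, n + m - 1)         # shared seed defining the matrix
secure_key = toeplitz_hash(key, m, seed)

# Sanity check against a direct (slow) evaluation of the same product.
assert np.array_equal(secure_key,
                      np.convolve(seed, key)[n - 1 : n - 1 + m] % 2)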
Here we focus on error correction, which is generally a relatively computationally complex operation. As noted recently [16, 17], much QKD error correction research has aimed at minimising the extra redundant information sent to correct the errors, whereas in practice other parameters are also important. Because the redundant information can be intercepted by an eavesdropper, it must be removed from the secure key, so minimising it matters; but in a complete QKD system two other parameters of the error correction must also be considered: the bit throughput rate and the error correction failure rate. The throughput rate determines how much information can be processed by the error correction per time period and so sets a hard upper limit on the secure key rate. The failure rate indicates the fraction of data which cannot be corrected and so must be discarded, causing a corresponding percentage reduction in the secure key rate.
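The bi-directional LDPC decoder itself is not detailed in this excerpt. The sketch below, with a toy Hamming(7,4) parity-check matrix standing in for a large sparse LDPC matrix and a naive bit-flipping rule standing in for belief propagation, only illustrates the syndrome-based reconciliation workflow and how the two rates above arise: the decoding loop bounds throughput, and a block whose syndromes cannot be matched within the iteration budget is discarded, which is the failure rate.

import numpy as np

# Toy parity-check matrix (Hamming(7,4)); a real reconciler would use a
# large sparse LDPC matrix. Alice discloses H @ alice mod 2 (the syndrome):
# these revealed parity bits are the redundant information that must later
# be removed during privacy amplification.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def reconcile(H, bob_bits, alice_syndrome, max_iters=50):
    # Syndrome-based bit-flipping: repeatedly flip the bit implicated in
    # the most unsatisfied parity checks until Bob's syndrome matches
    # Alice's. Non-convergence means the block is discarded (a failure).
    y = bob_bits.copy()
    for _ in range(max_iters):
        unsat = ((H @ y) % 2) ^ alice_syndrome   # checks that disagree
        if not unsat.any():
            return y, True                        # syndromes match
        votes = H.T @ unsat                       # unsatisfied checks per bit
        y[np.argmax(votes)] ^= 1                  # flip the prime suspect
    return y, False                               # decoding failure

rng = np.random.default_rng(7)
alice = rng.integers(0, 2, 7)
bob = alice.copy()
bob[4] ^= 1                                       # one quantum-channel error

corrected, ok = reconcile(H, bob, (H @ alice) % 2)
assert ok and np.array_equal(corrected, alice)

In a full system this loop would run over many large blocks at once; that kind of data parallelism is presumably what the paper's CPU and GPU implementations exploit to reach Mbit/s throughput.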