## Algorithms for the Densest Sub-Lattice Problem
[chapter]

Daniel Dadush, Daniele Micciancio

2013
*Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms*

We give algorithms for computing the densest k-dimensional sublattice of an arbitrary lattice, and related problems. This is an important problem in the algorithmic geometry of numbers that includes as special cases Rankin's problem (which corresponds to the densest sublattice problem with respect to the Euclidean norm, and has applications to the design of lattice reduction algorithms), and the shortest vector problem for arbitrary norms (which corresponds to setting k = 1) and its dual (k = n − 1). Our algorithm works for any norm, has running time k^{O(k·n)}, and uses 2^n · poly(n) space. In particular, the algorithm runs in single exponential time 2^{O(n)} for any constant k = O(1).

This supremum is precisely (up to scaling and squaring) the objective function of the k-DSP problem. Determining the value of Rankin constants is a classical hard problem in mathematics, and to date, the value of γ_{n,k} is known only for a handful of cases. (See Section 5.) An efficient algorithm to solve k-DSP immediately gives a powerful computational tool to determine lower bounds on γ_{n,k}.

In computer science and cryptography, Rankin constants and the associated k-DSP problem in the Euclidean norm have been suggested as a building block for novel basis reduction algorithms [GHGKN06], as well as an analytical method to understand the limitations of more classical block reduction methods [Sch87]. However, little progress has been made in this direction since [GHGKN06], largely because of the lack of efficient algorithms to solve k-DSP. More specifically, [GHGKN06] gave an approximation algorithm for k-DSP (called the transference reduction algorithm), which results in a suboptimal basis block-reduction method that is provably inferior to other techniques based on SVP and Hermite's constant γ_n [GN08]. We remark that part of the difficulty of evaluating the potential of the Rankin reduction algorithm of [GHGKN06] is that the value of the Rankin constants γ_{n,k} is not known except for a handful of values of n, k. An algorithm to solve k-DSP would serve both as a tool to study the value of the Rankin constants γ_{n,k} and as a method to instantiate the Rankin reduction framework of [GHGKN06] and better assess its potential. The generalization of k-DSP to arbitrary norms, while nontrivial even to define (see Section 3), arises naturally in the context of applications, and in particular may be useful in the development of faster algorithms for integer programming.
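To make the objective concrete, here is a minimal sketch (not taken from the paper) of the quantity k-DSP minimizes in the Euclidean norm: the determinant of a rank-k sublattice, computed as the square root of its Gram determinant. The toy search over pairs of vectors from a fixed basis is a heuristic for illustration only; the paper's algorithm enumerates a far richer candidate set.

```python
# Illustrative sketch: the k-DSP objective in the Euclidean norm.
# A rank-k sublattice with basis b_1..b_k has determinant
# det(M) = sqrt(det(G)), where G[i][j] = <b_i, b_j> is the Gram matrix.
# k-DSP asks for the k-dimensional sublattice minimizing this determinant.
from itertools import combinations
from math import sqrt

def gram(vectors):
    # Gram matrix of inner products.
    return [[sum(x * y for x, y in zip(u, v)) for v in vectors] for u in vectors]

def det(m):
    # Laplace expansion along the first row; fine for the small k used here.
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def sublattice_det(vectors):
    return sqrt(det(gram(vectors)))

# Toy search (k = 2): only among sublattices spanned by pairs of vectors of
# one fixed basis of a 3-dimensional lattice -- NOT the paper's algorithm.
basis = [(1, 0, 0), (0, 1, 0), (1, 1, 2)]
best = min(combinations(basis, 2), key=sublattice_det)
```

Here the pair ((1, 0, 0), (0, 1, 0)) has determinant 1, while either pair involving (1, 1, 2) has determinant sqrt(5), so the toy search picks the first pair.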
The asymptotically fastest known algorithm to solve integer programming is that of Kannan [Kan87], running in time n^{O(n)}, with some recent work [HK10, DPV11] improving the constant in the exponent. Kannan's algorithm works by reducing an integer programming instance in n variables to n^{O(r)} instances in (n − r) variables, for some unspecified r = 1, . . . , n. Recursing, this yields an algorithm with running time n^{O(n)}. The problem of finding an optimal decomposition (of the kind used by [Kan87]) into the smallest possible number of (n − r)-dimensional subproblems can be formulated as an r-DSP instance in an appropriate norm. Based on the best known upper and lower bounds in asymptotic convex geometry, this could lead to integer programming algorithms with running time as low as (log n)^{O(n)}, much smaller than the current n^{O(n)}. Similar ideas may also lead to better polynomial-space algorithms for the closest vector problem with preprocessing. These and other potential applications are described in more detail in Section 5.

**State of the art.** It is easy to see that k-DSP (for any fixed k) is at least as hard as SVP. (For example, one can map any SVP instance in dimension n to a corresponding k-DSP instance in dimension n + k − 1 simply by adding k − 1 very short vectors orthogonal to the original lattice.) In particular, just like SVP [Ajt98, Mic98, Kho03, HR07, Mic12], k-DSP is NP-hard (at least under randomized reductions) for any k, and it cannot be solved in subexponential time under standard complexity assumptions. A simple lattice duality argument (see Section 3) also shows that k-DSP is equivalent to (n − k)-DSP, where n is the dimension of the lattice. But besides that, not much is known about the computational complexity of k-DSP.
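The hardness argument above rests on a simple embedding, which can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the function name and the `eps` parameter are hypothetical, chosen so the appended vectors are far shorter than anything in the original lattice and must appear in any densest k-dimensional sublattice.

```python
# Illustrative sketch: embedding an SVP instance into k-DSP.
# Given a basis of an n-dimensional lattice in R^d, append k-1 very short
# vectors along fresh orthogonal coordinates.  The densest k-dimensional
# sublattice of the resulting (n+k-1)-dimensional lattice is then forced to
# contain the k-1 short vectors plus a shortest vector of the original lattice.

def embed_svp_into_kdsp(basis, k, eps=1e-3):
    """basis: list of n vectors in R^d.
    Returns a basis of an (n+k-1)-dimensional lattice in R^(d+k-1)."""
    d = len(basis[0])
    # Pad the original basis vectors with zeros in the k-1 new coordinates.
    padded = [list(v) + [0.0] * (k - 1) for v in basis]
    # One very short vector (length eps) along each new coordinate axis,
    # orthogonal to the original lattice by construction.
    short = [[0.0] * d + [eps if j == i else 0.0 for j in range(k - 1)]
             for i in range(k - 1)]
    return padded + short

example = embed_svp_into_kdsp([[2, 0], [0, 3]], k=3, eps=0.001)
```

With k = 3, the 2-dimensional input becomes a 4-dimensional lattice in R^4: the two original vectors padded with zeros, plus two orthogonal vectors of length 0.001.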
In particular, while the algorithmic study of SVP, CVP and SIVP has received much attention, leading to practical heuristics [SE94, SH95, NV08, MV10a, GNR10, WLTB11] and asymptotically efficient algorithms with single exponential running time 2^{O(n)} [AKS01, BN07, AJ08, MV10a, PS09, MV10b], the only known algorithm for DSP in the literature is that of [GHGKN06] for the special case of 4-dimensional lattices. We remark that [GHGKN06] also mentions that the general problem can be solved by a "gigantic" exhaustive search over all LLL-reduced bases of the input lattice, resulting in 2^{O(n^3)} running time. (See the discussion at the beginning of Section 3.1 for details.) The algorithms presented in this paper are also based on a form of exhaustive search, which is unavoidable because of NP-hardness, but the search is over a much smaller space.

doi:10.1137/1.9781611973105.79
dblp:conf/soda/DadushM13
fatcat:gepxhjknzng4thpxjp3ylbbqoy