Lattice problem


In computer science, lattice problems are a class of optimization problems related to mathematical objects called lattices. The conjectured intractability of such problems is central to the construction of secure lattice-based cryptosystems: lattice problems are an example of NP-hard problems which have been shown to be average-case hard, providing a test case for the security of cryptographic algorithms. In addition, some lattice problems which are worst-case hard can be used as a basis for extremely secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers. For applications in such cryptosystems, lattices over vector spaces (often $\mathbb{Q}^n$) or free modules (often $\mathbb{Z}^n$) are generally considered.
For all the problems below, assume that we are given a basis for the vector space $V$ and a norm $N$. The norm usually considered is the Euclidean norm $L^2$. However, other norms are also considered and show up in a variety of results. Let $\lambda(L)$ denote the length of the shortest non-zero vector in the lattice $L$, that is,

$$\lambda(L) = \min_{v \in L \setminus \{0\}} \|v\|_N.$$

Shortest vector problem (SVP)

In the SVP, a basis of a vector space $V$ and a norm $N$ are given for a lattice $L$, and one must find the shortest non-zero vector in $V$, as measured by $N$, in $L$. In other words, the algorithm should output a non-zero vector $v \in L$ such that $\|v\|_N = \lambda(L)$.
In the γ-approximation version SVPγ, one must find a non-zero lattice vector of length at most $\gamma \cdot \lambda(L)$ for given $\gamma \geq 1$.
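For concreteness, here is a toy brute-force SVP search in Python (a minimal sketch; the function name and coefficient bound are ours, and it only finds the true shortest vector when that vector's coefficients with respect to the given basis fall inside the searched box):

```python
from itertools import product

def shortest_vector_bruteforce(basis, coeff_bound=3):
    """Toy SVP search under the Euclidean norm: try every integer
    combination with coefficients in [-coeff_bound, coeff_bound] and
    keep the shortest non-zero vector. Exponential in the dimension."""
    dim = len(basis[0])
    best, best_len = None, float("inf")
    for coeffs in product(range(-coeff_bound, coeff_bound + 1), repeat=len(basis)):
        if not any(coeffs):
            continue  # skip the zero vector, which always lies in the lattice
        v = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(dim)]
        length = sum(x * x for x in v) ** 0.5  # Euclidean (L2) norm
        if length < best_len:
            best, best_len = v, length
    return best, best_len

# The rows [1, 2] and [3, 7] generate the integer lattice Z^2 (determinant 1),
# so the shortest non-zero vector has length 1, e.g. [0, 1].
print(shortest_vector_bruteforce([[1, 2], [3, 7]]))  # -> ([0, 1], 1.0)
```

The algorithms surveyed below replace this exponential box search with far more efficient enumeration, sieving, or basis-reduction techniques.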

Hardness results

The exact version of the problem is only known to be NP-hard under randomized reductions.
By contrast, the corresponding problem with respect to the uniform norm $L^\infty$ is known to be NP-hard.

Algorithms for the Euclidean norm

To solve the exact version of the SVP under the Euclidean norm, several different approaches are known, which can be split into two classes: algorithms requiring superexponential time ($2^{\omega(n)}$) and only polynomial memory, and algorithms requiring both time and space exponential ($2^{\Theta(n)}$) in the lattice dimension $n$. The former class most notably includes lattice enumeration and random sampling reduction, while the latter includes lattice sieving, computing the Voronoi cell of the lattice, and discrete Gaussian sampling. An open problem is whether algorithms for solving exact SVP exist that run in single exponential time ($2^{O(n)}$) and require memory scaling polynomially in the lattice dimension.
To solve the γ-approximation version SVPγ with $\gamma > 1$ for the Euclidean norm, the best known approaches are based on lattice basis reduction. For large $\gamma = 2^{\Omega(n)}$, the Lenstra–Lenstra–Lovász (LLL) algorithm can find a solution in time polynomial in the lattice dimension. For smaller values of $\gamma$, the Block Korkine–Zolotarev (BKZ) algorithm is commonly used, where the input to the algorithm (the block size $\beta$) determines the time complexity and output quality: for large approximation factors $\gamma$, a small block size $\beta$ suffices, and the algorithm terminates quickly. For small $\gamma$, larger $\beta$ are needed to find sufficiently short lattice vectors, and the algorithm takes longer to find a solution. The BKZ algorithm internally uses an exact SVP algorithm as a subroutine (running in lattices of dimension at most $\beta$), and its overall complexity is closely related to the costs of these SVP calls in dimension $\beta$.
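As an illustration of basis reduction, the following is a minimal, unoptimized sketch of LLL with $\delta = 3/4$ in Python, using exact rational arithmetic. The helper names (gram_schmidt, dot, lll) are ours rather than a library API; production implementations such as fpylll maintain floating-point Gram–Schmidt data incrementally instead of recomputing it after every change.

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(basis):
    """Gram-Schmidt orthogonalization: returns the orthogonal vectors b*_i
    and the coefficients mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>."""
    n = len(basis)
    bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in basis[i]]
        for j in range(i):
            mu[i][j] = dot(basis[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [vi - mu[i][j] * bj for vi, bj in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction: size-reduce each vector, then swap whenever
    the Lovasz condition fails. Slow but exact; returns a basis of
    relatively short, nearly orthogonal vectors."""
    basis = [list(b) for b in basis]
    n = len(basis)
    bstar, mu = gram_schmidt(basis)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):  # size-reduce b_k against b_j
            q = round(mu[k][j])
            if q != 0:
                basis[k] = [bk - q * bj for bk, bj in zip(basis[k], basis[j])]
                bstar, mu = gram_schmidt(basis)
        # Lovasz condition: is b*_k long enough relative to b*_{k-1}?
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]
            bstar, mu = gram_schmidt(basis)
            k = max(k - 1, 1)
    return basis

# Usage: the skewed basis of Z^2 from before reduces to signed unit vectors.
print(lll([[1, 2], [3, 7]]))  # -> [[0, 1], [1, 0]]
```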

GapSVP

The problem GapSVPβ consists of distinguishing between the instances of SVP in which the length of the shortest vector is at most $1$ or larger than $\beta$, where $\beta$ can be a fixed function of the dimension $n$ of the lattice. Given a basis for the lattice, the algorithm must decide whether $\lambda(L) \leq 1$ or $\lambda(L) > \beta$. Like other promise problems, the algorithm is allowed to err on all other cases.
Yet another version of the problem is GapSVPζ,γ for some functions $\zeta, \gamma$. The input to the algorithm is a basis $B$ and a number $d$. It is assured that all the vectors in the Gram–Schmidt orthogonalization are of length at least 1, that $\lambda(L(B)) \leq \zeta(n)$, and that $1 \leq d \leq \zeta(n)/\gamma(n)$, where $n$ is the dimension. The algorithm must accept if $\lambda(L(B)) \leq d$, and reject if $\lambda(L(B)) \geq \gamma(n) \cdot d$. For large $\zeta$ (i.e., $\zeta(n) > 2^{n/2}$), the problem is equivalent to GapSVPγ, because preprocessing done using the LLL algorithm makes the second condition (and hence $\zeta$) redundant.

Closest vector problem (CVP)

In CVP, a basis of a vector space $V$ and a metric $M$ are given for a lattice $L$, as well as a vector $v$ in $V$ but not necessarily in $L$. It is desired to find the vector in $L$ closest to $v$, as measured by $M$. In the γ-approximation version CVPγ, one must find a lattice vector at distance at most $\gamma \cdot \operatorname{dist}(v, L)$.
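A classical polynomial-time way to approximate CVP on a reduced basis is Babai's nearest plane algorithm. The sketch below reuses the gram_schmidt and dot helpers (and Fraction arithmetic) from the LLL sketch above; the function name and interface are ours. It processes the Gram–Schmidt directions from last to first, at each level snapping the target to the nearest translate of the sublattice spanned by the remaining vectors.

```python
from fractions import Fraction

def babai_nearest_plane(basis, target):
    """Approximate CVP: returns a lattice vector near `target`.
    Quality depends on how reduced `basis` is (e.g. LLL-reduced)."""
    bstar, _ = gram_schmidt(basis)  # helper from the LLL sketch above
    residue = [Fraction(t) for t in target]
    closest = [0] * len(target)
    for j in range(len(basis) - 1, -1, -1):
        # nearest integer multiple of b_j along the direction of b*_j
        c = round(dot(residue, bstar[j]) / dot(bstar[j], bstar[j]))
        residue = [r - c * bj for r, bj in zip(residue, basis[j])]
        closest = [x + c * bj for x, bj in zip(closest, basis[j])]
    return closest

# Usage: in Z^2 (already-reduced basis), the lattice vector closest to (2.4, 3.7).
print(babai_nearest_plane([[0, 1], [1, 0]], [Fraction(24, 10), Fraction(37, 10)]))  # -> [2, 4]
```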

Relationship with SVP

The closest vector problem is a generalization of the shortest vector problem. It is easy to show that, given an oracle for CVPγ, one can solve SVPγ by making some queries to the oracle. The naive approach of calling the CVPγ oracle with target $0$ does not work, because $0$ is itself a lattice vector and the oracle could simply output $0$.
The reduction from SVPγ to CVPγ is as follows: suppose that the input to the SVPγ problem is the basis $B = [b_1, b_2, \ldots, b_n]$ of a lattice $L$. Consider the basis $B^{(i)} = [b_1, \ldots, 2b_i, \ldots, b_n]$ and let $x_i$ be the vector returned by CVPγ$(B^{(i)}, b_i)$. The claim is that the shortest vector in the set $\{x_i - b_i : i = 1, \ldots, n\}$ is the shortest vector in the given lattice. Note that each $x_i - b_i$ is a non-zero vector of $L$: written in the basis $B$, the coefficient of $b_i$ in $x_i$ is even while that of $b_i$ itself is odd, so the difference cannot vanish.
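This reduction is mechanical enough to write down. The sketch below assumes a hypothetical exact oracle cvp_oracle(basis, target) (not a real library function) and applies the doubling trick just described:

```python
def svp_via_cvp(basis, cvp_oracle):
    """SVP -> CVP reduction: for each i, double b_i to obtain the sublattice
    L(B^(i)), ask the oracle for the vector of L(B^(i)) closest to b_i, and
    keep the shortest of the differences x_i - b_i."""
    candidates = []
    for i, b_i in enumerate(basis):
        doubled = [list(row) for row in basis]
        doubled[i] = [2 * x for x in b_i]     # basis of B^(i)
        x_i = cvp_oracle(doubled, b_i)        # closest vector to b_i in L(B^(i))
        # x_i - b_i lies in L and is non-zero (odd coefficient of b_i)
        candidates.append([a - b for a, b in zip(x_i, b_i)])
    # under the Euclidean norm, the shortest candidate solves SVP
    return min(candidates, key=lambda v: sum(c * c for c in v))
```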

Hardness results

Goldreich et al. showed that any hardness of SVP implies the same hardness for CVP. Using PCP tools, Arora et al. showed that CVP is hard to approximate within factor $2^{\log^{1-\epsilon} n}$ unless $\mathsf{NP} \subseteq \mathsf{DTIME}(2^{\mathrm{poly}(\log n)})$. Dinur et al. strengthened this by giving an NP-hardness result with $\epsilon = (\log\log n)^{-c}$ for $c < 1/2$.

Sphere decoding

Algorithms for CVP, especially the Fincke and Pohst variant, have been used for data detection in multiple-input multiple-output (MIMO) wireless communication systems. In this context it is called sphere decoding, after the sphere radius used internally by many CVP solvers.
It has also been applied to integer ambiguity resolution of carrier-phase GNSS, where it is known as the LAMBDA method.
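A minimal sphere decoder in the Fincke–Pohst style might look as follows. This is a sketch under our own naming (sphere_decode and its parameters are not a library API); it reuses the gram_schmidt, dot, and Fraction helpers from the LLL sketch above and assumes a full-rank basis whose span contains the target. It fixes the integer coefficient of the last basis vector first and prunes every branch whose partial squared distance already exceeds the best distance found so far.

```python
import math

def sphere_decode(basis, target, radius):
    """Exact CVP by depth-first enumeration inside a sphere around `target`.
    Returns the coefficient vector of the closest lattice point within
    `radius`, or None if the sphere contains no lattice point."""
    n = len(basis)
    bstar, mu = gram_schmidt(basis)                            # LLL-sketch helper
    norms2 = [dot(b, b) for b in bstar]                        # ||b*_j||^2
    t = [dot(target, bstar[j]) / norms2[j] for j in range(n)]  # target in b* coordinates
    best = {"dist2": Fraction(radius) ** 2, "coeffs": None}
    coeffs = [0] * n

    def search(j, dist2):
        if j < 0:                        # all coefficients fixed: new best point
            best["dist2"], best["coeffs"] = dist2, list(coeffs)
            return
        # level-j contribution to the squared distance: ||b*_j||^2 (center - c_j)^2
        center = t[j] - sum(coeffs[i] * mu[i][j] for i in range(j + 1, n))
        bound = math.sqrt(float((best["dist2"] - dist2) / norms2[j]))
        lo, hi = math.floor(float(center) - bound), math.ceil(float(center) + bound)
        for c in range(lo, hi + 1):
            step = norms2[j] * (center - c) ** 2
            if dist2 + step <= best["dist2"]:  # exact check guards float round-off
                coeffs[j] = c
                search(j - 1, dist2 + step)

    search(n - 1, Fraction(0))
    return best["coeffs"]

# Usage: in Z^2 the point closest to (2.4, 3.7) within radius 1 is (2, 4);
# with the identity basis the coefficients equal the point's coordinates.
print(sphere_decode([[1, 0], [0, 1]], [Fraction(24, 10), Fraction(37, 10)], 1))  # -> [2, 4]
```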

GapCVP

This problem is similar to the GapSVP problem. For GapCVPβ, the input consists of a lattice basis and a vector $v$, and the algorithm must answer which of the following holds (the promise is that one of them does): either there is a lattice vector at distance at most $1$ from $v$, or every lattice vector is at distance greater than $\beta$ from $v$.
The multiplicative gap of $\beta$ between these two distances gives the problem its name, GapCVP.

Known results

The problem is trivially contained in NP for any approximation factor.
Schnorr, in 1987, showed that deterministic polynomial-time algorithms can solve the problem for $\gamma = 2^{O(n (\log \log n)^2 / \log n)}$. Ajtai et al. showed that probabilistic algorithms can achieve a slightly better approximation factor of $\gamma = 2^{O(n \log \log n / \log n)}$.
In 1993, Banaszczyk showed that GapCVPn is in $\mathsf{NP} \cap \mathsf{coNP}$. In 2000, Goldreich and Goldwasser showed that $\beta = \sqrt{n / \log n}$ puts the problem in both NP and coAM. In 2005, Aharonov and Regev showed that for some constant $c$, the problem with $\beta = c\sqrt{n}$ is in $\mathsf{NP} \cap \mathsf{coNP}$.
For lower bounds, Dinur et al. showed in 1998 that the problem is NP-hard for $\beta = n^{c / \log \log n}$ for some constant $c > 0$.

Shortest independent vectors problem (SIVP)

Given a lattice $L$ of dimension $n$, the algorithm must output $n$ linearly independent vectors $v_1, \ldots, v_n$ so that $\max_i \|v_i\| \leq \max_i \|b_i\|$, where the right-hand side considers all bases $b_1, \ldots, b_n$ of the lattice.
In the γ-approximate version, given a lattice $L$ with dimension $n$, one must find $n$ linearly independent vectors $v_1, \ldots, v_n$ of length $\max_i \|v_i\| \leq \gamma \cdot \lambda_n(L)$, where $\lambda_n(L)$ is the $n$-th successive minimum of $L$, that is, the smallest $r$ such that $L$ contains $n$ linearly independent vectors of length at most $r$.

Bounded distance decoding

This problem is similar to CVP. Given a vector whose distance from the lattice is at most $\lambda(L)/2$, the algorithm must output the closest lattice vector to it; within this distance the closest lattice vector is unique.

Covering radius problem

Given a basis for the lattice, the algorithm must find the largest distance from any vector in the span of the lattice to the lattice; this distance is the covering radius.

Shortest basis problem

Many problems become easier if the input basis consists of short vectors. An algorithm that solves the shortest basis problem (SBP) must, given a lattice basis $B$, output an equivalent basis $B'$ such that the length of the longest vector in $B'$ is as short as possible.
The approximation version SBPγ consists of finding a basis whose longest vector is at most $\gamma$ times longer than the longest vector in a shortest basis.

Use in cryptography

Average-case hardness of problems forms a basis for proofs of security for most cryptographic schemes. However, experimental evidence suggests that most NP-hard problems lack this property: they are probably only worst-case hard. Many lattice problems have been conjectured or proven to be average-case hard, making them an attractive class of problems on which to base cryptographic schemes. Moreover, worst-case hardness of some lattice problems has been used to create secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers.
The above lattice problems are easy to solve if the algorithm is provided with a "good" basis. Lattice reduction algorithms aim, given a basis for a lattice, to output a new basis consisting of relatively short, nearly orthogonal vectors. The Lenstra–Lenstra–Lovász lattice basis reduction algorithm was an early efficient algorithm for this problem which could output an almost reduced lattice basis in polynomial time. This algorithm and its further refinements were used to break several cryptographic schemes, establishing its status as a very important tool in cryptanalysis. The success of LLL on experimental data led to a belief that lattice reduction might be an easy problem in practice. However, this belief was challenged in the late 1990s, when several new results on the hardness of lattice problems were obtained, starting with the result of Ajtai.
In his seminal papers, Ajtai showed that the SVP problem was NP-hard (under randomized reductions) and discovered some connections between the worst-case complexity and average-case complexity of some lattice problems. Building on these results, Ajtai and Dwork created a public-key cryptosystem whose security could be proven using only the worst-case hardness of a certain version of SVP, thus making it the first result to have used worst-case hardness to create secure cryptographic systems.