In discrete mathematics, ideal lattices are a special class of lattices and a generalization of cyclic lattices.[1] Ideal lattices naturally occur in many parts of number theory, but also in other areas. In particular, they have a significant place in cryptography. Micciancio defined ideal lattices as a generalization of cyclic lattices. They can be used in cryptosystems to decrease the number of parameters necessary to describe a lattice by a square root, making them more efficient. Ideal lattices are a relatively new concept, but similar lattice classes have been used for a long time. For example, cyclic lattices, a special case of ideal lattices, are used in NTRUEncrypt and NTRUSign.
Ideal lattices also form the basis for cryptography resistant to quantum computer attacks, based on the Ring Learning with Errors problem.[2] These cryptosystems are provably secure under the assumption that the shortest vector problem (SVP) is hard in these ideal lattices.
In general terms, ideal lattices are lattices corresponding to ideals in rings of the form Z[x]/⟨f⟩ for some irreducible polynomial f of degree n. All of the definitions of ideal lattices from prior work are instances of the following general notion: let R be a ring whose additive group is isomorphic to Z^n (i.e., it is a free Z-module of rank n), and let σ be an additive isomorphism mapping R to some lattice σ(R) in an n-dimensional real vector space (e.g., R^n). The family of ideal lattices for the ring R under the embedding σ is the set of all lattices σ(I), where I is an ideal in R.
Let f ∈ Z[x] be a monic polynomial of degree n, and consider the quotient ring Z[x]/⟨f⟩. Using the standard set of representatives {(g mod f) : g ∈ Z[x]}, and identifying polynomials with their coefficient vectors, the quotient ring Z[x]/⟨f⟩ is isomorphic, as an additive group, to the integer lattice Z^n, and any ideal I ⊆ Z[x]/⟨f⟩ defines a corresponding integer sublattice L(I) ⊆ Z^n.

An ideal lattice is an integer lattice L(B) ⊆ Z^n such that B = {g mod f : g ∈ I} for some monic polynomial f of degree n and ideal I ⊆ Z[x]/⟨f⟩.
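As a concrete illustration, for a principal ideal I = (g) the sublattice L(I) is generated by the coefficient vectors of g, x·g, …, x^(n−1)·g reduced modulo f. A minimal Python sketch (helper names are ours, chosen for illustration):

```python
def poly_mod(g, f):
    """Reduce g modulo a monic polynomial f.

    Polynomials are coefficient lists, lowest degree first; f has
    length n+1 with leading coefficient f[-1] == 1.
    """
    g = list(g)
    n = len(f) - 1
    while len(g) > n:
        c = g[-1]
        shift = len(g) - 1 - n           # subtract c * x^shift * f
        for i in range(n + 1):
            g[shift + i] -= c * f[i]
        g.pop()                          # leading coefficient is now zero
    return g + [0] * (n - len(g))

def ideal_basis(g, f):
    """Rows are the coefficient vectors of g, x*g, ..., x^(n-1)*g mod f:
    a generating set of the sublattice L((g)) of Z^n."""
    n = len(f) - 1
    rows, cur = [], poly_mod(g, f)
    for _ in range(n):
        rows.append(cur)
        cur = poly_mod([0] + cur, f)     # multiply by x, then reduce mod f
    return rows

# For the cyclic case f = x^3 - 1 the rows are cyclic rotations of g
# (a cyclic lattice); for f = x^2 + 1 the wrapped coefficient flips sign.
print(ideal_basis([1, 2, 3], [-1, 0, 0, 1]))   # [[1,2,3],[3,1,2],[2,3,1]]
print(ideal_basis([1, 1], [1, 0, 1]))          # [[1,1],[-1,1]]
```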
It turns out that the relevant properties of f for the resulting hash function to be collision resistant are that f should be irreducible, and that the ring norm ‖g‖_f is not much bigger than ‖g‖∞ for any polynomial g, in a quantitative sense. The first property implies that every ideal of the ring Z[x]/⟨f⟩ defines a full-rank lattice in Z^n, and it plays a fundamental role in the proofs.

Lemma: Every ideal I of Z[x]/⟨f⟩, where f is a monic, irreducible integer polynomial of degree n, is isomorphic to a full-rank lattice in Z^n.
Ding and Lindner[4] gave evidence that distinguishing ideal lattices from general ones can be done in polynomial time and showed that in practice randomly chosen lattices are never ideal. They only considered the case where the lattice has full rank, i.e. the basis consists of n linearly independent vectors.
Algorithm: Identifying ideal lattices with full rank bases
Data: A full-rank basis B ∈ Z^{n×n}
Result: true and q, if B spans an ideal lattice with respect to q; otherwise false.
1. Transform B into Hermite Normal Form (HNF).
2. Calculate A = adj(B), d = det(B) and z = B_{n,n}.
3. Calculate the product P = AMB mod d.
4. If only the last column of P is non-zero, set c = P_{(·,n)} equal to this column; otherwise return false.
5. If z | c_i for i = 1, …, n, use the Chinese Remainder Theorem to find q* ≡ (c/z) mod (d/z) and q* ≡ 0 mod z; otherwise return false.
6. If Bq* ≡ 0 mod (d/z), return true and q = Bq*/d; otherwise return false.

where the matrix M is

M = \begin{pmatrix} 0 & 0 \\ I_{n-1} & 0 \end{pmatrix},

a block matrix with the (n−1)×(n−1) identity I_{n−1} in its lower-left corner and zeros elsewhere, so that multiplication by M maps (x_1, …, x_n) to (0, x_1, …, x_{n−1}).
Using this algorithm, it can be seen that many lattices are not ideal lattices. For example, let n = 2 and k ∈ Z ∖ {0, ±1}. Then

B_1 = \begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}

spans an ideal lattice, but

B_2 = \begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}

does not. B_2 with k = 2 is an example given by Lyubashevsky and Micciancio.
Performing the algorithm on it and referring to the basis as B: matrix B is already in Hermite Normal Form, so the first step is not needed. The determinant is d = 2, the adjugate matrix is

A = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix},

the matrix

M = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},

and finally, the product P = AMB mod d is

P = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.

At this point the algorithm stops, because all but the last column of P would have to be zero if B spanned an ideal lattice.
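The structural test of steps 2–4 can be sketched in a few lines of Python; the helper names and the restriction to small n are ours, and the CRT completion of steps 5–6 is omitted:

```python
def minor(B, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(B) if k != i]

def det(B):
    """Determinant by cofactor expansion (fine for small n, exact integers)."""
    if len(B) == 1:
        return B[0][0]
    return sum((-1) ** j * B[0][j] * det(minor(B, 0, j)) for j in range(len(B)))

def adj(B):
    """Adjugate: transpose of the cofactor matrix, so adj(B) * B = det(B) * I."""
    n = len(B)
    return [[(-1) ** (i + j) * det(minor(B, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def shift_matrix(n):
    """M: zero first row, identity I_{n-1} in the lower-left block."""
    M = [[0] * n for _ in range(n)]
    for i in range(1, n):
        M[i][i - 1] = 1
    return M

def ideal_precheck(B):
    """Steps 2-4: B (in HNF) can span an ideal lattice only if every
    column of P = adj(B) M B mod det(B) except the last is zero."""
    n, d = len(B), det(B)
    P = [[x % d for x in row]
         for row in matmul(matmul(adj(B), shift_matrix(n)), B)]
    ok = all(P[i][j] == 0 for i in range(n) for j in range(n - 1))
    return ok, P

print(ideal_precheck([[2, 0], [0, 1]]))   # passes: (True, [[0,0],[0,0]])
print(ideal_precheck([[1, 0], [0, 2]]))   # fails:  (False, [[0,0],[1,0]])
```

The second call reproduces the worked example above: the non-zero entry outside the last column of P rules out B_2.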
Micciancio[6] introduced the class of structured cyclic lattices, which correspond to ideals in polynomial rings Z[x]/(x^n − 1), and presented the first provably secure one-way function based on the worst-case hardness of approximating the shortest vector in such lattices to within a factor Õ(n). Subsequently, Lyubashevsky and Micciancio showed how to modify this function into an efficient and provably secure collision resistant hash function; for this they introduced the more general class of ideal lattices, which correspond to ideals in polynomial rings Z[x]/⟨f(x)⟩.
The fundamental idea of using LWE and Ring LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding, who provided a state-of-the-art description of a quantum-resistant key exchange using Ring LWE. The paper[9] appeared in 2012, after the provisional patent application was filed. In 2014, Peikert presented a key transport scheme following the same basic idea as Ding's, also making use of Ding's technique of sending an additional signal for rounding. A digital signature using the same concepts had been constructed several years earlier by Vadim Lyubashevsky in "Lattice Signatures Without Trapdoors."[10] Together, the works of Peikert and Lyubashevsky provide a suite of Ring-LWE based quantum attack resistant algorithms with the same security reductions.
The main usefulness of ideal lattices in cryptography stems from the fact that very efficient and practical collision resistant hash functions can be built based on the hardness of finding an approximate shortest vector in such lattices.[1] Collision resistant hash functions based on ideal lattices (a generalization of cyclic lattices) were constructed independently by Peikert and Rosen,[7] and by Lyubashevsky and Micciancio, who also provided a fast and practical implementation.[3] These results paved the way for other efficient cryptographic constructions, including identification schemes and signatures.
Lyubashevsky and Micciancio defined their hash function family as follows. Let R = Z_p[x]/⟨f⟩, where f ∈ Z_p[x] is a monic, irreducible polynomial of degree n and p is an integer of order roughly n². Generate m random elements a_1, …, a_m ∈ R, where m is a constant; the ordered m-tuple h = (a_1, …, a_m) ∈ R^m determines the hash function. It maps elements of D^m, where D is a strategically chosen subset of R, to R. For an element b = (b_1, …, b_m) ∈ D^m, the hash value is

h(b) = Σ_{i=1}^{m} a_i · b_i.

Here the size of the key (the hash function) is O(mn log p) = O(n log n), and each product a_i · b_i can be computed in time O(n log n log log n) using the Fast Fourier Transform, for an appropriate choice of the polynomial f. Since m is constant, hashing requires time O(n log n log log n). The family is collision resistant: finding b ≠ b′ ∈ D^m with h(b) = h(b′) for a randomly chosen h ∈ R^m is provably as hard as solving worst-case lattice problems in ideal lattices of Z[x]/⟨f⟩.
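A toy instantiation of this family, with tiny and insecure parameters of our own choosing and f = x^n + 1 (so reduction is a simple sign flip), looks as follows:

```python
N, P = 4, 17            # toy ring R = Z_17[x]/<x^4 + 1>; not secure sizes

def ring_mul(a, b):
    """Multiply two elements of R (length-N coefficient lists), using x^N = -1."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            s = 1 if i + j < N else -1
            c[(i + j) % N] = (c[(i + j) % N] + s * a[i] * b[j]) % P
    return c

def ring_add(a, b):
    return [(x + y) % P for x, y in zip(a, b)]

def hash_value(key, b):
    """h(b) = sum_i a_i * b_i in R, for key (a_1, ..., a_m) and input b in D^m."""
    out = [0] * N
    for a_i, b_i in zip(key, b):
        out = ring_add(out, ring_mul(a_i, b_i))
    return out

key = [[3, 1, 4, 1], [5, 9, 2, 6]]     # m = 2 "random" ring elements
b = [[1, 0, 1, 1], [0, 1, 1, 0]]       # input with 0/1 coefficients (D^m)
print(hash_value(key, b))
```

Note that h is R-linear in its input, so h(b + b′) = h(b) + h(b′); collision resistance rests on the hardness assumption above, not on any nonlinearity.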
Based on the work of Lyubashevsky and Micciancio in 2006, Micciancio and Regev[11] defined the following family of hash functions based on ideal lattices:
Parameters: Integers q, n, m, d with n | m, and a vector f ∈ Z^n.
Key: m/n vectors a_1, …, a_{m/n} chosen independently and uniformly at random from Z_q^n.
Hash function: f_A : {0, …, d−1}^m → Z_q^n, defined as f_A(y) = [F*a_1 | … | F*a_{m/n}] y mod q.

Here n, m, q, d are the parameters of the construction, the vector f ∈ Z^n determines the algebraic structure, and A denotes the block matrix [F*a_1 | … | F*a_{m/n}], whose blocks A^(i) = F*a^(i) are structured matrices determined by the key vectors a^(i) and by f.
Finding short vectors in Λ_q^⊥([F*a_1 | … | F*a_{m/n}]) on the average (even with just inverse polynomial probability) is as hard as solving various lattice problems (such as approximate SVP and SIVP) in the worst case over ideal lattices, provided the vector f satisfies the following two properties:
- For any two unit vectors u, v, the vector [F*u]v has small norm (at most O(√n), say).
- The polynomial f(x) = x^n + f_n x^{n−1} + ⋯ + f_1 ∈ Z[x] is irreducible over the integers, i.e., it does not factor into the product of integer polynomials of smaller degree.
The first property is satisfied by the vector f = (−1, 0, …, 0), corresponding to circulant matrices, because all the coordinates of [F*u]v are bounded by 1, and hence ‖[F*u]v‖ ≤ √n. However, the polynomial x^n − 1 corresponding to f = (−1, 0, …, 0) is not irreducible, because it factors into (x − 1)(x^{n−1} + x^{n−2} + ⋯ + x + 1), and this is why collisions can be found efficiently. So f = (−1, 0, …, 0) is not a good choice for collision resistant hash functions, but many other choices are possible. For example, some choices of f for which both properties are satisfied (and which therefore yield collision resistant hash functions with worst-case security guarantees) are:
- f = (1, …, 1) ∈ Z^n, where n + 1 is prime, and
- f = (1, 0, …, 0) ∈ Z^n, with n a power of 2.
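For f = (1, 0, …, 0), i.e. f(x) = x^n + 1, a natural concrete reading of the block [F*a] is the "negacyclic" rotation matrix whose j-th column is x^j·a(x) mod x^n + 1; multiplying it by a vector y is then the same as multiplying the polynomials a and y modulo x^n + 1. A small sketch of this correspondence (helper names are ours):

```python
def neg_rot(a):
    """[F*a] for f = (1,0,...,0): column j holds x^j * a(x) mod x^n + 1."""
    n = len(a)
    cols, cur = [], list(a)
    for _ in range(n):
        cols.append(cur)
        cur = [-cur[-1]] + cur[:-1]          # multiply by x: wrap with sign flip
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def matvec(M, y):
    return [sum(M[i][j] * y[j] for j in range(len(y))) for i in range(len(M))]

def negacyclic_mul(a, y):
    """Coefficient vector of a(x) * y(x) mod x^n + 1."""
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            if i + j < n:
                c[i + j] += a[i] * y[j]
            else:
                c[i + j - n] -= a[i] * y[j]
    return c

a, y = [1, 2, 3, 4], [5, 6, 7, 8]
print(matvec(neg_rot(a), y) == negacyclic_mul(a, y))   # True
```

In the hash f_A, the same products would additionally be reduced modulo q.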
Digital signature schemes are among the most important cryptographic primitives. They can be obtained by using the one-way functions based on the worst-case hardness of lattice problems, but such generic constructions are impractical. A number of new digital signature schemes based on learning with errors, ring learning with errors and trapdoor lattices have been developed since the learning with errors problem was applied in a cryptographic context.
Lyubashevsky and Micciancio[8] gave a direct construction of digital signatures based on the complexity of approximating the shortest vector in ideal (e.g., cyclic) lattices. Their scheme has worst-case security guarantees based on ideal lattices and is the most asymptotically efficient construction known to date, yielding signature generation and verification algorithms that run in almost linear time.[11]
One of the main open problems raised by their work is constructing a one-time signature with similar efficiency but based on a weaker hardness assumption. For instance, it would be desirable to base the security of a one-time signature on the hardness of approximating the Shortest Vector Problem (SVP) in ideal lattices to within a factor of Õ(n). Their construction works in the ring Z[x]/⟨f⟩, where f is a monic, irreducible polynomial.
Key-Generation Algorithm:
Input: 1^n, an irreducible polynomial f ∈ Z[x] of degree n.
1. Set p ← (φn)³, m ← ⌈log n⌉, R ← Z_p[x]/⟨f⟩.
2. For all positive i, define the sets DK_i = {ŷ ∈ R^m : ‖ŷ‖∞ ≤ 5ip^{1/m}} and DL_i = {ŷ ∈ R^m : ‖ŷ‖∞ ≤ 5inφp^{1/m}}.
3. Choose a uniformly random h ∈ H_{R,m}.
4. Pick a uniformly random string r ∈ {0,1}^{⌊log₂ n⌋}.
5. If r = 0^{⌊log₂ n⌋}, set j ← ⌊log₂ n⌋; otherwise, set j to the position of the first 1 in the string r.
6. Pick k̂ and l̂ independently and uniformly at random from DK_j and DL_j respectively.
7. Signing key: (k̂, l̂). Verification key: (h, h(k̂), h(l̂)).
Signing Algorithm:
Input: Message z ∈ R such that ‖z‖∞ ≤ 1; signing key (k̂, l̂).
Output: Signature ŝ ← k̂z + l̂.

Verification Algorithm:
Input: Message z; signature ŝ; verification key (h, h(k̂), h(l̂)).
Output: “ACCEPT”, if ‖ŝ‖∞ ≤ 10φp^{1/m}n log₂ n and h(ŝ) = h(k̂)z + h(l̂); “REJECT”, otherwise.
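The reason verification can work from the hashed keys alone is that h is an R-module homomorphism, so h(k̂z + l̂) = h(k̂)z + h(l̂). A toy check of this identity, with throwaway parameters of our own choosing (not the scheme's actual sizes):

```python
N, P = 4, 97                                   # toy ring R = Z_97[x]/<x^4 + 1>

def ring_mul(a, b):
    c = [0] * N
    for i in range(N):
        for j in range(N):
            s = 1 if i + j < N else -1         # reduce using x^N = -1
            c[(i + j) % N] = (c[(i + j) % N] + s * a[i] * b[j]) % P
    return c

def ring_add(a, b):
    return [(x + y) % P for x, y in zip(a, b)]

def h(key, v):
    """Module hash h(v) = sum_i a_i * v_i over R^m."""
    out = [0] * N
    for a_i, v_i in zip(key, v):
        out = ring_add(out, ring_mul(a_i, v_i))
    return out

key  = [[3, 1, 4, 1], [5, 9, 2, 6]]            # hash key (a_1, a_2)
khat = [[1, 0, 1, 0], [0, 1, 0, 1]]            # signing key part k-hat
lhat = [[2, 1, 0, 3], [1, 1, 2, 0]]            # signing key part l-hat
z    = [1, 1, 0, 0]                            # message with small coefficients

sig = [ring_add(ring_mul(k_i, z), l_i)         # s = k*z + l, componentwise in R^m
       for k_i, l_i in zip(khat, lhat)]

# The verifier recomputes the right-hand side from public hashes only:
assert h(key, sig) == ring_add(ring_mul(h(key, khat), z), h(key, lhat))
print("signature verifies")
```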
The hash function is quite efficient and can be computed asymptotically in Õ(m) time using the Fast Fourier Transform (FFT) over the complex numbers; in practice, however, this carries a substantial overhead. The SWIFFT family of hash functions defined by Micciancio and Regev is essentially a highly optimized variant of the hash function above, using the FFT in Z_q. The vector f is set to (1, 0, …, 0) ∈ Z^n for n a power of 2, so that the corresponding polynomial x^n + 1 is irreducible. Then q is chosen to be a prime such that 2n divides q − 1, and W ∈ Z_q^{n×n} is an invertible matrix over Z_q to be chosen later. The SWIFFT hash function maps a key ã^(1), …, ã^(m/n), consisting of m/n vectors chosen uniformly from Z_q^n, and an input y ∈ {0, …, d−1}^m to W·f_A(y) mod q, where A = [F*a^(1) | … | F*a^(m/n)] and a^(i) = W^{−1}ã^(i) mod q. Multiplication by the invertible matrix W^{−1} maps a uniformly chosen ã ∈ Z_q^n to a uniformly chosen a ∈ Z_q^n. Moreover, W·f_A(y) = W·f_A(y′) (mod q) if and only if f_A(y) = f_A(y′) (mod q). Together, these two facts establish that finding collisions in SWIFFT is equivalent to finding collisions in the underlying ideal-lattice function f_A, so the claimed collision resistance of SWIFFT is supported by the connection to worst-case lattice problems on ideal lattices.
The algorithm of the SWIFFT hash function is:
Parameters: Integers n, m, q, d such that n is a power of 2, q is prime, 2n | (q − 1) and n | m.
Key: m/n vectors ã_1, …, ã_{m/n} chosen independently and uniformly at random from Z_q^n.
Input: m/n vectors y^(1), …, y^(m/n) ∈ {0, …, d−1}^n.
Output: the vector Σ_{i=1}^{m/n} ã^(i) ⊙ (W y^(i)) ∈ Z_q^n, where ⊙ is the component-wise vector product.
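A miniature version with toy parameters of our own choosing (n = 4, q = 17, where ψ = 2 is a primitive 2n-th root of unity mod q) shows why the componentwise form works: the rows of this W evaluate a polynomial at the roots of x^n + 1, so componentwise products in the transform domain correspond to multiplication modulo x^n + 1:

```python
N, Q, PSI = 4, 17, 2          # 2N | Q-1, and PSI has order 2N = 8 modulo 17

# W evaluates a polynomial at odd powers PSI^(2i+1) -- the roots of x^N + 1.
W = [[pow(PSI, (2 * i + 1) * j, Q) for j in range(N)] for i in range(N)]

def transform(v):
    return [sum(W[i][j] * v[j] for j in range(N)) % Q for i in range(N)]

def negacyclic_mul(a, y):
    """a(x) * y(x) mod x^N + 1, coefficients mod Q."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            if i + j < N:
                c[i + j] = (c[i + j] + a[i] * y[j]) % Q
            else:
                c[i + j - N] = (c[i + j - N] - a[i] * y[j]) % Q
    return c

def swifft(key_tilde, ys):
    """Output = sum_i key_tilde[i] (.) (W y^(i)) in Z_Q^N, componentwise."""
    out = [0] * N
    for a_t, y in zip(key_tilde, ys):
        Wy = transform(y)
        out = [(o + at * wy) % Q for o, at, wy in zip(out, a_t, Wy)]
    return out

# Convolution property: transforming a ring product gives componentwise products.
a, y = [1, 2, 3, 4], [5, 6, 0, 1]
lhs = transform(negacyclic_mul(a, y))
rhs = [(x * z) % Q for x, z in zip(transform(a), transform(y))]
print(lhs == rhs)   # True
```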
The learning with errors (LWE) problem has been shown to be as hard as worst-case lattice problems and has served as the foundation for many cryptographic applications. However, these applications are inefficient because of an inherent quadratic overhead in the use of LWE. To get truly efficient LWE applications, Lyubashevsky, Peikert and Regev[3] defined an appropriate version of the LWE problem in a wide class of rings and proved its hardness under worst-case assumptions on ideal lattices in these rings. They called their LWE version ring-LWE.
Let f(x) = x^n + 1 ∈ Z[x], where n is a power of 2, which makes f(x) irreducible over the rationals. Let R = Z[x]/⟨f(x)⟩ be the ring of integer polynomials modulo f(x). Elements of R (i.e., residues mod f(x)) are typically represented by integer polynomials of degree less than n. Let q ≡ 1 mod 2n be a sufficiently large public prime modulus (bounded by a polynomial in n), and let R_q = R/⟨q⟩ = Z_q[x]/⟨f(x)⟩ be the ring of integer polynomials modulo both f(x) and q. Elements of R_q may be represented by polynomials of degree less than n whose coefficients come from {0, …, q − 1}.
In the above-described ring, the R-LWE problem may be described as follows. Let s = s(x) ∈ R_q be a uniformly random ring element, which is kept secret. Analogously to standard LWE, the goal of the attacker is to distinguish arbitrarily many (independent) "random noisy ring equations" from truly uniform pairs. More specifically, the noisy equations are of the form (a, b ≈ a·s) ∈ R_q × R_q, where a is uniformly random and the product a·s is perturbed by some "small" random error term, chosen from a certain distribution over R.
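Sample generation can be sketched as follows, with toy parameters of our own choosing (real parameters are much larger, and real error distributions are Gaussian-like rather than uniform):

```python
import random

N, Q = 8, 257                 # Q = 257 is prime and Q ≡ 1 (mod 2N)
rng = random.Random(0)

def ring_mul(a, b):
    """a(x) * b(x) mod x^N + 1, coefficients mod Q."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            if i + j < N:
                c[i + j] = (c[i + j] + a[i] * b[j]) % Q
            else:
                c[i + j - N] = (c[i + j - N] - a[i] * b[j]) % Q
    return c

def rlwe_sample(s):
    """One noisy equation (a, b ≈ a*s): a uniform, b = a*s + e with small e."""
    a = [rng.randrange(Q) for _ in range(N)]
    e = [rng.randrange(-2, 3) for _ in range(N)]   # toy error in {-2,...,2}
    b = [(x + y) % Q for x, y in zip(ring_mul(a, s), e)]
    return a, b

def centered(x):
    return (x + Q // 2) % Q - Q // 2               # representative near zero

s = [rng.randrange(Q) for _ in range(N)]           # secret ring element
a, b = rlwe_sample(s)
err = [centered((x - y) % Q) for x, y in zip(b, ring_mul(a, s))]
print(all(abs(x) <= 2 for x in err))   # True: knowing s exposes the small error
```

Without s, each pair (a, b) is designed to look indistinguishable from uniform, which is exactly the decisional R-LWE assumption.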
They gave a quantum reduction from approximate SVP (in the worst case) on ideal lattices in R to the search version of ring-LWE, where the goal is to recover the secret s ∈ R_q (with high probability, for any s) from arbitrarily many noisy products.
Theorem. Let K be an arbitrary number field of degree n. Let α = α(n) ∈ (0, 1) and let q = q(n) ≥ 2 be such that α·q ≥ ω(√(log n)). For some negligible ε, there is a probabilistic polynomial-time quantum reduction from K-DGS_γ (discrete Gaussian sampling over K) to O_K-LWE with modulus q and a suitable error distribution, where γ = η_ε(I)·ω(√(log n))/α.
In 2013, Guneysu, Lyubashevsky, and Poppleman proposed a digital signature scheme based on the Ring Learning with Errors problem.[13] In 2014, Peikert presented a Ring Learning with Errors Key Exchange (RLWE-KEX) in his paper, "Lattice Cryptography for the Internet."[14] This was further developed by the work of Singh.[15]
Stehle, Steinfeld, Tanaka and Xagawa[16] defined a structured variant of the LWE problem (Ideal-LWE) to describe an efficient public key encryption scheme based on the worst-case hardness of the approximate SVP in ideal lattices. This is the first CPA-secure public key encryption scheme whose security relies on the hardness of the worst-case instances of Õ(n²)-Ideal-SVP against subexponential quantum attacks. It achieves asymptotically optimal efficiency: the public/private key length is Õ(n) bits and the amortized encryption/decryption cost is Õ(1) bit operations per message bit (encrypting Ω̃(n) bits at once at an Õ(n) cost). The security assumption here is that Õ(n²)-Ideal-SVP cannot be solved by any subexponential-time quantum algorithm.
Most of the cryptosystems based on general lattices rely on the average-case hardness of the learning with errors (LWE) problem. Their scheme is based on a structured variant of LWE that they call Ideal-LWE. They needed to introduce some techniques to circumvent two main difficulties arising from the restriction to ideal lattices. First, the previous cryptosystems based on unstructured lattices all make use of Regev's worst-case to average-case classical reduction from the Bounded Distance Decoding problem (BDD) to LWE (this is the classical step in the quantum reduction from SVP to LWE). This reduction exploits the unstructuredness of the considered lattices and does not seem to carry over to the structured lattices involved in Ideal-LWE. In particular, the probabilistic independence of the rows of the LWE matrices makes it possible to consider a single row. Second, the other ingredient used in previous cryptosystems, namely Regev's reduction from the computational variant of LWE to its decisional variant, also seems to fail for Ideal-LWE: it relies on the probabilistic independence of the columns of the LWE matrices.
To overcome these difficulties, they avoided the classical step of the reduction. Instead, they used the quantum step to construct a new quantum average-case reduction from SIS (the average-case collision-finding problem) to LWE; it also works from Ideal-SIS to Ideal-LWE. Combined with the reduction from worst-case Ideal-SVP to average-case Ideal-SIS, they obtained a quantum reduction from Ideal-SVP to Ideal-LWE. This shows the hardness of the computational variant of Ideal-LWE. Because they did not obtain the hardness of the decisional variant, they used a generic hardcore function to derive pseudorandom bits for encryption. This is why they needed to assume the exponential hardness of SVP.
A fully homomorphic encryption (FHE) scheme is one which allows for computation over encrypted data, without first needing to decrypt. The problem of constructing a fully homomorphic encryption scheme was first put forward by Rivest, Adleman and Dertouzos[17] in 1978, shortly after the invention of RSA by Rivest, Adleman and Shamir.[18]
An encryption scheme ε = (KeyGen, Encrypt, Decrypt, Eval) is homomorphic for a class C of circuits if, for any C ∈ C, given PK, SK ← KeyGen(1^λ), y = Encrypt(PK, x) and y′ = Eval(PK, C, y), it holds that Decrypt(SK, y′) = C(x). The scheme ε is fully homomorphic if it is homomorphic for all circuits of size poly(λ), where λ is the security parameter.
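Gentry's construction itself is far beyond a few lines, but the interface and the correctness condition Decrypt(SK, Eval(PK, C, y)) = C(x) can be illustrated with a toy somewhat-homomorphic scheme over the integers in the spirit of van Dijk et al. — completely insecure parameters, all names ours, and not Gentry's ideal-lattice scheme:

```python
import random

rng = random.Random(1)

def keygen():
    return 10007                      # SK: a secret odd integer p (toy size)

def encrypt(p, bit):
    # ciphertext = p*r + 2*e + bit; (c mod p) mod 2 recovers the bit
    # as long as the noise 2*e + bit stays below p.
    return p * rng.randrange(1, 50) + 2 * rng.randrange(0, 5) + bit

def decrypt(p, c):
    return (c % p) % 2

def eval_xor(c1, c2):
    return c1 + c2                    # addition of ciphertexts adds the noise

def eval_and(c1, c2):
    return c1 * c2                    # multiplication multiplies the noise

p = keygen()
for x1 in (0, 1):
    for x2 in (0, 1):
        c1, c2 = encrypt(p, x1), encrypt(p, x2)
        assert decrypt(p, eval_xor(c1, c2)) == x1 ^ x2
        assert decrypt(p, eval_and(c1, c2)) == x1 & x2
print("homomorphic XOR/AND correct")
```

In real schemes the noise growth under multiplication is the central obstacle; Gentry's bootstrapping technique is what turns such a "somewhat homomorphic" scheme into a fully homomorphic one.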
In 2009, Gentry[19] proposed the first solution to the problem of constructing a fully homomorphic encryption scheme. His scheme was based on ideal lattices.