
/**************************************************************************\

MODULE: LLL

SUMMARY:

Routines are provided for lattice basis reduction, including both
exact-arithmetic variants (slow but sure) and floating-point variants
(fast but only approximate).

For an introduction to the basics of LLL reduction, see
[H. Cohen, A Course in Computational Algebraic Number Theory, Springer, 1993].

The LLL algorithm was introduced in [A. K. Lenstra, H. W. Lenstra, and
L. Lovasz, Math. Ann. 261 (1982), 515-534].

\**************************************************************************/




#include <NTL/mat_ZZ.h>



/**************************************************************************\

                         Exact Arithmetic Variants

\**************************************************************************/




long LLL(ZZ& det2, mat_ZZ& B, long verbose = 0);
long LLL(ZZ& det2, mat_ZZ& B, mat_ZZ& U, long verbose = 0);

long LLL(ZZ& det2, mat_ZZ& B, long a, long b, long verbose = 0);
long LLL(ZZ& det2, mat_ZZ& B, mat_ZZ& U, long a, long b, long verbose = 0);


// performs LLL reduction.

// B is an m x n matrix, viewed as m rows of n-vectors.  m may be less
// than, equal to, or greater than n, and the rows need not be
// linearly independent.  B is transformed into an LLL-reduced basis,
// and the return value is the rank r of B.  The first m-r rows of B
// are zero.  

// More specifically, elementary row transformations are performed on
// B so that the non-zero rows of new-B form an LLL-reduced basis
// for the lattice spanned by the rows of old-B.
// The default reduction parameter is delta=3/4, which means
// that the squared length of the first non-zero basis vector
// is no more than 2^{r-1} times that of the shortest vector in
// the lattice.

// det2 is calculated as the *square* of the determinant
// of the lattice---note that sqrt(det2) is in general an integer
// only when r = n.

// In the second version, U is set to the transformation matrix, so
// that U is a unimodular m x m matrix with U * old-B = new-B.
// Note that the first m-r rows of U form a basis (as a lattice)
// for the kernel of old-B. 

// The third and fourth versions allow an arbitrary reduction
// parameter delta=a/b, where 1/4 < a/b <= 1, where a and b are positive
// integers.
// For a basis reduced with parameter delta, the squared length
// of the first non-zero basis vector is no more than 
// (1/(delta-1/4))^{r-1} times that of the shortest vector in the
// lattice (see, e.g., the article by Schnorr and Euchner mentioned below).

// The algorithm employed here is essentially the one in Cohen's book.
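
// A minimal usage sketch (demo_LLL is illustrative, not part of the
// interface above; it assumes the declarations are available via
// <NTL/LLL.h> and uses the NTL namespace):

#include <NTL/LLL.h>
using namespace NTL;

void demo_LLL()
{
   mat_ZZ B;
   B.SetDims(2, 2);
   B[0][0] = 201;  B[0][1] = 37;
   B[1][0] = 1648; B[1][1] = 297;

   ZZ det2;
   mat_ZZ U;
   long r = LLL(det2, B, U);      // default delta = 3/4

   // the last r rows of B now form an LLL-reduced basis,
   // U * old-B == new-B, and det2 is the squared lattice determinant

   // the same reduction with delta = 99/100 would be:
   //    long r2 = LLL(det2, B, U, 99, 100);
}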


// Some variations:

long LLL_plus(vec_ZZ& D, mat_ZZ& B, long verbose = 0);
long LLL_plus(vec_ZZ& D, mat_ZZ& B, mat_ZZ& U, long verbose = 0);

long LLL_plus(vec_ZZ& D, mat_ZZ& B, long a, long b, long verbose = 0);
long LLL_plus(vec_ZZ& D, mat_ZZ& B, mat_ZZ& U, long a, long b, 
              long verbose = 0);

// These are variations that return a bit more information about the
// reduced basis.  If r is the rank of B, then D is a vector of length
// r+1, such that D[0] = 1, and for i = 1..r, D[i]/D[i-1] is equal to
// the square of the length of the i-th vector of the Gram-Schmidt basis
// corresponding to the (non-zero) rows of the LLL reduced basis B.
// In particular, D[r] is equal to the value det2 computed by the
// plain LLL routines.
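
// A sketch (illustrative) of reading the Gram-Schmidt data off D
// after a call to LLL_plus:

void demo_LLL_plus(mat_ZZ& B)
{
   vec_ZZ D;
   long r = LLL_plus(D, B);

   // D[0] == 1; for i = 1..r, D[i]/D[i-1] is the squared length of
   // the i-th Gram-Schmidt vector, and D[r] equals the det2 value
   // computed by the plain LLL routines
}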

/**************************************************************************\

                      Computing Images and Kernels

\**************************************************************************/


long image(ZZ& det2, mat_ZZ& B, long verbose = 0);
long image(ZZ& det2, mat_ZZ& B, mat_ZZ& U, long verbose = 0);

// This computes the image of B using a "cheap" version of the LLL:
// it performs the usual "size reduction", but it only swaps
// vectors when linear dependencies are found.
// I haven't seen this described in the literature, but it works 
// fairly well in practice, and can also easily be shown
// to run in a reasonable amount of time with reasonably bounded
// numbers.

// As in the above LLL routines, the return value is the rank r of B, and the
// first m-r rows will be zero.  U is a unimodular m x m matrix with 
// U * old-B = new-B.  det2 has the same meaning as above.

// Note that the first m-r rows of U form a basis (as a lattice)
// for the kernel of old-B. 
// This is a reasonably practical algorithm for computing kernels.
// One can also apply image() to the kernel to get somewhat
// shorter basis vectors for the kernel (there are no linear
// dependencies, but the size reduction may still help).
// For even shorter kernel basis vectors, one can apply
// LLL(). 
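
// A sketch (the helper kernel_basis is illustrative): extracting a
// kernel basis from U, per the remark above that the first m-r rows
// of U span the kernel of old-B.

mat_ZZ kernel_basis(const mat_ZZ& A)
{
   mat_ZZ B = A;                 // image() transforms its argument
   ZZ det2;
   mat_ZZ U;
   long r = image(det2, B, U);

   long m = A.NumRows();
   mat_ZZ K;
   K.SetDims(m - r, m);          // kernel vectors have length m
   for (long i = 0; i < m - r; i++)
      K[i] = U[i];

   return K;   // optionally post-process with image() or LLL() as above
}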


/**************************************************************************\

                    Finding a vector in a lattice 

\**************************************************************************/

long LatticeSolve(vec_ZZ& x, const mat_ZZ& A, const vec_ZZ& y, long reduce=0);

// This tests if for given A and y, there exists x such that x*A = y;
// if so, x is set to a solution, and the value 1 is returned;
// otherwise, x is left unchanged, and the value 0 is returned.

// The optional parameter reduce controls the 'quality' of the
// solution vector.  If the rows of A are linearly dependent, then
// there are many solutions (if there are any at all), and the
// value of reduce determines the amount of effort that goes
// into finding a 'short' solution vector x.

//    reduce = 0: No particular effort is made to find a short solution.

//    reduce = 1: A simple 'size reduction' algorithm is run on kernel(A);
//                this is fast, and may yield somewhat shorter
//                solutions than the default, but these are not
//                necessarily anywhere near optimal.

//    reduce = 2: the LLL algorithm is run on kernel(A);
//                this may be significantly slower than the other options,
//                but yields solutions that are provably close to optimal.
//                More precisely, if kernel(A) has rank k,
//                then the squared length of the obtained solution
//                is no more than max(1, 2^(k-2)) times that of 
//                the optimal solution.  This makes use of a slight
//                variation of Babai's approximate nearest-vector algorithm.

// Of course, if the rows of A are linearly independent, then
// the value of reduce is irrelevant: the solution, if it exists,
// is unique.

// Note that regardless of the value of reduce, the algorithm
// runs in polynomial time, and hence the bit-length of the solution
// vector is bounded by a polynomial in the bit-length of the inputs.
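
// A sketch (illustrative): solving x*A = y with the strongest
// reduction option.

void demo_LatticeSolve(const mat_ZZ& A, const vec_ZZ& y)
{
   vec_ZZ x;
   if (LatticeSolve(x, A, y, 2)) {
      // here x*A == y, and the squared length of x is within a factor
      // max(1, 2^(k-2)) of optimal, where k is the rank of kernel(A)
   }
   else {
      // no integer solution x exists; x is left unchanged
   }
}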




/**************************************************************************\

                   Floating Point Variants

There are a number of floating point LLL variants available:
you can choose the precision, the orthogonalization strategy,
and the reduction condition.

The wide variety of choices may seem a bit bewildering;
see the discussion "How to choose?" below.

*** Precision:

  FP -- double
  QP -- quad_float (quasi quadruple precision)
        this is useful when roundoff errors can cause problems
  XD -- xdouble (extended exponent doubles)
        this is useful when numbers get too big
  RR -- RR (arbitrary precision floating point)
        this is useful for large precision and magnitudes

  Generally speaking, the choice FP will be the fastest,
  but may be prone to roundoff errors and/or overflow.
  

*** Orthogonalization Strategy: 

  -- Classical Gram-Schmidt Orthogonalization.
     This choice uses classical methods for computing
     the Gram-Schmidt orthogonalization.
     It is fast but prone to stability problems.
     This strategy was first proposed by Schnorr and Euchner
     [C. P. Schnorr and M. Euchner, Proc. Fundamentals of Computation Theory, 
     LNCS 529, pp. 68-85, 1991].  
     The version implemented here is substantially different, improving
     both stability and performance.

  -- Givens Orthogonalization.
     This is a bit slower, but generally much more stable,
     and is really the preferred orthogonalization strategy.
     For a nice description of this, see Chapter 5 of  
     [G. Golub and C. van Loan, Matrix Computations, 3rd edition,
     Johns Hopkins Univ. Press, 1996].
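
  As a minimal sketch (assuming the NTL naming convention, in which the
  routine name encodes the precision and the strategy: for example,
  LLL_FP for double-precision classical Gram-Schmidt, and G_LLL_FP for
  the Givens variant; the actual prototypes appear later in this file):

     mat_ZZ B;
     // ... fill in B ...
     long r = G_LLL_FP(B, 0.99);  // Givens, double precision, delta = 0.99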

