Diophantine Approximation and Basis Reduction


Diophantine Approximation and Basis Reduction. By Shu Wang. CAS 746 Presentation, February 6th, 2006.

Overview. Problem: approximating real numbers by rational numbers with low denominators, and finding a so-called reduced basis in a lattice. Contents: the continued fraction method for approximating one real number; Lovász's basis reduction method for lattices; applications; notation.

Dirichlet's Theorem. Let α be a real number and let M be a natural number, M ≥ 1. Then there exist integers p and q with 1 ≤ q ≤ M such that |α − p/q| < 1/(qM) (and hence |α − p/q| < 1/q²). Example: for α = π and M = 10, the fraction 22/7 satisfies |π − 22/7| ≈ 0.00126 < 1/(7·10).
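A minimal brute-force check of the theorem, sketched in Python under the assumption that we simply try every denominator q up to M; the theorem only asserts existence, and the continued fraction method below finds such a p/q far more efficiently (the helper name dirichlet_approximation is just illustrative).

```python
from math import pi

def dirichlet_approximation(alpha, M):
    """Brute force: search for integers p, q with 1 <= q <= M and
    |alpha - p/q| < 1/(q*M), as guaranteed by Dirichlet's theorem."""
    best = None
    for q in range(1, M + 1):
        p = round(alpha * q)                 # best numerator for this q
        err = abs(alpha - p / q)
        if err < 1.0 / (q * M) and (best is None or err < best[2]):
            best = (p, q, err)
    return best

p, q, err = dirichlet_approximation(pi, 10)
print(f"pi ~ {p}/{q}, |pi - p/q| = {err:.5f} < 1/(q*M) = {1 / (q * 10):.5f}")
```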

Proof of Dirichlet's Theorem (1). Consider the M + 1 numbers {0·α}, {1·α}, …, {M·α} (fractional parts), all lying in [0, 1), and divide [0, 1) into the M subintervals [k/M, (k+1)/M). If some {iα} with i ≥ 1 lies in [0, 1/M), then q := i and p := ⌊iα⌋ already satisfy |qα − p| < 1/M. Otherwise, according to the pigeon-hole principle, we find two different integers i and j, say i < j, such that {iα} and {jα} lie in the same subinterval; then q := j − i and p := ⌊jα⌋ − ⌊iα⌋ satisfy 1 ≤ q ≤ M and |qα − p| < 1/M, hence |α − p/q| < 1/(qM).

Proof of Dirichlet's Theorem (continued). The remaining details are left as exercises.

The Continued Fraction Method. Given a real number α, we compute its rational approximations by the following steps. First we define α₁ := α and, as long as α_k is not an integer, α_{k+1} := 1/(α_k − ⌊α_k⌋); this sequence stops if some α_k becomes an integer. We define a sequence of rational numbers, called the convergents of α, by truncating the corresponding continued fraction: the k-th convergent is ⌊α₁⌋ + 1/(⌊α₂⌋ + 1/(⋯ + 1/⌊α_k⌋)). If α_k becomes an integer, then the last convergent equals α. We write c_k(α) for the k-th convergent of α.
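A small Python sketch of this construction, assuming a rational input so that exact arithmetic with fractions.Fraction avoids floating-point issues; the helper name convergents and the cap max_terms are illustrative choices, and the convergents are built with the classical numerator/denominator recurrence.

```python
from fractions import Fraction
import math

def convergents(alpha, max_terms=100):
    """Partial-quotient recursion alpha_1 = alpha,
    alpha_{k+1} = 1/(alpha_k - floor(alpha_k)), together with the classical
    recurrence p_k = a_k*p_{k-1} + p_{k-2}, q_k = a_k*q_{k-1} + q_{k-2}
    for the convergents p_k/q_k.  Returns the list of convergents."""
    result = []
    p_prev, p_prev2 = 1, 0            # p_{-1} = 1, p_{-2} = 0
    q_prev, q_prev2 = 0, 1            # q_{-1} = 0, q_{-2} = 1
    a_k = Fraction(alpha)
    for _ in range(max_terms):
        a = math.floor(a_k)
        p = a * p_prev + p_prev2
        q = a * q_prev + q_prev2
        result.append(Fraction(p, q))
        if a_k == a:                  # alpha_k is an integer: the sequence stops
            break
        a_k = 1 / (a_k - a)
        p_prev2, p_prev = p_prev, p
        q_prev2, q_prev = q_prev, q
    return result

# The convergents of 415/93 = [4; 2, 6, 7] are 4, 9/2, 58/13 and 415/93.
print(convergents(Fraction(415, 93)))
```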

The Continued Fraction Method (2). We can determine a sequence of fractions p_k/q_k (integer p_k, positive integer q_k) that corresponds to the sequence of convergents. The first two terms are p₁/q₁ = ⌊α₁⌋/1 and p₂/q₂ = (⌊α₂⌋⌊α₁⌋ + 1)/⌊α₂⌋, and in general p_{k+1} = ⌊α_{k+1}⌋p_k + p_{k−1}, q_{k+1} = ⌊α_{k+1}⌋q_k + q_{k−1}. What can we deduce from this? One checks that p_{k+1}q_k − p_k q_{k+1} = ±1; hence if p_k and q_k had a common divisor d > 1, then d would divide ±1, a contradiction, so each p_k/q_k is in lowest terms.

Proof

The Continued Fraction Method (3). Suppose we have found nonnegative integers p', q', p'', q'' with p''q' − p'q'' = ±1 such that p'/q' and p''/q'' lie on opposite sides of α. This implies that both fractions differ from α by at most 1/(q'q''). Why?

The Continued Fraction Method (4). We find the largest integer t such that replacing the pair (p', q') by (p' + t·p'', q' + t·q'') keeps α between the two fractions, and we define the new pair accordingly. If the new fraction equals α, the sequence stops; otherwise we find the largest such integer for the other pair, define the next pair, and so on. Repeating the iteration produces a sequence of fractions, and it turns out that this sequence is the same as the sequence of convergents of the real number α.

Proof. We write p_k/q_k for the k-th term of the sequence constructed above for α. First we prove the claim for the initial values of k, then proceed by induction on k; finally we prove that the resulting fractions coincide with the convergents of α.

Some Properties of the Sequence of Convergents. (1) The denominators q_k are monotonically increasing. (2) For any real number α and natural number M ≥ 1, one of the convergents satisfies Dirichlet's theorem. Proof: let p_k/q_k be the last convergent for which q_k ≤ M holds. Then q_{k+1} > M, and since |α − p_k/q_k| ≤ 1/(q_k q_{k+1}), we get |α − p_k/q_k| < 1/(q_k M). (3) The sequence of convergents converges to α (proof by induction).

Algorithm of the Continued Fraction Method. Initially A₁ is the 2×2 identity matrix. Suppose A_k has been computed; then we compute A_{k+1} by the following rule: if k is even (and the current remainder is nonzero), subtract a suitable nonnegative integer multiple of the second column of A_k from the first column; if k is odd (and the current remainder is nonzero), subtract such a multiple of the first column of A_k from the second column. The multiple is in each case the integer quotient arising as in the Euclidean algorithm. The columns of the matrices A_k found in this way carry the numerators and denominators p_k, q_k of the convergents, so the fractions p_k/q_k obtained are the same as the convergents (proved by induction).
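One concrete way to realize this matrix view for a rational α, sketched in Python: the Euclidean quotients are accumulated by right-multiplying a 2×2 integer matrix, whose columns then hold the consecutive pairs (p_k, q_k) and (p_{k−1}, q_{k−1}); the helper name convergent_matrix is illustrative.

```python
from fractions import Fraction

def convergent_matrix(alpha):
    """Run the Euclidean algorithm on numerator and denominator of a rational
    alpha, accumulating the quotients into a 2x2 integer matrix whose columns
    hold consecutive convergents (p_k, q_k) and (p_{k-1}, q_{k-1})."""
    alpha = Fraction(alpha)
    a, b = alpha.numerator, alpha.denominator
    M = [[1, 0], [0, 1]]                       # start from the identity matrix
    steps = []
    while b != 0:
        t = a // b                             # the partial quotient a_k
        # Elementary column operation: right-multiply M by [[t, 1], [1, 0]].
        M = [[M[0][0] * t + M[0][1], M[0][0]],
             [M[1][0] * t + M[1][1], M[1][0]]]
        steps.append((M[0][0], M[1][0]))       # the current convergent (p_k, q_k)
        a, b = b, a - t * b                    # one step of the Euclidean algorithm
    return steps

print(convergent_matrix(Fraction(415, 93)))    # [(4, 1), (9, 2), (58, 13), (415, 93)]
```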

Time Complexity of the Continued Fraction Method. Corollary: given a rational number α, the continued fraction method finds integers p and q as described in Dirichlet's theorem in time polynomially bounded by the size of α; the proof is similar to that of the Euclidean algorithm. Theorem: let α be a real number, and let p and q be natural numbers with |α − p/q| < 1/(2q²). Then p/q occurs as a convergent of α. Corollary: there exists a polynomial algorithm which, for a given rational number α and natural number M, tests whether there exists a rational number p/q with 1 ≤ q ≤ M and |α − p/q| < 1/(2Mq); if so, it finds this rational number.
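A sketch of the second corollary for rational input, reusing the convergents helper from the earlier sketch (an assumption of this example): by the theorem, any p/q with |α − p/q| < 1/(2Mq) must be a convergent, so it suffices to scan the convergents with denominator at most M.

```python
from fractions import Fraction

def best_approximation(alpha, M):
    """Test whether some p/q with 1 <= q <= M satisfies |alpha - p/q| < 1/(2*M*q),
    and return it if so.  By the theorem above, any such p/q is a convergent of
    alpha, so scanning the convergents with denominator at most M suffices."""
    alpha = Fraction(alpha)
    for c in convergents(alpha):               # helper sketched earlier
        if c.denominator <= M and abs(alpha - c) < Fraction(1, 2 * M * c.denominator):
            return c
    return None                                # no such rational number exists

# Example: a 16-digit decimal approximation of pi and M = 150 recover 355/113.
print(best_approximation(Fraction(3141592653589793, 10**15), 150))
```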

Summary. Given a real number α, there exists a rational number p/q with small denominator q that is close to α. The continued fraction method computes a rational number that equals α if α is rational; otherwise its convergents converge to α. The algorithm for the continued fraction method is a polynomial-time, Euclidean-like algorithm.

Basis Reduction in Lattices - Overview. Problem: given a lattice (represented by a basis), find a reduced, "short" (nearly orthogonal) basis. Applications: finding a short nonzero vector in a lattice; simultaneous Diophantine approximation; finding the Hermite normal form. Basis reduction also has numerous applications in the cryptanalysis of public-key encryption schemes: knapsack cryptosystems, RSA in particular settings, and so forth.

Basic Concepts Review. Lattice: given a sequence of linearly independent vectors b₁, …, b_n, the set Λ = {λ₁b₁ + ⋯ + λ_n b_n : λ₁, …, λ_n ∈ ℤ} they generate is called a lattice, and b₁, …, b_n a basis of Λ. In other words, a lattice consists of the integer linear combinations of its basis vectors; it is a subset of the subspace generated by its basis. A matrix can be seen as a sequence of column (or row) vectors, so a lattice can also be generated by the columns (rows) of a matrix.

Basic Concepts Review - 2. Let A and B both be nonsingular matrices of order n whose columns generate the same lattice Λ. Then |det A| = |det B|, and this common value is called the determinant of the lattice, det Λ. In other words, det Λ is independent of the choice of basis. Proof, via a chain of lemmas:
Lemma 1: If B is obtained by interchanging two columns (rows) of A, then det B = −det A. Proof: a complicated (component-wise) proof by induction.
Lemma 2: If A has two identical columns (rows), then det A = 0. Proof: let A have two identical columns (rows), and let B be obtained from A by interchanging these two columns (rows). Then det B = det A because the two matrices are equal; however, from Lemma 1 we know that det B = −det A, so det B = det A = 0.
Lemma 3: The determinant of an n×n matrix can be computed by expansion along any row or column. This is the Laplace expansion theorem, proved component-wise by Laplace.
Lemma 4: If B is obtained by multiplying a column (row) of A by k, then det B = k·det A. Proof: calculating det B by expanding the same column (row) of B as that of A yields det B = k·det A.
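A quick numerical illustration, assuming numpy is available: two bases of the same lattice differ by a unimodular integer matrix U (det U = ±1), so their determinants agree up to sign.

```python
import numpy as np

# Two bases of the same lattice differ by a unimodular matrix U (integer
# entries, det U = +/-1).  Then B = A @ U and |det B| = |det A| = det(lattice).
A = np.array([[2, 1],
              [0, 3]])          # columns form a basis of a lattice in R^2
U = np.array([[1, 4],
              [1, 5]])          # det U = 1, so U is unimodular
B = A @ U                       # another basis of the same lattice

print(np.linalg.det(A))         # 6.0
print(np.linalg.det(B))         # 6.0 (up to sign and floating-point error)
```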

Basic Concepts Review - 3.
Lemma 5: If A, B and C are identical except that the i-th column (row) of C is the sum of the i-th columns (rows) of A and B, then det C = det A + det B. Proof: expanding det C along the i-th column (row) and using distributivity gives det C = det A + det B.
Lemma 6: If B is obtained by adding a multiple of one column (row) i of A to another column (row) j, then det B = det A. Proof: let A' be the matrix obtained from A by replacing column (row) j with the multiple of column (row) i being added; then det A' = 0 by Lemmas 2 and 4. The matrices A, A' and B satisfy Lemma 5, so det B = det A + det A' = det A.
Lemma 7: If B is obtained from A by elementary column operations, then |det B| = |det A|. Proof: directly from Lemmas 1, 4 and 6.
From Chapter 4 we know that if matrices A and B generate the same lattice, then they have the same Hermite normal form, reachable by elementary column operations; therefore, by Lemma 7, |det B| = |det A|.

Geometric Meaning of the Determinant. The determinant of Λ equals the volume of the parallelepiped spanned by b₁, …, b_n, where b₁, …, b_n is any basis of Λ. Hadamard's inequality: det Λ ≤ ‖b₁‖·‖b₂‖⋯‖b_n‖, with equality when b₁, …, b_n are pairwise orthogonal. So det Λ is a lower bound for the product ‖b₁‖⋯‖b_n‖; what about an upper bound over a suitably chosen basis? Hermite and, later with a sharper constant, Minkowski gave upper bounds depending only on n on how small this product can be made by choosing a suitable basis; Schnorr gave, for each fixed ε > 0, a polynomial algorithm finding a basis that achieves an improved constant over the one in the basis reduction theorem below.
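A small numerical check of Hadamard's inequality, again assuming numpy; the random integer basis is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.integers(-5, 6, size=(4, 4)).astype(float)     # columns = basis vectors

det_lattice = abs(np.linalg.det(B))
product_of_norms = np.prod(np.linalg.norm(B, axis=0))   # product of column lengths

# Hadamard's inequality: det(lattice) <= ||b_1|| * ... * ||b_n||.
print(det_lattice, "<=", product_of_norms)
```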

Basis Reduction Theorem. A matrix D is called positive definite if xᵀDx > 0 for every nonzero vector x. Theorem: there exists a polynomial algorithm which, for a given positive definite rational matrix D, finds a basis b₁, …, b_n for the lattice ℤⁿ satisfying ‖b₁‖·‖b₂‖⋯‖b_n‖ ≤ 2^{n(n−1)/4}·(det D)^{1/2}, where ‖x‖ := √(xᵀDx). We prove this theorem by exhibiting the LLL algorithm.

The Lenstra, Lenstra and Lovász Algorithm. We construct a series of bases for ℤⁿ as follows. The first basis is the unit basis. We construct the next basis from the current basis b₁, …, b_n using the following steps: 1. Denote by B the matrix with columns b₁, …, b_n, and compute the Gram–Schmidt orthogonalization b₁*, …, b_n* (with respect to the inner product defined by D). 2. Replace each b_i by b_i minus suitable integer multiples of b₁, …, b_{i−1} so that all Gram–Schmidt coefficients have absolute value at most 1/2 ("size reduction"). 3. Choose, if possible, an index i such that ‖b_i*‖² > 2‖b_{i+1}*‖²; exchange b_i and b_{i+1}, and start with step 1 again. If no such i exists, the algorithm stops.
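A sketch of this procedure in Python with exact rational arithmetic, under two simplifying assumptions: the inner product is the ordinary one (i.e. D = I), and the basis is passed in explicitly as a list of vectors rather than as ℤⁿ with a norm matrix. The helper names gram_schmidt, dot and lll_reduce are illustrative; the swap rule is exactly the ‖b_i*‖² > 2‖b_{i+1}*‖² test from step 3.

```python
from fractions import Fraction

def dot(u, v):
    """Ordinary inner product with exact rational arithmetic."""
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(basis):
    """Gram-Schmidt orthogonalization (without normalization).  Returns the
    orthogonal vectors b_i* and the coefficients
    mu[i][j] = <b_i, b_j*> / <b_j*, b_j*> for j < i."""
    n = len(basis)
    ortho = []
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = list(basis[i])
        for j in range(i):
            mu[i][j] = dot(basis[i], ortho[j]) / dot(ortho[j], ortho[j])
            v = [vk - mu[i][j] * ok for vk, ok in zip(v, ortho[j])]
        ortho.append(v)
    return ortho, mu

def lll_reduce(basis):
    """Basis reduction as on the slide: size-reduce so that |mu_ij| <= 1/2,
    then swap b_i and b_(i+1) whenever ||b_i*||^2 > 2 ||b_(i+1)*||^2."""
    basis = [[Fraction(x) for x in b] for b in basis]
    n = len(basis)
    while True:
        # Step 1: Gram-Schmidt orthogonalization of the current basis.
        ortho, mu = gram_schmidt(basis)
        # Step 2: size reduction.  Subtracting integer multiples of earlier
        # basis vectors leaves the b_i* unchanged, so mu is updated in place.
        for i in range(1, n):
            for j in range(i - 1, -1, -1):
                r = round(mu[i][j])          # nearest integer
                if r != 0:
                    basis[i] = [a - r * c for a, c in zip(basis[i], basis[j])]
                    mu[i][j] -= r
                    for jj in range(j):
                        mu[i][jj] -= r * mu[j][jj]
        # Step 3: look for an index violating ||b_i*||^2 <= 2 ||b_(i+1)*||^2.
        for i in range(n - 1):
            if dot(ortho[i], ortho[i]) > 2 * dot(ortho[i + 1], ortho[i + 1]):
                basis[i], basis[i + 1] = basis[i + 1], basis[i]
                break
        else:                                # no violation: the basis is reduced
            return basis

# Example: reduce a basis of a 3-dimensional lattice.
print(lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```

The size-reduction step can update the Gram–Schmidt coefficients in place because subtracting integer multiples of earlier basis vectors does not change the orthogonalized vectors b_i*.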

The Lenstra, Lenstra and Lovász Algorithm - Continued. The LLL algorithm can be seen as an approximate, integer-respecting version of the Gram–Schmidt orthogonalization process, which finds an orthogonal basis of a subspace of ℝⁿ. The LLL algorithm terminates in polynomial time, with all intermediate numbers polynomially bounded by the size of D (for the somewhat involved proof, see pp. 68–71).

Finding a Short Nonzero Vector in a Lattice. In 1891, Minkowski proved a classical result: any n-dimensional lattice Λ contains a nonzero vector b with ‖b‖ ≤ 2·(det Λ / V_n)^{1/n}, where V_n denotes the volume of the n-dimensional unit ball. However, no polynomial algorithm finding such a vector b is known. With the basis reduction method, taking the first vector of the reduced basis yields a "longer short vector" in the lattice, satisfying ‖b₁‖ ≤ 2^{(n−1)/4}·(det Λ)^{1/n}; however, this vector is generally not the shortest one in the lattice. The CVP (closest vector problem), "given a lattice and a vector a, find a lattice vector b with (any kind of) norm of b − a as small as possible", is proven to be NP-hard. The SVP (shortest nonzero vector problem), "given a lattice, find a nonzero lattice vector as short as possible", is even proven to be NP-hard to approximate within some constant [Micciancio 2001].
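A short usage sketch, reusing the lll_reduce helper from the previous example (an assumption of this example): the first vector of the reduced basis is compared against the guarantee 2^{(n−1)/4}(det Λ)^{1/n} and against Minkowski's existential bound.

```python
import math

# Reuse the lll_reduce sketch above to extract a short lattice vector.
basis = [[1, 1, 1], [-1, 0, 2], [3, 5, 6]]      # |det| = 3
b1 = lll_reduce(basis)[0]
norm_b1 = math.sqrt(sum(float(x) ** 2 for x in b1))

n = 3
det_lattice = 3.0
V3 = 4.0 * math.pi / 3.0                        # volume of the unit ball in R^3
minkowski_bound = 2.0 * (det_lattice / V3) ** (1.0 / n)
lll_bound = 2 ** ((n - 1) / 4.0) * det_lattice ** (1.0 / n)

print(f"||b1|| = {norm_b1:.3f} <= LLL bound {lll_bound:.3f}; "
      f"Minkowski guarantees a vector of length <= {minkowski_bound:.3f}")
```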

Simultaneous Diophantine Approximation. Dirichlet showed the following: let α₁, …, α_n be real numbers and let 0 < ε < 1. Then there exist integers p₁, …, p_n and q such that |α_i − p_i/q| < ε/q for i = 1, …, n and 1 ≤ q ≤ ε^{−n}. No polynomial method is known for this problem, except when n = 1, where we can use the continued fraction method. However, we can use the basis reduction method to find a weaker approximation of the problem in polynomial time.
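A sketch of this reduction in Python, reusing the lll_reduce helper above (again an assumption of the example): the lattice is spanned by the unit vectors e₁, …, e_n padded with a zero, together with the vector (−α₁, …, −α_n, λ) for a small rational λ, so that a short vector of this lattice encodes the desired q and p_i. The helper name simultaneous_approximation and the specific choice of λ are illustrative.

```python
from fractions import Fraction

def simultaneous_approximation(alphas, eps):
    """Weak simultaneous Diophantine approximation via basis reduction, reusing
    the lll_reduce helper above: returns integers p_1..p_n and q >= 1 with
    |q*alpha_i - p_i| <= eps for every i, and q bounded by roughly eps**(-n)
    times an exponential-in-n constant."""
    alphas = [Fraction(a) for a in alphas]
    eps = Fraction(eps)
    n = len(alphas)
    k = -(-(n * (n + 1)) // 4)                 # an integer >= n(n+1)/4
    lam = eps ** (n + 1) / 2 ** k              # small scale for the last coordinate

    # Lattice basis: e_1..e_n padded with a zero, plus (-alpha_1, ..., -alpha_n, lam).
    # A lattice point then reads (p_1 - q*alpha_1, ..., p_n - q*alpha_n, q*lam).
    basis = []
    for i in range(n):
        v = [Fraction(0)] * (n + 1)
        v[i] = Fraction(1)
        basis.append(v)
    basis.append([-a for a in alphas] + [lam])

    b1 = lll_reduce(basis)[0]                  # short vector: every entry has |.| <= eps
    q = int(b1[-1] / lam)                      # its last coordinate is exactly q*lam
    ps = [int(b1[i] + q * alphas[i]) for i in range(n)]
    if q < 0:                                  # normalize the sign
        q, ps = -q, [-p for p in ps]
    return ps, q

print(simultaneous_approximation([Fraction(127, 100), Fraction(89, 25)], Fraction(1, 4)))
```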

Finding the Hermite Normal Form. Given a matrix A, we can use the basis reduction method to compute vectors, and record the operations in such a way that A can be transformed into Hermite normal form by elementary column operations. Some of the other applications: Lenstra's integer linear programming algorithm; factoring polynomials (over the rationals) in polynomial time; breaking cryptographic codes; disproving Mertens' conjecture; solving low-density subset sum problems.

Summary. The continued fraction method for approximating one real number by rational numbers; Lovász's basis reduction method for finding a short basis in a lattice; applications.

Thank you 