Cryptography, Attacks and Countermeasures Lecture 4 – Boolean Functions John A Clark and Susan Stepney Dept. of Computer Science University of York, UK

Stream Cipher Components: Boolean Functions
Typical security-related criteria: non-linearity, correlation immunity, algebraic degree, and the tradeoffs between them. We will give a linear algebra treatment. Pythagoras's theorem!

Boolean Functions
A Boolean function is a map f: {0,1}^n -> {0,1}. Its polar representation replaces the outputs 0 and 1 with +1 and -1: f̂(x) = (-1)^f(x). We can then view a Boolean function as a vector in R^(2^n).
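A minimal sketch of the polar representation (the function and variable names are mine, not the lecture's):

```python
# Map each truth-table output bit b to (-1)^b, giving a vector in R^(2^n).
def polar(truth_table):
    """0 -> +1, 1 -> -1."""
    return [(-1) ** bit for bit in truth_table]

# Illustrative function: f(x1,x2,x3) = x1 AND x2, inputs enumerated 0..7
# with x1 as the most significant bit of the index.
f_and = [0, 0, 0, 0, 0, 0, 1, 1]
f_polar = polar(f_and)   # [1, 1, 1, 1, 1, 1, -1, -1]
```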

Boolean Functions – Algebraic Normal Form (ANF)
A Boolean function on n inputs can be represented in minimal sum (XOR, +) of products (AND, .) form:
f(x_1,…,x_n) = a_0 + a_1.x_1 + … + a_n.x_n + a_{1,2}.x_1.x_2 + … + a_{n-1,n}.x_{n-1}.x_n + … + a_{1,2,…,n}.x_1.x_2…x_n
This is the algebraic normal form of the function. The algebraic degree of the function is the size of the largest subset of inputs (i.e. the number of x_j in it) associated with a non-zero coefficient. For example:
1 is a constant function (as is 0)
x_1 + x_3 + x_5 is a linear function
x_1.x_3 + x_5 is a quadratic function
x_1.x_3.x_5 + x_4.x_5 + x_2 is a cubic function

Generating the ANF
Given f(x_1,…,x_n) it is fairly straightforward to derive the ANF. Consider the general form:
f(x_1,…,x_n) = a_0 + a_1.x_1 + … + a_n.x_n + a_{1,2}.x_1.x_2 + … + a_{1,2,…,n}.x_1.x_2…x_n
The constant term is easily derived: a_0 = f(0,0,…,0). We can now determine each a_k by considering:
f(1,0,…,0) = a_0 + a_1, and so a_1 = a_0 + f(1,0,…,0)
f(0,1,0,…,0) = a_0 + a_2, and so a_2 = a_0 + f(0,1,0,…,0)
…
f(0,0,…,0,1) = a_0 + a_n, and so a_n = a_0 + f(0,0,…,0,1)
We can then determine each a_{j,k} by considering:
f(1,1,0,…,0) = a_0 + a_1 + a_2 + a_{1,2}, and so a_{1,2} = a_0 + a_1 + a_2 + f(1,1,0,…,0)
and so on (all additions here are XOR).
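The coefficient-by-coefficient derivation above is equivalent to the binary Moebius transform, which can be computed with an in-place butterfly. A sketch (my implementation, not the lecture's):

```python
def anf(truth_table):
    """Truth table (length 2^n, entries 0/1) -> ANF coefficients.
    Bit j of an index selects input x_j, both for inputs and monomials."""
    a = list(truth_table)
    step = 1
    while step < len(a):
        for i in range(0, len(a), 2 * step):
            for j in range(i, i + step):
                a[j + step] ^= a[j]   # XOR-accumulate over all subsets
        step *= 2
    return a

def degree(truth_table):
    """Algebraic degree: most bits set in any index with a non-zero coeff."""
    return max((bin(i).count("1") for i, c in enumerate(anf(truth_table)) if c),
               default=0)
```

For f = x_1 + x_2 (truth table [0, 1, 1, 0]) the non-zero coefficients land on the two single-variable monomials, so the degree is 1; for f = x_1.x_2 ([0, 0, 0, 1]) the only coefficient is on the product term, so the degree is 2.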

Vectors and their Representations
Boolean functions can be regarded as vectors in R^(2^n): in polar form they are vectors with elements +1 or -1. Any vector space has a basis set of vectors, and given any vector v it can always be expressed UNIQUELY as a weighted sum of the vectors in the basis set. Thus in 3-D we have the standard basis (1,0,0), (0,1,0), (0,0,1); other bases are possible.

Orthonormal Basis
If the basis vectors are orthogonal and each has norm (length) 1, we say that they form an orthonormal basis. We can then express any vector in terms of its projections onto each of the basis vectors.

Creating an Orthonormal Basis
Given any basis you can always turn it into an orthonormal basis using the Gram-Schmidt procedure (we won't go into the details). Given an orthogonal basis you can always create an orthonormal one by dividing each vector by its norm. In 2-D, for example, (1,1) and (1,-1) are clearly orthogonal; dividing each by its norm √2 gives an orthonormal basis.

N-Dimensional Vectors
To normalise an n-dimensional vector we proceed in the same way. The norm is the square root of the sum of squares of its elements: ||v|| = √(v_1^2 + … + v_n^2).

Linear Functions
Recall that for any ω in 0..(2^n - 1) we can define a linear function, for all x in 0..(2^n - 1), by:
L_ω(x) = ω_1.x_1 ⊕ ω_2.x_2 ⊕ … ⊕ ω_n.x_n
where ω and x are simply interpreted as sequences of bits ω_1…ω_n and x_1…x_n. We will use natural decimal indexing where convenient, e.g. L_5 is L_ω for ω = 101 in binary.

Polar Form of Linear Functions
The polar form of a linear function is just a vector of +1 and -1 elements defined by:
L̂_ω(x) = (-1)^L_ω(x)

Orthonormal Basis of Linear Functions
For n = 3 we can form the 8 × 8 matrix whose rows are indexed by x and whose columns are the polar forms L̂_0, …, L̂_7 of the eight linear functions.
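A sketch of this construction (names are mine): the 2^n × 2^n matrix with entry [x][w] = (-1)^(L_w(x)), whose columns are the polar forms of the linear functions. Dividing each column by √(2^n) gives an orthonormal basis.

```python
def linear_polar_matrix(n):
    """Entry [x][w] = (-1)^(XOR of the bits of x selected by w)."""
    size = 2 ** n
    return [[(-1) ** bin(x & w).count("1") for w in range(size)]
            for x in range(size)]

def col_dot(M, i, j):
    """Dot product of columns i and j."""
    return sum(row[i] * row[j] for row in M)

M = linear_polar_matrix(3)
# Distinct columns have dot product 0; a column with itself gives 2^n = 8.
```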

Balance
One criterion that we might desire for a combining function is balance:
there are an equal number of 0's and 1's in the truth-table form;
there are an equal number of +1's and -1's in the polar form;
the elements of the polar form sum to 0.
Equivalently, if you take the dot product of the polar form of the function with the constant function comprising all 1's, the result is 0.

Linear Functions are Balanced
Each linear function (other than the constant L_0) has an equal number of +1's and -1's in polar form, and so is a balanced function. The sum of the elements in a column is just Σ_x (-1)^L_ω(x). Is it obvious that this will always sum to zero, whatever the value of ω? Consider ω with k bits set (w.l.o.g. consider the first k bits as set). Now consider x as it varies over its whole range: can you partition the x into two equal sets that give opposite values of L_ω(x)? (Consider the x_1 component.)

Linear Functions are Balanced
Consider the x_1 component (with ω_1 = 1): pair each x with the x' that differs from it only in bit x_1. Then L_ω(x') = L_ω(x) ⊕ 1, so the 2^n inputs split into 2^(n-1) pairs whose polar values cancel, and the sum is zero.

Linear Functions are Orthogonal
Dissimilar linear functions are orthogonal: the dot product of any two distinct columns of the 8 × 8 matrix given earlier is 0. To see why, consider two linear functions x_1 ⊕ x_3 and x_2 ⊕ x_3. The dot product is given by:
Σ_x (-1)^(x_1 ⊕ x_3) (-1)^(x_2 ⊕ x_3) = Σ_x (-1)^(x_1 ⊕ x_2)
and x_1 ⊕ x_2 is itself a balanced linear function, so the sum is 0.

Orthonormal Basis with Linear Functions
The linear functions are vectors of 2^n elements, each of which is +1 or -1. The norm is therefore √(2^n) = 2^(n/2). Thus we can form an orthonormal basis set { L̂_ω / 2^(n/2) : ω = 0, …, 2^n - 1 }.

Representing Functions
Since a function f is just a vector and we have an orthonormal basis, we can represent it as the sum of its projections onto the elements of that basis. The coefficient of each projection is determined by:
F(ω) = Σ_x f̂(x) (-1)^L_ω(x)
This is called the Walsh-Hadamard function of f; F(ω) is the signed magnitude of the projection of f onto the linear function L_ω (up to the normalising factor 2^(n/2)).
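A sketch of the Walsh-Hadamard values computed directly from the projection formula (a naive O(4^n) loop; a fast butterfly exists, but this stays close to the definition; the name wht and the indexing convention are mine):

```python
def wht(tt):
    """F(w) = sum_x (-1)^f(x) * (-1)^(w.x) for a truth table of length 2^n."""
    return [sum((-1) ** (b + bin(x & w).count("1")) for x, b in enumerate(tt))
            for w in range(len(tt))]

# A linear function agrees perfectly with itself: its whole mass 2^n sits
# in a single spectral coefficient.
spectrum = wht([0, 0, 1, 1])   # f = x1 on two inputs -> [0, 0, 4, 0]
```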

Security Criteria – Balance
Various desirable properties of functions are expressed in terms of the Walsh-Hadamard function values. Balance means equal numbers of trues and falses, or of +1's and -1's in the polar form; we saw that the projection onto the constant function should then be 0, i.e. F(0) = 0.

Security Criteria
We saw that functions that 'looked like' (agreed with) linear functions too much were a problem. A measure of 'agrees with' is fairly easily calculable: the Hamming distance to the linear function in the usual bit form, or, in polar form, simply the dot product with the linear function. What sort of function f agrees most with the linear function L_ω? f = L_ω, when all the elements agree.

Security Criteria – Non-linearity
If instead all the elements disagree, i.e. f = NOT L_ω (in other words f = L_ω ⊕ 1), we can form a function that agrees with L_ω entirely simply by negating f. A function f that has minimal useful agreement (i.e. 50% agreement) with L_ω has Hamming distance 2^(n-1) with it: in polar terms (each element is +1 or -1), half the elements agree and half disagree, so the dot product is 0.

Security Criteria – Non-linearity
Well, if correlation with linear functions is a bad idea, let's have all such correlations equal to 0, i.e. choose f such that its projections onto all the linear functions are 0. Would if I could, but I can't. Why is this NOT possible?

Back in the Mundane World of 3-D
In 3-D, is there a vector that has a null projection onto the x-axis? Is there a vector that has a null projection onto each of the x and y axes? Is there a vector that has a null projection onto each of the x, y and z axes?

Security Criteria
Because we have a basis set of linear functions: if a vector has a null projection onto all of them it is the zero vector, and a Boolean function is not a zero vector. It must have projections onto some of the linear functions. But some projections are more harmful than others from the point of view of correlation attacks: correlations with single inputs are particularly dangerous, followed by correlations with linear functions of two inputs, and so on.

Security Criteria – Correlation Immunity
Correlations with single inputs correspond to projections onto the L_ω where ω has only a single bit set. For three inputs, we might require F(001) = F(010) = F(100) = 0. Similarly, correlations with linear functions of two inputs correspond to projections onto the L_ω where ω has exactly two bits set.

Security Criteria – Correlation Immunity
If a function has a null projection onto all linear functions L_ω with 1, 2, …, k bits set in ω (i.e. it is uncorrelated with any subset of k or fewer inputs), the function is said to be correlation immune of order k. Put another way: F(ω) = 0 for all ω with 1 ≤ |ω| ≤ k, where |ω| counts the bits set in ω. If it is also balanced then we say it is resilient.
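These conditions can be read straight off the spectrum. A sketch (helper and names mine): the order of correlation immunity is the largest k such that F(ω) = 0 for every ω with between 1 and k bits set.

```python
def wht(tt):
    """Naive Walsh-Hadamard values F(w) from a 0/1 truth table."""
    return [sum((-1) ** (b + bin(x & w).count("1")) for x, b in enumerate(tt))
            for w in range(len(tt))]

def ci_order(tt):
    F = wht(tt)
    n = len(tt).bit_length() - 1
    for k in range(1, n + 1):
        if any(F[w] != 0 for w in range(1, len(tt)) if bin(w).count("1") == k):
            return k - 1
    return n

def is_resilient(tt, k):
    """Balanced (F(0) = 0) and correlation immune of order k."""
    return wht(tt)[0] == 0 and ci_order(tt) >= k

# x1 XOR x2 XOR x3 is balanced and correlation immune of order 2
parity3 = [0, 1, 1, 0, 1, 0, 0, 1]
```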

Non-linearity
For a variety of reasons (there are other attacks that exploit linearity) we would like to keep the degree of agreement with any linear function as low as possible. So if we cannot have all that we want (all projections 0), perhaps we might try to keep the worst agreement to a minimum. This leads to the definition of the non-linearity of a function. We want to keep the Hamming distance to any linear function (or its negation) as close to 2^(n-1) as possible; or, equivalently, keep the maximum absolute value of any projection onto a linear function, max_ω |F(ω)|, as low as possible.

Non-linearity
Non-linearity is defined by:
N(f) = 2^(n-1) - (1/2) max_ω |F(ω)|
It seeks to minimise the worst absolute value of the projection onto any linear function. But what is the maximum value we can get for non-linearity?
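A sketch of this definition in code (naive spectrum helper, my names):

```python
def wht(tt):
    """Naive Walsh-Hadamard values F(w) from a 0/1 truth table."""
    return [sum((-1) ** (b + bin(x & w).count("1")) for x, b in enumerate(tt))
            for w in range(len(tt))]

def nonlinearity(tt):
    """N(f) = 2^(n-1) - (1/2) * max_w |F(w)|."""
    return len(tt) // 2 - max(abs(v) for v in wht(tt)) // 2
```

A linear function has non-linearity 0 (it agrees perfectly with itself), while the quadratic x1.x2 ⊕ x3.x4 on four inputs achieves 2^(n-1) - 2^(n/2 - 1) = 6, the maximum for n = 4.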

Boolean Functions
We can project these vectors onto a basis of 2^n orthogonal (Boolean function) vectors L_0, …, L_{2^n - 1}, where L_ω(x) = ω_1 x_1 ⊕ … ⊕ ω_n x_n. Each point on the surface of the hyper-sphere in 2^n dimensions has a standard vector representation and a spectral representation in terms of its Walsh-Hadamard values.

Norm of a Vector
The square of the length of a vector is just the sum of squares of its projection magnitudes onto the orthonormal basis. Thus, for 2-D we have the usual Pythagoras rule a^2 + b^2 = c^2.

Norm of a Boolean Vector
The square of the norm of a (polar) Boolean vector is just 2^n, since it has 2^n elements each of square 1. But we know that this is also the sum of the squares of the projections onto the orthonormal basis:
2^n = Σ_ω ( F(ω) / 2^(n/2) )^2

Parseval's Theorem
Parseval's Theorem: Σ_ω F(ω)^2 = 2^(2n). This is really a form of Pythagoras's theorem. It means that if we reduce the magnitude of one of the F(ω), another must increase in magnitude.
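The identity can be checked exhaustively for small n (my own sketch): every one of the 2^(2^n) Boolean functions satisfies Σ_ω F(ω)^2 = 2^(2n).

```python
from itertools import product

def wht(tt):
    """Naive Walsh-Hadamard values F(w) from a 0/1 truth table."""
    return [sum((-1) ** (b + bin(x & w).count("1")) for x, b in enumerate(tt))
            for w in range(len(tt))]

def parseval_holds(n):
    # 4**n == (2^n)^2 == 2^(2n)
    return all(sum(v * v for v in wht(tt)) == 4 ** n
               for tt in product([0, 1], repeat=2 ** n))
```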

Bent Functions Maximise Non-linearity
Researched first by Rothaus, these functions maximise non-linearity, and exist only for even numbers of variables. Bent functions have projection magnitudes all of the same size, |F(ω)| = 2^(n/2) (but with differing signs). But this includes the projection onto the constant function, so F(0) ≠ 0 and the function is not balanced. If you want maximum non-linearity, you cannot have balance.
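A sketch illustrating the flat spectrum (helper and names mine): the classic bent function f = x1.x2 ⊕ x3.x4 on n = 4 inputs, with x1 taken as the most significant bit of the row index i.

```python
def wht(tt):
    """Naive Walsh-Hadamard values F(w) from a 0/1 truth table."""
    return [sum((-1) ** (b + bin(x & w).count("1")) for x, b in enumerate(tt))
            for w in range(len(tt))]

bent_tt = [(((i >> 3) & (i >> 2)) ^ ((i >> 1) & i)) & 1 for i in range(16)]
spectrum = wht(bent_tt)
# Every |F(w)| equals 2^(n/2) = 4 -- including F(0), so f is unbalanced.
```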

Correlation Immunity and Non-linearity
Let's look again at Parseval's theorem: Σ_ω F(ω)^2 = 2^(2n). If we want correlation immunity of order k, then F(ω) = 0 for all ω with 1 ≤ |ω| ≤ k, and so the F(ω) of some of the remaining ω (|ω| > k) must increase in magnitude. But increasing the maximum magnitude reduces non-linearity. Non-linearity and correlation immunity are in conflict.

Other Criteria – Algebraic Degree
All other things being equal, we would prefer more complex functions to simpler ones. One aspect of interest is the algebraic degree of the function; we would typically like this to be as high as possible. It can be shown (not here) that there is a conflict with correlation immunity. Siegenthaler has shown that for a function f on n variables with correlation immunity of order m and algebraic degree d, we must have m + d <= n. For balanced functions we must have m + d <= n - 1.

Further Structure
There is another structure that can be exploited: a form of correlation between outputs corresponding to inputs that are related in a straightforward way. This is autocorrelation:
r(s) = Σ_x f̂(x) f̂(x ⊕ s)
where ⊕ is bitwise XOR of the n-bit inputs.
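A sketch of the autocorrelation function (names mine): r(s) compares the polar form against itself shifted by XOR with s.

```python
def autocorrelation(tt):
    """r(s) = sum_x fhat(x) * fhat(x XOR s), with fhat the polar form."""
    fhat = [(-1) ** b for b in tt]
    size = len(tt)
    return [sum(fhat[x] * fhat[x ^ s] for x in range(size))
            for s in range(size)]

# r(0) is always 2^n; a linear function such as f = x1 hits the extreme
# |r(s)| = 2^n at every shift s.
r = autocorrelation([0, 0, 1, 1])   # [4, 4, -4, -4]
```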

Tradeoffs We begin to see the sorts of problems cryptographers face. There are many different forms of attack. Protecting against one in an ideal way may allow another form of attack. Life is an unending series of tradeoffs. However, given the mathematical constraints, we might still want to achieve the best profile of properties we can. A lot of Boolean function research seeks constructions to derive such functions.

No Such Thing As A Secure Boolean Function
There is no such thing as a secure Boolean function. There may be functions that are appropriate for use in particular contexts to give a secure system. However, the treatment here shows quite effectively that life is not easy and that compromises have to be made. It is a nice treatment in terms of vector algebra, with security criteria defined in terms of subspaces of the vector space R^(2^n).