Applied Cryptography (Public key) (I)


1 Applied Cryptography (Public key) (I)
ECE 693 BD Security

2 John wrote the letters of the alphabet under the letters in its first lines and tried it against the message. Immediately he knew that once more he had broken the code. It was extraordinary the feeling of triumph he had. He felt on top of the world. For not only had he done it, had he broken the July code, but he now had the key to every future coded message, since instructions as to the source of the next one must of necessity appear in the current one at the end of each month. —Talking to Strange Men, Ruth Rendell Opening quote.

3 Confidentiality using Symmetric Encryption
traditionally symmetric encryption is used to provide message confidentiality. If encryption is to be used to counter attacks on confidentiality, need to decide what to encrypt and where the encryption function should be located. Now examine potential locations of security attacks and then look at the two major approaches to encryption placement: link and end-to-end. Have many locations where attacks can occur in a typical scenario (Stallings Figure 7.1), such as when have: + workstations on LANs access other workstations & servers on LAN + LANs interconnected using switches/routers + with external lines or radio/satellite links Consider attacks and placement in this scenario: + snooping from another workstation + use dial-in to LAN or server to snoop + physically tap line in wiring closet + use external router link to enter & snoop + monitor and/or modify traffic on external links

4 Placement of Encryption
have two major placement alternatives link encryption encryption occurs independently on every link implies must decrypt traffic between links requires many devices, but paired keys end-to-end encryption encryption occurs between original source and final destination need devices at each end with shared keys There are two fundamental encryption placement alternatives: link encryption and end-to-end encryption. With link encryption, each vulnerable communications link is equipped on both ends with an encryption device. But all the potential links in a path from source to destination must use link encryption. Each pair of nodes that share a link should share a unique key, with a different key used on each link. Thus, many keys must be provided. With end-to-end encryption, the encryption process is carried out at the two end systems. Thus end-to-end encryption relieves the end user of concerns about the degree of security of networks and links that support the communication. The user data is secure, but the traffic pattern is not because packet headers are transmitted in the clear. See Stallings Table 7.1 for more detailed comparison between these alternatives.

5 Placement of Encryption
Stallings Figure 7.2 contrasts the two encryption placement alternatives, for encryption over a Packet Net.

6 Placement of Encryption
when using end-to-end encryption must leave headers in clear so network can correctly route information hence although contents protected, traffic pattern flows are not ideally want both at once end-to-end protects data contents over entire path and provides authentication link protects traffic flows from monitoring With end-to-end encryption, user data are secure, but the traffic pattern is not because packet headers are transmitted in the clear. However end-to-end encryption does provide a degree of authentication, since a recipient is assured that any message that it receives comes from the alleged sender, because only that sender shares the relevant key. Such authentication is not inherent in a link encryption scheme. To achieve greater security, both link and end-to-end encryption are needed, as is shown in Figure 7.2 on the previous slide.

7 Placement of Encryption
can place encryption function at various layers in OSI Reference Model link encryption occurs at layers 1 or 2 end-to-end can occur at layers 3, 4, 6, 7 as move higher less information is encrypted but it is more secure though more complex with more entities and keys Can place encryption at any of a number of layers in the OSI Reference Model. Link encryption can occur at either the physical or link layers. End-to-end encryption could be performed at the network layer (for all processes on a system, perhaps in a Front End Processor), at the Transport layer (now possibly per process), or at the Presentation/Application layer (especially if need security to cross application gateways, but at cost of many more entities to manage). Can view alternatives noting that as you move up the communications hierarchy, less information is encrypted but it is more secure.

8 Encryption vs Protocol Level
Stallings Figure 7.5 illustrates the relationship between encryption and protocol level, using the TCP/IP architecture as an example, showing how much information in a packet is protected.

9 Random Numbers many uses of random numbers in cryptography
nonces in authentication protocols to prevent replay; session keys; public key generation; keystream for a one-time pad in all cases it's critical that these values be statistically random (uniform distribution, independent) and unpredictable (future values cannot be inferred from previous values) Random numbers play an important role in the use of encryption for various network security applications. Getting good random numbers is important, but difficult. You don't want someone guessing the key you're using to protect your communications because your "random numbers" weren't (as happened in an early release of Netscape SSL). Need such random values to be both statistically random (with uniform distribution & independence) and also unpredictable (so that it is not possible to predict future values having observed previous values).

10 Pseudorandom Number Generators (PRNGs)
often use deterministic algorithmic techniques to create “random numbers” although are not truly random can pass many tests of “randomness” known as “pseudorandom numbers” created by “Pseudorandom Number Generators (PRNGs)” Cryptographic applications typically make use of deterministic algorithmic techniques for random number generation, producing sequences of numbers that are not statistically random, but if the algorithm is good, the resulting sequences will pass many reasonable tests of randomness. Such numbers are referred to as pseudorandom numbers, created by “Pseudorandom Number Generators (PRNGs)”.

11 Linear Congruential Generator
common iterative technique using: Xn+1 = (aXn + c) mod m given suitable values of parameters can produce a long random-like sequence suitable criteria to have are: function generates a full-period generated sequence should appear random efficient implementation with 32-bit arithmetic note that an attacker can reconstruct sequence given a small number of values have possibilities for making this harder By far the most widely used technique for pseudorandom number generation is the “Linear Congruential Generator”, first proposed by Lehmer. It uses successive values from an iterative equation. Given suitable values of parameters can produce a long random-like sequence, but there are only a small number of such good choices. Note that the sequence, whilst looking random, is highly predictable, and an attacker can reconstruct the sequence knowing only a small number of values. There are some approaches to making this harder to do in practice by modifying the numbers in some way, see text.
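The iterative equation above can be sketched directly. The parameter values below are illustrative glibc-style constants, not a cryptographic recommendation; as the slide notes, LCG output is predictable.

```python
from itertools import islice

def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Linear Congruential Generator: X_{n+1} = (a*X_n + c) mod m.

    Parameters are illustrative (glibc-style); an attacker who sees a
    few outputs can reconstruct the whole sequence.
    """
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

# first few values for seed 1 -- fully determined by the seed
sequence = list(islice(lcg(1), 5))
```

Running the generator twice with the same seed reproduces the same sequence, which is exactly why an unmodified LCG is unsuitable for key material.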

12 Using Block Ciphers as PRNGs
for cryptographic applications, can use a block cipher to generate random numbers often for creating session keys from master key Counter Mode Xi = EKm[i] Output Feedback Mode Xi = EKm[Xi-1] For cryptographic applications, it makes some sense to take advantage of any block cipher encryption functions available to produce random numbers. Can use the Counter Mode or Output Feedback Mode, typically for session key generation from a master key.
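A minimal sketch of the two modes above. HMAC-SHA256 stands in for the block cipher E under master key Km so the sketch runs without an external crypto library; a real design would use an actual block cipher such as AES.

```python
import hmac
import hashlib

def ctr_prng(master_key: bytes, n_blocks: int):
    """Counter mode: X_i = E_Km[i].

    HMAC-SHA256 is a stand-in for the block cipher E_Km here.
    """
    return [hmac.new(master_key, i.to_bytes(8, "big"), hashlib.sha256).digest()
            for i in range(n_blocks)]

def ofb_prng(master_key: bytes, seed: bytes, n_blocks: int):
    """Output feedback mode: X_i = E_Km[X_{i-1}]."""
    out, x = [], seed
    for _ in range(n_blocks):
        x = hmac.new(master_key, x, hashlib.sha256).digest()
        out.append(x)
    return out
```

Either mode yields a deterministic stream from the master key, typically used to derive session keys.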

13 ANSI X9.17 PRG
It uses date/time & seed inputs and 3 triple-DES encryptions to generate a new seed & random value; DTi - Date/time value at the beginning of ith generation stage Vi - Seed value at the beginning of ith generation stage Ri - Pseudorandom number produced by the ith generation stage K1, K2 - DES keys used for each stage Then compute successive values as: Ri = EDE([K1, K2], [Vi XOR EDE([K1, K2], DTi)]) Vi+1 = EDE([K1, K2], [Ri XOR EDE([K1, K2], DTi)]) One of the strongest (cryptographically speaking) PRNGs is specified in ANSI X9.17. See discussion & illustration in Stallings section 7.4 & Figure 7.14. Several factors contribute to the cryptographic strength of this method. The technique involves a 112-bit key and three EDE encryptions for a total of nine DES encryptions. The scheme is driven by two pseudorandom inputs: the date and time value, and a seed produced by the generator that is distinct from the pseudorandom number produced by the generator. Thus the amount of material that must be compromised by an opponent is overwhelming. EDE: triple DES (Encryption – Decryption – Encryption)
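The two equations above can be sketched as one generation stage. HMAC-SHA256 truncated to 8 bytes stands in for the triple-DES EDE function, so this shows only the structure of a stage, not the real cipher:

```python
import hmac
import hashlib

def _ede(key: bytes, data: bytes) -> bytes:
    # Stand-in for two-key triple-DES EDE: HMAC-SHA256 truncated to the
    # 8-byte DES block size, so the structure runs without a DES library.
    return hmac.new(key, data, hashlib.sha256).digest()[:8]

def x917_stage(key: bytes, v_i: bytes, dt_i: bytes):
    """One ANSI X9.17 generation stage (structure only)."""
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
    t = _ede(key, dt_i)              # EDE(K, DT_i)
    r_i = _ede(key, xor(v_i, t))     # R_i     = EDE(K, V_i XOR EDE(K, DT_i))
    v_next = _ede(key, xor(r_i, t))  # V_{i+1} = EDE(K, R_i XOR EDE(K, DT_i))
    return r_i, v_next

r, v1 = x917_stage(b"K1K2", b"\x00" * 8, b"20240101")
```

Each stage emits a pseudorandom block R_i and chains a fresh seed V_{i+1} into the next stage.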

14 Blum Blum Shub Generator based on public key algorithms
use least significant bit from iterative equation: xi = xi-1^2 mod n where n=p·q, and primes p,q ≡ 3 (mod 4) unpredictable, passes next-bit test security rests on difficulty of factoring n output is unpredictable given any run of bits slow, since very large numbers must be used too slow for cipher use, good for key generation A popular approach to generating secure pseudorandom numbers is known as the Blum, Blum, Shub (BBS) generator, after its developers [BLUM86]. It has perhaps the strongest public proof of its cryptographic strength of any PRNG. It is based on public key algorithms, and hence is very slow, but has a very high level of security. It is referred to as a cryptographically secure pseudorandom bit generator (CSPRBG), being in practice unpredictable.
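The iterative equation can be sketched directly. The primes and seed in the example call are tiny and for illustration only; real use requires very large primes:

```python
def bbs(p: int, q: int, seed: int, n_bits: int):
    """Blum Blum Shub: x_i = x_{i-1}^2 mod n, output the LSB of each x_i.

    Requires p ≡ q ≡ 3 (mod 4) and gcd(seed, n) == 1.
    """
    assert p % 4 == 3 and q % 4 == 3
    n = p * q
    x = seed * seed % n          # x_0 = seed^2 mod n
    bits = []
    for _ in range(n_bits):
        x = x * x % n            # square modulo n
        bits.append(x & 1)       # keep only the least significant bit
    return bits

stream = bbs(383, 503, 101355, 16)
```

The output is deterministic given (p, q, seed), but predicting future bits from past bits is as hard as factoring n.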

15 Natural Random Noise best source is natural randomness in real world
find a regular but random event and monitor do generally need special h/w to do this eg. radiation counters, radio noise, audio noise, thermal noise in diodes, leaky capacitors, mercury discharge tubes, etc starting to see such h/w in new CPUs A true random number generator (TRNG) uses a nondeterministic source to produce randomness. Most operate by measuring unpredictable natural processes, such as pulse detectors of ionizing radiation events, gas discharge tubes, and leaky capacitors. Special hardware is usually needed for this. A true random number generator may produce an output that is biased in some way. Various methods of modifying a bit stream to reduce or eliminate the bias have been developed.

16 Published Sources a few published collections of random numbers
Rand Co, in 1955, published 1 million numbers generated using an electronic roulette wheel has been used in some cipher designs, cf. Khafre earlier, Tippett in 1927 published a collection issues are that: these are limited, and too well-known for most uses Another alternative is to dip into a published collection of good-quality random numbers (e.g., [RAND55], [TIPP27]). However, these collections provide a very limited source of numbers compared to the potential requirements of a sizable network security application. Furthermore, although the numbers in these books do indeed exhibit statistical randomness, they are predictable because an opponent who knows that the book is in use can obtain a copy.

17 Review Number Theory The Devil said to Daniel Webster: "Set me a task I can't carry out, and I'll give you anything in the world you ask for." Daniel Webster: "Fair enough. Prove that for n greater than 2, the equation a^n + b^n = c^n has no non-trivial solution in the integers." They agreed on a three-day period for the labor, and the Devil disappeared. At the end of three days, the Devil presented himself, haggard, jumpy, biting his lip. Daniel Webster said to him, "Well, how did you do at my task? Did you prove the theorem?' "Eh? No no, I haven't proved it." "Then I can have whatever I ask for? Money? The Presidency?' "What? Oh, that—of course. But listen! If we could just prove the following two lemmas—" —The Mathematical Magpie, Clifton Fadiman Opening quote. A number of concepts from number theory are essential in the design of public-key cryptographic algorithms, which this chapter will introduce.

18 Prime Numbers prime numbers only have divisors of 1 and self
they cannot be written as a product of other numbers note: by convention, 1 is not counted as a prime eg. 2,3,5,7 are prime, 4,6,8,9,10 are not prime numbers are central to number theory the list of primes less than 200 is: 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 127 131 137 139 149 151 157 163 167 173 179 181 191 193 197 199 A central concern of number theory is the study of prime numbers. Indeed, whole books have been written on the subject. An integer p>1 is a prime number if and only if its only divisors are 1 and itself. Prime numbers play a critical role in number theory and in the techniques discussed in this chapter. Stallings Table 8.1 (excerpt above) shows the primes less than 200. Note the way the primes are distributed. In particular note the number of primes in each range of 100 numbers.
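The list of primes above can be regenerated with a short sieve of Eratosthenes, a minimal sketch:

```python
def primes_below(n: int):
    """Sieve of Eratosthenes: all primes strictly below n."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"            # 0 and 1 are not prime
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            # cross out every multiple of i starting at i*i
            sieve[i * i::i] = bytearray(len(range(i * i, n, i)))
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_below(200)   # the 46 primes below 200
```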

19 Prime Factorisation to factor a number n is to write it as a product of other numbers: n = a x b x c note that factoring a number is relatively hard compared to multiplying the factors together to generate the number the prime factorisation of a number n is when it is written as a product of primes eg. 91 = 7x13 ; 3600 = 2^4 x 3^2 x 5^2 The idea of "factoring" a number is important - finding numbers which divide into it. Taking this as far as it can go, by factorising all the factors, we can eventually write the number as a product of (powers of) primes - its prime factorisation. Note also that factoring a number is relatively hard compared to multiplying the factors together to generate the number.
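Trial-division factorisation, a minimal sketch of the idea (workable only for small numbers, which is the point of the hardness remark above):

```python
def prime_factorisation(n: int) -> dict:
    """Return {prime: exponent} for n > 1, by trial division."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:                 # divide out each prime fully
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                             # leftover factor is prime
        factors[n] = factors.get(n, 0) + 1
    return factors
```

For the slide's examples: 91 = 7 x 13 and 3600 = 2^4 x 3^2 x 5^2.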

20 Relatively Prime Numbers & GCD
two numbers a, b are relatively prime if they have no common divisors apart from 1 eg. 8 & 15 are relatively prime since the factors of 8 are 1,2,4,8 and of 15 are 1,3,5,15 and 1 is the only common factor conversely can determine the greatest common divisor by comparing their prime factorizations and using the least powers eg. 300 = 2^2 x 3^1 x 5^2 and 18 = 2^1 x 3^2 hence GCD(18,300) = 2^1 x 3^1 x 5^0 = 6 Have the concept of "relatively prime" if two numbers share no common factors other than 1. Another common problem is to determine the "greatest common divisor" GCD(a,b), which is the largest number that divides into both a & b.
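The least-powers rule above can be sketched and checked against Euclid's algorithm (math.gcd):

```python
from math import gcd, prod

def gcd_via_factorisation(a: int, b: int) -> int:
    """GCD by taking the least power of each shared prime factor."""
    def factorise(n):
        f, d = {}, 2
        while d * d <= n:
            while n % d == 0:
                f[d] = f.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:
            f[n] = f.get(n, 0) + 1
        return f
    fa, fb = factorise(a), factorise(b)
    # primes absent from either factorisation contribute power 0, i.e. 1
    return prod(p ** min(fa[p], fb[p]) for p in fa if p in fb)
```

Euclid's algorithm computes the same value far faster, without needing any factorisation.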

21 Fermat's Theorem a^(p-1) = 1 (mod p) i.e. remainder is 1
i.e. the number (a^(p-1) − 1) is an integer multiple of p where p is prime and gcd(a,p)=1 also known as Fermat’s Little Theorem also a^p = a (mod p) i.e., the number (a^p − a) is an integer multiple of p useful in public key and primality testing Two theorems that play important roles in public-key cryptography are Fermat’s theorem and Euler’s theorem. Fermat’s theorem (also known as Fermat’s Little Theorem), as listed above, states an important property of prime numbers. See Stallings section 8.2 for its proof.
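Both forms of the theorem are easy to check numerically with Python's three-argument pow, which does fast modular exponentiation:

```python
# Fermat's little theorem: for prime p and gcd(a, p) == 1,
#   a^(p-1) ≡ 1 (mod p)   and   a^p ≡ a (mod p)
p = 97
for a in (2, 3, 10, 96):
    assert pow(a, p - 1, p) == 1
    assert pow(a, p, p) == a % p

# The converse fails, which matters for primality testing: some
# composites satisfy it for some bases, e.g. 341 = 11 * 31 yet
# 2^340 ≡ 1 (mod 341) -- a "pseudo-prime" to base 2.
fermat_holds_for_composite = pow(2, 340, 341) == 1
```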

22

23 Euler Totient Function ø(n)
when doing arithmetic modulo n the complete set of residues is: 0..n-1 the reduced set of residues is those numbers (residues) which are relatively prime to n eg. for n=10, the complete set of residues is {0,1,2,3,4,5,6,7,8,9} and the reduced set of residues is {1,3,7,9} The number of elements in the reduced set of residues is called the Euler Totient Function ø(n) Now introduce Euler’s totient function ø(n), defined as the number of positive integers less than n & relatively prime to n. Note the term “residue” refers to numbers less than some modulus, and the “reduced set of residues” to those numbers (residues) which are relatively prime to the modulus (n). Note by convention that ø(1) = 1.

24 Euler Totient Function ø(n)
to compute ø(n) need to count the number of residues to be excluded in general need the prime factorization, but we have some special cases that allow us to quickly calculate ø(n): for p (p prime) ø(p) = p-1 for p.q (p,q prime) ø(pq) = (p-1)x(q-1) eg. ø(37) = 36 ø(21) = (3–1)x(7–1) = 2x6 = 12 To compute ø(n) need to count the number of residues to be excluded. In general you need to use a formula based on the prime factorization of n, but there are a couple of special cases as shown.
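A direct (slow) count matching the definition, usable to check the special-case formulas:

```python
from math import gcd

def phi(n: int) -> int:
    """Euler totient: count of 1 <= k <= n with gcd(k, n) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)
```

For real cryptographic sizes one never counts like this; ø(pq) = (p-1)(q-1) is computed straight from the known primes, as in RSA.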

25 Euler's Theorem It is a generalisation of Fermat's Theorem
a^ø(n) = 1 (mod n) for any a,n where gcd(a,n)=1 eg. a=3; n=10; ø(10)=4; hence 3^4 = 81 = 1 mod 10 a=2; n=11; ø(11)=10; hence 2^10 = 1024 = 1 mod 11 Euler's Theorem is a generalization of Fermat's Theorem for any number n. See Stallings section 8.2 for its proof.
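The theorem can be checked numerically with a direct totient count (slow, illustration only):

```python
from math import gcd

def totient(n: int) -> int:
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# a^ø(n) ≡ 1 (mod n) whenever gcd(a, n) == 1; includes the slide's
# examples (3, 10) and (2, 11) plus a composite modulus case (7, 20)
checks = [(3, 10), (2, 11), (7, 20)]
results = [pow(a, totient(n), n) for a, n in checks]
```

When n is prime, ø(n) = n-1 and this reduces to Fermat's theorem.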

26 Primality Testing often need to find large prime numbers
traditionally sieve using trial division ie. divide by all numbers (primes) in turn less than the square root of the number only works for small numbers alternatively can use statistical primality tests based on properties of primes for which all prime numbers satisfy the property but some composite numbers, called pseudo-primes, also satisfy it can use a slower deterministic primality test For many cryptographic functions it is necessary to select one or more very large prime numbers at random. Thus we are faced with the task of determining whether a given large number is prime. Traditionally sieve for primes using trial division by all possible prime factors of some number, but this only works for small numbers. Alternatively can use repeated statistical primality tests based on properties of primes, and then for certainty, use a slower deterministic primality test, such as the AKS test.

27 Miller Rabin Algorithm
a test based on Fermat’s Theorem algorithm is: TEST (n) is: 1. Find integers k, q, with k > 0 and q odd, so that (n–1) = 2^k·q 2. Select a random integer a, 1 < a < n–1 3. if a^q mod n = 1 then return ("maybe prime") 4. for j = 0 to k – 1 do 5. if (a^(2^j·q) mod n = n–1) then return ("maybe prime") 6. return ("composite") The algorithm shown, due to Miller and Rabin, is typically used to test a large number for primality. See Stallings section 8.3 for its proof, which is based on Fermat’s theorem.
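A sketch of TEST(n). Step 5 is computed by repeated squaring, which visits exactly the values a^(2^j·q) mod n without re-exponentiating each time:

```python
def miller_rabin_test(n: int, a: int) -> str:
    """One round of the Miller-Rabin test with witness a (n odd, n > 2)."""
    # 1. write n - 1 = 2^k * q with q odd
    k, q = 0, n - 1
    while q % 2 == 0:
        k, q = k + 1, q // 2
    # 3. a^q mod n == 1 is inconclusive (also covers j = 0 of step 5)
    x = pow(a, q, n)
    if x == 1 or x == n - 1:
        return "maybe prime"
    # 4-5. squaring x walks through a^(2^j * q) mod n for j = 1 .. k-1
    for _ in range(k - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return "maybe prime"
    return "composite"
```

Usage: miller_rabin_test(97, 2) is inconclusive ("maybe prime"), while 91 = 7x13 is caught as composite by witness 2, and even the Carmichael number 561 is caught.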

28 Probabilistic Considerations
if Miller-Rabin returns “composite” the number is definitely not prime otherwise it is a prime or a pseudo-prime chance it detects a pseudo-prime is < 1/4 hence if repeat test with different random a then chance n is prime after t tests is: Pr(n prime after t tests) = 1 - 4^-t eg. for t=10 this probability is > 0.999999 If Miller-Rabin returns “composite” the number is definitely not prime, otherwise it is either a prime or a pseudo-prime. The chance it passes a pseudo-prime is < 1/4. So if we apply the test repeatedly with different values of a, the probability that the number is a pseudo-prime can be made as small as desired, eg after 10 tests the chance of error is < 10^-6. If really need certainty, then would now expend effort to run a deterministic primality proof such as AKS.

29 Prime Distribution prime number theorem states that primes occur roughly every ln(n) integers but can immediately ignore even #’s so in practice need only test 0.5 ln(n) numbers of size n to locate a prime note this is only the “average” sometimes primes are close together other times are quite far apart A result from number theory, known as the prime number theorem, states that primes near n are spaced on the average one every ln(n) integers. Since you can ignore even numbers, on average need only test 0.5 ln(n) numbers of size n to locate a prime. eg. for numbers around 2^200 would check 0.5 ln(2^200) ≈ 69 numbers on average. This is only an average; one can see successive odd numbers that are both prime, or long runs of composites.
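The ln(n) spacing can be checked empirically. This sketch measures the average gap between primes in a window near 10^6 and compares it with ln(10^6) ≈ 13.8:

```python
from math import log

def is_prime(n: int) -> bool:
    """Trial division, adequate for this small demonstration."""
    if n < 2 or n % 2 == 0:
        return n == 2
    return all(n % d for d in range(3, int(n ** 0.5) + 1, 2))

# collect primes in a window near 10**6 (odd candidates only)
lo, hi = 1_000_000, 1_020_000
ps = [n for n in range(lo + 1, hi, 2) if is_prime(n)]

average_gap = (ps[-1] - ps[0]) / (len(ps) - 1)
predicted = log(lo)   # prime number theorem prediction, ~13.8
```

The measured average lands close to the prediction, though individual gaps in the window vary widely, as the slide notes.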

30 Chinese Remainder Theorem
used to speed up modulo computations: it allows you to perform calculations modulo the factors of your modulus, and then combine the answers to get the actual result. if working modulo a product of numbers eg. mod M = m1m2..mk Chinese Remainder Theorem lets us work in each modulus mi separately since computational cost is proportional to size, this is faster than working in the full modulus M One of the most useful results of number theory is the Chinese remainder theorem (CRT), so called because it is believed to have been discovered by the Chinese mathematician Sun-Tse in around 100 AD. It is very useful in speeding up some operations in the RSA public-key scheme, since it allows you to perform calculations modulo the factors of your modulus, and then combine the answers to get the actual result. Since the computational cost is proportional to size, this is faster than working in the full-sized modulus M.

31 Chinese Remainder Theorem
can implement CRT in several ways to compute A (mod M) first compute all ai = A mod mi separately (i.e., don’t use big M, use the small mi) determine the constants ci below, where Mi = M/mi then combine results to get the answer using: A = (Σi ai·ci) mod M, where ci = Mi × (Mi^-1 mod mi) One of the useful features of the Chinese remainder theorem is that it provides a way to manipulate (potentially very large) numbers mod M in terms of tuples of smaller numbers. This can be useful when M is 150 digits or more. However note that it is necessary to know beforehand the factorization of M. See worked examples in Stallings section 8.4.
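A sketch of the recombination step; pow(x, -1, m) computes the modular inverse (Python 3.8+), and the moduli must be pairwise relatively prime:

```python
def crt(residues, moduli):
    """Recover A mod M from residues a_i = A mod m_i.

    Uses c_i = M_i * (M_i^-1 mod m_i) with M_i = M / m_i, then
    A = (sum of a_i * c_i) mod M.
    """
    M = 1
    for m in moduli:
        M *= m
    A = 0
    for a_i, m_i in zip(residues, moduli):
        M_i = M // m_i
        c_i = M_i * pow(M_i, -1, m_i)   # modular inverse needs 3.8+
        A += a_i * c_i
    return A % M
```

For example, A = 973 with moduli (37, 49) splits into the residue pair (11, 42), and recombining that pair recovers 973.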

32 Primitive Roots from Euler’s theorem have a^ø(n) mod n = 1
consider a^m = 1 (mod n), GCD(a,n)=1 such an m must exist for m = ø(n) but may be smaller once powers reach m, the cycle will repeat if the smallest such m is ø(n) then a is called a primitive root if p is prime, then successive powers of a "generate" the group mod p these are useful but relatively hard to find Consider the powers of an integer modulo n. By Euler's theorem, for every a relatively prime to n, there is at least one power equal to 1 (namely ø(n)), but there may be a smaller value. If the smallest value is m = ø(n) then a is called a primitive root. If n is prime, then the powers of a primitive root “generate” all residues mod n. Such generators are very useful, and are used in a number of public-key algorithms, but they are relatively hard to find.
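A brute-force search for the primitive roots of a small prime, workable only for tiny p (illustration of the definition, not a practical search method):

```python
def primitive_roots(p: int):
    """All primitive roots of the prime p: elements a whose successive
    powers a, a^2, ..., a^(p-1) generate all p-1 nonzero residues."""
    roots = []
    for a in range(2, p):
        x, seen = 1, set()
        for _ in range(p - 1):
            x = x * a % p
            seen.add(x)
        if len(seen) == p - 1:       # a's powers cycle through everything
            roots.append(a)
    return roots
```

For p = 7 the primitive roots are 3 and 5; e.g. the powers of 3 mod 7 run 3, 2, 6, 4, 5, 1, hitting every nonzero residue.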

33 Discrete Logarithms the inverse problem to exponentiation is to find the discrete logarithm of a number modulo p that is to find x such that y = g^x (mod p) this is written as x = log_g y (mod p) if g is a primitive root then it always exists, otherwise it may not, eg. x = log_3 4 mod 13 has no answer x = log_2 3 mod 13 = 4 by trying successive powers whilst exponentiation is relatively easy, finding discrete logarithms is generally a hard problem Discrete logarithms are fundamental to a number of public-key algorithms, including Diffie-Hellman key exchange and the digital signature algorithm (DSA). Discrete logs (or indices) share the properties of normal logarithms, and are quite useful. The logarithm of a number is defined to be the power to which some positive base (except 1) must be raised in order to equal that number. If working with modulo arithmetic, and the base is a primitive root, then an integral discrete logarithm exists for any residue. However whilst exponentiation is relatively easy, finding discrete logs is not; it is believed to be comparable in difficulty to factoring. This is an example of a problem that is "easy" one way (raising a number to a power), but "hard" the other (finding what power a number must be raised to in order to give the desired answer). Problems with this type of asymmetry are rare, but are of critical usefulness in modern cryptography.
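The "trying successive powers" approach from the slide, feasible only for tiny moduli; its exponential cost at cryptographic sizes is exactly why the problem is considered hard:

```python
def discrete_log(g: int, y: int, p: int):
    """Find x with g^x ≡ y (mod p) by brute force over successive
    powers, or return None when no such x exists."""
    val = 1                       # g^0
    for x in range(p - 1):
        if val == y % p:
            return x
        val = val * g % p         # step to the next power of g
    return None
```

This reproduces the slide's examples: log_2 3 mod 13 = 4 (since 2^4 = 16 ≡ 3), while log_3 4 mod 13 has no answer because 3 is not a primitive root mod 13.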

34 Summary have considered: prime numbers
Fermat’s and Euler’s Theorems & ø(n) Primality Testing Chinese Remainder Theorem Discrete Logarithms Chapter 8 summary.

