Foundations of Cryptography
Prof. Dr. Ashish Choudhury
(Former) Infosys Foundation Career Development Chair Professor
Indian Institute of Technology – Bangalore
Lecture – 6
Introduction to Computational Security

(Refer Slide Time: 00:31) Hello everyone, welcome to lecture 6. The plan for this lecture is as follows. We will discuss the birth of modern cryptography, namely the approach that we use in modern cryptography. We will also discuss the notion of computational security and the necessary evils associated with it, and finally we will define mathematically what we mean by efficient algorithms and negligible probability.

(Refer Slide Time: 00:54) Just to recall, in the last couple of lectures we discussed the notion of perfect secrecy, which is always desirable because it is the strongest notion of secrecy we can achieve: the secrecy holds against an adversary who is computationally unbounded, whose running time is unlimited. But we also discussed that we have to pay a heavy price to achieve perfect secrecy, namely in any perfectly-secure encryption process the key needs to be as large as the message, and each instance of the encryption needs to use a fresh key. These two restrictions are impractical, because in practice we aim to design an encryption process where we can use a short key for encrypting long messages, and we would like the same short key to be usable for encrypting a sequence of multiple messages. That means a practical perfectly-secure encryption scheme is simply not possible; if someone claims that his or her encryption process is practical as well as perfectly secure, then that is clear cheating. That brings us to the approach that modern cryptography follows.
The principle we follow in modern cryptography is that instead of achieving perfect secrecy, we try to get as close as possible to it, and in return we achieve the practical goals we aim for, namely an encryption process where the same short key can be used for encrypting multiple messages. That is the tradeoff we make in modern cryptography.

(Refer Slide Time: 02:35) So let us see the approach that we use in modern cryptography. Remember, in the model of perfect secrecy our adversary is computationally unbounded, and the secrecy goal in that model is to ensure that the adversary learns absolutely nothing about the plaintext; we rigorously formalized what we mean by "the adversary learns absolutely nothing about the plaintext". We also discussed the consequences of this goal, namely that the key has to be as large as the plaintext and that you cannot afford to reuse the key. Those were the consequences, or the restrictions, of perfect secrecy.

(Refer Slide Time: 03:33) Now, the changes that we are going to make in modern cryptography are as follows. Instead of assuming that our adversary is computationally unbounded, we assume that our adversary is computationally bounded; that means we no longer assume that the adversary can run his algorithm for breaking or attacking the scheme for an unlimited period of time. We will see how to mathematically formulate this notion of a computationally bounded adversary. The second relaxation we make in the model of perfect secrecy is that instead of demanding that the adversary learns absolutely nothing about the plaintext, we target the following goal: the adversary should be able to learn additional information about the plaintext only with a negligible probability.
That means we are now willing to let the adversary learn something about the plaintext, but the probability with which the adversary could learn that additional information is so small, so negligible, that for almost all practical purposes we can ignore it.

(Refer Slide Time: 04:42) As soon as we make these two relaxations in the model of perfect secrecy, we hope to get the following two implications: our encryption process should use a short key, and the same short key should be usable to encrypt a sequence of messages. It turns out that if we indeed make these two relaxations, we can achieve the two desired goals, namely that the same short key can be used for encrypting a sequence of long messages, and that is the approach modern cryptography uses. The notion of secrecy we get by making these two relaxations is what we call computational secrecy, because the security is achieved against an adversary whose computational power is limited, rather than unlimited. So that is the approach we are going to follow in modern cryptography.

(Refer Slide Time: 05:38) The first of the two relaxations is that we now aim to achieve security only against efficient adversaries. Informally, by efficient adversaries I mean adversarial algorithms whose running time is practical, namely which take an amount of time that is feasible in practice. We will mathematically define very soon what exactly we mean by efficient adversaries. The second relaxation is to allow the adversary to break the scheme with some probability, but a probability so small that we do not bother about it.
Again, we are going to very soon mathematically and rigorously define what exactly we mean by such a small error probability. Moreover, as we will see as the course proceeds, under certain assumptions the amount of time that the adversary will require to break the scheme even with that small probability will be of the order of a few lifetimes. This is in contrast to what we achieve in perfect secrecy: there, even if we give the adversary unlimited time, there is absolutely zero probability that he learns anything about the underlying plaintext. In the model of computational security, where our goal is to achieve key reusability, if we give an enormous amount of time to the adversary then there is a chance that he will be able to learn something about the underlying message, but that chance is so small that for most practical purposes we can ignore it, and the time required to achieve even that small probability will be of the order of a few lifetimes. It turns out that this is acceptable for most practical applications, because in most practical applications we do not require everlasting security. What I mean by this last statement is the following: imagine you want a secure system to maintain the secrecy of your credit card details. If I have an encryption process which preserves the privacy of your credit card details except with a very, very small probability, and the amount of time the adversary would need to learn your credit card details from their encryption is of the order of, say, 35 or 40 years, then that is fine, because ideally you require the secrecy of your credit card details to be maintained only for a few years.
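To make timescales like these concrete, here is a rough back-of-the-envelope sketch in Python; the guessing rate of 10^12 keys per second is an assumed illustrative figure, not a benchmark of any real attacker or scheme.

```python
# Back-of-the-envelope estimate: how long does exhausting a k-bit key space
# take at an assumed guessing rate? (The rate below is a made-up figure
# for illustration only.)

def brute_force_years(key_bits, guesses_per_second):
    """Worst-case time, in years, to try all 2**key_bits candidate keys."""
    seconds = 2**key_bits / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# At an assumed 10^12 guesses per second, a 128-bit key space already takes
# on the order of 10^19 years, while a tiny 40-bit key space falls in about
# a second.
print(brute_force_years(128, 1e12))
print(brute_force_years(40, 1e12))
```

The point of the sketch is only the contrast in orders of magnitude: shrinking the key space by a factor of 2^88 turns an astronomically long attack into an instantaneous one.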
You do not require the secrecy, or the privacy, of your credit card details to be maintained forever, so this is acceptable. As it will turn out, the above two relaxations that we make in the model of computational secrecy are absolutely necessary if our ultimate goal is to achieve key reusability. Namely, in the next couple of slides we are going to see that if we want to design an encryption process where the same short key is used to encrypt multiple messages, then we definitely need to make the two relaxations I am discussing here: first, we should be willing to let the adversary learn something about the underlying message with a small probability, and second, we should target security only against efficient adversaries.

(Refer Slide Time: 09:16) Let us see the necessity of these two relaxations, starting with the first one, namely that we should target security only against efficient adversaries. To see why this relaxation is necessary, consider an arbitrary encryption process where the same key is going to be used for encrypting a sequence of messages, and imagine we are in the known-plaintext attack (KPA) model. Just to recall, in the KPA model we assume that the adversary sees encryptions of several messages; it knows both the underlying messages and their encryptions under an unknown key, and the same key is going to be retained for encrypting new messages. So imagine the sender has messages m1, m2, ..., mt, the resultant ciphertexts are c1, c2, ..., ct, and the adversary has got access to these (plaintext, ciphertext) pairs.
The adversary knows that each ciphertext in these pairs is the encryption of the corresponding plaintext under some unknown key k; the key itself is not known to the attacker. The adversary also knows the description of the encryption process, and it knows that the same unknown key k is going to be retained by the sender for encrypting the next sequence of messages. Under this scenario, the adversary can always run what we call the brute-force key-recovery attack. The idea behind this attack is the following: since the adversary knows the description of the key space K, it can try each candidate key k in K and check whether every ciphertext ci in its pairs (mi, ci) decrypts back to the corresponding plaintext under that candidate key, i.e., whether Dec_k(ci) = mi for each pair (mi, ci). Such a key definitely exists, and as soon as the adversary hits upon it, it has found the key which the sender is going to use for encrypting the next sequence of messages. So the success probability of this brute-force key-recovery attack is one, and the running time of this brute-force algorithm is O(|K|). Now assume that our key space is significantly large; for instance, imagine that the key space is the set of all possible 256-bit strings, so |K| = 2^256. A brute force over 2^256 candidate keys is going to take an enormous amount of time, which is impractical.
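The brute-force key-recovery attack described above can be sketched as follows. The XOR "cipher" and the tiny 8-bit key space are hypothetical stand-ins chosen so that the loop actually finishes; with a real 256-bit key space the very same loop would be infeasible.

```python
# Toy illustration of the brute-force key-recovery attack (not a real cipher).

def enc(key, m):
    """Toy "encryption": XOR every byte of m with an 8-bit key."""
    return bytes(b ^ key for b in m)

def dec(key, c):
    """Toy decryption (XOR is its own inverse)."""
    return bytes(b ^ key for b in c)

def brute_force(pairs):
    """Try every candidate key k in K; return the one under which every
    known ciphertext decrypts back to its plaintext.
    Running time O(|K|), success probability 1."""
    for k in range(256):                         # |K| = 2^8 candidate keys
        if all(dec(k, c) == m for m, c in pairs):
            return k
    return None

secret = 0xA7
known = [(m, enc(secret, m)) for m in (b"attack", b"at", b"dawn")]
assert brute_force(known) == secret              # the attack always succeeds
```

The loop over 256 keys finishes instantly; replacing `range(256)` with a range of size 2^256 changes nothing in the logic but everything in the running time, which is exactly the point of the first relaxation.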
That means if I make the relaxation that I am not bothered about adversaries whose running time is impractical, then this brute-force key-recovery attack will not be considered as an attack in my attack model. So that is the necessity of the first relaxation if your goal is to achieve an encryption scheme where the same key is used for encrypting multiple messages.

(Refer Slide Time: 12:54) Now let us see the necessity of the second relaxation, namely that you should allow the scheme to be broken with a small probability, if your goal is to achieve key reusability. Again consider an instance of an arbitrary encryption process where the same key k has been used to encrypt multiple messages in sequence, and say the adversary is in the KPA model, where it has got access to several (mi, ci) pairs. The key is not known to the adversary, but the adversary knows the plaintexts and the corresponding ciphertexts in each pair, and it knows that the same unknown key k is going to be retained by the sender for encrypting the next sequence of messages. Now the adversary can always launch what we call a guessing attack. The idea behind the guessing attack is that the adversary simply picks a candidate key from the key space and checks whether the guessed key is indeed the right key, by performing the following operation: randomly guess a k in K, and check whether Dec_k(ci) = mi for each pair (mi, ci); if so, he has hit upon the right key. Now, what is the success probability of this attack? It is 1/|K|. And what is the running time of the attacker's algorithm?
The running time of the adversary's algorithm is O(1), because he is not doing a brute force over the key space; he is just guessing the value of one candidate key. So again, if I assume that my key space is extremely large, say of order 2^256, then the success probability of this attack is 1/2^256, which is very, very small. That means even though the adversary has a chance to break the scheme, namely to learn the key, the chance that he learns the key is so small, namely 1/2^256, that we can ignore it for most practical purposes. This again demonstrates that if key reusability is your ultimate goal, then you have to make the second relaxation in your model: you should be willing to let the adversary break the scheme with a probability that is so small that you can ignore it.

(Refer Slide Time: 15:19) So, just to summarize, the two necessary evils associated with our ultimate goal of key reusability are the following. There are two extreme attacks which can always be launched against an arbitrary scheme where key reusability is the goal. The first is the brute-force key-recovery attack, whose running time is very large but whose success probability is 100%. The second extreme attack is the guessing attack, where the running time of the attacker is very small, in fact constant, but the success probability is so small that for most practical purposes we can ignore it. Now you can see that if we make the two relaxations I have been talking about, namely the first relaxation where we target secrecy only against efficient adversaries, then the brute-force attack would not be considered as an attack in our attack model, because as I said, the time complexity of the brute-force key-recovery attack is enormously large.
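The guessing attack admits an equally short sketch against the same kind of toy XOR cipher. The 8-bit key space is again an assumption, made here so that the success probability 1/|K| = 1/256 is large enough to observe empirically; with |K| = 2^256 the same attack would succeed with only negligible probability.

```python
import random

# Sketch of the guessing attack: O(1) work, success probability 1/|K|.

def enc(key, m):
    """Toy "encryption": XOR every byte of m with an 8-bit key."""
    return bytes(b ^ key for b in m)

def guess_attack(pairs):
    """Guess one key uniformly at random and check it against the known
    (plaintext, ciphertext) pairs; constant work, succeeds w.p. 1/|K|."""
    k = random.randrange(256)
    return all(enc(k, m) == c for m, c in pairs)

secret = 0x5C
known = [(m, enc(secret, m)) for m in (b"hello", b"world")]

trials = 200_000
wins = sum(guess_attack(known) for _ in range(trials))
print(wins / trials)   # empirically close to 1/256 ≈ 0.0039
```

Each run of `guess_attack` does a fixed amount of work regardless of the key-space size; only the success probability shrinks as |K| grows, which is exactly the tradeoff between the two extreme attacks.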
If I make the second relaxation, namely that I am willing to let the adversary break the scheme with a very small error probability, then the second attack, namely the guessing attack, would not be considered as an attack in our attack model.

(Refer Slide Time: 16:42) So this is the summary of the two necessary evils associated with any encryption process where key reusability is the goal. The first relaxation you should make in your model is that instead of targeting security against a computationally unbounded adversary, you should target secrecy only against computationally efficient adversaries. The second relaxation is that instead of demanding that absolutely nothing about the underlying plaintext is revealed, you should be willing to let the adversary learn something about the underlying plaintext with some small error probability, a probability so small that for most practical purposes you can ignore it. Now the challenge is how exactly we mathematically define efficient adversaries, namely which adversarial algorithms we can call efficient; the second challenge is which quantities we will call small, i.e., a small error probability. What we are going to do is define these two terms mathematically, in asymptotic notation.

(Refer Slide Time: 17:53) People who are familiar with the concept of algorithms will know what exactly we mean by asymptotic notation. When we want to measure the running time of an algorithm, there are two approaches. One is the concrete approach, where we compare two algorithms for the same goal based on their exact running times. So say you have algorithm 1 and algorithm 2 for a task.
You run algorithm 1 on samples of various sizes, you run algorithm 2 on samples of various sizes, and then you compare the exact timings of the two algorithms, of course on whatever processor you are given, and so on. Based on that, you decide whether algorithm 1 or algorithm 2 is better. The drawback of this approach is that even though you get a concrete comparison of algorithm 1 versus algorithm 2, it does not take into consideration the underlying computing speed, the progress in computing speed, and so on. The second approach we follow in the world of algorithms is to compare the running times of the two algorithms in asymptotic notation, namely in big-O notation. We compare the running times by measuring the number of basic steps of algorithm 1 and of algorithm 2, where the number of basic steps is computed as a function of the input size. Depending upon which algorithm takes fewer steps, we decide whether algorithm 1 or algorithm 2 is better. There are various asymptotic notations, like big-O notation, theta notation, and omega notation, based on which you can compare two algorithms. When we want to define what we mean by efficient algorithms and negligible probability in the context of cryptography, we are going to follow this latter approach, namely we are going to define these terms asymptotically. For defining these terms asymptotically, we have to introduce what we call a security parameter, which we denote by n. The reason we introduce this security parameter is that once we have it, all the relevant quantities, namely the running times of the users' algorithms, that is, of the key generation algorithm, the encryption algorithm, and the decryption algorithm,
and similarly the running time of the adversarial algorithm and the success probability of the attacker, are all going to be expressed as a function of the security parameter. Typically, in the context of a symmetric-key encryption process, the security parameter is the size of the secret key, which for most practical purposes is 128, 256, and so on.

(Refer Slide Time: 20:41) So let us first define what we mean by efficient algorithms, asymptotically. Informally, we say an algorithm is efficient if its running time is some polynomial function of the size of its input. Namely, algorithm A has polynomial running time if there exists a polynomial p(.) such that for every input x in {0, 1}*, the computation of A(x) terminates within p(|x|) steps, where |x| denotes the length of the string x. If that is the case, then we say that our algorithm A has polynomial running time, and we call A an efficient algorithm. Whereas if we cannot bound the running time of our algorithm A by some polynomial function in the size of its input, then we say that our algorithm is not efficient. That is the definition of an efficient algorithm. Once we have defined the notion of an efficient algorithm, the requirement we put on any cipher is the following: remember we have the correctness requirement and the privacy requirement; apart from those, we now have a third requirement from any encryption process. The new requirement is that the running times of the key generation algorithm, the encryption algorithm, and the decryption algorithm should be some polynomial function of the security parameter n. If the running time is not a polynomial function, then we will not consider that cipher to be an efficient cipher.

(Refer Slide Time: 22:15) Now, let us define the notion of negligible probability as a function of the security parameter.
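Before doing so, the definition of polynomial running time above can be illustrated with a small step-counting sketch; the algorithm, the explicit step counter, and the bounding polynomial p(n) = n^2 + 1 are all toy choices made for illustration.

```python
# Toy illustration of the definition: count the basic steps a simple
# algorithm takes on inputs x in {0,1}* and check that they stay within
# p(|x|) for a (generously chosen) polynomial p.

def count_ones(x):
    """Return (number of '1' characters in x, number of basic steps taken)."""
    steps = ones = 0
    for bit in x:
        steps += 1          # one basic step per input symbol
        if bit == "1":
            ones += 1
    return ones, steps

p = lambda n: n**2 + 1      # the bounding polynomial p(.)

for x in ("0", "10110", "1" * 1000):
    ones, steps = count_ones(x)
    assert steps <= p(len(x))   # A(x) terminates within p(|x|) steps
```

Here the true step count is linear in |x|, so any polynomial of degree at least one bounds it; an algorithm whose step count grew like 2^|x| would violate every such bound for large enough inputs.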
Informally, we say a function of the security parameter is negligible if it becomes almost 0 as the value of the security parameter n tends to infinity; equivalently, a function of the security parameter is called negligible if it is asymptotically smaller than the inverse of every polynomial function. Namely, if f(n) is a function, then we say that f(n) is a negligible function if for every polynomial function p(n) there exists some value of n, namely N, such that f(n) is strictly less than the inverse of p(n), i.e., f(n) < 1/p(n), for all values of n > N. If this holds, then we say that f(n) is a negligible function. An equivalent definition is the following: f(n) is negligible if for every constant c there exists some N such that f(n) < n^(-c) for all n > N. The reason these two definitions are equivalent is that any polynomial function p(n) can be bounded by some n^c, so if f(n) is strictly less than the inverse of every polynomial function, I can rewrite that condition as f(n) < n^(-c) for every constant c. You can use either of these two definitions to prove or disprove whether a given function f(n) is negligible in n or not. The slide shows a few example functions which are all negligible: each of them is strictly less than the inverse of every polynomial function, where the value of N differs for the corresponding polynomial function. Even though all these functions are negligible, namely if I keep making n larger then as n tends to infinity each of these candidate functions eventually approaches zero, the rate at which each of them approaches zero is different.
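The definition can also be sanity-checked numerically, at least over a finite horizon; the helper below and its horizon of 5000 are illustrative choices, not part of the formal definition, which quantifies over all n beyond some threshold.

```python
# Numerical sanity check of the negligibility definition over a finite
# horizon: does f(n) < 1/p(n) hold for all n past some threshold N?

def is_eventually_below(f, inv_poly, limit=5000):
    """Return True if f(n) < inv_poly(n) for every n from some threshold
    up to the horizon, i.e., if the violations eventually stop."""
    last_violation = 0
    for n in range(1, limit + 1):
        if f(n) >= inv_poly(n):
            last_violation = n
    return last_violation < limit

# 2^-n is negligible: it eventually dips below 1/n^10 and stays below.
assert is_eventually_below(lambda n: 2.0 ** (-n), lambda n: n ** -10.0)

# 1/n^10 is NOT negligible: it never gets below 1/n^11 (their ratio is n).
assert not is_eventually_below(lambda n: n ** -10.0, lambda n: n ** -11.0)
```

The first check passes because an exponential decay beats every fixed polynomial decay once n is large enough; the second fails for every horizon, matching the counterexample discussed next.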
So, for instance, if I consider the function 2^(-n) and compare it with the other example functions on the slide, then 2^(-n) will approach zero faster than those functions, and so on. On the other hand, if I consider the function 1/n^10, then it is not a negligible function, because the requirement for a negligible function is that it should be strictly less than the inverse of every polynomial function; but for the polynomial p(n) = n^10 itself, there is no value of n beyond which 1/n^10 < 1/n^10 holds, so the condition cannot be satisfied for every polynomial. As a result, it violates the definition of negligible probability.

(Refer Slide Time: 25:34) So we have defined mathematically what we mean by an efficient algorithm, and we have defined which probabilities we can consider small. Now, the families of negligible and polynomial functions satisfy some nice closure properties. Let us see the closure properties satisfied by the class of polynomial functions: if you consider two arbitrary polynomial functions, say p1(n) and p2(n), then p1(n) + p2(n) as well as p1(n) * p2(n) are also polynomial functions. What is the implication of the first closure property? The way to interpret it is the following: imagine you have two subroutines, where the running time of the first subroutine is some polynomial function in n and the running time of the second subroutine is also some polynomial function in n. If you have a bigger routine which calls these subprocedures in sequence, then the running time of the bigger algorithm is also going to be a polynomial function. Namely, a bigger algorithm which calls these two smaller procedures in sequence is also going to be considered an efficient algorithm. That is the interpretation of the first closure property.
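The closure property can likewise be checked numerically for a concrete pair of polynomials; p1, p2, and the bounding constants below are hand-picked for this example.

```python
# Closure check for p1(n) = 3n^2 + 1 and p2(n) = n^3: the sum stays within
# a degree-3 bound and the product within a degree-5 bound, so both are
# again polynomial (the constant 5 is chosen by hand for this pair).

p1 = lambda n: 3 * n**2 + 1
p2 = lambda n: n**3

for n in range(1, 10_000):
    assert p1(n) + p2(n) <= 5 * n**3   # sum has degree max(2, 3) = 3
    assert p1(n) * p2(n) <= 5 * n**5   # product has degree 2 + 3 = 5
```

The degree arithmetic in the comments is the general argument: the sum of two polynomials has the larger of the two degrees, and the product has the sum of the degrees, so calling polynomial-time subroutines in sequence or nesting one inside the other keeps the overall running time polynomial.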
Just to summarize, in this lecture we discussed that if key reusability is our ultimate goal, namely if we want to design a scheme where we retain the same key for encrypting multiple messages, then we have to make two relaxations to the model of perfect secrecy. The first relaxation is that instead of assuming that our adversary is computationally unbounded, we assume that our adversary is computationally bounded, and we have seen how to measure whether the adversary's running time is computationally bounded or not. The second relaxation is that instead of saying that the adversary should learn absolutely nothing about the underlying message, we give the adversary some chance to break the scheme, a chance so small that for most practical purposes we can ignore it. We have also seen how to mathematically define such a small probability of the adversary breaking the scheme. We have seen that these two relaxations are necessary evils for any encryption scheme where the goal is key reusability, because if you do not make them, there are always two extreme attacks that are possible, namely the guessing attack and the brute-force attack. The success probability of the guessing attack is very, very small, but its running time is practical, whereas the success probability of the brute-force attack is 100%, but its running time is extremely large. So we definitely have to make these two relaxations. I hope you enjoyed this lecture. Thank you.