Complexity Theory Clause Samples
Complexity Theory. In Chapter 7, efficient algorithms were introduced that are important for cryptographic protocols. Designing efficient algorithms is, of course, a central task in all areas of computer science. Unfortunately, many important problems have resisted all past attempts to devise efficient algorithms solving them. Well-known examples of such problems are the satisfiability problem for boolean formulas and the graph isomorphism problem. One of the most important tasks in complexity theory is to classify such problems according to their computational complexity. Complexity theory and algorithmics are two sides of the same coin; they complement each other in the following sense: while in algorithmics one seeks the best upper bound for some problem, an important goal of complexity theory is to obtain the best possible lower bound for the same problem. If the upper and lower bounds coincide, the problem has been classified. The proof that some problem cannot be solved efficiently often appears to be "negative" and undesirable. After all, we wish to solve our problems, and we wish to solve them fast. However, there is also a "positive" aspect of proving lower bounds that is particularly relevant in cryptography (see Chapter 7). Here, we are interested in the applications of inefficiency: a proof that certain problems (such as the factoring problem or the discrete logarithm problem) cannot be solved efficiently can support the security of some cryptosystems that are important in practical applications. In Section 8.1, we provide the foundations of complexity theory. In particular, the central complexity classes P and NP are defined. The question of whether or not these two classes are equal is still open after more than three decades of intense research. Up to now, neither a proof of the inequality P ≠ NP (which is widely believed to hold) has been achieved, nor have we been able to prove the equality of P and NP.
This question led to the development of the beautiful and useful theory of NP-completeness. One of the best understood NP-complete problems is SAT, the satisfiability problem of propositional logic: given a boolean formula ϕ, does there exist a satisfying assignment for ϕ, i.e., an assignment of truth values to ϕ's variables that makes ϕ true? Due to its NP-completeness, it is very unlikely that efficient deterministic algorithms for SAT exist. In Section 8.3, we present a deterministic and a randomised algorithm for SAT t...
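The SAT question above can be made concrete with a small sketch: exhaustive search over all truth assignments. This is not one of the algorithms the text presents (the formula encoding and function names here are illustrative), but it shows why a satisfying assignment is easy to verify yet apparently hard to find: the search space doubles with every added variable.

```python
from itertools import product

def brute_force_sat(formula, variables):
    """Try all 2^n truth assignments for the given variables.

    `formula` is a function mapping a dict of truth values to a bool.
    Returns a satisfying assignment if one exists, else None.
    """
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment  # witness found; easy to verify
    return None  # no assignment satisfies the formula

# Example formula: (x OR y) AND (NOT x OR y) AND (x OR NOT y)
phi = lambda a: (a["x"] or a["y"]) and (not a["x"] or a["y"]) and (a["x"] or not a["y"])
print(brute_force_sat(phi, ["x", "y"]))  # {'x': True, 'y': True}
```

For n variables this loop inspects up to 2^n assignments, which is exactly the exponential behaviour that efficient SAT algorithms would have to avoid.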
Complexity Theory. The goal of complexity theory is to provide mechanisms by which computational problems may be classified in terms of the resources required to solve them. The resources measured are usually time, and occasionally storage space. We now define some of the terminology required. An algorithm is a computational procedure which takes a variable input and terminates with some output. If an algorithm follows the same execution path each time it is executed with the same input, then we say that the algorithm is deterministic. By contrast, a randomized algorithm's execution path may differ each time it is executed on the same input, since its decisions rely on a supply of random bits. The running time of an algorithm on a particular input is the number of steps or primitive operations executed before the algorithm terminates. The worst-case running time of an algorithm is an upper bound on its running time over all inputs, usually expressed as a function of the input size. The expected running time of an algorithm is its average running time over all inputs of a given size, again expressed as a function of the input size. Since the exact running time of an algorithm is often difficult to derive, we often rely on approximations of the running time. In particular, we often refer to the order of the asymptotic upper bound of the running time of an algorithm using the big-O notation.
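The deterministic/randomized distinction can be illustrated with a toy step-counting sketch (the function names and the linear-search setting are ours, chosen only for illustration). The deterministic search takes the same path, and hence the same number of steps, on every run with the same input; the randomized variant consumes random bits, so its step count varies between runs on that same input.

```python
import random

def linear_search_steps(items, target):
    """Deterministic: same input => same execution path.
    Worst case is len(items) comparisons (target last or absent)."""
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            break
    return steps

def randomized_search_steps(items, target, rng):
    """Randomized: the scan order depends on random bits, so the
    step count on a fixed input is a random variable; its expected
    value for a present target is (n + 1) / 2 comparisons."""
    order = list(items)
    rng.shuffle(order)  # decisions driven by the random bit supply
    steps = 0
    for x in order:
        steps += 1
        if x == target:
            break
    return steps

data = list(range(100))
print(linear_search_steps(data, 99))                       # always 100
print(randomized_search_steps(data, 99, random.Random()))  # varies per run
```

Note the two notions of cost in play: the worst-case running time bounds the deterministic search over all inputs, while the randomized search is naturally described by its expected running time on a fixed input.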
2.1 For two functions f(l) and g(l), we say that f(l) = O(g(l)) if there exist a positive constant c and a positive integer lc such that 0 ≤ f(l) ≤ c·g(l) for all l > lc.

Two frequently used concepts in cryptology are the notions of negligible functions and
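Definition 2.1 can be explored numerically. The check below verifies the condition 0 ≤ f(l) ≤ c·g(l) over a finite range only, so it is an illustration of the roles of the witnesses c and lc rather than a proof; the functions and bounds chosen are our own example.

```python
def witnesses_big_o(f, g, c, lc, upto=10_000):
    """Check the big-O condition 0 <= f(l) <= c*g(l) for all lc < l <= upto.

    A finite sanity check, not a proof: it shows how a particular pair
    (c, lc) witnesses f(l) = O(g(l)) per Definition 2.1.
    """
    return all(0 <= f(l) <= c * g(l) for l in range(lc + 1, upto + 1))

f = lambda l: 3 * l**2 + 10 * l  # e.g., a running-time function
g = lambda l: l**2               # candidate asymptotic bound

# 3l^2 + 10l <= 4l^2 holds exactly when l >= 10, so c=4, lc=10 works:
print(witnesses_big_o(f, g, c=4, lc=10))  # True
# c = 3 can never absorb the 10l term, so it fails:
print(witnesses_big_o(f, g, c=3, lc=10))  # False
```

The example shows that the constant c may swallow lower-order terms, which is why big-O statements ignore them: 3l² + 10l = O(l²) even though 3l² + 10l > l² everywhere.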
