BOOK REVIEW

Computational Complexity, by K. Wagner and G. Wechsung, D. Reidel, 1986, 551 pp., np.
This book collects in one place, for the first time, most of the results currently known in the field of computational complexity. As the authors state: "This is no textbook but a monograph which is primarily intended for specialists doing research on computational complexity. It can be used by everybody who has a certain basic knowledge concerning effective computability, recursive functions and formal languages ...". For those not familiar with computational complexity theory or how it might affect them, let me review the field briefly.

Leibniz had a dream that one day there would exist a "universal" language built on the laws of logic: a logical calculus within which the truth of any question within the domain of reason could be resolved. By the middle of this century, it was known that Leibniz's dream was impossible. The work of Gödel, Post, and Turing had shown that there were limits to the domain of questions that could be resolved by formal systems, that there existed well-posed questions that could not be decided by any logical method. Their results provided the first hint that the class of well-posed questions might contain some internal structure, that different questions might be resolved with different degrees of difficulty.

The results of Gödel, Post, and Turing provided the initial distinction between formally decidable and formally undecidable problems. In the time since, it has been revealed that there is structure within the domain of formally decidable problems as well. This domain contains a whole hierarchy of subclasses of problems, each representing a different degree of complexity in the procedures required for their resolution. Thus, there are two questions to be asked about a particular problem. First, is it decidable at all? Second, if it is decidable, how much computational effort will the decision require? The first question defines the field of computability theory, and the second question defines the field of computational complexity.

Turing and Post showed that the procedures used to decide formal questions can be reduced to the actions of idealized computing machines. The universally accepted formal model for a computing machine is the Turing machine, an extremely simple abstraction of a computing device consisting of a finite-state control directing the actions of a tape head that can read, write, and move on an indefinitely extensible data tape. Despite its simplicity, the Turing machine formalism is thought to be equivalent in computational power to any other formal system for computing, a proposition known as Church's thesis.
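To make the Turing machine model concrete, the following sketch in Python (my own illustration, not anything drawn from the book) simulates a finite-state control acting on an unbounded tape. The transition table shown, a machine that appends a mark to a block of 1s, is an arbitrary example chosen only to show the read-write-move cycle.

```python
# A minimal sketch of the Turing-machine model described above: a finite-state
# control reads and writes symbols on an indefinitely extensible tape and moves
# a single head left or right.  The machine below is purely illustrative.

def run_turing_machine(transitions, tape, state="start", accept="halt", max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, new_symbol, move), where
    move is -1 (left), +1 (right), or 0 (stay).  The tape is a dict from
    position to symbol, so it is unbounded in both directions; blanks read '_'."""
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return tape
        symbol = tape.get(head, "_")
        if (state, symbol) not in transitions:
            raise RuntimeError("machine rejected: no transition defined")
        state, new_symbol, move = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += move
    raise RuntimeError("step limit exceeded (the machine may not halt)")

# Example machine: scan right over a block of 1s and append one more 1.
increment = {
    ("start", "1"): ("start", "1", +1),   # keep moving right over the 1s
    ("start", "_"): ("halt",  "1",  0),   # write a 1 on the first blank and halt
}

tape = {i: "1" for i in range(3)}          # the input: three 1s
result = run_turing_machine(increment, tape)
print("".join(result[i] for i in sorted(result)))   # prints 1111
```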
The notion of an "effective procedure," more commonly known as an algorithm, has become associated with Turing machines in the following manner. An algorithm is formally equivalent to a Turing machine that halts in finite time for any input. The functions for which algorithms can be constructed are known as the recursive functions. The class of functions for which a Turing machine can be guaranteed to halt in finite time for only some inputs is known as the class of recursively enumerable functions. Functions for which a Turing machine may not halt in finite time on any input at all are not recursively enumerable. Computability theory concerns itself with locating problems within one of these three classes. Complexity theory concerns itself only with the class for which algorithms exist: the recursive functions.

The recursive functions, then, are computable in finite time. The question now becomes: just how long can we expect "finite" to be? A problem that is computable in theory may require so much time as to be uncomputable in practice. A similar problem exists with regard to the amount of workspace required by the computation. A Turing machine makes use of a data tape that is indefinitely extensible, which means that, although always finite, the amount of workspace available to the Turing machine is in principle unbounded. The amount of workspace required by a "computable" function may be impractical on any physically realizable computer.

In practice, the most important distinction within the domain of recursive functions is between those whose critical complexity measure rises as a polynomial function of the "size" of the problem and those whose complexity measure rises as an exponential function of that size. The common measure of size is the size of the input to the problem. For example, the time required to find the optimal tour in the famous "traveling salesman" problem rises exponentially with the number of cities that must be visited. In general, problems whose time complexity exhibits exponential growth with the size of the input are considered intractable, whereas those exhibiting polynomial growth are considered tractable.

Another important distinction is that between deterministic and nondeterministic time measures. Many exponential-time problems consist primarily of searching through a space of possible solutions that grows exponentially with the size of the input, whereas the amount of time required to check each candidate solution is only polynomial in the size of the input. If the machine could "guess" a correct solution, the time complexity of the problem would be only polynomial in the size of the input. Alternatively, one can imagine a great number of machines started on all possible solutions to a problem simultaneously; if a solution exists, this array of machines will find it in polynomial time. The deterministic (or DTIME) complexity is a measure of how long a single machine might take to find a solution without guessing. The nondeterministic (or NTIME) complexity is a measure of how long an array of machines, or a single machine with the ability to guess correct solutions, would take to find a solution. It turns out that many problems of practical importance are exponential in DTIME complexity but only polynomial in NTIME complexity.
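The difference between the two measures can be seen in miniature in the traveling-salesman example mentioned above. The Python sketch below (using a made-up four-city distance table) contrasts the exhaustive search a deterministic machine must carry out, whose number of candidate tours grows factorially with the number of cities, with the verification of a single proposed tour, which takes only polynomial time; a machine able to "guess" the right tour would be left with only the cheap verification step.

```python
from itertools import permutations

# Illustrative distance matrix for 4 cities (symmetric, made up for the example).
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def tour_length(tour):
    """Polynomial-time check: sum the legs of a proposed tour, returning to the
    start.  This is all the work a machine that could 'guess' the right tour
    would still have to do."""
    return sum(DIST[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(n):
    """Deterministic search: try every ordering of the cities.  The number of
    candidate tours grows as (n-1)!, which is where the exponential DTIME cost
    of the naive algorithm comes from."""
    best = None
    for rest in permutations(range(1, n)):
        tour = (0,) + rest
        length = tour_length(tour)
        if best is None or length < best[1]:
            best = (tour, length)
    return best

print(brute_force_tsp(len(DIST)))    # optimal tour found by exhaustive search
print(tour_length((0, 1, 3, 2)))     # cheap verification of one guessed tour
```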
Problems that exhibit polynomial DTIME complexity belong to the class P, whereas problems that are polynomial in NTIME complexity but something worse than polynomial in DTIME complexity belong to the class NP. In the course of trying to find more efficient algorithms for important problems in class NP, it was discovered that some of them could be reduced to problems in the class P. This naturally led to attempts to determine the relationship between P and NP. Despite a great deal of effort, the question of whether or not P = NP has not been resolved. It is currently the big open question in complexity theory.

Why should researchers from disciplines outside of computer science be interested in issues related to computational complexity? Because scientists from many different areas are often doing things that are very similar from a computational point of view. The complexity class is now known for many of the algorithms that typically arise in the course of scientific computation. Many clever variants have been found that reduce the complexity of common algorithms under a wide variety of circumstances. Furthermore, polynomial-time algorithms have been exhibited that will find near-optimal solutions to many problems for which exponential time would be required to find the optimum solution. Most important, perhaps, is the unveiling of the similar structure exhibited by wide varieties of scientific computations, and the insight that may follow from the identification of a researcher's particular computational problem with the wider class of computational processes to which it belongs.

The issues raised here are but some of the highlights of complexity theory. Computational Complexity stands alone in providing comprehensive coverage of the field, but it is not the place for the novice to start. There are several other books that the newcomer should turn to first. Introduction to Automata Theory, Languages, and Computation (J. E. Hopcroft and J. D. Ullman, Addison-Wesley, 1979) is a general text on formal computer theory with good introductory sections on computability and complexity. The style is somewhat terse and not all of the proofs are well motivated, but most of the basic definitions and concepts are covered. Elements of the Theory of Computation (H. R. Lewis and C. H. Papadimitriou, Prentice-Hall, 1981) contains good introductory material, and its definitions, concepts, and proofs are somewhat better motivated, although the coverage is not as complete. Computers and Intractability: A Guide to the Theory of NP-Completeness (M. R. Garey and D. S. Johnson, Freeman, 1979) is an excellent introduction to one of the most important areas of complexity theory, with fully half of the book devoted to a list of known problems, their complexity measures, and references to the significant publications for each problem.

CHRISTOPHER G. LANGTON
Center for Nonlinear Studies
Los Alamos National Laboratory
Los Alamos, NM 87545