Preface
At the intuitive level, any practicing mathematician or computer scientist (indeed, any student of these two fields of study) will have no difficulty recognizing a computation or an algorithm as soon as they see one, an algorithm being a finite description of the computations to be carried out for any given input. It is also expected that students of computer science (and, increasingly nowadays, of mathematics) will acquire the skill of devising algorithms (normally expressed as computer programs) that solve a variety of problems.
But how does one tackle questions such as “is there an algorithm that solves such and such a problem for all possible inputs?” (a question with a potentially “no” answer) or “is there an algorithm that solves such and such a problem via computations that take no more steps than some (fixed) polynomial function of the input length?” (this, too, being a question with a potentially “no” answer).
Typical (and tangible, indeed “interesting” and practically important) examples that fit the above questions, respectively, are
- “is there an algorithm that can determine whether or not a given computer program (written in, say, the C language) is correct?”[1]
and
- “is there an algorithm that will determine whether or not any given Boolean formula is a tautology, doing so via computations that take no more steps than some (fixed) polynomial function of the input length?”
For the first question we have a definitive “no” answer,[2] while for the second one the answer is, to this day, not known.
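To give the flavor of why the first question has a “no” answer, here is a minimal sketch (in Python; an illustration added here, not taken from the text) of the classic diagonalization argument behind the halting problem, which underlies the undecidability of program correctness: any candidate decider can be turned into a program about which it must answer incorrectly. All names in the sketch are hypothetical.

```python
def diagonal(halts):
    """Given any candidate decider halts(prog) -> bool, which claims to
    report whether calling prog() eventually halts, construct a program
    on which the decider must be wrong (the diagonal argument)."""
    def d():
        if halts(d):
            while True:   # the decider said "halts", so loop forever
                pass
        # the decider said "loops", so halt immediately
    return d

# Every concrete candidate is refuted by the program built from it.
# For example, a decider that always answers "loops":
def always_loops(prog):
    return False

d = diagonal(always_loops)
d()  # returns at once: d halts, contradicting always_loops(d) == False
```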
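The second question can likewise be made concrete with a brute-force tautology checker, again a sketch for illustration only; representing a formula as a Python function over truth assignments is an assumption of the sketch. Trying all assignments takes 2^n evaluations for n variables, which is exponential in the input length; whether some algorithm can decide the same question within a polynomial step bound is exactly what the question asks.

```python
from itertools import product

def is_tautology(formula, variables):
    """Brute force: evaluate the formula under every truth assignment.
    With n variables the loop below runs 2**n times, so the running
    time is exponential in n, far beyond any polynomial step bound."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if not formula(assignment):
            return False          # found a falsifying assignment
    return True

# "p or not p" is a tautology; "p and q" is not.
print(is_tautology(lambda a: a["p"] or not a["p"], ["p"]))    # True
print(is_tautology(lambda a: a["p"] and a["q"], ["p", "q"]))  # False
```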