Chapter 2
Basic ideas of classical and quantum computational models and complexity classes
In this chapter we briefly describe the basic ideas of classical and quantum
computational models. To be precise, we will provide an overview of the
classical (irreversible, probabilistic and reversible) and quantum Turing
machines, the need for error correction and the quantum circuit model of
computation. We also describe a few complexity classes that are relevant
for the study of quantum computing and quantum cryptography.
2.1 Elementary idea of complexity of an algorithm
Classical theory of computation basically considers two questions: (1) What
is computable? (2) What resources are required by a computing machine
to solve a problem? The fundamental resources required for computing
are means to store and manipulate symbols. Thus the most important resources are time (the amount of time required by the computing machine to
complete a specific computational task) and space (the amount of memory
required by the computing machine to perform the computation). Resource
requirements are measured in terms of the length (size) of the problem. Let
us think about an algorithm that can find the square of x, provided x is given. To specify the problem we need to provide $L = \log_2(x)$ bits of information ($L = \log_2(x)$ is the number of bits needed to store the value of x) to the computing machine. This L can be visualized as the size of the
problem. The computational complexity of the algorithm is determined
by the number of steps s required by a computing machine to solve the
problem of size L by using that particular algorithm. It is expected that
the number of steps s required by the computing machine would depend on
the size of the problem L. Thus s is expected to be a function of L. For example, we can think of the following cases: $s_1 = A(L^3 + L)$, $s_2 = A(L^2 + L)$ and $s_3 = A \cdot 2^L = Ax$, where A is a constant. In the first two cases, the
problems are considered tractable, and we say that these problems can be efficiently solved on the computing machine. In the last case, the computational task is considered hard or intractable (it is still computable, but not efficiently so).
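To get a feel for these growth rates, the short Python sketch below (an illustration of ours, not taken from the text) tabulates the three step counts $s_1$, $s_2$ and $s_3$ for a few problem sizes L, taking the constant A = 1 for simplicity.

```python
# Illustrative sketch: tabulate the three step-count functions for a few
# problem sizes L, with the constant A set to 1 for simplicity.

def s1(L, A=1):
    return A * (L**3 + L)   # polynomial: tractable

def s2(L, A=1):
    return A * (L**2 + L)   # polynomial: tractable

def s3(L, A=1):
    return A * 2**L         # exponential: intractable

print(f"{'L':>4} {'s1':>10} {'s2':>10} {'s3':>28}")
for L in (10, 20, 40, 80):
    print(f"{L:>4} {s1(L):>10} {s2(L):>10} {s3(L):>28}")
```

Even for a modest problem size of L = 80 bits, $s_3$ exceeds $10^{24}$ steps while $s_1$ stays below $10^6$; this gap is what separates the tractable cases from the intractable one.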
Let us make the notion a bit more compact. To do so, we introduce a
coarse measure of complexity. To describe this measure we need to introduce the following two notations:
1. O (pronounced as “big oh”) notation, which provides an upper bound on the growth rate of a function. The idea will be clear from the following examples: in this notation $O(s_1) = O(L^3)$, $O(s_2) = O(L^2)$ and $O(s_3) = O(2^L)$. Thus the constants and lower-order terms are suppressed to obtain a coarse upper bound, and $O(s(L))$ denotes an upper bound on the running time of the algorithm.
2. Similarly, we may introduce a notation to denote a lower bound on the growth rate of a function. Such a notation is usually called “big omega” notation and is denoted as Ω. Both notations are illustrated in the short derivation following this list.
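As a short derivation of ours (using the standard definitions of these notations, which the chapter only describes informally: $f = O(g)$ if $f(L) \le c\,g(L)$ and $f = \Omega(g)$ if $f(L) \ge c\,g(L)$, for some constant $c > 0$ and all sufficiently large L), consider the first example:

\[
s_1 = A(L^3 + L) \le 2A\,L^3 \;\; (L \ge 1) \quad\Rightarrow\quad s_1 = O(L^3),
\qquad
s_1 = A(L^3 + L) \ge A\,L^3 \quad\Rightarrow\quad s_1 = \Omega(L^3).
\]

The constant A and the lower-order term L disappear in both bounds, which is exactly the coarse-graining described above.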
In the above examples, we have considered time as the computational resource, but these ideas are equally valid for any other choice of resource;
say, space.
In general an algorithm is said to be efficient with respect to a particular
resource if the amount of that resource used to implement the algorithm to
solve the corresponding computational task is $O(L^k)$ for some k. In this
case we say that the algorithm is polynomial with respect to that resource.
Similarly, if an algorithm runs in O(L) then we call it linear, and if it runs in
O(log L) then we call it logarithmic. We know that linear and logarithmic
functions cannot grow faster than a polynomial function. Consequently,
these algorithms are also called efficient. On the other hand, if an algorithm
requires $\Omega(c^L)$ resources (similar to our third example, where $s_3 = A \cdot 2^L$), where $c > 1$ is a constant, then the algorithm is called exponential. It may
be noted that the bound mentioned here is a lower bound. More precisely, if the resource requirement of an algorithm cannot be bounded above by any polynomial, then we call it super-polynomial (sometimes we loosely call it exponential, too); such an algorithm is considered inefficient and the corresponding task (problem) is considered hard.
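Returning to the squaring example, the sketch below (our own illustration; the function names are hypothetical) contrasts an efficient and an inefficient way to compute $x^2$: multiplying two L-bit numbers needs a number of bit operations polynomial in L, whereas adding x to itself x times performs roughly $2^L$ additions, i.e. a number of steps exponential in the input size.

```python
# Hypothetical illustration: two ways to square an integer x.

def square_by_multiplication(x: int) -> int:
    # Multiplying two L-bit numbers takes a number of bit operations
    # polynomial in L (O(L^2) for the schoolbook method): efficient.
    return x * x

def square_by_repeated_addition(x: int) -> int:
    # Adding x to itself x times takes roughly 2^L additions, where
    # L = x.bit_length(): exponential in the input size, hence inefficient.
    total = 0
    for _ in range(x):
        total += x
    return total

x = 12345
print("input size L =", x.bit_length(), "bits")
print(square_by_multiplication(x) == square_by_repeated_addition(x))  # True
```

Both routines compute the same value, but only the first scales polynomially with the problem size L; the second becomes hopeless long before x reaches even 64 bits.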
