What is Complexity?

Ignasi Andres
Mar 25, 2022

What does a computer scientist mean when they say that something is complex?

Photo by Tim Johnson on Unsplash

Sometimes we hear people talking about how complex or difficult it is for a machine to solve a problem, or to perform certain tasks that, in humans, involve thinking. But what exactly is the complexity of a problem? Humans compare and evaluate problems and situations all the time: between two situations s1 and s2, we can say that s1 is more complex than s2 because we have some information about them. But is it possible to define complexity in terms a machine could understand? And if so, how can we measure how complex a problem is?

Big O notation

Perhaps most of you have already heard of the big O notation. This notation gives an approximation of how many iterations an algorithm needs to solve a problem. For example, if an algorithm solves a problem in O(n²), this means that the main component of the code, the one that may take longest, is quadratic (a loop inside a loop…). It is commonly accepted that polynomial complexities (O(n^x), where x is an integer) are better than other complexities, such as exponential (O(2^n)). However, some exponential algorithms perform better in practice than some polynomial algorithms.
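As a rough illustration (my own sketch, not from the original article), here is the same toy problem, finding a duplicate in a list, solved by a quadratic and by a linear algorithm; the function names and data are made up for the example.

```python
def has_duplicate_quadratic(items):
    # A loop inside a loop: roughly n * n comparisons, so O(n^2) time.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    # A single pass with a set: roughly n lookups, so O(n) time,
    # at the cost of O(n) extra space.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

print(has_duplicate_quadratic([3, 1, 4, 1, 5]))  # True
print(has_duplicate_linear([3, 1, 4, 1, 5]))     # True
```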

As a measure of complexity this can work. But it is a measure of how many steps the algorithm executes in order to solve a problem, not of the problem itself (or, in the case of space complexity, how many cells of memory it uses). For example, there are different search algorithms (Breadth-First Search, Depth-First Search, etc.), and each one has a different time and space complexity. This measure is agnostic of the platform where the solution will run; that is, it doesn't matter if you run the solution on a super server or on an old laptop. To talk about the complexity of the problem itself, we can use an abstract representation of a computer: the Turing machine, which we describe next.
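To make that concrete, here is a minimal sketch (again my own example, with a made-up graph) of Breadth-First Search and Depth-First Search visiting the same graph. Both take time proportional to the number of nodes and edges, but they keep different frontiers in memory.

```python
from collections import deque

# A small example graph as an adjacency list (hypothetical data).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def bfs(start):
    # Breadth-First Search: a FIFO queue, so a whole "layer" of the
    # graph can sit in memory at once (larger space usage).
    visited, frontier, order = {start}, deque([start]), []
    while frontier:
        node = frontier.popleft()
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(neighbour)
    return order

def dfs(start):
    # Depth-First Search: a LIFO stack, so it mostly keeps a single
    # path (plus branching points) in memory at a time.
    visited, frontier, order = set(), [start], []
    while frontier:
        node = frontier.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        frontier.extend(reversed(graph[node]))
    return order

print(bfs("A"))  # ['A', 'B', 'C', 'D']
print(dfs("A"))  # ['A', 'B', 'D', 'C']
```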

Turing Machine

Photo by Mauro Sbicego on Unsplash

A Turing machine is a mathematical construct that represents a computer. It reads an input string written on a tape, one symbol per position, and it must process the string to verify whether it belongs to a language L, by accepting or rejecting it. Formally, it is a tuple composed of a set of states; a set of symbols; a transition function that maps any combination of state and symbol to a new state (possibly the halting state h, the accept state or the reject state), a symbol to write, and a direction for the head (left, right or stay); and finally an initial state. In this post we will use language L as a synonym for problem p, and deciding a language as a synonym for solving a problem p.

How does a Turing machine work? It is really simple. The machine starts in the initial state, positions its head at the start of the tape and reads the first symbol. This state-symbol pair is fed to the transition function, which outputs a new state for the machine (or the accept/reject/halt state), a symbol to write and a direction in which to move the head over the tape. The machine continues to do so until the accept or the reject state is reached, at which point the string is accepted as part of the language L, or rejected (the problem is solved).
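As a concrete, deliberately tiny sketch (my own example, not from the article), here is a deterministic Turing machine in Python that accepts binary strings ending in 1. The transition table, tape, head moves and accept/reject states follow the description above; the state names are made up.

```python
# States q0/q1 remember the last symbol read; "_" is the blank symbol.
delta = {
    ("q0", "0"): ("q0", "0", "R"),
    ("q0", "1"): ("q1", "1", "R"),
    ("q1", "0"): ("q0", "0", "R"),
    ("q1", "1"): ("q1", "1", "R"),
    ("q0", "_"): ("reject", "_", "S"),
    ("q1", "_"): ("accept", "_", "S"),
}

def run(tape_string, initial_state="q0"):
    tape = list(tape_string) + ["_"]   # the input written on the tape
    state, head = initial_state, 0     # initial state, head at the first cell
    while state not in ("accept", "reject"):
        symbol = tape[head]
        state, written, move = delta[(state, symbol)]  # the transition function
        tape[head] = written
        if move == "R":
            head += 1
        elif move == "L":
            head -= 1
        # "S" means the head stays in place
    return state == "accept"

print(run("0101"))  # True: the string ends in 1, so it is accepted
print(run("0110"))  # False: the string ends in 0, so it is rejected
```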

Representation of a Turing Machine

Simple, isn’t it? Well, with a Turing machine it is possible to represent every computation a machine is able to do.

The machine described above is a deterministic Turing machine. A non-deterministic Turing machine, on the other hand, has a non-deterministic transition function: instead of mapping a state-symbol combination to a single next state, it maps it to a set of possible next states.
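As a hedged sketch of the difference (my own example), the transition table below returns a set of possible moves for each state-symbol pair; the machine accepts binary strings containing "11" by "guessing" when the pattern starts, and the simulation simply explores every branch and accepts if any branch does.

```python
ndelta = {
    ("q0", "0"): {("q0", "R")},
    ("q0", "1"): {("q0", "R"), ("q1", "R")},  # a set of possible next moves
    ("q1", "1"): {("accept", "S")},
    # missing entries mean the branch dies (an implicit reject)
}

def accepts(tape_string):
    tape = list(tape_string) + ["_"]
    branches = [("q0", 0)]  # configurations to explore: (state, head position)
    while branches:
        state, head = branches.pop()
        if state == "accept":
            return True
        for next_state, move in ndelta.get((state, tape[head]), set()):
            branches.append((next_state, head + 1 if move == "R" else head))
    return False

print(accepts("0110"))  # True: the branch that guesses the first 1 accepts
print(accepts("0101"))  # False: every branch eventually dies
```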

When a Turing machine processes, or decides, any string s of a language L in time f(n), we say that the language L belongs to the set of languages TIME(f(n)): L ∈ TIME(f(n)). As we said before, we can reformulate this as: p ∈ TIME(f(n)) means that the problem p is solved in time f(n). Such a set of languages is called a complexity class. Likewise, SPACE(f(n)) is the set of languages that are decided using an amount of memory bounded by f(n).

A complexity class includes all the languages that are decided within the same time bound. For example, the set of all languages that are decided in polynomial time by a deterministic Turing machine (the problems that are easy to solve…) is called P. The set of all languages that are decided in polynomial time by a non-deterministic Turing machine is called NP (which includes the hard ones…). It is an open problem in computer science to determine whether P = NP.
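In standard notation (my summary, consistent with the definitions above; NTIME(f(n)) is the analogue of TIME(f(n)) for non-deterministic machines):

```latex
\mathrm{P}  = \bigcup_{k \ge 1} \mathrm{TIME}\!\left(n^{k}\right),
\qquad
\mathrm{NP} = \bigcup_{k \ge 1} \mathrm{NTIME}\!\left(n^{k}\right),
\qquad
\text{and it is open whether } \mathrm{P} = \mathrm{NP}.
```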

To get a feeling for how hard an NP problem is, imagine the following situation: you have a sequential decision problem, that is, at each step you have to choose one action from a set of possible actions. Now imagine that you have an oracle, a machine that at each step gives you the best action available. A problem in NP is one where, with this oracle, you could reach and check a solution in polynomial time; without it, you may have to try an exponential number of combinations. For harder problems, even this oracle is not enough to avoid a lot of processing before finding a solution!
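Here is a minimal sketch of this "guess and verify" view of NP, using Subset Sum as the example (the function names and numbers are my own, hypothetical): checking the oracle's guess is cheap, while searching without the oracle may mean trying all 2^n subsets.

```python
from itertools import combinations

def verify(numbers, target, certificate):
    # Verification is cheap (polynomial): just check the guessed subset.
    return all(x in numbers for x in certificate) and sum(certificate) == target

def solve_without_oracle(numbers, target):
    # Without an oracle we may have to try every subset (exponential time).
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return list(subset)
    return None

numbers, target = [3, 9, 8, 4, 5, 7], 15
certificate = solve_without_oracle(numbers, target)  # plays the oracle's role here
print(certificate, verify(numbers, target, certificate))  # prints: [8, 7] True
```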

This classification of languages is useful because of reductions. If we have a problem 1 that belongs to P, and another problem 2 that we do not know how to classify, and we manage to reduce problem 2 to problem 1 in polynomial time, then we know that problem 2 also belongs to P.

Complexity of Planning

Photo by Jeremy Bishop on Unsplash

We can think of a Turing machine that solves an automated planning problem as follows: the set of states of the Turing machine is the set of states of the problem; the transition function is the set of actions; the initial state is the initial state of the problem; and the goal states are the accepting states of the Turing machine.
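As a hedged sketch of this correspondence (a toy problem of my own invention, not from the article), here is a tiny planning problem written with the same ingredients: states, a transition (action) function, an initial state and accepting (goal) states, plus a naive search for a plan.

```python
states = {"kitchen", "hall", "bedroom"}
actions = {  # the "transition function": (state, action) -> next state
    ("kitchen", "go-hall"): "hall",
    ("hall", "go-kitchen"): "kitchen",
    ("hall", "go-bedroom"): "bedroom",
}
initial_state = "kitchen"
goal_states = {"bedroom"}   # the accepting states

def plan(state, visited=()):
    # Naive depth-first search for a sequence of actions reaching a goal state.
    if state in goal_states:
        return []
    for (s, action), next_state in actions.items():
        if s == state and next_state not in visited:
            rest = plan(next_state, visited + (state,))
            if rest is not None:
                return [action] + rest
    return None

print(plan(initial_state))  # ['go-hall', 'go-bedroom']
```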

Automated planning is a hard problem, and it may appear to belong to the NP class. In fact, it belongs to the PSPACE class. Recall that a planning problem is a search problem in the state space, and at each step several branches are opened, so the number of states grows exponentially with the size of the problem description; a plan may even be exponentially long, which is why we cannot simply guess one and verify it quickly. The search, however, can be carried out reusing a polynomial amount of memory, which is what places planning in PSPACE. The reason planning is feasible in practice, despite this complexity, is the existence of heuristics that can drastically cut the size of the explored state space.

Conclusions

This post has a lot of computer science theory. We talked about Turing machines and complexity, and that is a lot! I hope that now you have an idea of what the complexity of a problem is and what it means for a problem to be hard, even if we have not yet delved into the EXP classes, which contain problems that are provably intractable!

Stay safe!
