What Are Algorithms and Their Complexities

We constantly use algorithms in our everyday life. For example, what if we're somewhere on a street and want to get home to our apartment on the 20th floor? To achieve that, we could do the following:

  1. Walk to your home.
  2. Use stairs to go up to the 20th floor.

Algorithm Complexity

Let's stick with the "getting home" example. How else could we do this?

  1. Call a cab that will drive you home.
  2. Use the elevator to get upstairs to the 20th floor.

As you can see, each of these two algorithms requires a different amount of time and resources. The cab-and-elevator algorithm requires more money but less energy, while the walking algorithm requires less money but more energy. So if we don't have money and are pretty healthy, we should choose the walking algorithm and enjoy the walk. Otherwise, if we have enough money and need to get home urgently, the cab is the better option.

What we've just done here is evaluate the complexity of each algorithm; this is how we decide which option is best for us depending on the available resources. In the computing world the concept is the same, only the variables are different.

Big O Notation

Big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows. The key here is that Big O notation is a relative metric: it shows how fast the execution time of an algorithm (or its consumed memory) will grow as the input size grows. Absolute values of time and memory vary across hardware, so we need a relative metric: the same program may be executed in 1 second on one computer and in 10 seconds on another. Thus, comparing absolute execution times of two algorithms would require running them on the same predefined hardware configuration, which is inconvenient and sometimes not reproducible.
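
To make this concrete, here is a minimal sketch in Python (the sum_list function is a hypothetical example, not from the original): instead of measuring wall-clock time, which varies from machine to machine, we count abstract steps, which depend only on the input size.

```python
import time

def sum_list(numbers):
    """Sum a list while counting abstract 'steps' (one per element)."""
    total = 0
    steps = 0
    for value in numbers:  # one step per element, so steps == len(numbers)
        total += value
        steps += 1
    return total, steps

# Wall-clock time differs from machine to machine...
start = time.perf_counter()
total, steps = sum_list(list(range(10_000)))
elapsed = time.perf_counter() - start

# ...but the step count is always 10,000 for an input of size 10,000.
print(f"total={total}, steps={steps}, elapsed={elapsed:.6f}s")
```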

Let's illustrate the Big O concept with the "getting home" algorithm, specifically its stair-climbing portion.

Imagine we have two men: one is a young and fast Olympic champion, and the other is a typical middle-aged man. We've just agreed that we want the complexity of the stair-climbing algorithm to be the same for both men, so it should not depend on a man's speed but rather on how high he has to climb. In computer terms, our algorithm analysis will not depend on the hardware, but on the size of the problem itself. How many steps would these two men need to take to get to the 20th floor? Let's assume there are 20 steps between each floor. That means each man needs to take 20 * 20 = 400 steps to get to the 20th floor. Let's generalize this for the nth floor:

Input: n (the floor number)
Output: 20 * n steps (moves)

Do you see the pattern here? The time complexity of climbing the stairs has a linear dependency on the number of floors. In Big O syntax, this is written as follows:

O(20 * n)
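
Here's a minimal sketch of this in Python (the climb_stairs name and the STEPS_PER_FLOOR constant are illustrative assumptions): counting every individual stair step shows the total growing linearly with the floor number n.

```python
STEPS_PER_FLOOR = 20  # the example assumes 20 steps between floors

def climb_stairs(n):
    """Count the stair steps needed to reach the nth floor.

    One loop iteration per physical step, so the work is 20 * n, i.e. O(n).
    """
    steps_taken = 0
    for _ in range(STEPS_PER_FLOOR * n):
        steps_taken += 1  # one move per stair step
    return steps_taken

print(climb_stairs(20))  # 400 steps to the 20th floor
print(climb_stairs(40))  # doubling the floor doubles the steps: 800
```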

One important rule to understand here:

We must always get rid of constant factors and non-dominant parts when describing complexity using Big O notation: O(20 * n) becomes O(n), O(n + 1) becomes O(n), O(n + n^2) becomes O(n^2), O(n + log(n)) becomes O(n), and so on.

Why is that? Because Big O must describe the order of the function, not the exact number of milliseconds or megabytes it consumes. It describes the trend, not absolute values. Constant factors do not change the trend: whether each floor has 20 steps or 25, the total number of steps still grows linearly with the floor number. Non-dominant parts are the ones we may discard for huge values of n: for example, when n = 10000 in O(n + n^2), the n^2 part contributes 100,000,000 steps while the n part contributes only 10,000, so n barely influences the final number at all. Therefore, we may discard the constant 20, and the time complexity of the stair-climbing algorithm is as follows:

O(n)
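
A quick sketch makes this rule tangible (count_steps is a hypothetical function whose exact cost is n + n^2 steps): as n grows, the n^2 part dominates the total so completely that the n part becomes negligible.

```python
def count_steps(n):
    """Perform exactly n + n^2 'steps': one pass over the elements (n),
    then one pass over every ordered pair of elements (n^2)."""
    steps = 0
    for _ in range(n):           # the n part
        steps += 1
    for _ in range(n):           # the n^2 part
        for _ in range(n):
            steps += 1
    return steps

for n in (10, 100, 1000):
    total = count_steps(n)
    # The share contributed by the n term shrinks toward zero as n grows,
    # which is why O(n + n^2) simplifies to O(n^2).
    print(f"n={n}: total steps={total}, n's share={n / total:.4%}")
```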

This was an example of linear dependency, but other types of dependency exist as well: O(log(n)), O(n * log(n)), O(n^2), O(2^n), O(n!).
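
As an illustration, here are minimal Python sketches of two of those growth orders (the function bodies are simplified examples, not canonical implementations): halving the search space on each iteration gives O(log(n)), and a nested loop over all pairs gives O(n^2).

```python
def binary_search(sorted_items, target):
    """O(log(n)): each comparison halves the remaining search space."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # not found

def has_duplicate(items):
    """O(n^2): a nested loop compares every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(has_duplicate([1, 2, 3, 2]))        # True
```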

This article will be updated in the coming days.