In the previous article “Big O Notation for Time and Space Complexity,” we introduced the origin and notation of Big O time complexity. Now let’s explore how to analyze the time complexity of a piece of code.

Understanding how to analyze the time complexity of a piece of code using Big O notation is a crucial skill for any software developer or computer scientist. Big O notation provides a way to describe the performance of an algorithm in terms of the time it takes to execute or the memory it uses (though we’re focusing on time complexity here). Here’s a step-by-step guide on how to analyze the time complexity of code using Big O.

Identify the Loops

Start by looking at the code and identifying any loops, especially nested loops. Loops are often the primary contributors to the time complexity of an algorithm. Note the number of loops and how they are nested.
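
As a minimal illustration (the function names below are hypothetical, not from the original articles), the first function contains a single flat loop over the input, while the second nests one loop inside another; spotting that nesting is exactly what this step is about.

int sum_array(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++) {   /* one flat loop: runs n times */
        s += a[i];
    }
    return s;
}

int count_equal_pairs(const int *a, int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {             /* outer loop */
        for (int j = i + 1; j < n; j++) {     /* inner loop, nested inside the outer one */
            if (a[i] == a[j]) {
                count++;
            }
        }
    }
    return count;
}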

Count the Operations Inside the Loops

For each loop, determine the number of operations that are performed within the loop body. Operations can include arithmetic operations, comparisons, assignments, etc. In Big O notation, we often focus on the “most significant” term, meaning the term that grows the fastest as the input size increases.
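
For instance, in the hypothetical loop below, each iteration performs a fixed number of operations (a comparison, a multiplication, an addition, an assignment, and an increment), so the loop performs roughly $c \cdot n$ operations in total for some constant $c$; that constant is discarded in the later steps.

int weighted_sum(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {    /* i < n comparison and i++ on every iteration */
        total = total + a[i] * 2;    /* one multiplication, one addition, one assignment */
    }
    return total;                    /* constant work per iteration, n iterations */
}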

Determine the Growth Rate

Based on the number of loops and the operations within them, determine the growth rate of the algorithm. The growth rate describes how the time required to execute the algorithm increases as the input size increases. Common growth rates include $O(1)$ (constant time), $O(\log n)$ (logarithmic time), $O(n)$ (linear time), $O(n^2)$ (quadratic time), and $O(n!)$ (factorial time).

  • $O(1)$: The time required to execute the algorithm is constant regardless of the input size.
  • $O(\log n)$: The time required to execute the algorithm grows logarithmically with the input size (see the halving-loop sketch after this list).
  • $O(n)$: The time required to execute the algorithm grows linearly with the input size.
  • $O(n^2)$: The time required to execute the algorithm grows quadratically with the input size.
  • $O(n!)$: The time required to execute the algorithm grows factorially with the input size, which makes the algorithm impractical for all but very small inputs.
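
All of the worked examples later in this article are constant, linear, or quadratic, so here is a minimal sketch of a logarithmic loop (a hypothetical function, assuming n >= 1): because i doubles on every iteration, the loop body runs only about $\log_2 n$ times.

int count_doublings(int n) {
    int steps = 0;
    for (int i = 1; i < n; i *= 2) {   /* i takes the values 1, 2, 4, 8, ... */
        steps++;
    }
    return steps;                      /* about log2(n) iterations, hence O(log n) */
}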

Ignore Constants and Lower-Order Terms

When analyzing time complexity, it’s common to ignore constants and lower-order terms. For example, if an algorithm’s time complexity is $O(5n^2 + 3n + 100)$, we would simplify it to $O(n^2)$ because the $n^2$ term grows faster than the $n$ term or the constant term as $n$ increases.
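
To see why this simplification is safe, plug in a concrete value such as $n = 100$ (an arbitrary choice for illustration): $5n^2 + 3n + 100 = 50000 + 300 + 100 = 50400$, of which the $5n^2$ term contributes over 99%. The gap only widens as $n$ grows.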

Three Highly Practical Analysis Methods

  1. Only focus on the segment of code with the highest number of loop executions.

Big O notation describes a trend in complexity. Typically, we ignore constants, lower-order terms, and coefficients in the formula and concentrate solely on the highest-order term. Therefore, when analyzing the time complexity of an algorithm, we only need to examine the segment of code that is executed the most times; the order of magnitude of that segment’s execution count, expressed in terms of $n$, determines the time complexity of the entire piece of code.

To clarify this concept further, I’ll use the previous article’s example as an illustration.

int sum(int n) {
    int s = 0;
    int i = 1;
    for (; i <= n; ++i) {
        s = s + i;
    }
    return s;
}

The declarations int s = 0; and int i = 1; execute in constant time, regardless of the size of $n$, and thus do not affect the overall complexity. The loop header and its body (s = s + i;), however, are executed most frequently and require focused analysis. As previously noted, these two lines are executed $n$ times, leading to an overall time complexity of $O(n)$.

  2. Addition rule: The overall complexity is determined by the segment of code with the highest order of magnitude.

Taking the following code snippet as an example, let’s analyze the time complexity of the given function sum(int n) step by step.

int sum(int n)
{
    int total_1 = 0, total_2 = 0, total_3 = 0;

    for (int i = 0; i < 500; i++) {
        total_1 += i;
    }

    for (int i = 0; i < n; i++) {
        total_2 += i;
    }

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            total_3 += i + j;
        }
    }

    return total_1 + total_2 + total_3;
}

  1. First for loop (total_1):

    for (int i = 0; i < 500; i++) {
        total_1 += i;
    }
    • This loop iterates 500 times, regardless of the value of n.
    • Time complexity: $O(1)$ (constant time). Even though the loop iterates 500 times, it’s a fixed number of iterations and does not depend on n.
  2. Second for loop (total_2):

    for (int i = 0; i < n; i++) {
        total_2 += i;
    }
    • This loop iterates n times.
    • Time complexity: $O(n)$. The number of iterations depends linearly on the value of n.
  3. Nested for loops (total_3):

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            total_3 += i + j;
        }
    }
    • The outer loop runs n times.
    • The inner loop runs n times for each iteration of the outer loop.
    • Therefore, the total number of iterations for the inner loop is $n \times n = n^2$.
    • Time complexity: $O(n^2)$. The time taken grows quadratically with respect to n.
  4. Combined Time Complexity:

To find the overall time complexity of the function sum(int n), we sum up the complexities of each part:

  • $O(1)$ (from total_1)
  • $O(n)$ (from total_2)
  • $O(n^2)$ (from total_3)

The dominant term here is $O(n^2)$, because it grows faster than both $O(n)$ and $O(1)$. Thus, the overall time complexity of the function sum(int n) is $O(n^2)$. This quadratic complexity is primarily determined by the nested loops that calculate total_3.

From the above analysis, we can infer that the overall time complexity is equal to the complexity of the segment with the highest order. We can abstract this rule into a formula:

if $T_1(n) = O(f(n))$ and $T_2(n) = O(g(n))$,
then $T(n) = T_1(n) + T_2(n) = \max(O(f(n)), O(g(n))) = O(\max(f(n), g(n)))$.
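
For example, if $T_1(n) = O(n^2)$ and $T_2(n) = O(n)$, the addition rule gives $T(n) = O(\max(n^2, n)) = O(n^2)$, which is exactly the result we obtained for sum(int n) above.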

  3. Multiplication rule: The complexity of nested code equals the product of the complexities of the outer and inner layers of code.

Again, we abstract this rule into a formula:

if $T_1(n) = O(f(n))$ and $T_2(n) = O(g(n))$,
then $T(n) = T_1(n) \cdot T_2(n) = O(f(n)) \cdot O(g(n)) = O(f(n) \cdot g(n))$.

In other words,

if $T_1(n) = O(n)$ and $T_2(n) = O(n^2)$, then $T_1(n) \cdot T_2(n) = O(n^3)$.

Taking the following code snippet as an example, let’s analyze the time complexity of function sum1(int n).

int sum2(int n) {
    int total = 0;

    for (int i = 1; i < n; ++i) {
        total += i;
    }
    return total;
}

int sum1(int n) {
    int total = 0;

    for (int i = 1; i < n; ++i) {
        total += sum2(i);
    }

    return total;
}

To analyze the time complexity of the function sum1(int n), let’s break down each function individually and then combine them to determine the overall complexity.

  1. Analysis of **sum2(int n)**:

    int sum2(int n) {
        int total = 0;

        for (int i = 1; i < n; ++i) {
            total += i;
        }
        return total;
    }
    • Loop Iterations: The for loop runs from i = 1 to i < n, iterating n-1 times.
    • Operations Inside Loop: Inside the loop, total += i executes once per iteration.

The time complexity of sum2(int n) is $O(n)$, because the loop runs n-1 times, which is linear in terms of n.

  2. Analysis of **sum1(int n)**:

    int sum1(int n) {
        int total = 0;

        for (int i = 1; i < n; ++i) {
            total += sum2(i);
        }

        return total;
    }
    • Loop Iterations: The for loop in sum1(int n) also runs from i = 1 to i < n, iterating n-1 times.
    • Operations Inside Loop: Inside the loop, sum2(i) is called.

For each iteration of the loop in sum1(int n), sum2(i) is called. If sum2(i) were a simple constant-time operation, the loop in sum1 would have time complexity $T_1(n) = O(n)$. However, sum2(i) is not a simple operation; its own time complexity is $T_2(n) = O(n)$. According to the multiplication rule, the overall time complexity of the sum1(int n) function is $T(n) = T_1(n) \cdot T_2(n) = O(n \cdot n) = O(n^2)$.
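
A more precise count confirms this result: the call sum2(i) runs its loop $i - 1$ times, so the total number of additions performed across all iterations of sum1 is $\sum_{i=1}^{n-1} (i - 1) = \frac{(n-1)(n-2)}{2}$, which is $O(n^2)$ once constants and lower-order terms are dropped.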

You don’t have to deliberately memorize these three complexity analysis techniques. In fact, the key to mastering complexity analysis lies in familiarity: expose yourself to more examples and practice analyzing them regularly.

Conclusion

By following these steps, you can analyze the time complexity of a variety of algorithms and gain a better understanding of their performance characteristics.