Big O — Efficient Coding

Algorithms are important for software engineers to understand, and one of the most commonly tested interview subjects is how to make code run faster and more efficiently. This is where Big O notation comes in. When you’re dealing with large amounts of data, understanding Big O and cutting your running time becomes critical.

Big O expresses how the complexity of your code grows. It’s a guide for reasoning about how efficiently your code runs relative to the size of its input. This is done with a notation built from “O’s” and “n’s”.

In Big O, the “O” signifies the algorithm’s order of growth, also called its growth rate. The “n” signifies the size of the input, such as the length of the array being sorted. It is important to note that the speed of your algorithm isn’t measured in seconds or milliseconds, but in how the number of operations grows with the input.

Now that you’ve seen how Big O notation is written, let’s discuss what these letters and numbers actually mean.

O(1): Constant Time

As the name suggests, these algorithms always take the same amount of time to execute. This constant time does not depend on the size of the input.
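For instance, looking up a value by index or key takes the same number of steps whether the collection holds ten items or ten million. A minimal sketch in Python (the function and variable names here are just illustrative):

```python
def get_first_item(items):
    # O(1): indexing a list takes the same time no matter
    # how many elements the list contains.
    return items[0]

def lookup_price(prices, name):
    # O(1) on average: a dict lookup does not scan every entry.
    return prices[name]
```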

O(log n): Logarithmic Time

An algorithm is O(log n) if its run time grows in proportion to the logarithm of the input size n. Doubling the input adds only one more step of work.
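Binary search is the classic example: each comparison cuts the remaining portion of a sorted list in half, so a list of a million items needs only about twenty steps. A minimal sketch:

```python
def binary_search(sorted_items, target):
    # O(log n): each pass discards half of the remaining range.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not found
```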

O(n): Linear Time

An algorithm is O(n) if its run time grows in direct proportion to the input size n: as the input gets larger or smaller, the amount of work grows or shrinks at the same rate.
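A single loop that touches every element once is the typical linear-time pattern. For example, finding the largest value in an unsorted list:

```python
def find_max(items):
    # O(n): one pass over the list, so the number of
    # comparisons grows in step with the input size.
    largest = items[0]
    for value in items[1:]:
        if value > largest:
            largest = value
    return largest
```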

O(n²): Quadratic Time

You’ll often encounter this with nested for loops. An algorithm is O(n²) when its run time grows in proportion to the square of the input size.
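A common case is comparing every element against every other element, such as checking a list for duplicates with two nested loops. A simple sketch:

```python
def has_duplicates(items):
    # O(n^2): for each element, the inner loop scans the rest
    # of the list, so the work grows with the square of n.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```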
