Writing Efficient Code with Less Time Complexity and Faster Execution

Introduction

Writing efficient code is essential to becoming a good software programmer; it is what differentiates an average software engineer from a good one. One should therefore learn how to write code that has lower time complexity and executes faster.

What is Time Complexity?

As per Programiz, the time complexity of an algorithm signifies the total time required by the program to run to completion. Time complexity is most commonly expressed using big O notation.

When solving a problem, we should compare the performance of different algorithms and choose the best one. The time complexity of an algorithm is the amount of time it takes to run, expressed as a function of the length of the input.

It represents the number of times each statement is executed, not the actual wall-clock time required to run a particular piece of code. The idea behind time complexity is to measure an algorithm's running time in a way that depends only on the algorithm itself and its input, not on the hardware it runs on.

Big O notation expresses the run time of an algorithm in terms of how quickly it grows relative to the size of the input.

Time-space tradeoff

The better the time complexity of an algorithm, the faster it will carry out its work in practice. Apart from time complexity, space complexity is also important: this is essentially the number of memory cells an algorithm needs. A good algorithm keeps this number as small as possible, too. There is often a time-space tradeoff involved in a problem: it cannot be solved with both little computing time and low memory consumption. One then has to make a compromise and exchange computing time for memory consumption, or vice versa, depending on which algorithm one chooses and how one parameterizes it.
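As a small illustration of this tradeoff, consider the sketch below (an illustrative example, not from the original post): caching intermediate Fibonacci results spends extra memory on a cache, but turns an exponential-time computation into a linear-time one.

import functools

# Naive version: minimal memory, but exponential running time.
def fib_slow(n):
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

# Memoized version: trades O(n) extra memory for O(n) running time.
@functools.lru_cache(maxsize=None)
def fib_fast(n):
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(40))  # returns instantly; fib_slow(40) takes noticeably longer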

Constant time: O(1)

When time complexity is constant, O(1), the size of the input (n) doesn't matter. Algorithms with constant time complexity take a constant amount of time to run, independently of the size of n. For example, printing "I am a boy" runs in constant time, since the number of operations for this (or any other) phrase stays the same, no matter which operating system or machine configuration we are using. To remain constant, these algorithms shouldn't contain loops, recursion, or calls to any other non-constant-time function. For constant-time algorithms, the run time doesn't increase: the order of magnitude is always 1.

The time complexity of a function is considered O(1) if it contains no loops, no recursion, and no calls to any non-constant-time function.
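For instance, the following sketch (an illustrative example, not from the original post) does one index lookup and one addition, so its running time does not depend on the list's length:

def first_plus_last(items):
    # One lookup at each end plus one addition, regardless of len(items): O(1).
    return items[0] + items[-1]

print(first_plus_last([3, 1, 4, 1, 5]))  # 8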

Linear time: O(n)

When the running time grows in direct proportion to the size of the input, we are facing linear time complexity, or O(n). This means that as the input grows, the algorithm takes proportionally longer to complete.

Example: finding the maximum element in an unsorted array.
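A minimal sketch of that example (the function name is assumed for illustration): the loop touches every element exactly once, so the work grows linearly with the array's length.

def find_max(arr):
    # Visit each element once: n iterations for n elements, i.e. O(n).
    largest = arr[0]
    for value in arr[1:]:
        if value > largest:
            largest = value
    return largest

print(find_max([7, 2, 9, 4]))  # 9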

Logarithmic Time: O(log n)

Algorithms with this complexity make computation fast. An algorithm is said to run in logarithmic time if its execution time is proportional to the logarithm of the input size. In practice this means the running time grows very slowly: doubling the input adds only a constant amount of extra work. These algorithms never have to go through all of the input, since they usually work by discarding large chunks of unexamined input with each step. This time complexity is generally associated with algorithms that divide the problem in half at every step, a concept known as divide and conquer.

The time complexity of a loop is considered O(log n) if the loop variable is divided or multiplied by a constant factor on each iteration. Binary search is the classic example: it searches a sorted array by repeatedly halving the search interval, checking until the value is found or the interval is empty.
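A minimal binary search sketch, assuming the array is already sorted in ascending order:

def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2  # halve the search interval each pass
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1   # discard the left half
        else:
            high = mid - 1  # discard the right half
    return -1  # interval is empty: target not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3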

Quadratic Time: O(n²)

In these algorithms, the running time grows in direct proportion to the square of the size of the input. In most scenarios, and particularly for large data sets, algorithms with quadratic time complexity take a long time to execute and should be avoided.
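One common quadratic pattern is a nested loop that compares every pair of elements. The sketch below (an illustrative example, not from the original post) uses it to detect duplicates:

def has_duplicates(arr):
    # Two nested loops over n elements: roughly n * n comparisons, O(n^2).
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] == arr[j]:
                return True
    return False

print(has_duplicates([4, 8, 2, 8]))  # True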

Exponential Time: O(2^n)

In these algorithms, the growth rate doubles with each addition to the input, often because they iterate through all subsets of the input elements. Every time the input grows by one unit, the number of operations performed doubles. Exponential time complexity is usually seen in brute-force algorithms, which blindly iterate over the entire domain of possible solutions in search of one or more that satisfy a condition: they simply try every possible solution until they happen to find the correct one.
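To see why subsets lead to exponential time, consider this sketch (an illustrative example, not from the original post): a list of n elements has 2^n subsets, and each new element doubles the number of subsets that must be generated.

def all_subsets(items):
    # Each element either joins a subset or not, so n items yield 2^n subsets.
    subsets = [[]]
    for item in items:
        subsets += [subset + [item] for subset in subsets]
    return subsets

print(len(all_subsets([1, 2, 3])))  # 8 = 2^3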

For example, suppose one friend in a group of 10 is given a chocolate, and we want to find out who has it. The order of the search depends on how we ask.

O(n²): We ask the first person whether they have the chocolate, and also ask that person about each of the other 9 people in the group, then repeat the same for the next person, and so on.

O(n): We simply go and ask each friend individually.

O(log n): We divide the group into two halves and ask whether the chocolate is on the left side or the right side. We then take that half, divide it in two, and ask again, repeating the process until we are left with the one friend who has the chocolate.

Conclusion

Worst-case analysis gives the maximum number of basic operations that may have to be performed during execution of the algorithm. It assumes that the input is in the worst possible state, so the maximum amount of work has to be done to put things right. One should therefore try to reduce the time complexity as much as possible and thereby write efficient code.

This blog was developed by Sanjeev Mishra, Qapitol QA

Sources: GeeksforGeeks, KDnuggets

Write to [email protected] for software testing services and test automation talent.
