# What is the complexity of an algorithm?


There are two types of algorithmic complexity:

• Time complexity
• Space complexity

# Time complexity

Time complexity is the process of determining a formula for the total time needed to execute an algorithm. The calculation is independent of the implementation and programming language: it expresses the amount of time an algorithm requires to solve a problem as a function of its input size.

Three cases exist for time complexity:

• Best case complexity : The minimum number of steps taken on any instance of size n.
• Worst case complexity : The maximum number of steps taken on any instance of size n.
• Average case complexity : The average number of steps taken over all instances of size n.
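The three cases can be seen in a single algorithm. As an illustration (a minimal Python sketch; the function name and data are chosen for this example), linear search exhibits all of them:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for i, value in enumerate(items):
        if value == target:  # one comparison per step
            return i
    return -1

# Best case: target is the first element  -> 1 comparison, O(1).
# Worst case: target is last or absent    -> n comparisons, O(n).
# Average case: target equally likely at any position -> about n/2 comparisons, still O(n).
```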

## Types of Time complexity / Time complexity notations

There are different classes of time complexity; let us look at them one by one:

• Constant time – O(1) : An algorithm runs in constant time, order O(1), when its running time does not depend on the input size n. Regardless of the input size n, the runtime is always the same.
• Linear time – O(n) : An algorithm has linear time complexity when its running time grows linearly with the length of the input. A function that must examine every value of the input data has time complexity O(n).
• Logarithmic time – O(log n) : The number of operations grows far more slowly than the input size, because each step reduces the remaining work, typically by half. Algorithms with logarithmic time complexity appear in binary trees and binary search. Binary search splits a sorted array in two and continues searching in only one partition, so the operation is not performed on every element of the data.
• Quadratic time – O(n^2) : An algorithm has quadratic time complexity when its running time grows with the square of the input length. Typically, nested loops fall into this order: a single loop contributes O(n), and a loop inside a loop gives O(n) * O(n) = O(n^2). Similarly, if m loops are nested in the function, the order is O(n^m), which is called a polynomial time complexity.
• Cubic time – O(n^3) : Three nested loops over the input, as in naive matrix multiplication.
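The classes above can be sketched as small Python functions (the names and data are hypothetical, chosen only to make each growth rate visible):

```python
def get_first(items):
    """O(1): one step regardless of len(items)."""
    return items[0]

def total(items):
    """O(n): touches every element exactly once."""
    s = 0
    for x in items:
        s += x
    return s

def binary_search(sorted_items, target):
    """O(log n): halves the remaining range on every iteration."""
    lo, hi = 0, len(sorted_items)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    if lo < len(sorted_items) and sorted_items[lo] == target:
        return lo
    return -1

def count_zero_sum_pairs(items):
    """O(n^2): a loop nested inside a loop, n * n iterations."""
    count = 0
    for a in items:
        for b in items:
            if a + b == 0:
                count += 1
    return count
```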

# Space complexity

Space complexity is the amount of memory used by an algorithm, including the space for the input values, to execute and produce its result. It determines the total amount of memory an algorithm requires to run as a function of its input size. Space complexity is defined as the process of deriving a formula that predicts the memory needed for the successful execution of the algorithm.

The memory considered is usually primary memory. Space complexity expresses the amount of storage needed to solve an instance of a computational problem as a function of the characteristics of the input; it is the memory required by an algorithm until it finishes executing.

## Notations of Space complexity

There are different notations that we can use to express space complexity measurements. The most widely used is big-O notation. In addition, we briefly define the other common notations.

• Big-O notation : Big-O notation describes an asymptotic upper bound. It represents the scalability and performance of the algorithm. Simply put, it is the worst case of the algorithm's growth rate. We can say: "The amount of space in this algorithm will not grow much faster than this f(x), but it can grow more slowly."
• Omega notation – Ω : Omega notation expresses an asymptotic lower bound. So it gives the best-case scenario of the algorithm's complexity, the opposite of big-O notation. We can say: "The amount of space in this algorithm will not grow more slowly than this f(x), but it can grow much faster."
• Theta notation – θ : Theta notation represents a function that lies within both the lower and upper bounds. So we can say: "The algorithm uses at least the lower-bound amount of space and does not exceed the upper-bound amount of space."
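To make the bounds concrete, here is a minimal Python sketch (illustrative names, not from the text) contrasting O(1) and O(n) auxiliary space:

```python
def sum_in_place(items):
    """O(1) auxiliary space: a fixed number of variables, regardless of input size."""
    total = 0
    for x in items:
        total += x
    return total

def squares(items):
    """O(n) auxiliary space: builds a new list as long as the input."""
    result = []
    for x in items:
        result.append(x * x)
    return result
```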

### Memory Usage while Execution

While executing, an algorithm uses memory space for three reasons:

1. Instruction space:- The amount of memory used to store the compiled version of the program's instructions.

2. Environment stack:- Sometimes one algorithm (function) is called inside another algorithm (function). In such a situation, the caller's current variables are pushed onto the system stack, where they wait while the inner algorithm (function) executes, and are restored when it returns.

3. Data space:- The amount of space used by variables and constants.
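The environment-stack cost is easiest to see in a recursive function: every pending call keeps a frame on the system stack until it returns. A minimal Python sketch (illustrative only) contrasts this with an iterative version:

```python
def factorial(n):
    """Recursive: O(n) stack space -- n frames are pending at the deepest call."""
    if n <= 1:
        return 1
    return n * factorial(n - 1)

def factorial_iter(n):
    """Iterative: O(1) stack space -- one frame and two local variables."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```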