# Analysis of algorithms

In computer science, the **analysis of algorithms** is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm’s input to the number of steps it takes (its **time complexity**) or the number of storage locations it uses (its **space complexity**).

An algorithm is said to be efficient when this function’s values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest.
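The best/worst case distinction can be seen in a minimal linear-search sketch (the function name `linear_search` is illustrative, not from the source):

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for i, item in enumerate(items):
        if item == target:
            return i  # found: stop scanning early
    return -1

# Best case: target is the first element -> 1 comparison.
# Worst case: target is absent -> n comparisons.
```

Both calls receive an input of the same size, yet the number of comparisons performed differs, which is why best-, worst-, and average-case analyses can all matter.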

## Order-of-growth classifications

| Notation | Name |
| --- | --- |
| $1$ | constant |
| $\log n$ | logarithmic |
| $n$ | linear |
| $n \log n$ | linearithmic |
| $n^2$ | quadratic |
| $2^n$ | exponential |
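A quick way to get a feel for these classifications is to tabulate each function for a few input sizes; the snippet below is a simple sketch using Python's standard library:

```python
import math

# Evaluate each order of growth as n doubles, to show how
# quickly the higher-order classes pull away.
for n in (16, 32, 64):
    print(f"n={n:3d}  log n={math.log2(n):4.1f}  "
          f"n log n={n * math.log2(n):7.1f}  "
          f"n^2={n**2:5d}  2^n={2**n}")
```

Doubling $n$ adds $1$ to $\log n$, doubles $n$, quadruples $n^2$, and squares $2^n$.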

## Algorithm analysis: Time and space complexity

Analysing an algorithm means determining the amount of resources needed to execute it. The **time complexity** of an algorithm is the running time of a program as a function of the input size. The **space complexity** of an algorithm is the amount of memory required during program execution as a function of the input size.

### Expressing time and space complexity

Time and space complexity can be expressed using a function $f(n)$ where $n$ is the input size for a given instance of the algorithm.
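For example, $f(n)$ can be obtained by counting the elementary operations a function performs; here is an illustrative sketch (the function name `sum_list` is not from the source):

```python
def sum_list(values):
    """Sum a list of n numbers, counting elementary operations."""
    total = 0          # 1 assignment
    for v in values:   # n iterations
        total += v     # 1 addition per iteration -> n additions
    return total

# f(n) = n + 1 elementary operations, so the time complexity is linear.
```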

### Methodology of time complexity

A function without loops or recursion runs in constant time, and its running time can be bounded by the number of instructions it contains. However, if an algorithm contains loops, then its efficiency depends on the number of loops and the running time of each loop.


##### Linear Loops
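A linear loop executes its body once per unit of input size, so $f(n) = n$. A minimal illustrative sketch:

```python
def count_up(n):
    """A single loop of n iterations: f(n) = n, linear time."""
    steps = 0
    for _ in range(n):
        steps += 1  # the loop body runs exactly n times
    return steps
```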

##### Logarithmic Loops
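In a logarithmic loop, the controlling variable is multiplied or divided by a constant each iteration, so the loop runs roughly $\log n$ times. An illustrative sketch with halving:

```python
def halving_steps(n):
    """Halve i each iteration: about log2(n) steps, logarithmic time."""
    steps = 0
    i = n
    while i > 1:
        i //= 2     # the range shrinks by half every pass
        steps += 1
    return steps
```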

##### Nested Loops
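When loops are nested, the total number of iterations is the product of the iteration counts of each loop; two nested linear loops give $f(n) = n^2$. An illustrative sketch:

```python
def pair_count(n):
    """Two nested loops of n iterations each: f(n) = n^2, quadratic time."""
    steps = 0
    for _ in range(n):       # outer loop: n passes
        for _ in range(n):   # inner loop: n passes per outer pass
            steps += 1
    return steps
```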