The canonical way of defining function complexity is big O notation. It describes how an algorithm's running time is affected by the size of its input data: for instance, whether the algorithm scales linearly or quadratically with the amount of input.
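To make the distinction concrete, here is a minimal sketch (the function names and data are purely illustrative) contrasting a linear scan with a quadratic pairwise comparison:

```python
def contains(items, target):
    """O(n): the work grows linearly with the number of items."""
    for item in items:               # at most len(items) iterations
        if item == target:
            return True
    return False


def has_duplicates(items):
    """O(n**2): every item is compared with every other item."""
    for i, left in enumerate(items):
        for right in items[i + 1:]:  # nested loop over the remaining items
            if left == right:
                return True
    return False
```

Doubling the size of the input roughly doubles the work done by contains() but roughly quadruples the work done by has_duplicates().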
Manually determining the big O notation of an algorithm is the best way to get an overview of how its performance relates to the size of the input data. Knowing the complexity of your application's components lets you detect and focus on the aspects that will significantly slow down your code.
To determine the big O notation of an algorithm, all constants and low-order terms are removed in order to focus on the portion that really matters as the size of the input data grows.
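For example, a function that performs 3n² + 20n + 5 elementary operations is simply said to be O(n²), because for large inputs the quadratic term dominates. A rough sketch of that kind of counting (the operation counts in the comments are illustrative, not exact):

```python
def sum_of_products(values):
    """Roughly n**2 multiplications and additions, plus a few constant
    operations: the low-order terms and constants are dropped, so this
    function is simply O(n**2)."""
    total = 0                        # 1 operation
    for left in values:              # n iterations
        for right in values:         # n iterations each -> n**2 in total
            total += left * right    # 1 multiplication + 1 addition per pass
    return total
```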