Josef Mayrhofer

What are Time and Space Complexity, and why is it essential for Performance Engineering?


When applications slow down or crash, the root cause is often in the code itself. Imagine your application has a search feature: if you ignore time and space complexity, your program may run far too long or crash, frustrating your customers.


So, when your application faces performance problems, review the code or use analysis tools to check its time and space complexity, expressed in Big-O notation.


Time complexity

Have you ever hosted a dinner party for your friends? We can use basic Big-O math to express specific tasks, such as the time it takes to prepare for the party, toast with every guest, or have all guests hug each other (see the code sketch after these examples).


Big O(1): You clean the house.

  • No matter how many guests you've invited

  • The time it takes to clean the house is always the same


Big O(n): You toast with every guest.

  • The number of guests matters

  • Time depends linearly on the number of guests.


Big O(n^2): Guests hug each other.

  • The number of guests matters

  • Time increases quadratically with the number of guests.
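
To make the analogy concrete, here is a minimal Python sketch (the guest list and function names are illustrative, not from the original post) showing how the amount of work grows in each case:

```python
def clean_house(guests):
    # O(1): one cleaning pass, no matter how many guests are invited
    print("House cleaned")

def toast_with_guests(guests):
    # O(n): one toast per guest, so work grows linearly with the guest list
    for guest in guests:
        print(f"Toast with {guest}")

def guests_hug_each_other(guests):
    # O(n^2): every guest hugs every other guest, so work grows quadratically
    for a in guests:
        for b in guests:
            if a != b:
                print(f"{a} hugs {b}")

guests = ["Ana", "Ben", "Cara"]
clean_house(guests)            # 1 action
toast_with_guests(guests)      # 3 actions
guests_hug_each_other(guests)  # 6 actions (n * (n - 1))
```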


A fast algorithm is one thing, but if its space complexity is too high, your application might use more memory than is available and crash, which would frustrate your customers. Therefore, when designing algorithms, it's essential to consider both their time and space complexity and choose the one that provides the desired tradeoff between runtime and memory usage.


Read more about Big-O on the famous Big-O Cheat Sheet: https://www.bigocheatsheet.com/



Space complexity


Some classic performance antipatterns, such as choosing an inappropriate sorting algorithm, can ruin your response-time KPIs and exhaust the available memory.


Selection Sort - Space complexity O(1):

  • Sorts the list in place

  • The extra space requirement is always constant

  • No matter how many elements there are to sort (see the sketch below)
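
As a rough illustration (not code from the original article), here is an in-place selection sort in Python; it only needs a handful of extra variables, so its auxiliary space stays constant regardless of the input size:

```python
def selection_sort(items):
    # Sorts the list in place: O(n^2) time, O(1) extra space.
    n = len(items)
    for i in range(n):
        # Find the smallest remaining element...
        min_index = i
        for j in range(i + 1, n):
            if items[j] < items[min_index]:
                min_index = j
        # ...and swap it into position i. Only a few extra variables are used.
        items[i], items[min_index] = items[min_index], items[i]
    return items

print(selection_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```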


Bucket Sort - Space complexity O(n):

  • Distribute the elements into buckets

  • Sort each bucket

  • The more elements to sort, the more space the buckets take (see the sketch below)
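
For comparison, here is a simplified bucket sort sketch (assuming numeric values in a known 0-to-1 range, which the original example does not specify); the bucket lists themselves grow with the input, which is where the O(n) extra space comes from:

```python
def bucket_sort(values, bucket_count=10):
    # Assumes values lie in [0, 1). Extra space is O(n) for the buckets.
    buckets = [[] for _ in range(bucket_count)]
    for v in values:
        # Distribute each value into its bucket.
        buckets[min(int(v * bucket_count), bucket_count - 1)].append(v)
    result = []
    for bucket in buckets:
        # Sort each bucket individually, then concatenate.
        result.extend(sorted(bucket))
    return result

print(bucket_sort([0.42, 0.07, 0.93, 0.31, 0.58]))
```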


The examples above show the tradeoff between time and space complexity. The most memory-efficient algorithms are not always the fastest, and faster code can come at the cost of higher memory utilization. Therefore, it's best to keep Big-O notation in mind during your performance engineering and software development activities.


More about Big-O Notation


Big-O notation is a mathematical notation used to describe the limiting behavior of a function when the input size approaches infinity.

The Big-O notation provides an upper bound on the growth rate of the function in terms of a more straightforward function. For example, if a function f(n) has a Big-O notation of O(n^2), the function's growth rate is no worse than a quadratic function as n approaches infinity.
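
As a simple worked example (the concrete numbers are illustrative): take f(n) = 3n^2 + 5n + 2. For every n >= 1 we have 3n^2 + 5n + 2 <= 3n^2 + 5n^2 + 2n^2 = 10n^2, so f(n) is O(n^2); the constant factor 10 and the lower-order terms are simply absorbed by the notation.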


Some standard Big-O notations include:

  • O(1): constant time complexity, meaning the algorithm's running time is constant and does not depend on the input size.

  • O(log n): logarithmic time complexity, meaning the algorithm's running time increases logarithmically with the input size.

  • O(n): linear time complexity, meaning the algorithm's running time increases linearly with the input size.

  • O(n^2): quadratic time complexity, meaning the algorithm's running time increases quadratically with the input size.
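
The logarithmic case is the only one not covered by the dinner-party analogy above, so here is a short Python sketch of binary search (a standard example, not taken from the original post): each step halves the remaining search range, which is why it runs in O(log n) time on a sorted list.

```python
def binary_search(sorted_items, target):
    # O(log n): each iteration halves the remaining range.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```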


It's important to note that Big-O notation provides only an upper bound on the growth rate of a function, and it doesn't consider the constant factors or lower-order terms of that function. As a result, two algorithms with the same Big-O notation may have significantly different performance characteristics in practice.
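
One way to see this in practice (a rough measurement sketch, not from the original post) is to time two approaches that are both O(n) but do different amounts of work per element:

```python
import timeit

data = list(range(100_000))

# Both functions are O(n), but the second does more work per element,
# so its constant factor is larger and it runs measurably slower.
def sum_simple(values):
    return sum(values)

def sum_with_extra_work(values):
    total = 0
    for v in values:
        total += (v * 3 + 1) % 7  # extra arithmetic per element
    return total

print(timeit.timeit(lambda: sum_simple(data), number=100))
print(timeit.timeit(lambda: sum_with_extra_work(data), number=100))
```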


More resources


Tool to calculate Big-O


The famous Big-O Cheat Sheet: https://www.bigocheatsheet.com/



Keep up the great work! Happy Performance Engineering!
