Making Pytest benchmark automated, actionable, and intuitive

Photo by Lucas Santos on Unsplash

“Your code is slow” is easy to say, but it takes a lot of trial and error to find out which part of the code is slow, and just how slow “slow” is. And once the bottleneck is found, does it still scale well with an input that is 100 or 1,000 times larger, with results averaged across 10 iterations?

This is where pytest-benchmark comes in handy.

Unit testing verifies a single unit or small part of the codebase; pytest-benchmark complements this idea by making it just as easy to measure that code's performance.

This article covers how to set up pytest-benchmark, run it, and interpret its benchmark timing results. To properly enforce benchmarking in a project, the advanced sections also cover how to compare benchmark timings across runs and reject commits if they fail certain thresholds, and how to store and view historical benchmark timings in a histogram!

Installing pytest-benchmark can simply be done with pip install pytest-benchmark in the Terminal.

To enable additional features, such as visualizing the benchmark results, we can run pip install 'pytest-benchmark[histogram]' to install the extra packages required.
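
For convenience, the two installation commands above can be run directly in the Terminal:

```console
# Basic installation
pip install pytest-benchmark

# Installation with the optional dependencies used for plotting histograms
pip install 'pytest-benchmark[histogram]'
```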

Similar to pytest, with an added benchmark fixture
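
To illustrate, here is a minimal sketch of a benchmark test. The fibonacci function is a hypothetical stand-in for whatever piece of your codebase you want to time; the benchmark fixture itself is provided automatically once pytest-benchmark is installed.

```python
# test_fibonacci.py
# Minimal sketch of a pytest-benchmark test.
# `fibonacci` is a hypothetical function used purely for illustration.

def fibonacci(n: int) -> int:
    """Naive recursive Fibonacci, deliberately slow so there is something to measure."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)


def test_fibonacci(benchmark):
    # The `benchmark` fixture calls fibonacci(10) repeatedly and records timing statistics.
    result = benchmark(fibonacci, 10)

    # Ordinary assertions still work, so the benchmark doubles as a unit test.
    assert result == 55
```

Running pytest on this file executes the test as usual and appends a table of timing statistics (min, max, mean, standard deviation, median, and more) for each benchmarked test.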