I said a thing on twitter that I like, and I realized that I hadn’t really written (or ranted) much about performance engineering, and it seemed like a good thing to do. Let’s get to it.
Making software fast is pretty easy:
-
Measure the performance of your software at two distinct levels:
- figure out how to isolate specific operations, as in a unit test (see the sketch below), run the code many times, and measure how long the operations take.
- run meaningful units of work, as in integration tests, to understand how the components of your system come together.
If you’re running a service, tracking the timing of actual operations in production can also be useful, but you need a lot of traffic for that data to mean much. Run these measurements regularly, and track the results over time so you know when things actually change.
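As a concrete sketch of the first, unit-test-style kind of measurement, here’s what it might look like in Go, where benchmarks are built into the standard testing package (parseRecord is a made-up stand-in for whatever operation you actually care about):

```go
package perf

import (
	"strconv"
	"strings"
	"testing"
)

// parseRecord stands in for the operation you want to isolate; here it
// just splits a line and parses one integer field out of it.
func parseRecord(line string) (int, error) {
	fields := strings.Split(line, ",")
	return strconv.Atoi(fields[len(fields)-1])
}

// BenchmarkParseRecord runs the operation b.N times so the framework can
// report ns/op and, with ReportAllocs, allocations per operation.
func BenchmarkParseRecord(b *testing.B) {
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, err := parseRecord("widget,blue,42"); err != nil {
			b.Fatal(err)
		}
	}
}
```

Run it with "go test -bench . -benchmem" and keep the numbers somewhere; the integration-level measurement is the same idea applied to a whole request or job rather than a single function.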
-
When you notice something is slow, identify the slow thing and make it faster. This sounds silly, but the things that are slow usually fall into one of a few common cases:
- an operation that you expected to be quick and in-memory actually does I/O (either to a disk or to the network,)
- an operation allocates more memory than you expect, or allocates memory more often than you expect.
- there’s a loop that takes more time than you expect, because you expected the number of iterations to be small (10?) and instead there are hundreds or thousands of iterations.
Combine these and you can get some really weird effects, particularly over time. An operation that used to be quick gets slower because the collection it iterates over grows, or a function called in a loop that used to be an in-memory-only operation now accesses the database, or something like that. The memory-based ones can be trickier (but also end up being less common, at least with more recent programming runtimes.)
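When one of these measurements tells you something got slower, a CPU profile is usually the quickest way to see which of the cases above you’re in. A minimal sketch using Go’s standard runtime/pprof package (doWork is a made-up stand-in for the suspect code path):

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
)

// doWork stands in for the operation that got slow.
func doWork() int {
	total := 0
	for i := 0; i < 50_000_000; i++ {
		total += i % 7
	}
	return total
}

func main() {
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Profile just the suspect operation; the report will show whether
	// the time goes to I/O, allocation, or a loop that quietly grew.
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	doWork()
}
```

Then "go tool pprof cpu.prof" shows where the time actually goes.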
Collect data, and when something gets slower you should fix it.
Well no.
Most of the time slow software doesn’t really matter. The appearance of slowness or fastness is rarely material to users’ experience or the bottom line. If software gets slower, most of the time you should just let it get slower:
-
Computers get faster and cheaper over time, so as long as your rate of slowdown is slow and steady, it’s usually fine to just ride it out. Obviously big slowdowns are a problem, but a few percent year-over-year is rarely worth fixing.
It’s also the case that runtimes and compilers are almost always getting faster (because compiler developers are, in the best way possible, total nerds,) so upgrading the compiler/runtime regularly often offsets that gradual slowdown.
-
In the vast majority of cases, the thing that makes software slow is I/O (disks or network,) so what your code does is unlikely to matter much, and you can usually solve the problem by changing how traffic and data flow through your system.
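To put made-up but plausible numbers on that: if a request spends 1ms in your code and 80ms waiting on a database query and a downstream service call, doubling the speed of your code saves half a millisecond, well under one percent of the total, while caching, batching, or eliminating one of those round trips can save tens of milliseconds.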
For UX (e.g. web/front-end) code, the logic is a bit different, because slow code actually impacts user experience, and humans notice things. The solution here, though, is often not making the code faster, but pushing more of the work to “the backend” (e.g. avoid processing data on the front end, and just make sure the backend can always hand you exactly the data you need and want.)
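A sketch of what pushing the work to the backend can look like, in Go (the handler, the store callback, and the query parameters are all hypothetical): the filtering and truncation happen server-side, so the front end only ever receives the rows it will render.

```go
package api

import (
	"encoding/json"
	"net/http"
	"strconv"
)

// Item is a hypothetical record; it exports only the fields the UI needs.
type Item struct {
	ID    int    `json:"id"`
	Title string `json:"title"`
}

// listItems filters and limits on the server so the client never has to
// pull the whole data set and process it in the browser.
func listItems(store func(owner string, limit int) []Item) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		owner := r.URL.Query().Get("owner")
		limit, err := strconv.Atoi(r.URL.Query().Get("limit"))
		if err != nil || limit <= 0 || limit > 100 {
			limit = 25
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(store(owner, limit))
	}
}
```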
-
Code that’s fast is often harder to read and maintain: to make code faster, you often have to avoid certain features of your programming language or runtime (e.g. avoiding heap allocations, or reducing the size of allocations by encoding data in more terse ways, etc,) or avoid libraries that are “slower,” or that use certain abstractions, all of which makes your code less conventional, more difficult to read, and harder to debug. Programmer time is almost always more expensive than compute time, so unless the slowness is big or causing problems, it’s rarely worth making code harder to read.
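A small, made-up illustration of the trade-off: both functions below join strings with commas; the second avoids repeated allocations by pre-sizing a single buffer, and benchmarks faster on large inputs, but it’s also longer and the bookkeeping is easier to get wrong.

```go
package join

import "strings"

// JoinSimple is the obvious version: each += copies everything built so
// far, so it allocates on the order of len(parts) times, but it reads
// exactly like what it does.
func JoinSimple(parts []string) string {
	out := ""
	for _, p := range parts {
		out += p + ","
	}
	return strings.TrimSuffix(out, ",")
}

// JoinFast computes the final size up front and writes into one buffer.
// Faster, but there is now size arithmetic and an index check to get wrong.
func JoinFast(parts []string) string {
	if len(parts) == 0 {
		return ""
	}
	size := len(parts) - 1
	for _, p := range parts {
		size += len(p)
	}
	var b strings.Builder
	b.Grow(size)
	for i, p := range parts {
		if i > 0 {
			b.WriteByte(',')
		}
		b.WriteString(p)
	}
	return b.String()
}
```

(And of course the standard library’s strings.Join already does the fast version for you, which is usually the right answer: let someone else maintain the clever code.)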
Sometimes, making things faster actually is required. Maybe you have a lot of data that you need to get through pretty quickly and there’s no way around it, or you have some classically difficult algorithmic problem (graph search, say,) but in the course of generally building software this happens pretty rarely, and most of the time pushing the problem “up” (from the front end to the backend, and from the backend to the database, similarly,) solves whatever problems you have.
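The same move one level further down, sketched with Go’s database/sql (the orders table, its columns, and the Postgres-style placeholders are assumptions for the example): let the database filter, sort, and limit instead of shipping every row back and doing the work in application code.

```go
package report

import (
	"context"
	"database/sql"
)

// RecentOrderIDs pushes filtering, ordering, and limiting into SQL rather
// than loading the whole table and doing the work in a loop here.
// Placeholder syntax ($1, $2) varies by driver.
func RecentOrderIDs(ctx context.Context, db *sql.DB, customer string, limit int) ([]int64, error) {
	rows, err := db.QueryContext(ctx,
		`SELECT id FROM orders WHERE customer = $1 ORDER BY created_at DESC LIMIT $2`,
		customer, limit)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var ids []int64
	for rows.Next() {
		var id int64
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		ids = append(ids, id)
	}
	return ids, rows.Err()
}
```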
-
There are obviously pathological counter-examples, usually related to things that happen in loops, but a lot of operations never have to be fast because they sit right next to another operation that’s much slower:
- Lots of people analyze logging tools for speed, and this is almost always silly: all log messages have to be written somewhere (I/O), and generally something has to serialize messages (a mutex or functional equivalent,) because you want to write only one message at a time to the output. So even if you have a really “fast logger” on its own terms, you’re going to hit the I/O or the serializing nature of the problem (there’s a sketch of this after the list). Use a logger that has the features you need and is easy to use; speed doesn’t matter.
- anything in HTTP request routing and processing. Because request processing sits next to network operations, often between a database on one side and the client on the other, any gain from using “a faster web framework” is probably immeasurable. Use the one with the clearest feature set.
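To make the logging point concrete, here’s a rough sketch of the serialization that sits underneath basically every logger; this illustrates the shape of the problem, not any particular library’s implementation:

```go
package logging

import (
	"io"
	"sync"
	"time"
)

// Logger serializes writes so messages don't interleave; every call pays
// for the lock and for the write to the underlying io.Writer.
type Logger struct {
	mu  sync.Mutex
	out io.Writer
}

func New(out io.Writer) *Logger { return &Logger{out: out} }

func (l *Logger) Log(msg string) error {
	line := time.Now().UTC().Format(time.RFC3339) + " " + msg + "\n"
	l.mu.Lock()
	defer l.mu.Unlock()
	// However cheap the formatting above is, the mutex and this write to
	// a file, pipe, or socket dominate the cost of every message.
	_, err := l.out.Write([]byte(line))
	return err
}
```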