I've been working on continuous integration systems for a few years, and while the basic principle of CI is straightforward, it seems that most CI deployments are not. This makes sense: project infrastructure is an easy place to defer maintenance during the development cycle, and projects often prioritize feature development and bug fixing over tweaking the buildsystem or test infrastructure, but I think there's something more to it. This post is a consideration of what makes CI hard, and perhaps a bit of unsolicited advice.
The Case for CI
I suppose I don't really have to sell anyone on the utility or power of CI: regularly running a set of tests against your software allows developers and teams to catch bugs early and saves a bucket of developer time, and that alone is usually enough. Really, though, CI ends up giving you the leverage to solve a number of really gnarly engineering problems:
- how to release software consistently and regularly.
- how to support multiple platforms.
- how to manage larger codebases.
- how to manage distributed systems problems.
- how to develop software with larger numbers of contributors.
Doing any of these things without CI isn't really viable, particularly at scale. This isn't to say that they "come free" with CI, but CI is often the right place to build the kind of infrastructure required to manage distributed systems problems or release complexity.
Buildsystems are Crucial
One thing that I see teams doing sometimes is addressing their local development processes and tooling with a different set of expectations than they do in CI, and you can totally see and understand how this happens: the CI processes always start from a clean environment, and you often want to handle failures in CI differently than you might handle a failure locally. It's really easy to write a shell script that only runs in CI, and then things sort of accumulate, and eventually a class of features and behaviors emerges that only exists for and because of CI.
The solution is simple: invest in your buildsystem, [1] and ensure that there is minimal (or no!) indirection between your buildsystem and your CI configuration. But buildsystems are hard, and in a lot of cases test harnesses aren't easily integrated into buildsystems, which complicates the problem for some projects. Having a good buildsystem isn't particularly about picking a good tool (though there are definitely tradeoffs between tools); the problem is mostly in capturing logic in a consistent way, providing a good interface, and ensuring that builds happen as efficiently as possible.
Regardless, I'm a strong believer in centralizing as much functionality in the buildsystem as possible and making sure that CI just calls into the buildsystem (there's a sketch of what this can look like after the list below). Good buildsystems:
- allow you to build or rebuild (or test/subtest) only subsets of work, so you can iterate quickly during development and debugging.
- center around a model of artifacts (things produced) and dependencies (requires-type relationships between artifacts).
- have clear defaults, automatically detect dependencies and information from the environment, and perform any required setup and teardown for the build and/or test.
- provide a unified interface for the developer workflow, including building, testing, and packaging.
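As a minimal sketch of what that can look like, here's a hypothetical Makefile that puts the whole developer workflow behind a handful of targets. The target names, the directory layout, and the use of the go tool are assumptions for illustration; the important property is that the CI configuration only ever runs something like `make test` or `make package`, and carries no build logic of its own:

```make
# Hypothetical top-level Makefile: a single entry point for developers and CI alike.
# Target names, paths, and the choice of the go tool are illustrative assumptions.

GO       ?= go
PACKAGES ?= ./...

.PHONY: build lint test package

build:             ## compile everything, failing fast on compile errors
	$(GO) build $(PACKAGES)

lint:              ## run static analysis
	$(GO) vet $(PACKAGES)

test: build lint   ## run the full test suite; CI calls exactly this target
	$(GO) test -race $(PACKAGES)

package: test      ## produce a release artifact only after the tests pass
	mkdir -p dist
	$(GO) build -o dist/app ./cmd/app
```

With this shape, a CI job is little more than a checkout followed by `make test`, and the same command behaves identically on a developer's laptop.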
The upside is that the effort you put into the development of a buildsystem pays dividends not just in managing the complexity of CI deployments, but also in making local development stable and approachable for new developers.
[1] Canonically, buildsystems are things like makefiles (or cmake, scons, waf, rake, npm, maven, ant, gradle, etc.) that are responsible for converting your source files into executables, but the lines get blurry in a lot of languages/projects. For Golang, the go tool plays the part of the buildsystem and test harness without much extra configuration, while many other environments have a pretty robust separation between building and testing.
T-Shaped Matrices
There's a temptation with CI systems to exercise your entire test suite against a comprehensive range of platforms, modes, and operations. While this works great for some smaller projects, "completism" is not the best way to model the problem. When designing and selecting your tests and test dimensions, consider the following goals and approaches (there's a sketch of the split after the list):
- on one, and only one, platform run your entire test suite. This platform should probably be very close to the primary runtime of your environment (e.g. when developing a service that runs on Linux, your tests should run in a system that resembles the production environment), or possibly your primary development environment.
- for all platforms other than your primary platform, run only the tests that are directly related to that runtime/platform (e.g. anything that might be OS- or processor-specific), plus some small subset of "verification" or acceptance tests. I would expect these tests to easily complete in 10% of the time of a "full build."
- consider operational variants (e.g. if your product has multiple major runtime modes, or some kind of pluggable sub-system) and select a set of tests that verifies these modes of operation.
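To make the shape concrete, here's a small, hypothetical pair of Makefile targets in the spirit of the sketch above: the primary platform runs the full suite, and every other entry in the matrix runs a much smaller smoke target. The package paths are placeholders for whatever "platform-specific plus acceptance" means in your project:

```make
# Hypothetical split between the full suite and the per-platform smoke tests.
# The package paths are assumptions; substitute whatever selection mechanism
# your buildsystem supports (tags, labels, suites, etc.).

ALL_PACKAGES   ?= ./...
SMOKE_PACKAGES ?= ./internal/platform/... ./acceptance/...

.PHONY: test test-smoke

test:          ## full suite: run on the primary, production-like platform only
	go test -race $(ALL_PACKAGES)

test-smoke:    ## platform-specific and acceptance tests: run on every other platform
	go test $(SMOKE_PACKAGES)
```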
In general the shape of the matrix should be T-shaped: "wide across" the platforms, with one long "narrow down" on the primary platform. The danger, more than anything, is in running too many tests, which is a problem because:
- more tests increase the chance of a false negative (caused by the underlying infrastructure, service dependencies, or even flaky tests), which means you risk spending more time chasing down problems. Running tests that provide signal is good, but the chance of false negatives is a liability.
- responsiveness of CI frameworks is important but incredibly difficult to achieve, and running fewer things can improve responsiveness. While parallelism might help with some kinds of runtime limitations for larger numbers of tests, it incurs its own overhead and expense.
- actual failures become redundant and difficult to attribute in "complete" matrices. A test of certain high-level systems may pass or fail consistently along all dimensions, creating more noise when something fails. With any degree of non-determinism or chance of a false negative, running tests more than once just makes it harder to attribute a failure to a specific change or an intermittent bug.
- some testing dimensions don't make sense, leading to wasted time addressing test failures. For example, when testing an RPC protocol library that supports both encryption and authentication, it's not meaningful to test the combination of "no-encryption" and "authentication," although the other three combinations might be interesting (the sketch after this list shows one way to encode that).
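As one way to encode that last point, a hypothetical Makefile fragment can enumerate only the meaningful variants so the nonsensical combination never enters the matrix at all. The variant names, the use of go build tags to select behavior, and the package path are all assumptions for the sake of the example:

```make
# Hypothetical variant list for the RPC example: "plain" (no encryption, no
# authentication), "encrypted", and "encrypted-auth". The meaningless
# "plain-auth" combination is simply never listed.
VARIANTS ?= plain encrypted encrypted-auth

.PHONY: test-variants
test-variants: $(addprefix test-variant-,$(VARIANTS))

# run the suite once per variant, selecting behavior with a build tag
test-variant-%:
	go test -tags=$* ./rpc/...
```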
The ultimate goal, of course, is to have a test matrix that you are confident will catch bugs when they occur, is easy to maintain, and helps you build confidence in the software that you ship.
Conclusion
Many organizations have teams dedicated to maintaining buildsystems and CI, and that's often appropriate: keeping CI alive is of huge value. It's also very possible for CI and related tools to accrue complexity and debt in ways that are difficult to maintain, even with dedicated teams. Taking a step back and thinking strategically about CI, buildsystems, and overall architecture can be very powerful, and can really improve the value the system provides.