Finding Alignment

I keep making notes toward a series of essays about alignment, the management concept. The material is somewhere in between a blog post and a book, so maybe I’ll make it a series of blog posts. This is the introduction.


Alignment is this kind of abstract thing that happens when you have more than one entity (a person, working group, or team) working on a project (building or doing something). Leaving aside, for a moment, “entity” and “project,” efforts are well aligned when all of the effort is in pursuit of the same goal and collaborators work in support of each other. When efforts are out of alignment, collaborators can easily undermine each other or pursue work that doesn’t support the larger goal.

Being well aligned sounds pretty great. You may think, “why wouldn’t you always just want to be aligned?” And I think deep down people do want to be aligned, but it’s not automatic: as organizations grow and the problems that the organizations address become bigger (and are thus broken down into smaller parts,) it’s easy for part of a team to fall out of alignment with the larger team. It’s also the case that two parts of an organization may have needs or concerns that appear to be at odds with each other, which can cause them to fall out of alignment.

Consider building a piece of software, as I often do: you often have a group of people who are building features and fixing bugs (engineers), and another group of people who support and interact with the people who are using the software (e.g. support, sales, or product management, depending). The former group wants to build the product and make sure that it works, and the latter group wants to get (potential) users using the software. While their goals are aligned in the broad sense, in practice there is often tension, either between engineers who want things to be correct and complete before shipping them and product people who want to ship sooner, or conversely between engineers who want to ship software early and product people who want to make sure the product actually works before it sees use. In short, while the two teams might be aligned on the larger goal, they often struggle to find alignment on narrower issues. The tension between stability and velocity is perennial, and teams must work together to find alignment on this (and other issues.)

While teams and collaborators want to be in alignment, there are lots of reasons why a collaborator might fall out of alignment. The first and most common reason is that managers/leaders forget to build alignment: collaborators don’t know what the larger goals are, or don’t know how the larger goals connect to the work that they’re doing (or should be doing!) If there’s redundancy in the organization that isn’t addressed, collaborators might end up competing against each other or defending their spheres or fiefdoms. This is exacerbated if two collaborators or groups have overlapping areas of responsibility. Also, when the business falters and leaders don’t have a plan, collaborators can fall out of alignment to protect their own projects and jobs. It’s also the case that collaborators’ interests change over time, and they may find themselves aligned in general, but not to the part of the project that they’re working on. When identified early, there are positive solutions to all of these problems.

Alignment, when you have it, feels great: the friction of collaboration often falls away because you can work independently while trusting that your collaborators are working toward the same goal. Strong alignment promotes prioritization, so you can be confident that you’re always working on the parts of the problem that are the most important.

Saying “we should strive to be aligned” is not enough of a solution, and this series of posts that I’m cooking up addresses different angles of alignment: how to build it, how to tell when you’re missing alignment, what alignment looks like between different kinds of collaborators (individuals, teams, groups, companies,) and how alignment interacts with other areas and concepts in organizational infrastructure (responsibility, delegation, trust, planning.)

Stay tuned!

Against Testify

For a long time I’ve used the Go library testify, and mostly it’s been pretty great: it provides a bunch of tools that you’d expect in a testing library, in the grand tradition of jUnit/xUnit/etc., and it managed to come out on top in a field of similar libraries a few years ago. It was (and is, but particularly then) easy to look at the testing package and say “wouldn’t it be nice if there were a bit more higher-level functionality,” but I’ve recently come around to the idea that maybe it’s not worth it.1 This is a post to collect and expand upon that thought, and also to explain why I’m going through some older projects to cut out the dependency.

First, and most importantly, I should say that testify isn’t that bad, and there’s definitely a common way to use the library that’s totally reasonable. My complaints are basically:

  • The “suite” functionality for managing fixtures is a bit confusing: it’s really easy to get the capitalization of the Setup/Teardown (TearDown?) methods wrong and have part of your fixture silently not run, and suites are different enough from “plain tests” to be confusing in their own right. Frankly, writing test cases out by hand and using Go’s subtest functionality is clearer anyway.
  • I’ve never used testify’s mocking functionality, in part because I don’t tend to do much mock-based testing (which I see as a good thing,) and for the cases where I do want mocks, I tend to prefer either hand-written mocks or something like mockery.
  • While I know “require” means “halt on failure” and “assert” means “continue on error,” and it makes sense to me now, “assert” in most2 other languages means “halt on failure,” so this is a bit confusing. Also, while there are cases where you do want continue-on-error semantics for test assertions, (I suppose,) it doesn’t come up that often.
  • There are a few warts with the assertions (including requires,) most notably that you can create an “assertion object” that wraps a *testing.T, which is really an anti-pattern and can cause assertion failures to be reported at the wrong level.
  • There are a few testify assertions that have a wonky argument structure, notably that Equal wants arguments in expected, actual order while Len wants arguments in object, expected order; see the example after this list. I have to look that up every time.
  • I despise the failure reporting format. I typically run tests in my text editor and then use “jump to failure” when a test fails, and testify’s assertion failures aren’t well formed in the way that basically every other tool’s are (including the standard library’s!)3 so it’s fussy to find a failure when it happens.
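
For what it’s worth, the argument order wart looks like this in practice (the values here are purely illustrative):

// testify argument order differs between assertions:
assert.Equal(t, expected, actual) // the expected value comes first
assert.Len(t, actual, 3)          // but the object under test comes first here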

The alternative is just to check the errors manually and use t.Fatal and t.Fatalf to halt test execution (and t.Error and t.Errorf for the continue-on-error case.) So we get code that looks like this:

// with testify:
require.NoError(t, err)

// otherwise:
if err != nil {
     t.Fatal(err)
}

In addition to giving us better reporting, the second case looks more like the code you’d write outside of tests, and so gives you a chance to exercise the production API, which can help you detect any awkwardness and also serves as a kind of documentation. Additionally, if you’re not lazy, the failure messages that you pass to Fatal can be quite useful in explaining what’s gone wrong.
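
For instance, a sketch of a more descriptive failure (path here is just a hypothetical variable from the surrounding test):

if err != nil {
     t.Fatalf("loading config from %q: %v", path, err)
}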

Testify is fine, and it’s not worth rewriting existing tests to exclude the dependency (except maybe in small libraries,) but for new code, give the plain standard-library approach a shot!


  1. I must also confess that my coworker played some role in this conversion. ↩︎

  2. I’d guess all, but I haven’t done a survey. ↩︎

  3. Ok, the stdlib failures do have one problem: the failures are attributed to just the filename (no path) of the failure, which doesn’t work great when you have a lot of packages with similarly named files and you’re running tests from the root of the project. ↩︎

emt -- Golang Error Tools

I write a lot of Go code, increasingly so, to the point that I don’t really write much code in other languages. This is generally fine for me, and it means that most of the quirks of the language have just become sort of normal to me. There are still a few things that I find irritating, and I stumbled across some code at work a few weeks ago that was awkwardly aggregating errors from a collection of goroutines, and decided to package up some code that I think solves this pretty well. This is an introduction and a story about this code.

But first, let me back up a bit.

The way that Go models concurrency is very simple: you start goroutines, but you have to explicitly manage their lifecycle and output. If you want to get errors out of a goroutine you have to collect them somehow, and there’s no standard library code that does this, so there are a million bespoke solutions. Every Go programmer has written, or will eventually write, a channel or some kind of error aggregator to collect errors from goroutines, and it’s a bit dodgy: you have to stop thinking about whatever thing you’re working on to write some thread-safe, non-deadlocking aggregation code, which inevitably means even more goroutines and channels and mutexes or some such.
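
The bespoke version usually looks something like this sketch (assuming jobs is a []func() error; the names are illustrative):

errs := make(chan error, len(jobs)) // buffered so workers never block
var wg sync.WaitGroup
for _, job := range jobs {
     wg.Add(1)
     go func(run func() error) {
          defer wg.Done()
          if err := run(); err != nil {
               errs <- err
          }
     }(job)
}
wg.Wait()
close(errs)
for err := range errs {
     log.Println(err) // or collect into a slice, return the first one, etc.
}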

Years ago, I wrote this type that I called a “catcher” that was really just a slice of errors and a mutex, wrapped up with Add(error) and Resolve() error methods, and a few other convenience methods. You’d pass or access the catcher from different goroutines and never really have to think much about it. You get “continue-on-error” semantics for thread pools, which is generally useful, and you never accidentally deadlock on a channel of errors that you fumbled in some way. This type worked its way into the logging package that I wrote for my previous team and got (and presumably still gets) heavy use.
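
To give a sense of the shape of the thing, here’s a minimal sketch of such a catcher; this is illustrative rather than the real implementation (errors.Join is the modern stdlib spelling of the aggregation, which didn’t exist when I first wrote the type):

// Catcher collects errors from many goroutines behind a mutex.
type Catcher struct {
     mu   sync.Mutex
     errs []error
}

// Add records a non-nil error; it's safe to call from any goroutine.
func (c *Catcher) Add(err error) {
     if err == nil {
          return
     }
     c.mu.Lock()
     defer c.mu.Unlock()
     c.errs = append(c.errs, err)
}

// Resolve returns nil if nothing was collected, or an aggregated error.
func (c *Catcher) Resolve() error {
     c.mu.Lock()
     defer c.mu.Unlock()
     return errors.Join(c.errs...)
}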

We added more functionality over time: different output formats, support for error annotation when it came along, and the ability to have a catcher annotate incoming errors with timestamps, for long-running uses of the type. The ergonomics are pretty good, and it helped the team spend more time implementing core features and thinking about the core problems of the product’s domain and less time thinking about managing errors in goroutines.

When I left my last team, I thought that it’d be good to take a step back from the platform and tools that I’d been working on and with for the past several years, but when I saw some code a while back that implemented its own error handling yet again, something clicked, and I wanted just this thing.

So I dug out the old type, put it in a new package, dusted off a few cobwebs, improved the test coverage, gave it a cool name, and reworked a few parts to avoid forcing downstream users to pick up unnecessary dependencies. It was a fun project, and I hope you all find it useful!

Check out emt! Tell me what you think!

Rescoping the Engineering Interview

It’s not super controversial to assert that the software engineering interview process is broken, but I think it’s worthwhile to do so anyway. The software engineering interview is broken. There are lots of reasons for this:

  • interview processes are so overoptimized for rejecting candidates who aren’t good that they often reject candidates who are. This wouldn’t be a problem if it happened occasionally, but it’s really routine.
  • it’s difficult to design an interview process that works consistently well across different levels and different kinds of roles, and companies/teams can easily get into a place where they can really only hire one type or level of engineer.
  • while many engineering teams know that the hiring process is biased, most of the attempts to mitigate this focus on the bias of the interviewer, by making interview processes more consistent across candidates or easier to score objectively, while abdicating responsibility for the ways that the process can be biased toward certain kinds of candidates.

I’ve been part of lots of conversations over the years about “making interviews better,” and many of the improvements to the process that come out of these conversations don’t do much, and sometimes exacerbate the biases and inefficiencies of the process. I also think that the move toward remote work (and remote interviewing,) has presented an under-realized opportunity to revisit some of these questions and, hopefully, to come up with better ways of interviewing and building teams.

To unwind a bit, the goals of an interview process should be:

  • have a conversation with a candidate to ensure that you can communicate well with them (and them with you!) and can imagine that they’ll fit into your team or desired role.
  • identify skills and interests, based on practical exercises or review of their past work (e.g. portfolio or open source work,) that would complement your team’s needs. Sometimes this takes the form of “figure out if the person can actually write code,” but there are lots of ways to demonstrate and assess skills.
  • learn about the candidate’s interests and past projects to figure out if there’s alignment between the candidate’s career trajectory and the team you’re building.

Most processes focus on the skills aspect and don’t focus enough on the other aspects. Additionally, there are a bunch of common skills assessments that lots of companies use (and copy from each other!) and most of them are actually really bad. For example:

  • live coding exercises often have to be really contrived in order to fit within an hour-long interview, and tend to favor algorithm problems that folks have memorized because they recently took a class or crammed for interviews. As engineers we almost never write code like this, and the right answer to most of these problems is “use a library function,” so while live coding is great for getting the opportunity to watch a candidate think/work on a problem, success or failure isn’t necessarily indicative of capability or fit.
  • take-home coding problems provide a good alternative to live coding exercises, but can be a big imposition time-wise, particularly on people who have jobs while interviewing. Often take-home exercises also require people to focus more on build systems and project-level polish than on the kind of coding that they’re likely to do day to day. The impulse with take-home problems is to make them “bigger,” and while these problems can be a little “bigger” than an hour’s worth of work, a lot of what you end up looking at with these problems is finishing touches, so keeping them shorter is a good plan.
  • portfolio-style reviews (e.g. of open source contributions or public projects,) can be great in many situations, particularly when paired with some kind of session where the candidate can provide context, but lots of great programmers don’t have these kinds of portfolios because they don’t program for fun (which is totally fine!) or because their previous jobs don’t produce much open source code. It can also be difficult to assess a candidate in situations where these work samples are old, or are in codebases with awkward conventions or requirements.

There isn’t one solution to this, but:

  • your goal is to give candidates the opportunity to demonstrate their competencies and impress you. Have an interview menu1 rather than an interview process, and let candidates select the kind of problem that they think will be best for them. This is particularly true for more senior candidates, but I think it works across the experience spectrum.

  • if you want to do a programming or technical problem in real time, there are lots of different kinds of great exercises, so avoid having yet another candidate implement bubble sort, graph search, or linked-list reversal. Try things like:

    • find a class (or collection of types/functions) in your codebase that you can share, and have the candidate read it, try to understand/explain how it works, and then offer up suggestions for how to improve it in some way. I find this works best with 100/200 lines of code, and as long as you can explain idioms and syntax to them, it doesn’t matter if they know the language. Reading code you don’t know is a big part of the job anyway.
    • provide the candidate with a function that doesn’t have side effects, but is of moderate length and have them write tests for all the edge cases. It’s ok if the function has a bug that can be uncovered in the course of writing tests, but this isn’t particularly important.
    • provide the candidate with a set of stubs and a complete test suite and have them implement the interface that matches the test cases (see the sketch after this list). This works well for problems where the class in question should implement a fairly pedestrian kind of functionality like “a hash map with versioned values for keys,” or “a collection/cache that expires items on an LRU basis.”
    • have the candidate do a code review of a meaningful change. This is an opportunity to see what it’s like to work with them, to give them a view into your process (and code!), and most importantly ask questions, which can provide a lot of insight into their mindset and method.

    I think that the menu approach also works well here: different people have different skills and different ways of framing them, and there’s no real harm in giving people a choice here.

  • independent/take home/asynchronous exercises tend to be better (particularly for more senior candidates,) as they more accurately mirror the way that we, as programmers, work. At the same time, it’s really easy to give people problems that are too big or too complex or just take too long to solve well. You can almost always get the same kind of signal with smaller problems anyway. I also believe that offering candidates some kind of honorarium for interview exercises is generally a good practice.

  • one of the big goals of the interview processes is to introduce a candidate to the team and give them a sense for who’s on the team and how they operate, which I think has given rise to increasingly long interview sequences. Consider pairing up interviewers for some or all of your interview panel to give candidates greater exposure to the team without taking a huge amount of their time. This is also a great way to help your team build skills at interviewing.

  • assessing candidates should balance the candidate’s skills and alignment with the team against the team’s needs and capacity for ramping new members. Particularly for organizations that place candidates on teams late in the process, it’s easy to effectively have two processes (which just takes a while,) and end up with “good” candidates who are haphazardly allocated to teams that aren’t a good fit.
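
To make the stub-plus-test-suite format concrete, here’s a hedged sketch of what you might hand a candidate for the LRU exercise; Cache and NewCache are hypothetical names, and a real exercise would ship a fuller suite:

package interview

import "testing"

// Cache is the interface the candidate implements.
type Cache interface {
     Get(key string) (value string, ok bool)
     Put(key, value string)
}

// NewCache is the stub: the candidate fills in the implementation.
func NewCache(capacity int) Cache { panic("unimplemented") }

func TestEvictsLeastRecentlyUsed(t *testing.T) {
     c := NewCache(2)
     c.Put("a", "1")
     c.Put("b", "2")
     c.Get("a")      // touch "a" so "b" is now least recently used
     c.Put("c", "3") // exceeds capacity: should evict "b"
     if _, ok := c.Get("b"); ok {
          t.Error("expected \"b\" to be evicted")
     }
     if v, ok := c.Get("a"); !ok || v != "1" {
          t.Errorf("got (%q, %v), want (\"1\", true)", v, ok)
     }
}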

These are hard problems, and I think it’s important both to be open to different ways of interviewing and to reflect on the process over time. One of the great risks is that a team will develop an interview process and then keep using it even as it becomes less effective because interviewers and needs change. Have (quick) retrospectives about your interview process to help make sure that it stays fresh and effective.


I think this is a follow-up, in some ways, to my earlier post on Staff Engineering. If you liked this, check that out!


  1. To be clear, I think the interview menu has to be tailored to candidates and roles. There’s a danger of decision paralysis, so recruiters and hiring managers should definitely use part of their time with the candidate to select a good interview plan. The options need to make sense for the role, the interviewers need to prepare, and the hiring manager/recruiter should be able to eliminate options from the menu that don’t make sense for the candidate’s background. ↩︎

Emacs Stability

A while ago I packaged up my emacs configuration for the world to see/use, and I’m pretty proud of this thing: it works well out of the box, it’s super minimal and speedy, and it has all of the features. I don’t think it’s the right solution for everyone, but I think there is a class of users for whom this configuration makes sense. I’ve definitely also benefited a lot from thinking about this “configuration” as a software project, at least in terms of keeping things organized and polished and reasonably well tested. It’s a good exercise.

Historically, I’ve used my emacs configuration as a sort of “fun side project,” and while I tried to avoid spending too much time tweaking various things, it did feel like the kind of thing that was valuable (given how much time I spend in a text editor,) without being too distracting, particularly early in the pandemic, or during periods over the summer when I was between jobs.

Then, I put the configuration in a public repo, and I basically haven’t made any meaningful changes since. One part of this is clearly that I put a lot of time into polishing things in the initial push to get it released, and there haven’t been many bugs to inspire any kind of major development effort. Another part is that the way I use an editor isn’t really changing: I’m writing code and English and using a couple of applications (e.g. email and org-mode) within emacs, but I’m not really (often) adding new or different kinds of work, and while this isn’t exciting from a blogging perspective, it is exciting from a “things just work” perspective.


I have referred to myself as a degenerate emacs user. I’ve sometimes said unrepentant, but I think it’s basically the same. I’ve also realized that, given that I’ve basically been using emacs the same way since 2008 or so, I’m kind of an old timer, even if it doesn’t much feel like that, and there are lots of folks with longer histories.

I think I used to care more about what tools other people used to edit text. Even a couple of years ago, I thought that having good initial configurations and better out-of-the-box experiences for emacs would lead to more people using emacs, which would be cool, because they’d get to use a cool piece of software and we’d get more emacs users.

Increasingly, however, while I think emacs is great and people should use it, I’m less concerned: people should use what they want, and I think there will always be enough people here and there who want to use emacs, and that’s good enough for me. I think having a good out-of-the-box experience is important, but it’s not a one-size-fits-all kind of situation. I also think that VS Code is pretty great software, and I like a lot of the implications for remote editing, even if I’m not particularly interested in it for myself.


Enjoy the repo, and let me know if there’s anything terrible about it. I’ve been getting back into blogging recently, and have started tweaking a few things about the ways I use computers/emacs, mostly in terms of exploring tmux (hah!) and also considering avoiding GUI emacs entirely. Stay tuned if you’re interested!

The Emacs Daemon GTK Bug, A Parable

There’s this relatively minor Emacs bug that I’ve been aware of for a long time: years. The basic drift is that on Linux systems, when Emacs is built with GTK and running as a daemon, if the X11 session terminates for any reason, the Emacs daemon terminates with it. Emacs daemons are great: you start Emacs once, and it keeps running independently of whatever windows you have open. You can leave files open in Emacs buffers and move between different projects with minimal context-switching costs.

First of all, emacs’s daemon mode is weird. I can’t think of another application that starts as a daemon (in the conventional UNIX double-forking manner,) and then has a client process that (potentially) spawns GUI windows. If there are other applications that work this way, there aren’t many.

Nevertheless, being able to restart the window manager without losing the current state of your Emacs session is one of the chief reasons to run Emacs in daemon mode, so this bug has always been a bit irksome. Also, since it’s real, and for sure a thing, why has it taken so long to address? Let’s dig a little bit deeper.


There are two GNOME bugs related to this.

What’s happening isn’t interesting or complicated: Emacs calls an API, which behaves differently than Emacs expects and needs, but not (particularly) differently than GNOME expects or needs. Which means GNOME has little incentive to fix the bug--if they even could without breaking other users of this API.

Emacs can’t fix the problem on their own without writing a big hack around GNOME components, which wouldn’t be particularly desirable or viable, and because this works fine with the other toolkit (and the failure is only possible in some situations,) it doesn’t feel like an Emacs bug.

We have something of a stalemate. Each party thinks the other is at fault. No one is particularly incentivized to fix the problem in their own code, and there is a workaround,1 albeit a kind of gnarly one.

This kind of issue feels, if not common, then incredibly easy for a project--even one like emacs--to stumble into, and quite easy to just never resolve. This kind of thing happens, in some form, very often, and boundaries between libraries make it even more likely.

On the positive side, it does seem like there’s been recent progress on the issue, so it probably won’t be another 10 years before it gets fixed, but who knows.


  1. To avoid this problem, either: don’t use GUI emacs windows and just use the terminal (fairly common, and more tractable as terminal emulators have improved a bunch in the past few years,) or use the Lucid GUI toolkit, which doesn’t depend on GTK at all. The Lucid build is ugly (as the widgets don’t interact with GTK settings,) but it’s lightweight and doesn’t suffer from this bug. ↩︎

The Org Mode Product

As a degenerate emacs user, as it were, I have of course used org-mode a lot; indeed, it’s probably the mode I end up doing a huge amount of my editing in, because it’s great with text and I end up writing a lot of text. I’m not really an org-mode user in the sense that it’s not the system or tool that I use to stay organized, I haven’t really done much development of my own tooling or process around using org-mode to handle document production, and honestly, most of the time I use reStructuredText as my preferred lightweight markup language.

I was thinking, though, as I was pondering ox-leanpub: what is org-mode even trying to do, and what the hell would a product manager do if faced with org-mode?

In some ways, I think it sucks the air out of the fun of hacking on things like emacs to bring all of the “professionalization of making software” to things like org-mode, so please trust that this is meant with a lot of affection for org-mode: it’s a thought experiment.


Org has a lot going on:

  • it provides a set of compelling tools for interacting with hierarchical human-language documents,
  • it’s a document markup and structure system,
  • the table editing features are, given the ability to write formulas in lisp, basically a spreadsheet,
  • it’s a literate programming environment (babel),
  • it’s a document preparation system (ox),
  • it’s a task manager (agenda),
  • it’s a time tracking system,
  • it even has pretty extensive calendar management tools.

Perhaps the thing that’s most exciting about org-mode is that it provides functionality for all of these kinds of tasks in a single product so you don’t have to bounce between lots of different tools to do all of these things.

It’s got most of the “office” suite covered, and I think (particularly for new people, but also for people like me,) it’s not clear why I would want my task system, my notes system, and my document preparation system to all have their data intermingled in the same set of files. The overall effect is a bit unfocused.

The reason for this, historically, makes sense: org-mode grew out of the needs of technically minded academics who were mostly using it as a way of preparing papers, and who end up being responsible for a lot of the structuring of their own time/work, but who do most of their work alone. With this kind of user story in mind, the gestalt of org-mode really comes together as a product, but otherwise it’s definitely a bit all over the place.

I don’t think this is bad, and particularly given its history, it’s easy to understand why things are the way they are, but I think that it is useful to take a step back and think about the workflow that org supports and inspires, while also not forgetting the kinds of workflows that it precludes, and the ways that org, itself, can create a lot of conceptual overhead.

There are also some gaps in org, as a product, which I think grow out of this history. They are, to my mind:

  • importing data, and bidirectional sync. These are really hard problems, and there’ve been some decent projects over the years to help get data into org; org-trello is the best example I can think of, but it can be a little dodgy, and the “import story” pales in comparison to the export story. It would be great if:
    • you could use the org interface to interact with and manipulate data that isn’t actually in org-files, or at least where the system-of-record for the data isn’t org. Google docs? Files in other formats?
  • collaborating with other people. Org-mode files tend to cope really poorly with multiple people editing them at the same time (asynchronously, as with git,) and also with cases where not everyone uses org-mode. One of the side effects of having the implementation of org features so deeply tied to the structure of text in the org format is that it becomes hard to interact with org data outside of emacs (again, you can understand why this happens, and it’s indeed very lispy,) which means you have to use emacs, and use org, if you want to collaborate on projects that use org.
    • this might look like some kind of different diff-drivers for git, in addition to some other more novel tools.
    • bi-directional sync might also help with this issue.
  • beyond the agenda, building a story for content that spans multiple files. Because the files are hierarchical, and org provides a great deal of taxonomic indexing features, you really never need more than one org file, but it’s also kind of wild to just keep everything in one file, so you end up with lots of org files. While the agenda provides a way to filter out the task and calendar data, it’s sort of unclear how to manage multi-file systems for some of the other kinds of projects. It’s also the case that, because you can inject some configuration at the file level, it can be easy to get stuck.
  • tools for interacting with org content without (interactive or apparent) emacs. While I love emacs as much as the next nerd, I tend to think that having a dependency on emacs is hard to stomach, particularly for collaborative efforts (though with docker and the increasing size of various runtimes, this may be less relevant.) What if it were trivially easy to write build processes that extracted or ran babel programs without needing to run from within emacs? What if there were an org-export CLI tool?

Docker Isn't A Build System

I can’t quite decide if this title is ironic or not. I’ve been trying really hard to not be a build system guy, and I think I’m succeeding--mostly--but sometimes things come back at us. I may still be smarting from people proposing “just using docker” to solve any number of pretty irrelevant problems.

The thing is that docker does help solve many build problems: before docker, you had to write code that supported any possible execution environment, which was a lot of work and generally really annoying. Because docker provides a (potentially) really stable execution environment, it can make a lot of sense to do your building inside of a docker container, in the same way that folks often do builds in chroot environments (or at least did). Really, containers are kind of super-chroots, and it’s a great thing to be able to give your development team a common starting point for doing development work. This is cool.

It’s also the case that Docker makes a lot of sense as a de facto standard distribution or deployment form, and in this way it’s kind of a really fat binary. Maybe it’s too big, maybe it’s the wrong tool, maybe it’s not perfect, but for a lot of applications they’ll end up running in containers anyway, and treating a docker container like your executable format makes it possible to avoid running into issues that only appear in one (set) of environments.

At the same time, I think it’s important to keep these use cases separate: try to avoid using the same container for deploying that you use for development, or even for builds. This is good because “running the [deployment] container” shouldn’t build software, and it also limits the size of your production containers and avoids unintentionally picking up dependencies. This is, of course, less clear in runtimes that don’t have a strong “compiled artifacts” story, but it’s still possible.

There are some notes/caveats:

  • Dockerfiles are actually kind of build systems, and under the hood they’re just snapshotting the diffs of the filesystem between each step. So they work best if you treat them like build systems: make the steps discrete and small, keep the stable deterministic things early in the build, and push the more ephemeral steps later in the build to prevent unnecessary rebuilding.
  • “Build in one container and deploy in another” requires moving artifacts between containers or being able to run docker-in-docker, both of which are possible but may be less obvious than some other workflows; see the sketch after this list.
  • Docker’s “build system qualities” can improve the noop and rebuild performance of some operations (e.g. the amount of time to rebuild things if you’ve just built or only made small changes,) which can be a good measure of the friction that developers experience, because of the way that docker can share/cache layers between builds. This is often at the expense of making artifacts huge and of greatly expanding the amount of time that other operations can take. This might be a reasonable tradeoff to make, but it’s still a tradeoff.
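
As a sketch of both of these points (layer ordering and the build/deploy split), here’s roughly what a multi-stage Dockerfile for a Go service might look like; the paths and image tags are illustrative assumptions, not a recommendation:

# build stage: copy the module manifests first so the dependency
# download layer caches across source-only changes
FROM golang:1.21 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# deploy stage: ship only the compiled artifact, no toolchain
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]

Multi-stage builds also sidestep the docker-in-docker problem: the artifact moves between stages with COPY --from rather than between separately managed containers.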