Methods of Adoption

Before I started actually working as a software engineer full time, writing code was this fun thing I was always trying to figure out on my own, and I could hardly sit down at my computer without learning something. These days, I do very little of this kind of work. I learn more about computers by doing my job, and frankly, the kind of software I write for work is way more satisfying than any of the software I would end up writing for myself.

I think this is because the projects that a team of engineers can work on are necessarily larger and more impactful. When you build software with a team, most of the time the product either finds users or you end up without a job. When you build software with other people and for other people, the things that make software good (more rigorous design, good test discipline, scale,) are more likely to be prevalent. Those are the things that make writing software fun.

Wait, you ask, “isn’t this a lisp post?” and “where is the lisp content?” Wait for it…

In Pave the On/Off Ramps1 I started exploring the idea that technical adoption is less a function of basic capabilities or sheer number of features, and more about the specific features that support and further adoption and create confidence in maintenance and interoperability. A huge part of the decision process is finding good answers to “can I use this tool as part of the larger system of tools that I’m using?” and “can I use this tool a bit without needing to commit to using it for everything?”

Technologies that demand ideological compliance are very difficult to move into with confidence, and their adoption depends on once-in-a-generation sea changes or significant risks.2 The alternate method--integrating into people’s existing workflows and systems, providing great tools that work for some use cases, and proving capability along the way--is much more reliable, if somewhat less exciting.

The great thing about Common Lisp is that it always leans toward the pragmatic rather than the ideological. Common Lisp has a bunch of tools--both in the language and in the ecosystem--which are great to use but also not required. You don’t have to use CLOS (but it’s really cool), you don’t have to use ASDF, and there isn’t one paradigm of developing or designing software that you’re constrained to. Do what works.


I think there are a lot of questions that sort of follow on from this, particularly about lisp and the adoption of new technologies. So let’s go through the ones I can think of, FAQ style:

  • What kind of applications would a “pave the exits” approach support?

    It almost doesn’t matter, but the answer is probably a fairly boring set of industrial applications: services that transform and analyze data, data migration tools, command-line (build, deployment) tools for developers and operators, platform orchestration tools, and the like. This is all boring (on the one hand,) but most software is boring, and it’s rarely the case that the programming language actually matters much.

    In addition, CL has a pretty mature set of tools for integrating with C libraries and might be a decent alternative to other languages with more complex distribution stories. You could see CL being a good language for writing extensions on top of existing tools (for both Java with ABCL and C/C++ with ECL and CLASP), depending.

  • How does industrial adoption of Common Lisp benefit the Common Lisp community?

    First, more people would be writing Common Lisp for their jobs, which (assuming they have a good experience,) could proliferate into more projects. A larger community likely means a larger volume of participation in existing projects (and more projects in general.) Additionally, more industrial applications means more jobs for people who are interested in writing CL, and that seems pretty cool.

  • How can CL compete with more established languages like Java, Go, and Rust?

    I’m not sure competition is really the right model for thinking about this: there’s so much software to write that “my language vs your language” is just a poor frame, and there’s enough work to be done that everyone can be successful.

    At the same time, I haven’t heard about people who are deeply excited about writing Java, and Go folks (among whom I count myself) tend to be pretty pragmatic as well. I see lots of people who are excited about Rust, and it’s definitely a cool language, though it shines best at lower level problems than CL and has a reasonable FFI, so it might be the case that there’s some exciting room for using CL for higher level tasks on top of Rust fundamentals.


  1. The idea that product management and design is about identifying what people are already doing and then institutionalizing it is similar to the urban planning idea of “paving cowpaths.” I sort of think of this as “paving the exits,” though I recognize that this is a bit forced. ↩︎

  2. I’m thinking of things like the moment of enterprise “object oriented programming” giving rise to Java and friends, or the big-data watershed moment in 2009 (or so) giving rise to so-called NoSQL databases. Without these kinds of events, the adoption of these big paradigm-shifting technologies is spotty and relies on the force of will of a particular technical leader, for better and (often) for worse. ↩︎

Signs of Alignment

This is a post in my alignment series. See the introductory post Finding Alignment for more context.


I really want to dig into some topics related to building alignment and figuring out when you’re aligned as a contributor, or when the people you’re working with are falling out of alignment with you and/or your team or organization, but I think it’s worth it to start slow and chew on a big question: what does it feel like when you and your team are well aligned, and why is that a good thing?

To my mind, when you have a foundation of alignment, and an understanding of what the business goals are for your organization, it becomes really easy to work independently: you know what’s important, you know what needs to happen next, and the people you’re working for/with can be confident that you’ll be moving in the right direction and don’t need to do as much monitoring. Every so often, teams find this, and can really grind on it and deliver great features and products on that basis. It takes a long time (months!) for a team to gel like this, and sometimes teams don’t quite get there.

This isn’t to say that needing more guidance and working less independently means that you’re unaligned; it may just mean that you (or the people you’re working with/for) are newer to the team, or that there’s been a change recently and everyone needs more touch points to build alignment. One of the risks of hiring people and growing teams that are really well aligned is that the change in team dynamic can throw off alignment, and I think this is one of the reasons that teams sometimes struggle to grow. In any case, alignment is great, but it doesn’t happen for free, and it’s fine for it to be a thing you’re working on.

Alignment also reduces a lot of potentially contentious conversations and interactions: when you have alignment within a team or between teams you have a framework for prioritizing decisions: the things that have the largest positive impact on your goals are more important than… everything else. It all ends up being pretty simple. Sometimes you have to spend a bit of time on something that’s locally lower priority if another team depends on it, or if you’re helping someone learn something, but for the most part alignment helps you move in the right direction.

When teams (and contributors) lack alignment, it’s easy to end up doing low priority work, or projects that don’t support the business goals and so fail to find use (projects fail for other reasons, some of which are expected, so failed projects don’t necessarily indicate misalignment). An unaligned team can end up competing with peer teams and internal collaborators. If some parts of a team or organization are well aligned and others aren’t, resentment and frustration can brew between teams. Basically, without alignment you can--if you’re lucky--skate by with a little wasted effort, but often alignment deficits are a blight that can threaten a team’s ability to be productive and make it really hard to retain great team members.

Not everything is an alignment problem: teams and projects fail for technical or logistical reasons. Sometimes conflicts emerge between collaborators who are well aligned but working on disconnected projects, or who hold different concerns within a project. Alignment is a framework for understanding how organizations can move together and be productive, particularly as they grow, and I hope that this has been helpful!

Tips for More Effective Multi-Tasking

I posted something about how I organize my own work, and I touched on “multi-tasking,” and I realized immediately that I’d touched on something that required a bit more explanation.

I feel like a bit of an outlier to suggest that people spend time learning how to multitask better, particularly when the prevailing conventional wisdom is just “increase focus,” “decrease multitasking,” and “reduce context switches” between different tasks. It’s as if there’s this mythical world where you can just “focus more,” taking advantage of longer blocks of time, with fewer distractions, and suddenly be able to get more done.

This has certainly never been true of my experience.

I was, perhaps unsurprisingly, a bit disorganized as a kid. Couldn’t sit still, forgot deadlines, focused inconsistently on things: sometimes I was unstoppable, and sometimes nothing stuck. As an adult, I’ve learned more about myself and I know how to provide the kind of structure I need to get things done, even for work that I find less intrinsically fascinating. Also I drink a lot more caffeine. I’m also aware that with a slightly different brain or a slightly different set of coping strategies, I would struggle a lot more than I do.

There are a lot of reasons why it can be difficult to focus, but I don’t think the why matters much here: what matters is thinking pragmatically about how to make the most of the moments we do have and the focus that’s available. Working on multiple things just is how things are, and I think to some extent it’s a skill that we can cultivate or at least approximate. Perhaps some of the things I do would be useful to you:

  • fit your tasks to the attention you have. I often write test code during my afternoon slump between 2 and 3:30, do more complicated work with my morning coffee between 9:30 and 11:30, and do more writing later in the day. There are different kinds of tasks, and knowing what kind of work makes sense for which part of the day can be a great help.
  • break tasks apart into pieces as small as you can, even if it’s just for yourself. It’s easy to get a little thing done, and bigger tasks can be intimidating. If the units of work that you focus on are the right size, it’s possible to give yourself enough time to do the work that you need to do and intersperse tasks from a few related projects.
  • plan what you do before you do it, and leave yourself notes about your plan. As I write code, I often write a little todo list that contains the requirements for a function. This makes it easy to pick something up if you get interrupted. My writing process also involves leaving little outlines of paragraphs that I want to write or narrative elements that I want to hit.
  • leave projects, when possible, at a stopping point. Make it easy for yourself to pick it back up when you’re ready. Maybe this means making sure that you finish writing a test or some code, rather than leaving a function half written. When writing prose, I sometimes finish a paragraph and write the first half of the next sentence, to make it easier to pick up.
  • exercise control over what you do and when you do it. There are always interruptions, or incoming messages and alerts that could require our attention. There are rarely alerts that must cause us to drop what we’re currently working on. While there are “drop everything” tasks sometimes, most things are fine to come back to in a little while, and most emails are safe to ignore for a couple of hours. It’s fine to quickly add something to a list to come back to later. It’s also fine to be disrupted, but having some control over that is often helpful.
  • find non-intrusive ways to feel connected. While it should be possible to do some level of multitasking as you work, there are some kinds of interruptions that take a lot of attention. When you’re focusing on work, checking your email can be a distraction (say), but it can be hard to totally turn off email while you’re working. Rather than switch to look at my email on some cadence throughout the day, I (effectively,) check my phone far more regularly just to make sure that there’s nothing critical, and can go much longer between looking at my email. The notifications I see are limited, and many messages never trigger alerts. I feel like I know what’s going on, and I don’t get stuck replying to email all day.1

This is, more or less, what works for me, and I (hope) that there’s something generalizable here, even if we do different kinds of work!


  1. Email is kind of terrible, in a lot of ways: there’s a lot of it, messages come in at all times, people are bad at drafting good subject lines, a large percentage of email messages are just automated notifications, historically you had to “check it,” which took time, and drafting responses can take quite a while, given that the convention is for slightly longer messages. I famously opted out of email, basically for years, and gleefully used all the time I wasn’t reading email to get things done. The only way this was viable was that I’ve always had a script that checks my mail and sends me a notification (as an IM) with the From and Subject line of the most important messages, which gives me enough context to actually respond to things that are important (most things aren’t) without needing to actually dedicate time to looking at email. ↩︎

Pave the On and Off Ramps

I participated in a great conversation in the #commonlisp channel on libera (IRC) the other day, during which I found a formulation of a familiar argument that felt more clear and more concrete.

The question--which comes up pretty often, realistically--centered on adoption of Common Lisp. CL has some great tools, and a bunch of great libraries (particularly these days,) so why don’t we see greater adoption? It’s a good question, and maybe 5 years ago I would have said “the libraries and ecosystem are a bit fragmented,” and this was true. It’s less true now--for good reasons!--Quicklisp is just great and there’s a lot of coverage for doing common things.

I think it has to do with the connectivity and support at the edges of a project, and as I think about it, this is probably true of any kind of project.

When you decide to use a new tool or technology you ask yourself three basic questions:

  1. “is this tool (e.g. language) capable of fulfilling my current needs” (for programming languages, this is very often yes,)
  2. “are there tools (libraries) to support my use so I can focus on my core business objectives,” so that you’re not spending the entire time writing serialization libraries and HTTP servers, which is also often the case.
  3. “will I be able to integrate what I’m building now with other tools I use and things I have built in the past.” This isn’t so hard, but it’s a thing that CL (and lots of other projects) struggle with.

In short, you want to be able to build a thing with the confidence that it’s possible to finish, that you’ll be able to focus on the core parts of the product and not get distracted by what should be core library functionality, and finally that the thing you build can play nicely with all the other things you’ve written or already have. Without this third piece, writing a piece of software with such a tool is a bit of a trap.

We can imagine tools that expose data only via quasi-opaque APIs that require special clients or encoding schemes, or that lack drivers for common databases, or integration with other common tools (metrics! RPC!) or runtime environments. These are all reasonable things to worry about. For CL this might look like:

  • great support for gRPC

    There’s a grpc library that exists, is being maintained, and has basically all the features you’d want except support for TLS (a moderately big deal for operational reasons,) and async method support (not really a big deal.) It does depend on CFFI, which makes for a potentially awkward compilation story, but that’s a minor quibble.

    The point is not gRPC qua gRPC; the point is that gRPC is really prevalent globally and it makes sense to be able to meet developers who have existing gRPC services (or might like to imagine that they would,) and give them confidence that whatever they build (in, say, CL) will be usable in the future.

  • compilation that targets WASM

    Somewhat unexpectedly (to me, given that I don’t do a lot of web programming,) WebAssembly seems to be the way to deploy portable machine code into environments that you don’t have full control over,1 and while I don’t 100% understand all of it, I think it’s generally a good thing to make it easier to build software that can run in lots of situations.

  • unequivocally excellent support for JSON

    I remember working on a small project where I thought “ah yes, I’ll just write a little API server in CL that will just output JSON,” and I completely got mired in comparisons between JSON libraries and interfaces to JSON data. While this is a well understood problem, it’s not a very cut-and-dried one.

    The thing I wanted was to be able to take input in JSON and handle it in CL in a reasonable way: given a stream (or a string, or equivalent) can I turn it into an object in CL (a CLOS object? a hashmap?)? I’m willing to implement special methods to support it given basic interfaces, but the type conversion between CL types and JSON isn’t always as straightforward as it is in other languages. Similarly with outputting data: is there a good method that will take my object and convert it to a JSON stream or string? There’s always a gulf between what’s possible and what’s easy and ergonomic.
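    For contrast, and purely as an illustration of the ergonomics I’m describing (this is Go’s standard encoding/json; the Payload type is made up for the example, not a proposal for a CL API), the whole round trip in another language can be as simple as:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Payload is a hypothetical shape for the incoming data.
    type Payload struct {
        Name  string `json:"name"`
        Count int    `json:"count"`
    }

    func main() {
        // Input: turn a string (or a stream, via json.NewDecoder) into an object.
        var p Payload
        if err := json.Unmarshal([]byte(`{"name":"example","count":3}`), &p); err != nil {
            panic(err)
        }

        // Output: turn the object back into a JSON string.
        out, err := json.Marshal(p)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out)) // {"name":"example","count":3}
    }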

I present these not as a complaint, or even as a call to action to address the specific issues that I raise (though I certainly wouldn’t complain if it were taken as such,) but more as an illustration of technical decision making and the things that make it possible for a team or a project to say yes to a specific technology.

There are lots of examples of technologies succeeding from a large competitive field mostly on the basis of having great interoperability with existing solutions and tools, even if the core technology was less exciting or innovative. Technology wins on the basis of interoperability and users’ trust, not (exactly) on the basis of features.


  1. I think the one real exception is runtimes that have really good static binaries and support for easy cross-compiling (e.g. Go, maybe Rust.) ↩︎

Finding Alignment

I keep making notes for writing a series of essays about alignment, the management concept, and it’s somewhere in between a blog post and a book, so maybe I’ll make it a series of blog posts. This is the introduction.


Alignment is this kind of abstract thing that happens when you have more than one entity (a person, working group, or team) working on a project (building or doing something). Leaving aside, for a moment, “entity” and “project,” when efforts are well aligned, all of the effort is in pursuit of the same goal and collaborators do work in support of each other. When efforts are out of alignment, collaborators can easily undermine each other or pursue work that doesn’t support the larger goal.

Being well aligned sounds pretty great, you may think: “why wouldn’t you always just want to be aligned?” And I think deep down people want to be aligned, but it isn’t automatic: as organizations grow and the problems that the organizations address become bigger (and are thus broken down into smaller parts,) it’s easy for part of a team to fall out of alignment with the larger team. It’s also the case that two parts of an organization may have needs or concerns that appear to be at odds with each other, which can cause them to fall out of alignment.

Consider building a piece of software, as I often do: you often have a group of people who are building features and fixing bugs (engineers), and another group of people who support and interact with the people who are using the software (e.g. support, sales, or product management, depending). The former group wants to build the product and make sure that it works, and the latter group wants to get (potential) users using the software. While their goals are aligned in the broad sense, in practice there is often tension, either between engineers who want things to be correct and complete before shipping them and product people who want to ship sooner, or conversely between engineers who want to ship software early and product people who want to make sure the product actually works before it sees use. In short, while the two teams might be aligned on the larger goal, they often struggle to find alignment on narrower issues. The tension between stability and velocity is perennial, and teams must work together to find alignment on this (and other issues.)

While teams and collaborators want to be in alignment, there are lots of reasons why a collaborator might fall out of alignment. The first and most common reason is that managers/leaders forget to build alignment: collaborators don’t know what the larger goals are, or don’t know how the larger goals connect to the work that they’re doing (or should be doing!) If there’s redundancy in the organization that isn’t addressed, collaborators might end up competing against each other or defending their spheres or fiefdoms. This is exacerbated if two collaborators or groups have overlapping areas of responsibility. Also, when the business falters and leaders don’t have a plan, collaborators can fall out of alignment to protect their own projects and jobs. It’s also the case that collaborators’ interests change over time, and they may find themselves aligned in general, but not to the part of the project that they’re working on. When identified, particularly early, there are positive solutions to all of these problems.

Alignment, when you have it, feels great: the friction of collaboration often falls away because you can work independently while trusting that your collaborators are working toward the same goal. Strong alignment promotes prioritization, so you can be confident that you’re always working on the parts of the problem that are the most important.

Saying “we should strive to be aligned,” is not enough of a solution, and this series of posts that I’m cooking up addresses different angles of alignment: how to build it, how to tell when you’re missing alignment, what alignment looks like between different kinds of collaborators (individuals, teams, groups, companies,) and how alignment interacts with other areas and concepts in organizational infrastructure (responsibility, delegation, trust, planning.)

Stay tuned!

Against Testify

For a long time I’ve used the Go library testify, and mostly it’s been pretty great: it provides a bunch of tools that you’d expect in a testing library, in the grand tradition of jUnit/xUnit/etc., and managed to come out on top in a field of similar libraries a few years ago. It was (and is, but particularly then) easy to look at the testing package and say “wouldn’t it be nice if there were a bit more high-level functionality,” but I’ve recently come around to the idea that maybe it’s not worth it.1 This is a post to collect and expand upon that thought, and also explain why I’m going through some older projects to cut out the dependency.

First, and most importantly, I should say that testify isn’t that bad, and there’s definitely a common way to use the library that’s totally reasonable. My complaint is basically:

  • The “suite” functionality for managing fixtures is a bit confusing: it’s really easy to get the capitalization of the Setup/Teardown (TearDown?) functions wrong and have part of your fixture not run, and suites are different enough from “plain tests” to be a bit confusing. Frankly, writing test cases out by hand and using Go’s subtest functionality is more clear anyway (there’s a sketch of this below).
  • I’ve never used testify’s mocking functionality, in part because I don’t tend to do much mock-based testing (which I see as a good thing,) and for the cases where I want to use mocks, I tend to prefer either hand written mocks or something like mockery.
  • While I know “require” means “halt on failure” and “assert” means “continue on error,” and it makes sense now, “assert” in most2 other languages means “halt on failure,” so this is a bit confusing. Also, while there are cases where you do want continue-on-error semantics for test assertions (I suppose,) it doesn’t come up that often.
  • There are a few warts with the assertions (including requires,) most notably that you can create an “assertion object” that wraps a *testing.T, which is really an anti-pattern and can cause assertion failures to be reported at the wrong level.
  • There are a few testify assertions that have some wonky argument structure, notably that Equal wants arguments in expected, actual form but Len wants arguments in object, expected form. I have to look that up every time.
  • I despise the failure reporting format. I typically run tests in my text editor and then use “jump to failure” when a test fails, and testify assertion failures aren’t well formed in the way that basically every other tool’s are (including the standard library’s!)3 so it’s fussy to find a failure when it happens.

The alternative is just to check the errors manually and use t.Fatal and t.Fatalf to halt test execution (and t.Error and t.Errorf for the continue-on-error case.) So we get code that looks like this:

// with testify:
require.NoError(t, err)

// otherwise:
if err != nil {
    t.Fatal(err)
}

In addition to giving us better reporting, the second case looks more like the code you might write outside of tests, and so gives you a chance to exercise the production API, which can help you detect any awkwardness and also serve as a kind of documentation. Additionally, if you’re not lazy, the failure messages that you pass to Fatal can be quite useful in explaining what’s gone wrong.
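Putting the two points together--hand-written cases with Go’s subtest functionality, and plain t.Fatal-style checks--a minimal sketch looks something like this (ParsePort is a made-up function, included here only so the example stands on its own):

package main

import (
    "strconv"
    "testing"
)

// ParsePort is a stand-in for whatever function you are actually testing.
func ParsePort(s string) (int, error) {
    return strconv.Atoi(s)
}

func TestParsePort(t *testing.T) {
    cases := []struct {
        name    string
        input   string
        want    int
        wantErr bool
    }{
        {name: "Valid", input: "8080", want: 8080},
        {name: "Empty", input: "", wantErr: true},
        {name: "NotANumber", input: "http", wantErr: true},
    }

    for _, tc := range cases {
        // Each case becomes its own subtest, with its own pass/fail status.
        t.Run(tc.name, func(t *testing.T) {
            got, err := ParsePort(tc.input)
            if tc.wantErr {
                if err == nil {
                    t.Fatal("expected an error, got nil")
                }
                return
            }
            if err != nil {
                t.Fatal(err)
            }
            if got != tc.want {
                t.Fatalf("got %d, want %d", got, tc.want)
            }
        })
    }
}

There’s no suite lifecycle to get wrong, and failures point at a real file and line.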

Testify is fine and it’s not worth rewriting existing tests to exclude the dependency (except maybe in small libraries) but for new code, give the plain approach a shot!


  1. I must also confess that my coworker played some role in this conversion. ↩︎

  2. I’d guess all, but I haven’t done a survey. ↩︎

  3. Ok, the stdlib failures have a related problem: failures are attributed to just the filename (no path), which doesn’t work great in the situation where you have a lot of packages with similarly named files and you’re running tests from the root of the project. ↩︎

emt -- Golang Error Tools

I write a lot of Go code, increasingly so, to the point that I don’t really write much code in other languages. This is generally fine for me, and it means that most of the quirks of the language have just become sort of normal to me. There are still a few things that I find irritating, and I stumbled across some code at work a few weeks ago that was awkwardly aggregating errors from a collection of goroutines, and decided to package up some code that I think solves this pretty well. This is an introduction and a story about this code.

But first, let me back up a bit.

The way that Go models concurrency is very simple: you start goroutines, but you have to explicitly manage their lifecycle and output. If you want to get errors out of a thread you have to collect them somehow, and there’s no standard library code that does this, so there are a million bespoke solutions. While every Go programmer has written (or will eventually write) a channel or some kind of error aggregator to collect errors from a goroutine, it’s a bit dodgy because you have to stop thinking about whatever thing you’re working on to write some thread-safe, non-deadlocking aggregation code, which inevitably means even more goroutines and channels and mutexes or some such.

Years ago, I wrote this type that I called a “catcher” that was really just a slice of errors and a mutex, wrapped up with Add(error) and Resolve() error methods, and a few other convenience methods. You’d pass or access the catcher from different goroutines and never really have to think much about it. You get “continue-on-error” semantics for thread pools, which is generally useful, and you never accidentally deadlock on a channel of errors that you fumbled in some way. This type worked its way into the logging package that I wrote for my previous team and got (and presumably still gets) heavy use.
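For the sake of illustration, here’s a minimal sketch of the idea--a mutex-guarded slice of errors with Add and Resolve methods. This is just the shape of the thing, not the actual emt API:

package catcher

import (
    "errors"
    "sync"
)

// Catcher collects errors from many goroutines; the zero value is ready to use.
type Catcher struct {
    mu   sync.Mutex
    errs []error
}

// Add records a non-nil error. It is safe to call from multiple goroutines.
func (c *Catcher) Add(err error) {
    if err == nil {
        return
    }
    c.mu.Lock()
    defer c.mu.Unlock()
    c.errs = append(c.errs, err)
}

// Resolve returns the collected errors joined together, or nil if there were none.
func (c *Catcher) Resolve() error {
    c.mu.Lock()
    defer c.mu.Unlock()
    // errors.Join discards nil values and returns nil for an empty list.
    return errors.Join(c.errs...)
}

In a worker pool you hand the same catcher to every goroutine, call Add from each worker, and check Resolve once the WaitGroup is done--no channels to fumble.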

We added more functionality over time: different output formats, support for error annotation, and the ability to have a catcher annotate incoming errors with a timestamp for long running applications. The ergonomics are pretty good, and it helped the team spend more time implementing core features and thinking about the core problems of the product’s domain, and less time thinking about managing errors in goroutines.

When I left my last team, I thought that it’d be good to take a step back from the platform and tools that I’d been working on and with for the past several years, but when I saw some code a while back that implemented its own error handling again, something clicked, and I wanted just this thing.

So I dug out the old type, put it in a new package, dusted off a few cobwebs, improved the test coverage, gave it a cool name, and reworked a few parts to avoid forcing downstream users to pickup unnecessary dependencies. It was a fun project, and I hope you all find it useful!

Check out emt! Tell me what you think!

Rescoping the Engineering Interview

It’s not super controversial to assert that the software engineering interview process is broken, but I think it’s worthwhile to do so anyway. The software engineering interview is broken. There are lots of reasons for this:

  • interview processes are so overoptimized for rejecting candidates who aren’t good that they often reject candidates who are good. This isn’t a problem if it happens occasionally, but it’s really routine.
  • it’s difficult to design an interview process that works consistently well across different levels and different kinds of roles, and companies/teams can easily get into a place where they really can only hire one type or level of engineer.
  • while many engineering teams know that the hiring process is biased, most of the attempts to mitigate this focus on the bias of the interviewer, by making interview processes more consistent across candidates or easier to score objectively, while abdicating responsibility for the ways that the process can be biased toward certain kinds of candidates.

I’ve been part of lots of conversations over the years about “making interviews better,” and many of the improvements to the process that come out of these conversations don’t do much and sometimes exacerbate the biases and inefficiencies of the process. I think also, that the move toward remote work (and remote interviewing,) has presented an underrealized opportunity to revisit some of these questions and hopefully come up with better ways of interviewing and building teams.

To unwind a bit, the goals of an interview process should be:

  • have a conversation with a candidate to ensure that you can communicate well with them (and them with you!) and can imagine that they’ll fit into your team or desired role.
  • identify skills and interests, based on practical exercises or review of their past work (e.g. portfolio or open source work,) that would complement your team’s needs. Sometimes this takes the form of “figure out if the person can actually write code,” but there are lots of ways to demonstrate and assess skills.
  • learn about the candidate’s interests and past projects to figure out if there’s alignment between the candidate’s career trajectory and the team you’re building.

Most processes focus on the skills aspect and don’t focus enough on the other aspects. Additionally, there are a bunch of common skills assessments that lots of companies use (and copy from each other!) and most of them are actually really bad. For example:

  • live coding exercises often have to be really contrived in order to fit within an hour interview, and tend to favor algorithm problems that folks have memorized because they recently took a class or crammed for interviews. As engineers we almost never write code like this, and the right answer to most of these problems is “use a library function”, so while live coding is great for getting the opportunity to watch a candidate think/work on a problem, success or failure isn’t necessarily indicative of capability or fit.
  • take home coding problems provide a good alternative to live coding exercises, but can be a big imposition time-wise, particularly on people who have jobs while interviewing. Often take home exercises also require people to focus more on buildsystems and project-level polish rather than the kind of coding that they’re likely to do more of. The impulse with take home problems is to make them “bigger,” and while these problems can be a little “bigger” than an hour, a lot of what you end up looking at is finishing touches, so keeping it shorter is also a good plan.
  • portfolio-style reviews (e.g. of open source contributions or public projects,) can be great in many situations, particularly when paired with some kind of session where the candidate can provide context, but lots of great programmers don’t have these kinds of portfolios because they don’t program for fun (which is totally fine!) or because their previous jobs don’t have much open source code. It can also be difficult to assess a candidate in situations where these work samples are old, or are in codebases with awkward conventions or requirements.

There isn’t one solution to this, but:

  • your goal is to give candidates the opportunity to demonstrate their competencies and impress you. Have an interview menu1 rather than an interview process, and let candidates select the kind of problem that they think will be best for them. This is particularly true for more senior candidates, but I think works across the experience spectrum.

  • if you want to do a programming or technical problem in real time, there are lots of different kinds of great exercises, so avoid having another candidate implement bubble sort, graph search, or reverse a linked list. Things like:

    • find a class (or collection of types/functions) in your codebase that you can share and have the candidate read it and try to understand/explain how it works, and then offer up suggestions for how to improve it in some way. I find this works best with 100-200 lines of code, and as long as you can explain idioms and syntax to them, it doesn’t matter if they know the language. Reading code you don’t already know is a big part of the job anyway.
    • provide the candidate with a function that doesn’t have side effects, but is of moderate length and have them write tests for all the edge cases. It’s ok if the function has a bug that can be uncovered in the course of writing tests, but this isn’t particularly important.
    • provide the candidate with a set of stubs and a complete test suite and have them implement the interface that matches the test cases. This works well for problems where the class in question should implement a fairly pedestrian kind of functionality like “a hash map with versioned values for keys,” or “a collection/cache that expires items on an LRU basis.”
    • have the candidate do a code review of a meaningful change. This is an opportunity to see what it’s like to work with them, to give them a view into your process (and code!), and most importantly ask questions, which can provide a lot of insight into their mindset and method.

    I think that the menu approach also works well here: different people have different skills and different ways of framing them, and there’s no real harm in giving people a choice here.

  • independent/take home/asynchronous exercises tend to be better (particularly for more senior candidates,) as they more accurately mirror the way that we, as programmers, work. At the same time, it’s really easy to give people problems that are too big or too complex or just take too long to solve well. You can almost always get the same kind of signal by doing smaller problems anyway. I also believe that offering candidates some kind of honorarium for interview exercises is generally a good practice.

  • one of the big goals of the interview processes is to introduce a candidate to the team and give them a sense for who’s on the team and how they operate, which I think has given rise to increasingly long interview sequences. Consider pairing up interviewers for some or all of your interview panel to give candidates greater exposure to the team without taking a huge amount of their time. This is also a great way to help your team build skills at interviewing.

  • assessing candidates should balance the candidate’s skills and alignment with the team against the team’s needs and capacity for ramping new members. Particularly for organizations that place candidates on teams late in the process, it’s easy to effectively have two processes (which just takes a while,) and end up with “good” candidates who are haphazardly allocated to teams that aren’t a good fit.

These are hard problems, and I think it’s important both to be open to different ways of interviewing and to reflect on the process over time. One of the great risks is that a team will develop an interview process and then keep using it even if it turns out that the process becomes less effective as interviewers and needs change. Have (quick) retrospectives about your interview process to help make sure that it stays fresh and effective.


I think this is a follow up, in some ways, to my earlier post on Staff Engineering. If you liked this, check that out!


  1. To be clear, I think the interview menu has to be tailored to candidates and roles. There’s a danger of decision paralysis, so recruiters and hiring managers should definitely use part of their time with the candidate to select a good interview plan. The options need to make sense for the role, the interviewers need to prepare, and the hiring manager/recruiter should be able to eliminate options from the menu that don’t make sense for the candidate’s background. ↩︎