Systems Administrators are the Problem

For years now, the idea of the terrible stack, or the dynamic duo of Terraform and Ansible, from this tweet has given me a huge amount of joy, basically any time someone mentions either Terraform or Ansible, which happens rather a lot. It's not that I think Terraform or Ansible are exactly terrible: the configuration management problems that these pieces of software are trying to solve are real and actually terrible, and having tools that help regularize the problem of configuration management definitely improves things. And yet the tools leave a bit to be desired.

Why care so much about configuration management?

Configuration matters because every application needs some kind of configuration: a way to connect to a database (or similar), a place to store its output, and inevitably other things, like dependencies, feature flags, and so on.

And that's the simple case. While most things are probably roughly simple, it's very easy to have requirements that go beyond this a bit, and it turns out that while a development team might--but only might--avoid having requirements that qualify as "weird," every organization has something.

As a developer, configuration and deployment often matter a great deal, and it's pretty common to need to make changes in this area of the code. While it's possible to architect things so that configuration can be managed within an application (say), this takes longer and isn't always easy to implement, and if your application requires escalated permissions, or needs a system configuration value set, then it's easy to get stuck.

And there's no real way to avoid it: If you don't have a good way to manage configuration state, then infrastructure becomes bespoke and fragile: this is bad. Sometimes people suggest using image-based distribution (so called "immutable infrastructure,") but this tends to be slow (images are large and can take a while to build,) and you still have to capture configuration in some way.

But how did we get here?

I think I could weave a really convincing, and likely true, story about the discipline of system administration and software operations in general and its history, but rather than go overboard, I think the following factors are pretty important:

  • computers used to be very expensive and difficult to operate, so it made sense to have people who were primarily responsible for operating them, and this role has more or less persisted forever.
  • service disruptions can be very expensive, so it's useful for organizations to have people who are responsible for "keeping the lights on" and troubleshooting operational problems when things go wrong.
  • most computer systems depend on state of some kind--files on disks, the data in databases--and managing that state can be quite delicate.
  • recent trends in computing make it possible to manipulate infrastructure--computers themselves, storage devices, networks--with code, which means we have this unfortunate dualism of infrastructure, where it's kind of code but also kind of data, and it feels hard to know what the right thing to do is.

Why not just use <xyz>

This isn't fair, really, and you know it's gonna be good when someone trivializes an adjacent problem domain with a question like this, but this is my post so you must endure it, because the idea that there's another technology or way of framing the problem that makes this better is incredibly persistent.

Usually <xyz>, in recent years, has been "Kubernetes" or "docker" or "containers," but it sort of doesn't matter; in the past the proposed solutions were platforms-as-a-service (e.g. AppEngine/etc.) or backends-as-a-service (e.g. Parse/etc.). So let's run down some answers:

  • "bake configuration into the container/virtual machine/etc. and then you won't have state," is a good idea, except it means that if you need to change configuration very quickly it becomes quite hard, because you have to rebuild and deploy an image, which can take a long time; and then there are the problems of how you get secrets like credentials into the service.
  • "use a service for your platform needs," is a good solution, except that it can be pretty inflexible, particularly if you have an application that wasn't designed for the service, or need to use some kind of off-the-shelf (a message bus, a cache, etc.) service or tool that wasn't designed to run in this kind of environment. It's also the case that the hard cost of using platforms-as-a-service can be pretty high.
  • "serverless" approaches have something of a bootstrapping problem: how do you manage the configuration of the provider? How do you get secrets into the execution units?

What's so terrible about these tools?

  • The tools can't decide if configuration should be described programmatically, using general purpose programming languages and frameworks (e.g. Chef, many deployment tools), or using some kind of declarative structured format (Puppet, Ansible), or some kind of ungodly hybrid (e.g. Helm, anything with HCL). I'm not sure that there's a good answer here. I like being able to write code, and I think YAML-based DSLs aren't great; but capturing configuration in code creates a huge amount of difficult-to-test code. Regardless, you need to find ways of testing the code inexpensively, and doing this in a way that's useful can be hard.
  • Many tools are opinionated and have strong idioms in hopes of making infrastructure more regular and easier to reason about. This is cool and a good idea, but it makes it harder to generalize. While concepts like immutability and idempotency are great properties for configuration systems to have, they're difficult to enforce, so maybe developing patterns and systems that have weaker opinions that are easy to comply with, and idioms that can be applied iteratively, would be useful.
  • Tools are willing to do things to your systems that you'd never do by hand, including a number of destructive operations (terraform is particularly guilty of this), which erodes our trust in them and inspires otherwise bored ops folks to write (or recapitulate) their own systems, which is why so many different configuration management tools emerge.

Maybe the tools aren't actually terrible; maybe the organizational factors that lead to the entrenchment of operations teams (incumbency, incomplete cost analysis, difficult-to-meet stability requirements,) lead to the entrenchment of the kinds of processes that require tools like this (though causality could easily flow in the opposite direction, with the same effect.)

API Ergonomics

I touched on the idea of API ergonomics in Values for Collaborative Codebases, but I think the topic is due a bit more exploration. Typically you think about an API as being "safe," "functionally complete," or "easy to use," but "ergonomic" is a bit far afield from the standard way that people think and talk about APIs (in my experience.)

I think part of the confusion is that "API" gets used in a couple of different contexts, but let's say that an API here is the collection of nouns (types, structures) and verbs (methods, functions) used to interact with a concept (hardware, library, service). APIs can be conceptually really large (e.g. all of a database, a public service) or quite small, exposing only a few simple methods (e.g. a data serialization library, or some kind of hashing process.) I think some of the confusion is also that people use the term API to refer to the ways that services expose data (e.g. REST, etc.), and while I have no objection to this formulation, service API design and class or library API design feel like related but different problems.

Ergonomics, then, is really about making choices in the design of an API, so that:

  • functionality is discoverable during programming. If you're writing in a language with good code completion tools, make sure methods and functions are located and named in a way that takes advantage of completion. Chainable APIs are awesome for this.
  • functions and arguments have clear names that describe your intent and their use.
  • types imply semantic intent. If your programming language has a sense of mutability (e.g. passing references versus concrete types in Go, or const (for all its failings) in C++), make sure you use these markers both to enforce correct behavior and to communicate intent.
  • do whatever you can to encourage appropriate use and discourage inappropriate use, by taking advantage of encapsulation features (interfaces, non-exported/private functions, etc.), and passing data into and out of the API with strongly/explicitly-typed objects (e.g. return POD classes, or enumerated values or similar rather than numeric or string types.)
  • reduce the complexity of the surface area by exporting the smallest reasonable API, and also avoiding ambiguous situations, as with functions that take more than one argument of a given type, which leads to cases where users can easily (and legally) do the wrong thing.
  • the API is safe: remove or reduce the API's use of global state, and be explicit about whatever remains. Avoid providing APIs that are not thread safe. Avoid throwing exceptions (or equivalents) in your API that you expect users to handle. If users pass nil pointers into an API, it's OK to throw an exception (or let the runtime do it,) but there shouldn't be exceptions that originate in your code that need to be handled outside of it. (A sketch of some of these choices follows this list.)
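
To make this concrete, here's a minimal sketch in Go; the retry package, its types, and its option names are hypothetical, invented for illustration rather than taken from any real library:

    // Package retry sketches an ergonomic API: a small exported
    // surface, an options struct instead of several same-typed
    // positional arguments, and errors returned rather than thrown.
    package retry

    import (
        "context"
        "time"
    )

    // Backoff is an enumerated type rather than a bare int, so call
    // sites communicate intent and can't pass arbitrary values.
    type Backoff int

    const (
        Constant Backoff = iota
        Exponential
    )

    // Options gathers configuration into named fields, which keeps
    // call sites readable and avoids argument-ordering mistakes.
    type Options struct {
        Attempts int
        Delay    time.Duration
        Backoff  Backoff
    }

    // Do retries op until it succeeds or attempts are exhausted.
    // Cancellation comes from the caller's context rather than
    // hidden global state, and the last error is returned, not thrown.
    func Do(ctx context.Context, opts Options, op func(context.Context) error) error {
        if opts.Attempts < 1 {
            opts.Attempts = 1
        }
        delay := opts.Delay
        var err error
        for i := 0; i < opts.Attempts; i++ {
            if err = op(ctx); err == nil {
                return nil
            }
            if i == opts.Attempts-1 {
                break // no need to wait after the final attempt
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(delay):
            }
            if opts.Backoff == Exponential {
                delay *= 2
            }
        }
        return err
    }

A call site then reads almost like a sentence: retry.Do(ctx, retry.Options{Attempts: 3, Delay: time.Second, Backoff: retry.Exponential}, fetch).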

Ergonomic interfaces feel good to use, but they also improve quality across the ecosystem of connected products.

Easy Mode, Hard Mode

I've been thinking recently about the way that we organize software development projects. I've been using this model of "hard mode" vs. "easy mode" a bit, and thought it might be useful to expound upon it.

Often users or business interests come to software developers and make a feature request: "I want it to be possible to do this thing faster, or in combination with another operation, or to avoid this class of errors." Sometimes (often!) these are reasonable requests, and sometimes they're even easy to do, but sometimes a seemingly innocuous feature or improvement is really hard; sometimes engineering work just requires hard work. This isn't really a problem, and hard work can be quite interesting. It is, perhaps, an indication of an architectural flaw when many or most easy requests require disproportionately hard work.

It's also the case that it's possible to frame the problem in ways that make the work of developing software easier or harder. Breaking problems into smaller constituent problems makes them easier to deal with. Improving the quality of the abstractions and testing infrastructure around a problematic area of code makes it easier to make changes to that area later.

I've definitely been on projects where the only way to develop features and make improvements is to have a large amount of experience with the problem domain and the codebase, and where engineers have to spend a lot of concentrated time building features while fighting against the state of the code and its context. This is writing software in "hard mode": not only is the work harder than it needs to be, features take longer to develop than users would like. This mode of development makes it very hard to find and retain engineers, because of the long ramping period and the consistently frustrating nature of the work. That frustration is often compounded by the expectation or assumption that easy requests are easy to produce.

In some ways the theme of my engineering career has been taking "hard mode projects" and reducing the barriers to entry in codebases and projects so that they become more "easy mode projects": changing the organization of the code, adding abstractions that make it easier to develop meaningful features without rippling effects in other parts of the code, improving operational observability to facilitate debugging, and restructuring project infrastructure to reduce development friction. In general, I think of the hallmarks of "easy mode" projects as:

  • abstractions and library functions exist for common tasks. For most pieces of "internet infrastructure" (network attached services,) developers should be able to add behavior without needing to deal with the nitty gritty of thread pools or socket abstractions (say.) If you're adding a new REST request, you should be able to just write business logic and not need to think about the application's threading model (there's a sketch of this idea after this list.) If something happens often (say, retrying failed requests against an upstream API,) you should be able to rely on an existing tool to orchestrate retries.
  • APIs and tools are safe and ergonomic. Developers writing code in your project should be able to call into existing APIs and trust that they behave reasonably and handle errors reasonably. This means methods should do what they say, and exported/public interfaces should be difficult to use improperly (e.g. with expected exception handling/safety, as well as thread safety and nil semantics as appropriate.) While it's useful to interact with external APIs defensively, you can reduce the amount of effort by being less defensive for internal/proximal APIs.
  • Well supported code and operational infrastructure. It should be easy to deploy and test changes to the software, the tests should run quickly, and when there's a problem there should be a limited number of places that you could look to figure out what's happening. Making tests more reliable, improving error reporting and tracing, and exposing more information to metrics systems all make the behavior of the system easier to understand in the long term.
  • Changes are scoped to support incremental development. While there's lots of core technical and code infrastructure work that supports making projects more "easy mode," a lot of this is about the way that development teams decide to structure projects. This isn't technical, usually, but has more to do with planning cadences, release cadences, and scoping practices. There are easier and harder ways of making changes, and it's often worthwhile to ask yourself "could we make this easier?" The answer, I've found, is often "yes."
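
As a minimal sketch of that first hallmark, in Go, with the standard library standing in for whatever framework a real project would have (the endpoint and type names are invented): adding an endpoint is one function plus one registration, and the handler is pure business logic.

    package main

    import (
        "encoding/json"
        "net/http"
    )

    type greeting struct {
        Message string `json:"message"`
    }

    // handleGreet contains only business logic; net/http runs each
    // request on its own goroutine, so there's no threading model or
    // socket handling for the feature author to think about.
    func handleGreet(w http.ResponseWriter, r *http.Request) {
        name := r.URL.Query().Get("name")
        if name == "" {
            name = "world"
        }
        json.NewEncoder(w).Encode(greeting{Message: "hello, " + name})
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/greet", handleGreet) // the entire integration
        http.ListenAndServe(":8080", mux)
    }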

Moving a project from hard to easy mode is often in large part about investing in managing technical debt, but it's also a choice: we can prioritize things to make our projects easier, we can make small changes to the way we approach specific projects that all move projects toward being easier. The first step is always that choice.

The Org Mode Product

As a degenerate emacs user, as it were, I have of course used org-mode a lot, and indeed it's probably the mode I end up doing a huge amount of my editing in, because it's great with text and I end up writing a lot of text. I'm not, really, an org-mode user in the sense that it's not the system or tool that I use to stay organized, I haven't really done much development of my own tooling or process around using org-mode to handle document production, and honestly, most of the time I use reStructuredText as my preferred lightweight markup language.

I was thinking, though, as I was pondering ox-leanpub: what even is org-mode trying to do, and what the hell would a product manager do if faced with org-mode?

In some ways, I think it sucks the air out of the fun of hacking on things like emacs to bring all of the "professionalization of making software" to things like org-mode, so please trust that this is meant with a lot of affection for org-mode: this is a thought experiment.

Org has a lot going on:

  • it provides a set of compelling tools for interacting with hierarchical human-language documents.
  • it's a document markup and structure system,
  • the table editing features are, given the ability to write formulas in lisp, basically a spreadsheet (see the small example after this list.)
  • it's a literate programming environment, (babel)
  • it's a document preparation system, (ox)
  • it's a task manager, (agenda)
  • it's a time tracking system,
  • it even has pretty extensive calendar management tools.
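
To give a flavor of the spreadsheet point, this is ordinary org table syntax with a formula line (the data is invented); with point on the #+TBLFM line, C-c C-c recomputes the total column:

    | item   | qty | price | total |
    |--------+-----+-------+-------|
    | paper  |   2 |  3.50 |  7.00 |
    | stamps |  10 |  0.60 |  6.00 |
    #+TBLFM: $4=$2*$3;%.2f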

Perhaps the thing that's most exciting about org-mode is that it provides functionality for all of these kinds of tasks in a single product so you don't have to bounce between lots of different tools to do all of these things.

It's got most of the "office" suite covered, and I think (particularly for new people, but also for people like me,) it's not clear why you would want your task system, your notes system, and your document preparation system to all have their data intermingled in the same set of files. The feeling is a bit unfocused.

The reason for this makes sense, historically: org-mode grew out of the needs of technically minded academics, who were mostly using it as a way of preparing papers, who end up being responsible for a lot of the structuring of their own time/work, but who do most of their work alone. With this kind of user story in mind, the gestalt of org-mode really comes together as a product, but otherwise it's definitely a bit all over the place.

I don't think this is bad, and particularly given its history, it's easy to understand why things are the way they are, but I think that it is useful to take a step back and think about the workflow that org supports and inspires, while also not forgetting the kinds of workflows that it precludes, and the ways that org, itself, can create a lot of conceptual overhead.

There are also some gaps in org, as a product, which I think grow out of this history. They are, to my mind:

  • importing data, and bidirectional sync. These are really hard problems, and there have been some decent projects over the years to help get data into org--org-trello is the best example I can think of--but it can be a little dodgy, and the "import story" pales in comparison to the export story. It would be great if:
    • you could use the org interface to interact with and manipulate data that isn't actually in org-files, or at least where the system-of-record for the data isn't org. Google docs? Files in other formats?
  • collaborating with other people. Org-mode files tend to cope really poorly with multiple people editing them at the same time (even asynchronously, as with git,) and also with cases where not everyone uses org-mode. One of the side effects of having the implementation of org features so deeply tied to the structure of text in the org format is that it becomes hard to interact with org data outside of emacs (again, you can understand why this happens, and it's indeed very lispy,) which means you have to use emacs, and use org, if you want to collaborate on projects that use org.
    • this might look like some kind of different diff-drivers for git, in addition to some other more novel tools.
    • bi-directional sync might also help with this issue.
  • beyond the agenda, building a story for content that spans multiple files. Because the files are hierarchical, and org provides a great deal of taxonomic indexing features, you really never need more than one org-file, ever; but it's also kind of wild to just keep everything in one file, so you end up with lots of org-files, and while the agenda provides a way to filter out the task and calendar data, it's sort of unclear how to manage multi-file setups for some of the other use cases. It's also the case that, because you can inject some configuration at the file level, it can be easy to get stuck.
  • tools for interacting with org content without (interactive or apparent) emacs. While I love emacs as much as the next nerd, I tend to think that having a dependency on emacs is hard to stomach, particularly for collaborative efforts (though with docker and the increasing size of various runtimes, this may be less relevant.) What if it were trivially easy to write build processes that extracted or ran babel programs without needing to run from within emacs? What if there were an org-export CLI tool?
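
For what it's worth, you can approximate that CLI today by driving emacs itself in batch mode; a rough sketch (notes.org is a hypothetical file, and this assumes org is installed):

    emacs --batch --eval "(require 'ox-html)" notes.org -f org-html-export-to-html

It works, but it drags all of emacs along with it, which is rather the point of the complaint.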

Docker Isn't A Build System

I can't quite decide if this title is ironic or not. I've been trying really hard to not be a build system guy, and I think I'm succeeding--mostly--but sometimes things come back at us. I may still be smarting from people proposing "just using docker" to solve any number of pretty irrelevant problems.

The thing is that docker does help solve many build problems: before docker, you had to write code that supported any possible execution environment, which was a lot of work and generally really annoying. Because docker provides a (potentially) really stable execution environment, it can make a lot of sense to do your building inside of a docker container, in the same way that folks often do builds in chroot environments (or at least did). Really, containers are kind of super-chroots, and it's a great thing to be able to give your development team a common starting point for doing development work. This is cool.

It's also the case that Docker makes a lot of sense as a de facto standard distribution or deployment form, and in this way it's kind of a really fat binary. Maybe it's too big, maybe it's the wrong tool, maybe it's not perfect, but for a lot of applications they'll end up running in containers anyway, and treating a docker container like your executable format makes it possible to avoid running into issues that only appear in one (set) of environments.

At the same time, I think it's important to keep these use-cases separate: try to avoid using the same container for deploying that you use for development, or even for build systems. This is good because "running the [deployment] container" shouldn't build software, and it'll also limit the size of your production containers, and avoid unintentionally picking up dependencies. This is, of course, less clear in runtimes that don't have a strong "compiled artifacts" story, but is still possible.

There are some notes/caveats:

  • Dockerfiles are actually kind of build systems, and under the hood they're just snapshotting the diffs of the filesystem between each step. So they work best if you treat them like build systems: make the steps discrete and small, keep the stable deterministic things early in the build, and push the more ephemeral steps later in the build to prevent unnecessary rebuilding.
  • "Build in one container and deploy in another" requires moving artifacts between containers, or being able to run docker-in-docker; both are possible, but may be less obvious than some other workflows (multi-stage builds help here; there's a sketch after this list.)
  • Docker's "build system qualities" can improve the noop and rebuild performance of some operations (e.g. the amount of time it takes to rebuild things if you've just built or have only made small changes,) which can be a good measure of the friction that developers experience, because of the way that docker can share/cache layers between builds. This is often at the expense of making artifacts huge and of greatly expanding the amount of time that the operations can take. This might be a reasonable tradeoff to make, but it's still a tradeoff.
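
As a sketch of keeping the build and deploy use-cases separate, here's a multi-stage Dockerfile for a hypothetical Go application (the paths and base images are illustrative): the build stage carries the whole toolchain, while the deploy stage ships only the compiled artifact.

    # build stage: has the compiler, never ships to production
    FROM golang:1.21 AS build
    WORKDIR /src
    # copy dependency manifests first so this layer caches well
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

    # deploy stage: just the artifact, no toolchain or sources
    FROM gcr.io/distroless/static-debian12
    COPY --from=build /out/app /app
    ENTRYPOINT ["/app"]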

My Code Review Comments

I end up reviewing a lot of code, and while doing code review (and getting reviews) used to take up a lot of my time, I think I've gotten better at doing reviews, and at knowing what's important to comment on and what isn't:

  • The code review process is not there to discover bugs. Write tests to catch bugs, and use the code review process to learn about a specific change, and find things that are difficult to test for.
  • Ask yourself if something is difficult to follow, and comment on that. If you can't figure out what something is doing, or you have to read it more than once, then that's probably a problem.
  • Examine and comment on the naming of functions. Does the function appear to do what the name indicates?
  • Think about the interface of a piece of code:
    • What's exported or public?
    • How many arguments do your functions take?
  • Look for any kind of shared state between functions, particularly data that's mutable or stored in globally accessible or package-local structures.
  • Focus your time on the production-facing, public code, and less on things like tests and private/unexported APIs. Tests are important, and it's important that there's good test coverage (authors should use coverage tooling to check this) and efficient test execution, but beyond those high-level aspects you can mostly keep moving.
  • Put yourself in the shoes of someone who might need to debug this code and think about logging as well as error handling and reporting.
  • Push yourself and others to get very small pieces of code reviewed at a time. Shorter reviews are easier to process, and while it's annoying to break a logical component into a bunch of pieces, it's definitely worth it.

Values for Collaborative Codebases

After my post on what I do for work, I thought it'd be good to describe the kinds of things that make software easy to work on and collaborative. Here's the list:

  1. Minimal Documentation. As a former technical writer this is sort of painful, but most programming environments (e.g. languages) have idioms and patterns that you can follow for how to organize code, run tests, and build artifacts. It's ok if your project has exceptional requirements that require you to break the rules in some way, but the conventions should be obvious and the justification for rule-breaking should be plain. If you adhere to convention, you don't need as much documentation. It's paradoxical, because better documentation is supposed to facilitate accessibility, but too much documentation is sometimes an indication that things are weird and hard.
  2. Make it Easy To Run. I'd argue that the most difficult (and frustrating) part of writing software is getting it to run everywhere that you might want it to run. Writing software that runs, and even does what you want, on your own computer is relatively easy: making it work on someone else's computer is hard. One of the big advantages of developing software that runs as web apps is that you (mostly) get to control (and limit) where the software runs. Make it possible to easily run a piece of software on any computer it might reasonably run on (e.g. developers' computers, users' computers, and/or production environments.) Your software itself should be responsible for managing this environment, to the greatest extent possible. This isn't to say that you need to use containers or some such, but having packaging and install scripts that use well understood idioms and tools (e.g. requirements.txt, virtualenv, makefiles, etc.) is good (there's a small sketch of this after the list.)
  3. Clear Dependencies. Software is all built upon other pieces of software, and the versions of those libraries are important to record, capture, and recreate. Now, it's generally a good idea to update dependencies regularly so you can take advantage of improvements from upstream providers, but unless you regularly test against multiple versions of your dependencies (you don't), and can control all of your developer and production environments totally (you can't), you should provide a single, centralized way of describing your dependencies. Typical strategies, like vendoring dependencies, using lockfiles (requirements.txt and similar), or build system integration, tend to help organize this aspect of a project.
  4. Automated Tests. Software requires some kind of testing to ensure that it has the intended behavior, and tests that can run quickly and automatically, without requiring users to exercise the software manually, are absolutely essential. Tests should run quickly, and it should be possible to run a small group of tests very quickly to support iterative development on a specific area of the code. Indeed, most software development can and should be oriented toward the experience of writing tests and exercising new features with tests above pretty much everything else. The best test suites exercise the code at many levels, ranging from very small unit tests that ensure the correct behavior of functions and methods, to higher level tests of higher-order functions and methods, to full integration tests that test the entire system.
  5. Continuous Integration. Continuous integration systems are tools that support development and ensure that changes to the code pass a more extensive range of tests than are readily available to developers. CI systems are also useful for automating other aspects of a project (releases, performance testing, etc.) A well maintained CI environment provides a basis of commonality for larger projects and larger groups of developers, and is all but required to ensure a well supported automated test system and well managed dependencies.
  6. Files and Directories Make Sense. Code is just text in files, and software is just a lot of code. Different languages and frameworks have different expectations about how code is organized, but you should be able to get a basic understanding of the software by browsing its files: you should be able to mostly understand what components are and how they relate to each other based on directory names, and (roughly) understand what's in a file and how the files relate to each other based on their names. In this respect, shorter files, when possible, are nice, as are directory structures with limited depth (wide and shallow,) though there are exceptions for some programming languages.
  7. Isolate Components and Provide Internal APIs. Often when we talk about APIs we talk about the interface through which users access our software, but larger systems have the same need for abstractions and interfaces internally that we expose for (programmatic) users. These APIs have different forms from public ones (sometimes,) but in general:
    • Safe APIs. The APIs should be easy to use and difficult to use incorrectly. This isn't just "make your API thread safe if your users are multithreaded," but also, reduce the possibility for unintended side effects, and avoid calling conventions that are easy to mistake: effects of ordering, positional arguments, larger numbers of arguments, and complicated state management.
    • Good API Ergonomics. The ergonomics of an API are somewhat ethereal, but it's clear when an API has good ergonomics: writing code that uses the API feels "native," it's easy to look at calling code and understand what's going on, and errors make sense and are easy to handle. It's not enough for an API simply to be safe to use; it should also be straightforward and clear.
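
On the "make it easy to run" point above, here's a minimal sketch of what those well understood idioms can look like for a hypothetical Python project (the myapp module is invented; recipe lines in a real makefile are tab-indented):

    .PHONY: setup test run

    setup: ## create an isolated environment and install dependencies
        python3 -m venv .venv
        .venv/bin/pip install -r requirements.txt

    test: setup ## run the test suite locally
        .venv/bin/python -m pytest

    run: setup ## run the application in the managed environment
        .venv/bin/python -m myapp

Anyone can check out the repository and run make test without reading any documentation first.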

How to Become a Self-Taught Programmer

i am a self taught programmer. i don't know that i'd recommend it to anyone else: there are so many different curricula and training programs that are well tested and very efficacious. for lots of historical reasons, many programmers end up being all or mostly self taught: in the early days because programming was vocational and people learned on the job, then because people learned programming on their own before entering cs programs, and more recently because of the demand for programmers (and the rate of change) in the kinds of programming work that are most in demand these days. and knowing that it's possible (and potentially cheaper) to teach yourself, it seems like a tempting option.

this post, then, is a collection of suggestions, guidelines, and pointers for anyone attempting to teach themselves to program:

  • focus on learning one thing (programming language and problem domain) at a time. there are so many different things you could learn, and people who know how to program seem to have an endless knowledge of different things. knowing one set of tools and one area (e.g. "web development in javascript," or "system administration in python,") gives you the framework to expand later, and the truth is that you'll be able to learn additional things more easily once you have a framework to build upon.

  • when learning something in programming, always start with a goal. have some piece of data that you want to explore or visualize, have a set of files that you want to organize, or something else that you want to accomplish. learning how to program without a goal means that you don't end up asking the kinds of questions that you need to form the right kinds of associations.

  • programming is all about learning different things: if you end up programming for long enough you'll end up learning different languages, and being able to pick up new things is the real skill.

  • being able to clearly capture what you were thinking when you wrote code is basically a programming super power.

  • programming is about understanding problems [1] abstractly and building abstractions around various kinds of problems. being able to break these problems apart into smaller core issues, and thinking abstractly about the problem so that you can solve both the problem in front of you and related problems in the future, are crucial skills.

  • collect questions and curiosities as you encounter them, and use them to guide your background reading, but don't feel like you have to understand everything immediately, or hunt down the answer to every term you hear or see that you don't already know. if you're pretty rigorous about going back and looking things up, you'll build a pretty good base of knowledge over time.

  • always think about the users of your software as you build, at every level. even if you're building software for your own use, think about the future version of yourself that will use that software, imagine that other people might use the interfaces and functions that you write and think about what assumptions they might bring to the table. think about the output that your program, script, or function produces, and how someone would interact with that output.

  • think about the function as the fundamental building block of your software. lower level forms (i.e. statements) are required, but functions are the unit where meaning is created in the context of a program. functions, or methods depending, take input (arguments, usually, but sometimes also an object in the case of methods) and produce some output, sometimes with some kind of side-effect. the best functions (there's a small example below):

    • clearly indicate side-effects when possible.
    • have a mechanism for reporting on error conditions (exceptions, return values,)
    • avoid dependencies on external state, beyond what is passed as arguments.
    • are as short as possible.
    • use names that clearly describe the behavior and operations of the function.

    if programming were human language (english,) you'd strive to construct functions that were simple sentences and not paragraphs, but also more than a couple of words/phrases, and you would want these sentences to be clear to understand with limited context. if you have good functions, interfaces are clearer and easier to use, code becomes easier to read and debug, and easier to test.
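
a small example of these properties, in go (the names are invented for illustration): the function depends only on its argument, reports errors explicitly, and has a name that describes what it does.

    package wordstat

    import (
        "errors"
        "strings"
    )

    // CountUniqueWords reports the number of distinct
    // whitespace-separated words in text. it has no side effects and
    // no hidden state, and bad input produces an error, not a panic.
    func CountUniqueWords(text string) (int, error) {
        if strings.TrimSpace(text) == "" {
            return 0, errors.New("wordstat: empty input")
        }
        seen := make(map[string]struct{})
        for _, w := range strings.Fields(text) {
            seen[strings.ToLower(w)] = struct{}{}
        }
        return len(seen), nil
    }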

  • avoid being too weird. many programmers are total nerds, and you may be too, and that's ok, but it's easier to learn how to do something if there's prior art that you can learn from and copy. on a day-to-day basis, a lot of programming work is just doing something until you get stuck and then googling for the answer. if you're doing something weird--using a programming language that's less widely used, or working in a problem space that's a bit out of the mainstream--it can be harder to find answers to your problems.

Notes

[1] I use the term "problem" to cover both things like "connecting two components internally" and also more broadly "satisfying a requirement for users," and programming often includes both of these kinds of work.