Crafts at Scale

I've been knitting off and on since 2002 or 2003 (or so), but have been particularly "on" in the last couple of years. When I started working as a computer programmer (without formal training as such), I quipped that I learned how to program from hand knitting. This is a simplification, of course, but it's not that far off. Knitting is a system with some basic fundamentals (stitches, yarn, needles), a lot of variables (gauge, tension), repeated procedures, and a hell of a lot of prior art. This is a lot like programming.

Spinning, too, has many of the same properties, but similarities aside they feel like different kinds of crafts: where knitting feels like applying a set of understood procedures to produce something unique, spinning is often about figuring out how to apply the same procedures in a way that produces consistent results. This makes sense: you want to produce a quantity of yarn that's on average similar enough that when you knit (or I suppose weave) you get good, consistent results. In many ways, spinning leads naturally to an idea of "production" or "scale" as an aspect of craft.

Just to be clear, these kinds of crafts should be fun and rewarding on their own merits. If you want to spin and are excited and happy to make and have yarn with variable thickness, or where every skein is unique, then do that. For me, particularly now, I find the problem of figuring out how to be consistent while spinning a couple of pounds of wool over the course of a few weeks to be really exciting and entrancing.

The kind of knitting that I've been doing recently has had some of these production/scale aspects as well: knitting with very similar white yarns removes color and minimizes texture as a variable. While I've been knitting roughly the same sock at production scale, the sweaters I've been working on have some bespoke aspects, though the process is broadly similar. There's something so compelling about being able to understand my craft and procedure so thoroughly that I can make things that aren't wonky with confidence.

Programming is also very much like this for me these days. I spent years as a programmer trying to figure out how code worked, and how basic fundamental systems and protocols worked (e.g. webservers, Linux, databases), and now I know how to build most things, or feel confident in my ability to figure out how to build a new thing when needed. The exciting part of software engineering is now more about making the software work at large scale, the processes that allow increasingly large teams of engineers to work together effectively, and being able to figure out the right solution for the problem users have.

I'm currently somewhere on the 7th 100 gram skein of approximately worsted weight, 3-ply merino yarn. My consistency isn't quite where I want it, but if you look at all of the skeins they seem roughly related, so I think I'll be able to make a sweater easily from it. I have two more skeins after this one. My plan from here is to alternate spinning batches of white yarn with spinning batches of not-white/natural colored wool for variety. Probably mostly 3 ply for now, though I may give 2 ply a go for one of them.

I'm knitting a white seamless-style sweater, using Elizabeth Zimmermann's method for bottom-up sweaters. I've changed many of the numbers and some of the proportions, but nothing particularly fundamental about the process. I've knit 3 sweaters back to back with this same process, though this is the first with this specific yarn. I do have enough of this yarn to knit 3 or 4 sweaters, which I find both daunting and exciting, taken as a whole. With the sleeves done, I'm about halfway to the underarms on the body. I want to try knitting a saddle shouldered garment for this one.

Why Spinning?

A few years ago, I sent my spinning wheel away because I was living in a very small apartment with two very attentive cats. While I've been living in an apartment with more room (and doors!) for a few years now, only this week has my wheel returned: I realized that I missed spinning, and it's not like soothing hobbies are unwelcome these days.

I started spinning about 15 years ago, and did it a bunch for a few years and then more or less stopped for a long time. It's been interesting to start up again, and discover that my hands/body more or less remembered exactly how to do it. I had a few hours and about 200g of fiber to spin before some of the finer points came back, and now I've spun a couple more skeins closer to my intention.

The other human asked "What do you like about spinning?"--well the question was phrased more like "is handspun yarn better?", but I will paraphrase to better capture intent. There are, of course, a few answers:

  • the act of spinning is quite satisfying. Sometimes it's enough for things to be fun and satisfying even if they aren't productive.
  • the yarn can be sort of nifty, and although I've spun a lot of yarn, I have mostly not knit much with handspun yarn. I tend to like consistent and fine (fingering) yarns in my own knitting, and machines just do better at making this kind of yarn, so I end up giving a lot of handspun away to friends who I know will knit it better.
  • spinning gives you a lot of control over the wool (and kind of sheep) that go into the yarn you get, in a way that just doesn't scale up to larger production schemes. I quite enjoy being able to first select what kind of sheep the wool I use comes from and then decide what kind of yarn I want from it. When other people spin, you can usually only pick one of these variables.

I'm currently spinning some white merino roving that I've (apparently) had for years. There's a piece of paper in the bag that says "2 lbs," but between my practice skeins and whatever I did before I stopped, there's probably only about a pound and a half left: this is fine. Merino is great, but it's quite common and I knit a lot of merino. I've been working on getting a pretty stable 3-ply worsted weight yarn, and I'm roughly there. I like 3-ply because of the round construction, and worsted weight is about the heaviest yarn I'm really interested in knitting with or using (and it's easy to design with/for!)

My next few spinning projects are with wool from different breeds of sheep (BFL! Targhee! Rambouillet!), though mostly undyed (and largely white), and mostly in larger batches (a pound or two.) I've never really gotten into hand-dyed roving, and mostly really enjoy spinning undyed wool: in most cases dyeing the finished garment or the yarn before knitting leads to the best result anyway. The thing I like about spinning, in a lot of ways, is that it lets me focus on the wool and the sheep.

As a spinner, I'm far more interested in the wool and the sheep, in much the same way that as a knitter I've become far more interested in the structure of what I'm knitting than the color or the yarn. This feels entirely consistent to me: as a spinner I'm far more interested in the process and the wool than I am in yarn, and as a knitter I'm far more interested in using the yarn to explore the structure. Somehow, the yarn itself isn't the thing that compels me, despite being kind of at the center of the process.

Anyway, back to the wheel!

The Most Plain Knitting

Last night I finished knitting a sweater that I'd been working on for either a while (pictures in this twitter thread) or not all that long, and promptly started the next sweater.

Also last weekend I handed off a bag of undyed (white) knitting to a friend of mine who is way more excited about dyeing than I am. This includes 13 or 14 pairs of socks (in a few different batches,) and a sweater that I knit. We also found someone who the sweater is more likely to fit than me, and I always quite like finding homes for wayward sweaters.

I have a couple of long flights for work trips coming up so I wanted to make sure that I wasn't bringing a sweater that I was two-thirds of the way through and would likely finish. The next sweater is the 4th I've made from this yarn, and the 3rd plain sweater. I've made two plain raglans, and this last one was a crew neck.

By now I have a reasonable set of numbers/patterns for a "fingering weight sweater that basically fits an adult medium/small" that I've been honing, and enough yarn stashed to make about 9 of these sweaters. That should get me through the winter.

The crew neck is a touch lower than I think it needed to be, but it looks pretty smart. The thing about knitting Elizabeth Zimmermann-style seamless sweaters is that for the entire time you're knitting the yoke section it really does not seem like it's going to work out, so you low-key panic the entire time, and then somehow, magically, it all does. The key to success is to not overthink things too much and not fuck around.

I think this last sweater had a bit too much fucking around with the neckline, so it looks a bit weird (to my somewhat exacting tastes) where the raglan decreases interact with the neck shaping. The front of the neck could have been higher, and I think I could have done like 3-4 sets of short rows near the end to get the right effect for the front. Perhaps one of the next few sweaters can be another attempt at a raglan.

My plan for the next/current sweater is to do set-in sleeves with a crew neck. I have the math all worked out, so that seems like it might be fun. I've also never done EZ's saddle shoulder (or hybrid) yoke, so that seems like some fun winter knitting. Regardless, saddles and set-in sleeves are mostly constructed the same way, so I can wait quite a while to make a decision. After about 18 months of mostly knitting socks (and having gotten ~30 pairs done,) a (minor) change seems good.

Isolation Reading

I spent a few days last week isolating in a friend's apartment in a (mostly) unfamiliar neighborhood, after attending a larger social event, and I got to spend those days enjoying (a dear friend's) book collection.

I don't have many paper books left: enough moves and small New York City apartments, combined with a vague personal preference for e-ink, have left me with only about 100 books, but I do sometimes enjoy reading paper books when I'm visiting someone else. My perfect vacation has always been some combination of "drinking too much coffee and reading books," and given that I'm kind of in an in-between moment job-wise right now, this was actually pretty much perfect.

I started out the week reading Slouching Towards Bethlehem (Joan Didion) and finished it reading the first half of The God of Small Things (Arundhati Roy). It was pretty much everything.

I've always been an admirer of Didion, but I've never read Bethlehem, and I've meant to sort of spend a few years trawling luxuriously through her backlist, but hadn't gotten around to it. The writing is perfect in exactly the sort of austere but precise way that I've come to expect. I'm doubly impressed that she was so young when these essays were published.

I had in my mind that this was a book that was an account of the state of counter-culture in the 60s, and the title essay definitely is that, but having read the entire book over the course of a few days, I'm left with the impression that this book is really a big "why I left New York City in my late 20s" essay combined with a love letter to California from a returning native child, who remembers "the (really) old California" and what is by now "the (simply) old California".

The "why I left New York City in my late 20s" story is pretty familiar, and it's actually nice to see, now 60 years on, that people coming to New York in their 20s and then burning out or not figuring out how to be in New York sustainably is a very old story indeed. I'm also of course, heartened that she returned to the city for the last 25 years of her life. I hope that this also proves to be an enduring pattern for my generation.

I was also struck by the way that the reflection (and really, critique) of the counter-culture managed to be very early but also consistent with what a lot of people would say later. To my eyes, the content isn't particularly surprising, but the date is a bit.

The God of Small Things is, of course, lush in all the ways that Slouching is austere. Almost provoking whiplash.

I typically find these sorts of lush non-linear books to be a bit Extra. Lovely, to be sure, but the lushness and non-linearity can so distract from the plot or the characters or the impact. Lush and non-linear prose has also started to feel faddish and, at least for me, a signifier of a certain kind of academic/"art school" approach to prose. This is not true at all of Small Things: the story directly and explicitly explores childhood memories and trauma in ways that are reflected both in the characters and the storytelling. It extremely works.

As is, I suppose, the intent, the book and writing have me thinking a lot about imperialism [1] and the history therein, and I think there's a way that the non-linearity of the storytelling manages to engage this fundamental question [2]: "why do people fight for their servitude as if it were their salvation?"

I'm not done yet with the book, but I'm excited to dig in more.

The next book on my friend's bookshelf that I'm excited by was a collection of Grace Paley stories and essays. I haven't really started it, yet, but I think I will soon.

[1]I wrote this sentence as "post/colonialism," but I think there are so many layers and intersections that have echoes and impacts much larger than the history of the British in India, which isn't (and shouldn't be!) at the center of the story, despite its outsized and irrefutable impact.
[2]In a bit of my own non-linearity, I've been working on an essay that plays with this famous quote/question from Deleuze (derived from Reich, derived from Spinoza). The full (ish) quote is, "the fundamental problem of political philosophy is still precisely the one that Spinoza saw so clearly, and that Wilhelm Reich rediscovered: ‘Why do men fight for their servitude as stubbornly as though it were their salvation?’ How can people possibly reach the point of shouting: ‘More taxes! Less bread!’? As Reich remarks, the astonishing thing is not that some people steal or that others occasionally go out on strike, but rather that all those who are starving do not steal as a regular practice, and all those who are exploited are not continually out on strike."

Software Engineering for 2.0

I've been thinking about what I do as a software engineer for a while: there seems to be a common thread through the kinds of projects and teams that I'm drawn toward, and I wanted to write a few blog posts on this topic to collect my thoughts and see if these ideas resonate with anyone else.

I've never been particularly interested in building new and exciting features. Hackathons have never held any particular appeal, and the things I really enjoy working on are on the spectrum of "stabilize this piece of software," "make this service easy to operate," or "refactor this code to support future development," and less "design and build some new feature." Which isn't to say that I don't like building new features or writing code, but that I'm more driven by the code and supporting my teammates than I am by the feature.

I think it's great that I'm different from software engineers who are really focused on features, because the tension between our interests pushes both classes of software engineer to do great things. Feature development keeps software and products relevant and addresses users' needs. Stabilization work makes projects last and reduces the incidence of failures that distract from feature work, and when there's consistent attention paid to aligning infrastructure [1] work with feature development over the long term, infrastructure engineers can significantly lower the cost of implementing a feature.

The kinds of projects that fall into these categories include the following areas:

  • managing application state and workload in larger distributed contexts. This has involved designing and implementing things like configuration management, deployment processes, queuing systems, and persistence layers.
  • concurrency control patterns and process lifecycle. In programming environments where threads are available, it takes some work to ensure that processes can safely shut down and that errors can be communicated between threads and processes. Providing mechanisms to shut down cleanly, communicate abort signals to worker threads, and handle communication between threads in a regular, expected way is really important. Concurrency is a great tool, but being able to manage it safely, predictably, and in discrete parts of the code is what makes it useful.
  • programming model and ergonomic APIs and services. No developer produces a really compelling set of abstractions on the first draft, particularly when they're focused on delivering different kinds of functionality. The revision and iteration process helps everyone build better software.
  • test infrastructure and improvements. No one thinks tests should take a long time or report results non-deterministically, and yet so many tests do. The challenge is that tests often look good, or seem reasonable, or are stable when you write them, but their slow runtimes compound over time, or orthogonal changes make them slower. Sometimes adding an extra check to some pre-flight test-infrastructure code ends up causing tests that had been just fine, thank you, to become problems. Maintaining and restructuring test infrastructure has been a big part of what I've ended up doing. Often, working back from the tests, it's possible to see how a changed interface or an alternate factoring of code would make core components easier to test, so a cleanup pass over the tests on some regular cadence improves things. Faster, more reliable tests make it possible to develop with greater confidence.

In practice this has included:

  • changing the build system for a project to produce consistent artifacts, and regularizing the deployment process to avoid problems during deploy.
  • writing a queuing system without any extra service level dependencies (e.g. in the project's existing database infrastructure) and then refactoring (almost) all arbitrary parallel workloads to use the new queuing system.
  • designing and implementing runtime feature flagging systems so operators could toggle features or components on-and-off via configuration options rather than expensive deploys.
  • replacing bespoke implementations with components provided by libraries or improving implementation quality by replacing components in-place, with the goal of making new implementations more testable or performant (or both!)
  • plumbing contexts (e.g. Golang's service contexts) through codebases to be able to control the lifecycle of concurrent processes.
  • implementing and migrating structured logging systems and building observability systems based on these tools to monitor fleets of application services.
  • refactoring tests to reuse expensive test infrastructure, or using table-driven tests to reduce test duplication.
  • managing processes' startup and shutdown code to avoid corrupted states and efficiently terminate and resume in-progress work.
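The runtime feature-flagging item above can be sketched with a tiny registry in Go. The type and method names here are hypothetical, for illustration only, not a real library:

```go
package main

import (
	"fmt"
	"sync"
)

// Flags is a minimal runtime feature-flag registry: an operator flips a
// flag from a config reload or an admin endpoint instead of redeploying.
type Flags struct {
	mu    sync.RWMutex
	flags map[string]bool
}

func NewFlags() *Flags {
	return &Flags{flags: make(map[string]bool)}
}

// Set toggles a flag at runtime; safe to call from any goroutine.
func (f *Flags) Set(name string, on bool) {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.flags[name] = on
}

// Enabled reports whether a flag is on; unknown flags default to off.
func (f *Flags) Enabled(name string) bool {
	f.mu.RLock()
	defer f.mu.RUnlock()
	return f.flags[name]
}

func main() {
	f := NewFlags()
	fmt.Println(f.Enabled("new-queue")) // off by default
	f.Set("new-queue", true)            // e.g. triggered by a config reload
	fmt.Println(f.Enabled("new-queue"))
}
```

A real system would load the map from configuration and watch for changes, but the core design choice is the same: components ask `Enabled` at the point of use, so toggling never requires a deploy.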

When done well (or just done at all), this kind of work has always paid clear dividends for teams, even when under pressure to produce new features, because the work on the underlying platform reduces the friction for everyone doing work on the codebase.

[1]It's something of an annoyance that the word "infrastructure" is overloaded, and often refers to the discipline of running software rather than the parts of a piece of software that supports the execution and implementation of the business logic of user-facing features. Code has and needs infrastructure too, and a lot of the work of providing that infrastructure is also software development, and not operational work, though clearly all of these boundaries are somewhat porous.

Systems Administrators are the Problem

For years now, the idea of the terrible stack, or the dynamic duo of Terraform and Ansible, from this tweet has given me a huge amount of joy, basically any time someone mentions either Terraform or Ansible, which happens rather a lot. It's not exactly that I think Terraform or Ansible are terrible: the configuration management problems that these pieces of software are trying to solve are real and actually terrible, and having tools that help regularize the problem of configuration management definitely improves things. And yet the tools leave a bit to be desired.

Why care so much about configuration management?

Configuration matters because every application needs some kind of configuration: a way to connect to a database (or similar), a place to store its output, and inevitably other things, like dependencies, or feature flags, or whatever.

And that's the simple case. While most things are probably roughly simple, it's very easy to have requirements that go beyond this a bit, and it turns out that while a development team might--but only might--not have requirements for something that qualifies as "weird," every organization has something.

As a developer, configuration and deployment often matter a bunch, and it's pretty common to need to make changes to this area of the code. While it's possible to architect things so that configuration can be managed within an application (say), this all takes longer and isn't always easy to implement, and if your application requires escalated permissions, or needs a system configuration value set, then it's easy to get stuck.

And there's no real way to avoid it: if you don't have a good way to manage configuration state, then infrastructure becomes bespoke and fragile, and this is bad. Sometimes people suggest using image-based distribution (so-called "immutable infrastructure"), but this tends to be slow (images are large and can take a while to build), and you still have to capture configuration in some way.

But how did we get here?

I think I could weave a really convincing, and likely true story about the discipline of system administration and software operations in general and its history, but rather than go overboard, I think the following factors are pretty important:

  • computers used to be very expensive, were difficult to operate, and so it made sense to have people who were primarily responsible for operating them, and this role has more or less persisted forever.
  • service disruptions can be very expensive, so it's useful for organizations to have people who are responsible for "keeping the lights on," and troubleshoot operational problems when things go wrong.
  • most computer systems depend on state of some kind--files on disks, the data in databases--and managing that state can be quite delicate.
  • recent trends in computing make it possible to manipulate infrastructure--computers themselves, storage devices, networks--with code, which means we have this unfortunate dualism of infrastructure where it's kind of code but also kind of data, and so it feels hard to know what the right thing to do is.

Why not just use <xyz>

This isn't fair, really, and you know it's gonna be good when someone trivializes an adjacent problem domain with a question like this, but this is my post so you must endure it, because the idea that there's another technology or way of framing the problem that makes this better is incredibly persistent.

Usually <xyz>, in recent years, has been "Kubernetes" or "docker" or "containers," but it sort of doesn't matter; in the past the answers were platforms-as-a-service (e.g. AppEngine, etc.) or backend-as-a-service (e.g. Parse, etc.) So let's run down some answers:

  • "bake configuration into the container/virtual machine/etc. and then you won't have state," is a good idea, except it means that if you need to change configuration very quickly, it becomes quite hard because you have to rebuild and deploy an image, which can take a long time, and then there are problems of how you get secrets like credentials into the service.
  • "use a service for your platform needs," is a good solution, except that it can be pretty inflexible, particularly if you have an application that wasn't designed for the service, or need to use some kind of off-the-shelf (a message bus, a cache, etc.) service or tool that wasn't designed to run in this kind of environment. It's also the case that the hard cost of using platforms-as-a-service can be pretty high.
  • "serverless" approaches have something of a bootstrapping problem: how do you manage the configuration of the provider? How do you get secrets into the execution units?

What's so terrible about these tools?

  • The tools can't decide if configuration should be described programmatically, using general purpose programming languages and frameworks (e.g. Chef, many deployment tools), or using some kind of declarative structured tool (Puppet, Ansible), or some kind of ungodly hybrid (e.g. Helm, anything with HCL). I'm not sure that there's a good answer here. I like being able to write code, and I think YAML-based DSLs aren't great; but capturing configuration in code creates a huge amount of difficult-to-test code. Regardless, you need to find ways of testing the code inexpensively, and doing this in a way that's useful can be hard.
  • Many tools are opinionated and have strong idioms, in hopes of making infrastructure more regular and easier to reason about. This is cool and a good idea, but it makes it harder to generalize. While concepts like immutability and idempotency are great properties for configuration systems to have, they're difficult to enforce, so maybe developing patterns and systems that have weaker opinions that are easy to comply with, and idioms that can be applied iteratively, would be more useful.
  • Tools are willing to do things to your systems that you'd never do by hand, including a number of destructive operations (Terraform is particularly guilty of this), which erodes trust and inspires otherwise bored ops folks to write/recapitulate their own systems, which is why so many different configuration management tools emerge.

Maybe the tools aren't actually terrible, and the organizational factors that lead to the entrenchment of operations teams (incumbency, incomplete cost analysis, difficult-to-meet stability requirements) lead to the entrenchment of the kinds of processes that require tools like this (though causality could easily flow in the opposite direction, with the same effect.)

API Ergonomics

I touched on the idea of API ergonomics in Values for Collaborative Codebases, but I think the topic is due a bit more exploration. Typically you think about an API as being "safe" or "functionally complete," or "easy to use," but "ergonomic" is a bit far afield from the standard way that people think and talk about APIs (in my experience.)

I think part of the confusion is that "API" gets used in a couple of different contexts, but let's say that an API here is the collection of nouns (types, structures) and verbs (methods, functions) used to interact with a concept (hardware, library, service). APIs can be conceptually really large (e.g. all of a database, a public service), or quite small and expose only a few simple methods (e.g. a data serialization library, or some kind of hashing process.) I think some of the confusion is that people also use the term API to refer to the ways that services access data (e.g. REST, etc.), and while I have no objection to this formulation, service API design and class or library API design feel like related but different problems.

Ergonomics, then, is really about making choices in the design of an API so that:

  • functionality is discoverable during programming. If you're writing in a language with good code completion tools, then make sure methods and functions are well located and named in a way that takes advantage of completion. Chainable APIs are awesome for this.
  • use clear naming for functions and arguments that describe your intent and their use.
  • types should imply semantic intent. If your programming language has a sense of mutability (e.g. passing references versus concrete types in Go, or const (for all its failings) in C++), then make sure you use these markers to both enforce correct behavior and communicate intent.
  • do whatever you can to encourage appropriate use and discourage inappropriate use, by taking advantage of encapsulation features (interfaces, non-exported/private functions, etc.), and passing data into and out of the API with strongly/explicitly-typed objects (e.g. return POD classes, or enumerated values or similar rather than numeric or string types.)
  • reduce the complexity of the surface area by exporting the smallest reasonable API, and also avoiding ambiguous situations, as with functions that take more than one argument of a given type, which leads to cases where users can easily (and legally) do the wrong thing.
  • increase the safety of the API by removing, reducing, or being explicit about the API's use of global state. Avoid providing APIs that are not thread safe. Avoid throwing exceptions (or equivalents) in your API that you expect users to handle. If users pass nil pointers into an API, it's OK to throw an exception (or let the runtime do it), but there shouldn't be exceptions that originate in your code that need to be handled outside of it.
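A few of these choices can be illustrated together in Go. This is a hypothetical toy logger, not a real library; it uses an explicit Level type instead of bare strings, chainable setters for discoverability, and returns values rather than touching global state:

```go
package main

import "fmt"

// Level is an explicit type rather than a bare string or int, so callers
// can't legally pass arbitrary values, and code completion shows the
// available options.
type Level int

const (
	Debug Level = iota
	Info
	Warning
)

// Logger keeps a deliberately small surface: two setters and one verb.
type Logger struct {
	level  Level
	prefix string
}

func New() *Logger { return &Logger{level: Info} }

// Chainable setters make configuration discoverable from the Logger type.
func (l *Logger) WithLevel(v Level) *Logger   { l.level = v; return l }
func (l *Logger) WithPrefix(p string) *Logger { l.prefix = p; return l }

// Log returns the rendered message (or "" when filtered) instead of
// writing to global state, which keeps the API trivially testable.
func (l *Logger) Log(level Level, msg string) string {
	if level < l.level {
		return ""
	}
	return l.prefix + msg
}

func main() {
	log := New().WithLevel(Warning).WithPrefix("svc: ")
	fmt.Printf("%q\n", log.Log(Info, "noisy"))   // filtered out
	fmt.Printf("%q\n", log.Log(Warning, "oops")) // rendered with prefix
}
```

The design choice to return a value from Log is what makes the thread-safety and global-state guidance above cheap to honor: there's simply nothing shared to protect.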

Ergonomic interfaces feel good to use, but they also improve quality across the ecosystem of connected products.

Common Gotchas

This is a post I wrote a long time ago and never posted, but I've started getting back into doing some work in Common Lisp and thought it'd be good to send this one off.

On my recent "(re)learn Common Lisp" journey, I've happened across a few things that I've found frustrating or confusing: this post is a collection of them, in hopes that other people don't struggle with them:

  • To implement an existing generic function for a class of your own, and have other callers use your method implementation, you must import the generic function; otherwise other callers will (might?) fall back to another method. This makes sense in retrospect, but definitely wasn't clear on the first go.
  • As a related follow-on, you don't have to define a generic function in order to write or use a method, and I've found that using methods is actually quite nice for doing some type checking; at the same time, it can get you into a pickle if you later add the generic function and it's not exported/imported as you want.
  • Property lists seem cool for a small, lightweight mapping, but they're annoying to handle as part of public APIs, mostly because they're indistinguishable from regular lists. Association lists are preferable, and maybe, with make-hash-table, even hash tables.
  • Declaring data structures inline is particularly gawky. I sometimes want to build a list, an alist, or a hash table inline, and it's difficult to do that in a terse way that doesn't involve building the structure programmatically. I've been writing (list (cons "a" t) (cons "b" nil)) sorts of things, which I don't love.
  • If you have a variadic macro (i.e. one that takes &rest args), or even I suppose any kind of macro, and you have its arguments in a list, there's no way, outside of eval, to call the macro, which is super annoying, and makes macros significantly less appealing as part of public APIs. My current conclusion is that macros are great when you want to add syntax to make the code you're writing clearer or to introduce a new paradigm, but for things that could also be a function, or are thin wrappers around a function, just use a function.