The Mid Career Shuffle

tl;dr> I’ve decided to take a new job as an early engineering manager/tech lead, at a data analytics/database startup. And I hope to begin blogging/journaling here more about software development and the way that software and databases interact with other things that grab my interest.

This post doesn’t really have a narrative. It’s just a collection of thoughts. I’m sort of out of the habit of writing blog posts, and one of the things that’s gotten me stuck on writing in the past year or so is feeling like the narrative of any given blog post is not quite right.1 But we have to start somewhere, and the story will happen over the next few posts.


In the middle of 2020 I left a job at a company that I’d been at for almost 9 years. It seemed like the world was on fire, what it meant to work at a globally distributed tech company was changing (and still is!), and it felt like a good time to make a change. But to what? I’ve spent the last few years working on different projects and exploring different things: I’ve worked on cool distributed systems problems, I’ve worked on product-focused (backend) engineering, I’ve worked on familiar latency-sensitive compute provisioning and orchestration, and a little bit of this and that in between.

I’ve identified a few things about jobs and working in this process:

  • while remote work is (and has been for a while) definitely a reality of our world,2 the one thing you can’t really work around is time zones. A friend, zmagg, claims (and I believe this) that time differences between 9 and 16 hours are basically unworkable: there just isn’t enough overlap to really collaborate.
  • if the center of gravity in a company is in a place, or in an office, remote teams really have to be intentional about making sure to include people who are outside of that bubble.
  • the interesting part of software engineering is what happens between people building software on a team: programming is a means to an end, and while the tools change every so often, how you build together is really fascinating (and hard)!

My new job is different from any job I’ve ever had. First, I’m going to be working on building and developing teams and helping support the engineers and the engineering teams, rather than working directly/full-time on the software. I probably will end up doing some software work too. This isn’t that novel: I’ve definitely done “the management thing” a few times, but more of my time will be spent doing this, and I get to be more intentional about how I approach it. Second, my work has (mostly) been in support of products that other people--largely my coworkers--develop, in the sense that the work I’m drawn to is the internal infrastructure of software. I like infrastructure a lot, and I think I tend to focus on these kinds of problems because I feel like they give me the biggest opportunity to make an impact. In that sense, this role is very similar to my previous jobs: building the infrastructure to enable and support the people working around me. Only this time it isn’t (entirely) software. I’m actually looking forward to mapping concrete business goals to actual research and development problems and projects, and helping to provide context and prioritize projects without needing to filter through layers of indirection.


I have been blogging in one form or another (mostly this form) to a greater and lesser (mostly lesser) extent for a (really) really long time. The landscape of the way we interact on the internet is changing: twitter is really a shell of its former self, “big” social media networks (facebook, instagram, tiktok, twitter, etc.) have really segmented by generation and region, and little social media sites are still niche.

For my part, I’ve been spending more time in “group chat,” with sort of wacky combinations of random friends. This isn’t new for me, and I take a lot of joy in building out very small communities of folks. Also, as I think about it, the thing I most want to do is have conversations with people. Sometimes that conversation is longer form (like this,) and sometimes it’s more one-to-many (like this sort of half-baked telegram channel idea that’s just sort of a “things I would have said on twitter” but as a chat,) but rarely is it “I would like to become a publisher,” or “I want to use blogging as a way to market some other entrepreneurial effort.” Not that there’s anything wrong with these reasons for blogging.

It’s easy (as a software engineer, though I think this probably holds true for anyone whose process for “building” things requires a lot of just “thinking” about things,) to do a lot of the thinking part in private and to only show up with the “right” solution (with maybe a few alternatives for rhetorical effect,) or to mostly focus our writing on conclusions reached rather than questions asked.

So perhaps, (and we’ll see if this holds,) I can use this space more as a journal and as a collection of questions and (perhaps) failed attempts, and less as a news feed.


  1. I think I often get stuck on where to start, and how much exposition to provide for the topic. Too much time as a technical writer, where my job was to explain difficult concepts, means I (at least try to) start too early, and then feel like I get to the end of a post before I’ve said what I think needs to be said. The answer to this might be to just write more and post more, so the exposition is in the scroll back rather than trying to do too much in one post. And to always err on the side of not enough exposition. Let me know, though, if you have an opinion on that one. ↩︎

  2. companies are often necessarily global in scope, and having a team or project spread between different offices isn’t all that different from when it is spread between a bunch of people’s homes. Once you have more than one location, n locations are pretty much the same. ↩︎

The Most Forgotten CI Feature

Developers love to complain about their tools, particularly continuous integration (CI) software: everyone has a pet idea of how to make it run faster, or how to make the results easier to interpret, or more useful. They’re all wrong.1

I spent a few years as the, I guess, tech lead(?),2 of a team that was developing a CI tool/platform. It was a cool project, big scale (IO-bound workloads! Cloud!), in an organization that grew rapidly and was exactly the right age to believe in CI, but also to predate really mature off-the-shelf CI systems (e.g. GitHub Actions.)

We did a lot of cool things, but I think the most useful thing we ever built was a notification system that let people know when their tests were done running.

That’s it.

See, people were always asking “I want my tests to run in 5 minutes, or 10 minutes, so that I don’t lose too much time waiting for the results and I can avoid getting distracted or losing focus.” You could spend a lot of time making things faster, and in some cases this is a great idea: slow things are bad, compute time is expensive (particularly in aggregate), and for trivial things you do want a fast response time.

The problem is really that sometimes things can’t be made all that much faster without an exceptional amount of effort, and while compute time is expensive, sometimes you end up spending significantly more on faster machines, or on more machines for increased parallelism, for only modest gains.

This of course misses the point: human attention spans are incredibly fickle, and while really well focused and disciplined engineers might be able to wait for 5 minutes, anything longer than that and most people will have moved on, and at that point it might as well have taken an hour. There are some upper bounds and pragmatic limits (if a build takes 2 hours, say, a developer only really has time to try 1 or 2 things in a given work day), but it does mean that execution times between a minute and about 20 minutes are functionally the same.

So, just notify people when their build is done. Don’t beat distraction by being really fast, beat distraction by interrupting the distraction. People don’t need to spend their day looking at a dashboard if the dashboard tells them (gently) to come back when their task is done.

Why doesn’t every CI tool do this? It might be the feature that every developer wants, and yet, there’s no really good “tell me when my build is done,” feature in any of the popular tools. It’s a hard feature to get right, and there are a lot of tricky bits:

  • People generally don’t want to get emails; emails would be easy, but they’re not a good way to send a quick--largely ephemeral--reminder. While you can pull email addresses out of commits (which isn’t a great strategy most of the time,) or (presumably) usernames if you’re on a platform like GitHub, there are some important user settings that you have to store somewhere.
  • Who to notify for a particular build is a little hard, it turns out. People often want to opt into notifications and be able to receive only the notifications that are important to them. Sometimes the person who wants the notification isn’t among the authors of a commit, or the user that submitted the branch/pull request.
  • When to send a notification isn’t clear either. When all tasks in the build complete? What if some tasks are blocked from starting because one thing failed? Are “do what I mean” notifications something like “notify me on the first failure, or when all (runnable) tasks succeed”? If a task fails (and generates a notification,) and then is restarted and the build goes on to succeed, do you send a second notification (probably?) Not only are these hard questions, but different teams might want different semantics. (There’s a rough sketch of one possible policy just after this list.)
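
To make the semantics question concrete, here’s a minimal sketch of one such “do what I mean” policy: notify on the first failure, otherwise once every runnable task has succeeded. This isn’t from any real CI system; the Task and State types here are hypothetical.

    package notify

    // State is a hypothetical task state in a CI build.
    type State int

    const (
        Pending State = iota
        Running
        Succeeded
        Failed
        Blocked // never ran because an upstream task failed
    )

    // Task is a hypothetical unit of work in a build.
    type Task struct {
        Name  string
        State State
    }

    // ShouldNotify reports whether a notification should fire for the current
    // set of task states, and returns a short reason for the message body.
    func ShouldNotify(tasks []Task) (bool, string) {
        allDone := true
        for _, t := range tasks {
            switch t.State {
            case Failed:
                // First failure: tell the author now rather than waiting
                // for the rest of the build to drain.
                return true, "task " + t.Name + " failed"
            case Pending, Running:
                allDone = false
            }
        }
        if allDone {
            // Blocked tasks count as "done" here: they will never run.
            return true, "all runnable tasks succeeded"
        }
        return false, ""
    }

Even this toy version bakes in choices (for instance, a failed-then-restarted build would notify twice if you call it on every state change), which is exactly why different teams end up wanting different semantics.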

It’s one of those things that seems simple, but there’s just enough complexity that it’s hard to build and hard to get right. It’s a bit easier to do when your whole CI platform is developed in house, but only a bit, and (probably for the better) there aren’t many of those in-house platforms left. Anyway, I hope someone builds something like this; I’d certainly use it.


  1. Well, not all of them. The problem is that developers are very likely to get stuck spinning on weird local optimizations, and it’s really hard to think about CI features as a user. Developing CI software is also a fun ride because all your users are also engineers, which is a tough crowd. ↩︎

  2. In retrospect I realize I was doing tech-lead things, but my title and how we talked about what I was doing at the time weren’t indicative of this reality. Ah well. ↩︎

Reintroducing Grip

Once upon a time, I wrote this logging library for Go called grip and used it a lot at a couple of jobs, where it provided a pretty great API and way to think about running logging and metrics infrastructure for an application. For the last year and a half, I’ve mostly ignored it. I didn’t want to be that guy about loggers, so I mostly let it drop, but I was looking at some other logging systems and I felt inspired to unwrap it a bit and see if I could improve things and if the time away would inspire me to rethink any of the basic assumptions.

This post is about a lot of things (and maybe it will spill over,) but generally:

  • the practice and use of logging, and how that is changing,
  • adoption of libraries outside of the standard library for common infrastructure,
  • the process of changing grip, and also digging into some more specific changes.

Logging Trends

Grip came about sort of at the height of the structured logging craze, and I think it still focuses on being more than a slightly cooler collection of print statements: it aims to be the primary way your software transmits events. Because all of the parts of grip are pretty pluggable, we used it for everything from normal application logging to our entire event notification system, and kind of everything in between, including a sort of frightful amount of managing the output of a distributed build farm. The output of these messages went to all of the usual notification targets (e.g. email, slack, github, jira,) but also directly to log aggregation services (e.g. splunk) and the system log without being written to a file, which just made the services much easier to operate.

Like me, grip is somewhat opinionated. A lot of loggers compete based on “speed” (typically of writing messages to standard output or to any file) or on features (automatic contextual data collection and message annotation, formatting tools, hierarchical filtering, etc.) but I think grip’s selling point is really:

  • great interfaces for doing all of the things that you normally do with a logger, and all the things you want to do.
  • provide ways of logging data that don’t involve building/formatting strings, and let you pass normal data structures directly.
  • make logging work just as well for the small CLI process as for the large distributed application, and with similar amounts of ease. Pursuant to this, I really think that any logging system or architecture that requires lots of extra stuff (agents, forwarders, rotation tools, etc.) to actually be able to use the logs is overkill. There’s nothing particularly exciting about sending logs over the network (or to a file, or a local socket,) and it should be possible to reduce the amount of complexity in this part of the deployment, which is good for just about everyone.
  • provide good interfaces and paradigms for replacing or customizing core logging behavior. It should be possible for applications to write their own tools for sending log messages to places, or functions to filter and format log messages. This is a logging tool kit that can grow and develop with your application.
    • the x hierarchy includes implementations for many messaging formats and tools: use grip implementations to send alerts to telegram, email, slack, jira, etc. There is also support for sending logs directly to wherever they need to be (splunk, syslog, etc.). Wherever the logs need to be, grip should be able to get you there.
    • The core grip package provides logging multiplexers, buffers, and batching tools to help control and manage the flow of logging from your application code to whatever logging backend you’re targeting.
    • Implementations and bridges between grip’s interfaces and other logging frameworks and tools. Grip attempts to make it possible to become the logging system for existing applications without needing to replace all logging call sites immediately (or ever!)
    • Tools to support using grip interfaces and backends inside of other tools: you can use a grip Sender as a writer, or access a standard-library logger implementation that writes to the underlying grip logger.
  • logging should be fast, but speed of logging only really matters when data volume is really high, which is usually a problem for any logger: when data volume is lower even a really slow logger is still faster than the network or the file system. Picking a logger that’s fast, given this, is typically a poor optimization. Grip should be fast enough for any real world application and contains enough flexibility to provide an optimized path for distributing log messages as needed.
  • provide features to support some great logging paradigms:
    • lazy construction of messages: while the speed of writing messages to their output handler typically doesn’t matter, sometimes building log messages can be expensive. Recently I ran into a case where calling Sprintf at the call site of the logger, for messages that were not logged (e.g. debug messages,) had material effects on the memory usage of the program. While string formatting is a common case (and grip has “Logf”-style handlers that are lazy by default,) we found a bunch of metrics collection workloads that had similar patterns, and lazy execution of these metrics tended to help a lot.
    • conditional logging: rather than wrapping a log statement in an if block, you can have logging handlers that take the condition directly (e.g. the When variants), so the check lives inside the logging call. (There’s a rough, logger-agnostic sketch of the lazy and conditional patterns just after this list.)
    • randomized logging or sampled logging: In some cases, I want to have log messages that only get logged half the time or something similar. In some high-volume code paths logging all the time is too much, and never logging isn’t enough, but it’s possible to devise a happy medium. Grip, for years now, has implemented this in terms of the conditional logging functionality.
    • structured logging: rather than logging strings, it’s sometimes nice to just pass data, in the form of a special structure or just a map, and let the logging package deal with the output formatting and transmission of messages. In general, line-oriented JSON makes for a good logging format, assuming most or enough of your messages have related structures and your log viewing and processing tools support this kind of data, although alternate human-readable string formats should also be available.
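
Here’s a minimal, logger-agnostic sketch of the lazy and conditional ideas above. The LogFunc and LogWhen names are illustrative, not grip’s API; the point is just that the expensive work only happens when a message will actually be emitted.

    package main

    import (
        "fmt"
        "log"
    )

    type Level int

    const (
        Debug Level = iota
        Info
        Error
    )

    type Logger struct{ threshold Level }

    // LogFunc defers building the message until we know it will be emitted.
    func (l *Logger) LogFunc(lvl Level, build func() string) {
        if lvl < l.threshold {
            return // suppressed: build() never runs
        }
        log.Print(build())
    }

    // LogWhen folds the "if cond { log }" pattern into the logging call itself.
    func (l *Logger) LogWhen(cond bool, lvl Level, msg string) {
        if cond && lvl >= l.threshold {
            log.Print(msg)
        }
    }

    func buildHugeReport() []int { return make([]int, 1<<20) }

    func main() {
        logger := &Logger{threshold: Info}

        // The Sprintf (and the expensive report behind it) never runs,
        // because Debug is below the logger's threshold.
        logger.LogFunc(Debug, func() string {
            return fmt.Sprintf("debug payload: %d items", len(buildHugeReport()))
        })

        queueDepth := 10
        logger.LogWhen(queueDepth > 5, Info, "queue is getting deep")
    }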

None of these things changed in the rewrite: by default, grip mostly just looks and works like the Go standard library logger (and even uses the logger under the hood for a lot of things,) but it was definitely fun to look at more contemporary logging practices and tools and think about what makes a logging library compelling (or not).

Infrastructure Libraries

Infrastructure libraries are an interesting beast: every dependency carries some kind of maintenance cost, so ideally you want to minimize the number of dependencies you require. At the same time, not using libraries is also bad, because it means you end up writing more code, and that has even more maintenance cost. It’s also the case that software engineers love writing infrastructure code and are often quite opinionated about the ergonomics of their infrastructure libraries.

On top of that, you make infrastructure software decisions once and then are sort of stuck with them for a really long time. I’ve changed loggers in projects and it’s rough, and in general you want to choose these libraries carefully: as an application developer you have the (correct) feeling that no one’s differentiating feature is going to have anything to do with the logger,1 and you want something that’s battle tested and familiar to everyone. Sometimes--as with the logging package in Python--the standard library has something that just works and everyone uses that; other times, there are one or two options that most projects use (e.g. log4j in java, etc.).

Even if grip is great, it seems unlikely that everyone (or anyone?) will switch to using grip over some of the other more popular options. I’m OK with that: beyond writing a few blog posts, I’m not sure I’m really that excited about doing the kind of marketing and promotion it might take to popularize a logging library like this, and at the same time the moment for a new logging library might have already passed.

Grip Reborn

The “new grip” is a pretty substantial change. A lot of implementation details changed and I deleted big parts of the interface that didn’t quite make sense, or that I thought were a bit fussy to use. Basically just sanding off a lot of awkward edges. The big changes:

  • greatly simplified dependencies, with more packages and an x hierarchy for extensions. The main grip package and its primary sub-packages no longer have any external dependencies (beyond github.com/tychoish/fun.) Any logging backend or message type that has additional dependencies lives in x.
  • I deleted a lot of code. There were a lot of things that just weren’t needed: an extra interface, a bunch of constructors for various objects that weren’t useful. I also simplified the concept of levels/priority within the logging system.
  • simpler high level logging interface. I used to have an extra interface and package to hold all of the Info, Error (and so forth) methods, and I cut a lot of that out and just made a Logger type in the main package, which wraps an underlying interface and doesn’t need to be mutable; doing this made it possible to simplify a lot of the code.
  • added some straightforward handlers to attach loggers to contexts. Previously, I was split on the opinion that loggers should either be (functionally) global or passed explicitly around as objects, and I think I’ve come around to the idea that loggers maybe ought to hang off the context object, but contextual loggers, global loggers, and logging objects are all available.
  • the high level logging interface is much smaller, with handlers for all the levels and formatting, line, and conditional (e.g. f, ln, When) logging. I’m not 100% sold on having ln, but I’m pretty pleased with having things be pretty simple and streamlined. The logger interface, as with the standard library logger, is mirrored in the grip package with a shim.
  • new message.Builder interface and methods that provide a chainable interface for building messages without needing to mess with the internals of the message package, which can be ungainly at logging call sites. (There’s a rough sketch of the general idea after this list.)
  • new KV message type: this makes it possible to have structured logging payloads without using map types, which might prove easier to integrate with the new zerolog and zap backends.
  • I have begun exploring, in the series package, what it’d mean to have a metrics collection and publication system that is integrated into a logger. I wrote probably too much code, but I am excited to do some more work in this area.
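
For a sense of the chainable-builder shape (this is not grip’s actual API; all of the names here are hypothetical), a message builder might look roughly like this:

    package msgx

    import "fmt"

    // Builder accumulates a structured message fluently; nothing is emitted
    // until Send is called.
    type Builder struct {
        level  string
        msg    string
        fields map[string]any
        send   func(string) // hypothetical sink; in grip this would be a Sender
    }

    func NewBuilder(send func(string)) *Builder {
        return &Builder{fields: map[string]any{}, send: send}
    }

    func (b *Builder) Level(l string) *Builder     { b.level = l; return b }
    func (b *Builder) Message(m string) *Builder   { b.msg = m; return b }
    func (b *Builder) KV(k string, v any) *Builder { b.fields[k] = v; return b }

    // Send renders and emits the message once, after the chain is complete.
    func (b *Builder) Send() {
        b.send(fmt.Sprintf("[%s] %s %v", b.level, b.msg, b.fields))
    }

A call site then reads as one expression, e.g. NewBuilder(sink).Level("info").Message("request complete").KV("path", "/health").KV("latency_ms", 12).Send(), which keeps the structured payload out of the way until the message is actually sent.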

  1. Arguably, in a CI platform, most of the hard problems have something to do with logging, so this is an exception, but perhaps one of the only exceptions. ↩︎

New Go Module: tychoish/fun

This is a follow up to my New Go Modules post about a project that I’ve been working on for the past few months: github.com/tychoish/fun.

fun is a collection of simple libraries that use generics to do relatively mundane things, with a focus on well-built tools that make it easier for developers to solve higher level problems without needing to re-implement low level infrastructure, or use some of the rougher parts of the Go standard library. It has no dependencies outside of the standard library, and contains a number of pretty cool tools, which were fun to write. Some of the basic pieces:

  • I wrote linked list implementations (single and double), adapted a single-ended Queue implementation that a former coworker did as part of an abandoned branch of development, and wrote a similar Deque. (fun/pubsub)

  • I adapted and developed a “broker” which uses the queues above to be able to do one-to-many pubsub implementations. (fun/pubsub)

  • I made a few high-level standard library synchronization tools (sync.Map, sync.WaitGroup, sync.Pool, and atomic.Value) even higher level, with the help of generics and more than a few stubborn opinions. (fun/adt; atomic data types. There’s a small sketch of the idea after this list.)

  • I revisited an error aggregation tool that I wrote years ago, and made it a bunch faster, less quirky, and more compatible with the current state of the art with regards to error handling (e.g. errors.Is and errors.As). (fun/erc). I also wrote some basic error mangling tools including a more concise errors.Join, tools to unwind/unwrap errors, and a simple error type ers.Error to provide for constant errors (fun/ers)

  • I wrote an iterator interface and a collection of function wrappers and types (in the top level package), for interacting with iterators (or really, streams in one way or another,) and decorating those handlers and processors with common kinds of operations, modifications, and simple semantics.

    I don’t know that it will catch on, but I’ve written (and rewritten) a lot of worker pools and stream processing things to use these tools, and it’s been a useful programming model. By providing the wrappers and the iteration, users can implement features almost functionally (which should be easy to test.)

  • I built a collection of service orchestration tools to manage long running application services (e.g. http servers, worker pools, long running subprocesses, etc.) and a collection of context helpers to make it easier to manage the lifecycle of the kind of long-running applications I end up working on most of the time. Every time I’ve joined a new project… ever, I end up doing some amount of service orchestration work, and this is the toolkit I would want. (fun/srv)

  • I wrote some simple testing frameworks and tools: fun/assert provides halt-on-failure assertions with a testify-inspired interface and better reporting, mirrored by a fail-but-continue fun/assert/check, along with fun/testt (“test-tee” or “testy”), which has a bunch of simple helpers for using contexts, and fun/ensure, which is a totally different take on an assertion library.
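
As a taste of the “even higher level with generics” idea from the adt bullet above, here’s a minimal sketch of a typed wrapper around atomic.Value. The names are hypothetical, not fun’s actual API; it just shows the kind of thing generics make pleasant.

    package adtx

    import "sync/atomic"

    // Value is a type-safe wrapper around atomic.Value: callers Store and Load
    // a T without writing type assertions at every call site.
    //
    // Note: atomic.Value panics if values of different concrete types are
    // stored, so this wrapper works best with a concrete T.
    type Value[T any] struct {
        val atomic.Value
    }

    // Store replaces the current value.
    func (v *Value[T]) Store(in T) { v.val.Store(in) }

    // Load returns the current value, or the zero value of T if nothing has
    // been stored yet.
    func (v *Value[T]) Load() T {
        out, ok := v.val.Load().(T)
        if !ok {
            var zero T
            return zero
        }
        return out
    }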

I don’t want this post to be documentation about the project; there are a lot of docs in the README and on the go documentation, also the implementations are meant to be pretty readable, so feel free to dive in there. I did want to call out a few ways that I’ve begun using this library and the lessons it’s taught me in my day to day work.

  • I set a goal of writing code that’s 100% covered by tests. This is hard, and only possible in part because it’s a stateless library without dependencies, but I learned a lot about the code I was writing, and feel quite confident in it as a result.

  • I also set a goal of having no dependencies outside of the module and the standard library: I didn’t want to require users to opt into a set of tools that I liked, or that would require ongoing maintenance to update and manage. Not only is this a good goal in terms of facilitating adoption, it also constrained what I could do, and forced me to write things with external extensibility in mind: there had to be hooks, interfaces and function types had to be easy to implement, and I decided to limit scope for other things.

  • Having a more complete set of atomic types and tools (particularly the map and the typed atomic value; the typed integer atomics in the standard library are already great) has allowed me to approach concurrent programming problems without doing crazy things with channels or putting mutexes everywhere. I don’t think either channels or mutexes are a problem in the hands of a practiced practitioner, but having a viable alternative means it’s one less thing to go wrong, and you can save the big guns (mutexes) for more complex synchronization problems.

  • Linked lists are super cool. I’d previously taken the position that you shouldn’t ever implement your own sequence types, and mostly avoided writing one before now. Having now done it, and having truly double-ended structures, means things like “adding something to the beginning” or “inserting into/removing from the middle” of a sequence aren’t so awkward.

    The experimental slices library makes this a little less awkward with standard library slices/arrays, and they are probably faster.

  • It was really fun to take the concept of an iterator and build out from it tools that make iterators easy to use: filtering, parallel processing, and map/reduce helpers. (There’s a rough sketch of the basic shape after this list.) I definitely learned a bunch, and I’ve found these tools genuinely useful.

  • I’ve long held that most Go applications should be structured so that you shouldn’t really need to think too much about concurrency when writing business logic. I’ve previously tried to posit that the solution to this was to provide robust queue processing tools (e.g. amboy), but I think that’s too heavyweight, and it was cool to be able to approach this idea from a different angle.
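
Here’s a rough sketch of the iterator shape described above, with a functional Filter decorator layered on top. The names are illustrative rather than fun’s actual API.

    package iterx

    import "context"

    // Iterator is a minimal pull-style iterator: Next advances, Value returns
    // the current element, and Close releases any resources.
    type Iterator[T any] interface {
        Next(ctx context.Context) bool
        Value() T
        Close() error
    }

    // sliceIter adapts a slice to the Iterator interface.
    type sliceIter[T any] struct {
        items []T
        idx   int
    }

    func FromSlice[T any](items []T) Iterator[T] {
        return &sliceIter[T]{items: items, idx: -1}
    }

    func (s *sliceIter[T]) Next(ctx context.Context) bool {
        if ctx.Err() != nil || s.idx+1 >= len(s.items) {
            return false
        }
        s.idx++
        return true
    }
    func (s *sliceIter[T]) Value() T     { return s.items[s.idx] }
    func (s *sliceIter[T]) Close() error { return nil }

    // filterIter wraps another iterator and yields only elements that match
    // the predicate; Value and Close are promoted from the wrapped iterator.
    type filterIter[T any] struct {
        Iterator[T]
        keep func(T) bool
    }

    func Filter[T any](it Iterator[T], keep func(T) bool) Iterator[T] {
        return &filterIter[T]{Iterator: it, keep: keep}
    }

    func (f *filterIter[T]) Next(ctx context.Context) bool {
        for f.Iterator.Next(ctx) {
            if f.keep(f.Iterator.Value()) {
                return true
            }
        }
        return false
    }

Because decorators like Filter share the same interface, they compose: a parallel-processing or map/reduce wrapper can sit on top of a filtered iterator without either one knowing about the other.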

Anyway, give it a try, and tell me what you think! Also pull requests are welcome!

New Go Modules

I’ve been doing a little bit of programming for fun in recent months, and I wanted to do a little bit of blogging here to explain myself, so expect that over the next few days.

By way of creating the facade of suspense--which the astute among you will be able to spoil by way of looking at my github--this post isn’t going to mention the actual projects, but rather answer two questions that are more interesting from a higher level:

  • Why have you done this?
  • What have you learned about yourself as a programmer?

Why Have You Done This?

First, while I’ve been writing Go for a long time, I must confess that I was a long time generics skeptic and holdout. Writing some code for myself seemed like a good way to get a better feeling for how generics work, now that they’re here.

In my day to day work, I have found myself writing similar code again and again: while I definitely have a class of problems that I tend to work on, I’d be OK if I never wrote another function to bridge the gap between a context.Context and a sync.WaitGroup (and I’m not holding my breath for this in the standard library.)
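
For what it’s worth, that bridge is small but tedious to rewrite every time. A typical version (hypothetical helper, not from any library) looks something like this:

    package waitx

    import (
        "context"
        "sync"
    )

    // WaitContext blocks until the WaitGroup finishes or the context is
    // cancelled, whichever happens first, returning ctx.Err() on cancellation.
    //
    // Note: if the context is cancelled first, the helper returns, but the
    // goroutine lingers until the WaitGroup actually completes.
    func WaitContext(ctx context.Context, wg *sync.WaitGroup) error {
        done := make(chan struct{})
        go func() {
            defer close(done)
            wg.Wait()
        }()
        select {
        case <-done:
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }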

Finally, over the years a few of the projects that I’ve worked on professionally have been open source, and a few of them I’ve even rather liked, and I wanted to see what it would be like to revisit some of them with (potentially) wiser eyes and fingers.

What Have You Learned?

I actually think I’ve learned a lot:

  • While I’m a big fan of the part of software development which is “deleting code” in my professional work, it was really interesting to turn that on code that I knew I’d written (or directed the writing of) and see what I could cut.
  • I’m better at writing tests and writing testable code than I was even a few years ago. While I believe in writing defensive code, I found myself writing an implementation and then going through and deleting a lot of code that wasn’t useful or was (fundamentally) superstitious when I discovered that there was no way to trigger a potential case.
  • Because I’ve spent a few years mostly working on relatively high level projects and not having the bandwidth to work on lower (mid?) level infrastructural code, in practice I’ve spent more time assessing other people’s libraries, and I was keenly aware that I was developing code that would only be used if someone chose to use it, which got me thinking about how those decisions are made.
  • This isn’t new, but I care a lot about the ergonomics of software. I think 5 or 6 years ago, ergonomics meant “code which was easy to use and prevented users from doing wrong things,” and I think my view of ergonomic code has expanded to include interfaces that are easy and obvious to use, and that encourage the users of my code to write better code.

Crafts at Scale

I’ve been knitting off and on since 2002 or 2003 (or so) but have been particularly “on” in the last couple of years. When I started working as a computer programmer (without formal training as such,) I quipped that I learned how to program from hand knitting. This is a simplification--of course--but it’s not that incorrect. Knitting is a system with some basic fundamentals (stitches, yarn, needles), a lot of variables (gauge, tension), repeated procedures, and a hell of a lot of prior art. This is a lot like programming.

Spinning, too, has many of the same properties, but similarities aside they feel like different kinds of crafts: where knitting feels like you’re applying a set of understood procedures to produce something that’s unique, spinning is often about figuring out how to apply the same procedures in a way that produces a consistent result. This makes sense: you want to produce a quantity of yarn that’s on average similar enough that when you knit (or I suppose weave,) you have good consistent results. In many ways, spinning leads naturally to an idea of “production” or “scale” as an aspect of craft.

Just to be clear, these kinds of crafts should be fun and rewarding on their own merits. If you want to spin and are excited and happy to make and have yarn with variable thickness, or where every skein is unique, then do that. For me, particularly now, I find the problem of figuring out how to be consistent while spinning a couple of pounds of wool over the course of a few weeks to be really exciting and entrancing.

The kind of knitting that I’ve been doing recently has had some of these production/scale aspects as well: knitting with very similar white yarns removes color and minimizes texture as a variable. While I’ve been knitting roughly the same sock at production scale, the sweaters I’ve been working on have some bespoke aspects, though the process is broadly similar. There’s something so compelling about being able to understand my craft and procedure so thoroughly that I can make things that aren’t wonky, with confidence.

Programming is also very much like this for me these days. I spent years as a programmer trying to figure out how code worked, and how basic fundamental systems and protocols worked (e.g. webservers, Linux, databases,) and now I know how to build most things, or feel confident in my ability to figure out how to build a new thing when needed. The exciting part of software engineering now is more about making the software work at large scale, the processes that allow increasingly large teams of engineers to work together effectively, and being able to figure out the right solution for the problem users have.


I’m currently somewhere on the 7th 100 gram skein of approximately worsted weight, 3-ply merino yarn. My consistency isn’t quite where I want it, but if you look at all of the skeins they seem roughly related, so I think I’ll be able to make a sweater easily from it. I have two more skeins after this one. My plan from here is to alternate spinning batches of white yarn with spinning batches of not-white/natural colored wool for variety. Probably mostly 3 ply for now, though I may give 2 ply a go for one of them.

I’m knitting a white seamless style sweater, using Elizabeth Zimmerman’s method for bottom up sweaters. I’ve changed many of the numbers and some of the proportions, but nothing particularly fundamental about the process. I’ve knit 3 sweaters back to back with this same process, though this is the first with this specific yarn. I do have enough of this yarn to knit 3 or 4 sweaters, which I find both daunting and exciting, taken as a whole. With the sleeves done, I’m about halfway to the underarms on the body. I want to try knitting a saddle shouldered garment for this one.

Why Spinning?

A few years ago, I sent my spinning wheel away because I was living in a very small apartment with two very attentive cats. While I’ve been living in an apartment with more room (and doors!) for a few years now, only this week has my wheel returned: I realized that I missed spinning, and it’s not like soothing hobbies are unwelcome these days.

I started spinning about 15 years ago, and did it a bunch for a few years and then more or less stopped for a long time. It’s been interesting to start up again, and discover that my hands/body more or less remembered exactly how to do it. I had a few hours and about 200g of yarn to spin before some of the finer points came back and now I’ve spun a couple more skeins closer to my intention.

The other human asked “What do you like about spinning?”--well the question was phrased more like “is handspun yarn better?”, but I will paraphrase to better capture intent. There are, of course, a few answers:

  • the act of spinning is quite satisfying. Sometimes it’s enough for things to be fun and satisfying even if they aren’t productive.
  • the yarn can be sort of nifty, and although I’ve spun a lot of yarn, I have mostly not knit much with handspun yarn. I tend to like consistent and fine (fingering) yarns in my own knitting, and machines just do better at making this kind of yarn, so I end up giving a lot of handspun away to friends who I know will knit it better.
  • spinning gives you a lot of control over the wool (and kind of sheep) that go into the yarn you get, in a way that just doesn’t scale up to larger production schemes. I quite enjoy being able to first select what kind of sheep the wool I use comes from and then decide what kind of yarn I want from it. When other people spin, you can usually only pick one of these variables.

I’m currently spinning some white merino roving that I’ve (apparently) had for years. There’s a piece of paper in the bag that says “2 lbs” but between my practice skeins and whatever I did before I stopped, there’s probably only about a pound and a half left: this is fine. Merino is great, but it’s quite common and I knit a lot of merino. I’ve been working on getting a pretty stable 3-ply worsted weight yarn, and I’m roughly there. I like 3-ply because of the round construction, and worsted weight is about the heaviest yarn I’m really interested in knitting with or using (and it’s easy to design with/for!)

My next few spinning projects are with wool from different breeds of sheep (BFL! Targhee! Rambouillet!) though mostly undyed (and largely white), and mostly in larger batches (a pound or two.) I’ve never really gotten into hand-dyed roving, and mostly really enjoy spinning undyed wool: in most cases dyeing the finished garment or the yarn before knitting leads to the best result anyway. The thing I like about spinning, in a lot of ways, is that it lets me focus on the wool and the sheep.

As a spinner, I’m far more interested in the wool and the sheep, in much the same way that as a knitter I’ve become far more interested in the structure of what I’m knitting than the color or the yarn. This feels entirely consistent to me: as a spinner I’m far more interested in the process and the wool than I am in the yarn, and as a knitter I’m far more interested in using the yarn to explore structure. Somehow, the yarn itself isn’t the thing that compels me, despite being kind of at the center of the process.

Anyway, back to the wheel!

The Most Plain Knitting

Last night I finished knitting a sweater that I’d been working on for either a while (pictures in this twitter thread) or not all that long, and promptly started the next sweater.

Also last weekend I handed off a bag of undyed (white) knitting to a friend of mine who is way more excited about dyeing than I am. This includes 13 or 14 pairs of socks (in a few different batches,) and a sweater that I knit. We also found someone who the sweater is more likely to fit than me, and I always quite like finding homes for wayward sweaters.

I have a couple of long flights for work trips coming up so I wanted to make sure that I wasn’t bringing a sweater that I was two-thirds of the way through and would likely finish. The next sweater is the 4th I’ve made from this yarn, and the 3rd plain sweater. I’ve made two plain raglans, and this last one was a crew neck.

By now I have a reasonable set of numbers/patterns for a “fingering weight sweater that basically fits an adult medium/small” that I’ve been honing, and enough yarn stashed to make about 9 of these sweaters. That should get me through the winter.

The crew neck is a touch lower than I think it needed to be, but it looks pretty smart. The thing about knitting Elizabeth Zimmerman-style seamless sweaters is that for the entire time you’re knitting the yoke section it really does not seem like it’s going to work out, so you low-key panic the entire time, and then somehow, magically, it all does. The key to success is to not overthink things too much and not fuck around.

I think this last sweater had a bit too much fucking around with the neckline, so it looks a bit weird (to my somewhat exacting tastes) where the raglan decreases interact with the neck shaping. The front of the neck could have been higher, and I think I could have done like 3-4 sets of short rows near the end to get the right effect for the front. Perhaps one of the next few sweaters can be another attempt at a raglan.

My plan for the next/current sweater is to do set-in sleeves with a crew neck. I have the math all worked out, so that seems like it might be fun. I’ve also never done EZ’s saddle shoulder (or hybrid) yoke, so that seems like some fun winter knitting. Regardless, saddles and set-in sleeves are mostly constructed the same way, so I can wait quite a while to make a decision. After about 18 months of mostly knitting socks (and having gotten ~30 pairs done,) a (minor) change seems good.