Against Testify

For a long time I've used this Go library, testify, and mostly it's been pretty great: it provides a bunch of tools that you'd expect in a testing library, in the grand tradition of jUnit/xUnit/etc., and it managed to come out on top in a field of similar libraries a few years ago. It was (and is, but particularly then) easy to look at the testing package and say "wouldn't it be nice if there were a bit more higher-level functionality," but I've recently come around to the idea that maybe it's not worth it. [1] This is a post to collect and expand upon that thought, and also to explain why I'm going through some older projects to cut out the dependency.

First, and most importantly, I should say that testify isn't that bad, and there's definitely a common way to use the library that's totally reasonable. My complaints are basically:

  • The "suite" functionality for managing fixtures is a bit confusing: first, it's really easy to get the capitalization of the Setup/Teardown (TearDown?) functions wrong and have part of your fixture not run, and second, suites are different enough from "plain tests" to be a bit confusing. Frankly, writing test cases out by hand and using Go's subtest functionality is clearer anyway (see the sketch below).
  • I've never used testify's mocking functionality, in part because I don't tend to do much mock-based testing (which I see as a good thing,) and for the cases where I do want mocks, I tend to prefer either hand-written mocks or something like mockery.
  • While I know "require" means "halt on failure" and "assert" means "continue on error," and it makes sense to me now, "assert" in most [2] other languages means "halt on failure," so this is a bit confusing. Also, while there are cases where you do want continue-on-error semantics for test assertions, (I suppose,) it doesn't come up that often.
  • There are a few warts with the assertions (including requires,) most notably that you can create an "assertion object" that wraps a *testing.T, which is really an anti-pattern and can cause assertion failures to be reported at the wrong level.
  • A few testify assertions have wonky argument structure, most notably that Equal wants arguments in expected, actual order while Len wants arguments in object, expected order. I have to look that up every time (see the example below).
  • I despise the failure reporting format. I typically run tests in my text editor and then jump to the failure point when a test fails, and testify's failure messages aren't well formed in the way that basically every other tool's are (including the standard library's!) [3] such that it's fussy to find a failure when it happens.
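
Two of these complaints are easy to make concrete. For the suites: plain subtests with t.Run give you shared fixtures without any of the Setup/TearDown machinery. A minimal, self-contained sketch:

func TestStore(t *testing.T) {
	store := map[string]string{} // shared fixture, no suite machinery

	t.Run("Insert", func(t *testing.T) {
		store["key"] = "value"
	})
	t.Run("Query", func(t *testing.T) {
		if _, ok := store["key"]; !ok {
			t.Fatal("expected key to be present after Insert")
		}
	})
}

And for the argument order, a contrived pair of assertions (items here is a hypothetical three-element slice):

require.Equal(t, "a", items[0]) // Equal: expected first, then actual
require.Len(t, items, 3)        // Len: object first, then expected length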

The alternative is just to check the errors manually and use t.Fatal and t.Fatalf to halt test execution (and t.Error and t.Errorf for the continue on error case.) So we get code that looks like this:

// with testify:
require.NoError(t, err)

// otherwise:
if err != nil {
	t.Fatal(err)
}

In addition to giving us better reporting, the second case looks more like the code you'd write outside of tests, which gives you a chance to use the production API: that can help you detect any awkwardness, and also serves as a kind of documentation. Additionally, if you're not lazy, the failure messages that you pass to Fatal can be quite useful in explaining what's gone wrong.
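
For example, here's a sketch of both failure modes using only the standard library (os and testing); the fixture path is hypothetical:

func TestFixture(t *testing.T) {
	f, err := os.Open("testdata/fixture.json")
	if err != nil {
		// nothing else in the test can work without the fixture: halt.
		t.Fatalf("opening fixture: %v", err)
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		t.Fatalf("stating fixture: %v", err)
	}
	if info.Size() == 0 {
		// worth reporting, but later checks may still be informative: continue.
		t.Errorf("fixture file %q is empty", f.Name())
	}
}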

Testify is fine, and it's not worth rewriting existing tests to exclude the dependency (except maybe in small libraries,) but for new code, give the standard library a shot!

[1] I must also confess that my coworker played some role in this conversion.
[2] I'd guess all, but I haven't done a survey.
[3] Ok, the stdlib failures have this problem too: failures are attributed to the bare filename (no path), which doesn't work great when you have a lot of packages with similarly named files and you're running tests from the root of the project.

emt -- Golang Error Tools

I write a lot of Go code, increasingly so, to the point that I don't really write much code in other languages. This is generally fine for me, and it means that most of the quirks of the language have just become sort of normal to me. There are still a few things that I find irritating, though, and I stumbled across some code at work a few weeks ago that was awkwardly aggregating errors from a collection of goroutines, and decided to package up some code that I think solves this pretty well. This is an introduction to, and a story about, this code.

But first, let me back up a bit.

The way that Go models concurrency is very simple: you start goroutines, but you have to explicitly manage their lifecycle and output. If you want to get errors out of a goroutine you have to collect them somehow, and there's no standard library code that does this, so there are a million bespoke solutions. Every Go programmer has written, or will eventually write, a channel or some kind of error aggregator to collect errors from a goroutine, and it's a bit dodgy, because you have to stop thinking about whatever thing you're working on to write some thread-safe, non-deadlocking aggregation code, which inevitably means even more goroutines and channels and mutexes or some such.

Years ago, I wrote this type that I called a "catcher" that was really just a slice of errors and a mutex, wrapped up with Add(error) and Resolve() error methods, and a few other convenience methods. You'd pass or access the catcher from different goroutines and never really have to think much about it. You get "continue-on-error" semantics for thread pools, which is generally useful, and you never accidentally deadlock on a channel of errors that you fumbled in some way. This type worked its way into the logging package that I wrote for my previous team and got (and presumably still gets) heavy use.
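
A minimal sketch of the idea, using only the standard library (this is an illustration of the shape described above, not the type's actual implementation):

// Catcher collects errors from multiple goroutines behind a mutex.
type Catcher struct {
	mu   sync.Mutex
	errs []error
}

// Add records an error; nil errors are ignored, so callers don't
// have to check first.
func (c *Catcher) Add(err error) {
	if err == nil {
		return
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	c.errs = append(c.errs, err)
}

// Resolve returns everything collected as a single error, or nil if
// nothing was added. (errors.Join needs Go 1.20 or newer.)
func (c *Catcher) Resolve() error {
	c.mu.Lock()
	defer c.mu.Unlock()
	return errors.Join(c.errs...)
}

You share one catcher (plus a sync.WaitGroup) across a pool of goroutines, Add in each worker, and Resolve once everything has joined: continue-on-error semantics with no channels to fumble.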

We added more functionality over time: different output formats, support for error annotation, and the ability to have a catcher tag incoming errors with a timestamp for long-running applications. The ergonomics are pretty good, and it helped the team spend more time implementing core features and thinking about the core problems of the product's domain, and less time thinking about managing errors in goroutines.

When I left my last team, I thought that it'd be good to take a step back from the platform and tools that I'd been working on and with for the past several years, but when I saw some code a while back that implemented its own error handling yet again, something clicked, and I wanted just this thing.

So I dug out the old type, put it in a new package, dusted off a few cobwebs, improved the test coverage, gave it a cool name, and reworked a few parts to avoid forcing downstream users to pick up unnecessary dependencies. It was a fun project, and I hope you all find it useful!

Check out emt! Tell me what you think!

Rescoping the Engineering Interview

It's not super controversial to assert that the software engineering interview process is broken, but I think it's worthwhile to do it anyway. The software engineering interview is broken. There are lots of reasons for this:

  • interview processes are so overoptimized for rejecting candidates who aren't good that they often reject candidates who are. This isn't a problem if it happens occasionally, but it's really routine.
  • it's difficult to design an interview process that works consistently well across different levels and different kinds of roles, and companies/teams can easily get into a place where they really can only hire one type or level of engineer.
  • while many engineering teams know that the hiring process is biased, most of the attempts to mitigate this focus on the bias of the interviewer, by making interview processes more consistent across candidates or easier to score objectively, while ignoring the ways that the process can be biased toward certain kinds of candidates.

I've been part of lots of conversations over the years about "making interviews better," and many of the improvements to the process that come out of these conversations don't do much, and sometimes exacerbate the biases and inefficiencies of the process. I also think that the move toward remote work (and remote interviewing,) has presented an under-realized opportunity to revisit some of these questions and hopefully come up with better ways of interviewing and building teams.

To unwind a bit, the goals of an interview process should be:

  • have a conversation with a candidate to ensure that you can communicate well with them (and them with you!) and can imagine that they'll fit into your team or desired role.
  • identify skills and interests, based on practical exercises or review of their past work (e.g. portfolio or open source work,) that would complement your team's needs. Sometimes this takes the form of "figure out if the person can actually write code," but there are lots of ways to demonstrate and assess skills.
  • learn about the candidate's interests and past projects to figure out if there's alignment between the candidate's career trajectory and the team you're building.

Most processes focus on the skills aspect and don't focus enough on the others. Additionally, there are a bunch of common skills assessments that lots of companies use (and copy from each other!) and most of them are actually really bad. For example:

  • live coding exercises often have to be really contrived in order to fit within an hour-long interview, and tend to favor algorithms problems that folks have memorized because they recently took a class or crammed for interviews. As engineers we almost never write code like this, and the right answer to most of these problems is "use a library function," so while live coding is great for getting the opportunity to watch a candidate think/work on a problem, success or failure isn't necessarily indicative of capability or fit.
  • take-home coding problems provide a good alternative to live coding exercises, but can be a big imposition time-wise, particularly on people who have jobs while interviewing. Often take-home exercises also require people to focus more on buildsystems and project-level polish than on the kind of coding that they're likely to do more of. The impulse with take-home problems is to make them "bigger," and while these problems can be a little "bigger" than an hour, a lot of what you end up looking at is finishing touches, so keeping them shorter is a good plan.
  • portfolio-style reviews (e.g. of open source contributions or public projects,) can be great in many situations, particularly when paired with some kind of session where the candidate can provide context, but lots of great programmers don't have these kinds of portfolios because they don't program for fun (which is totally fine!) or because their previous jobs don't have much open source code. It can also be difficult to assess a candidate in situations where these work samples are old, or are in codebases with awkward conventions or requirements.

There isn't one solution to this, but:

  • your goal is to give candidates the opportunity to demonstrate their competencies and impress you. Have an interview menu [1] rather than an interview process, and let candidates select the kind of problem that they think will be best for them. This is particularly true for more senior candidates, but I think it works across the experience spectrum.

  • if you want to do a programming or technical problem in real time, there are lots of different kinds of great exercises, so avoid having yet another candidate implement bubble sort, graph search, or linked-list reversal. Things like:

    • find a class (or collection of types/functions) in your codebase that you can share, and have the candidate read it, try to understand/explain how it works, and then offer up suggestions for how to improve it in some way. I find this works best with 100-200 lines of code, and as long as you can explain idioms and syntax to them, it doesn't matter if they know the language. Reading code you don't know is a big part of the job anyway.
    • provide the candidate with a function that doesn't have side effects, but is of moderate length and have them write tests for all the edge cases. It's ok if the function has a bug that can be uncovered in the course of writing tests, but this isn't particularly important.
    • provide the candidate with a set of stubs and a complete test suite, and have them implement the interface that matches the test cases (see the sketch after this list). This works well for problems where the class in question should implement a fairly pedestrian kind of functionality like "a hash map with versioned values for keys" or "a collection/cache that expires items on an LRU basis."
    • have the candidate do a code review of a meaningful change. This is an opportunity to see what it's like to work with them, to give them a view into your process (and code!), and most importantly ask questions, which can provide a lot of insight into their mindset and method.

    I think that the menu approach also works well here: different people have different skills and different ways of framing them, and there's no real harm in giving people a choice here.

  • independent/take-home/asynchronous exercises tend to be better (particularly for more senior candidates,) as they more accurately mirror the way that we, as programmers, work. At the same time, it's really easy to give people problems that are too big or too complex or that just take too long to solve well. You can almost always get the same kind of signal from smaller problems anyway. I also believe that offering candidates some kind of honorarium for interview exercises is generally a good practice.

  • one of the big goals of the interview process is to introduce a candidate to the team and give them a sense for who's on the team and how they operate, which I think has given rise to increasingly long interview sequences. Consider pairing up interviewers for some or all of your interview panel to give candidates greater exposure to the team without taking a huge amount of their time. This is also a great way to help your team build skills at interviewing.

  • assessing candidates should balance the candidate's skills and alignment with the team against the team's needs and capacity for ramping new members. Particularly for organizations that place candidates on teams late in the process, it's easy to effectively have two processes (which just takes a while,) and end up with "good" candidates who are haphazardly allocated to teams that aren't a good fit.
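
To make the stubs-plus-test-suite exercise from the list above concrete, here's a sketch in Go; all the names are invented for illustration. The candidate gets something like this stub, plus a suite of tests to make pass:

// Cache is a fixed-capacity cache that evicts the least recently
// used entry; the candidate fills in the fields and method bodies.
type Cache struct {
	capacity int
	// bookkeeping fields go here
}

func NewCache(capacity int) *Cache             { return &Cache{capacity: capacity} }
func (c *Cache) Get(key string) (string, bool) { panic("unimplemented") }
func (c *Cache) Put(key, value string)         { panic("unimplemented") }

func TestEviction(t *testing.T) {
	c := NewCache(2)
	c.Put("a", "1")
	c.Put("b", "2")
	c.Put("c", "3") // over capacity: "a" should be evicted
	if _, ok := c.Get("a"); ok {
		t.Error("expected least recently used key to be evicted")
	}
}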

These are hard problems, and I think it's important both to be open to different ways of interviewing and to reflect on the process over time. One of the great risks is that a team will develop an interview process and then keep using it even after the process has become less effective as interviewers and needs change. Have (quick) retrospectives about your interview process to help make sure that it stays fresh and effective.


I think this is a follow-up, in some ways, to my earlier post on Staff Engineering. If you liked this, check that out!

Doubled Hat Pattern

Last year I wrote a draft of a book about knitting that I'm now revising, and I'm also drafting something of a sequel to it. The book contains a discussion of some fundamental techniques but mostly describes the process for knitting a collection of projects, mostly sweaters, but a few other things as well. The chapters exist somewhere between an unconventional pattern and a long-form account of the design and construction process of several specific garments, though I hope there's a sort of companionable air about it, even if the details end up being mostly technical.

In any case, this post is an attempt at the same form, more or less, but focused on a hat that I recently completed.


I'm going to be knitting a hat with a sort of unconventional empirical construction. There's not a lot of preparation work that you need: no gauge, no sizing information, no counting stitches (unless you want,) just knitting and figuring it out as you go along. The hat itself is a simple beanie-style knitted cap, with a "lining" for extra warmth and potentially comfort.


Cast on 16 stitches. Your gauge probably doesn't matter, within reason. I chose a fingering weight wool on the heavier side of fingering, and US size 0 needles. 16 stitches is about an inch and a half or two inches: from these stitches you'll knit a strip of fabric that will encircle your head, so better to keep it narrower than 3 or 4 inches at the outside. I cast on using the long tail method, and I made sure that there was a generous tail left over afterwards as I intended to take advantage of this tail.

Knit, in garter stitch, until the strip is long enough to fit around your head.

I, for my part, made the strip 21 inches or so around. My head is (unfortunately) 24 inches around, and I think if or when I do it again, I'd make it shorter: maybe 19 or 20 inches. You can figure out the length empirically, by placing the knitting around your head and seeing what fits. It's okay to stretch the band a bit for a closer fit, but because there's going to be another layer of knitting on the inside of the hat, it's even expected that the hat will be a little bit big at this point.

When the strip is long enough, bind off, but do not break the yarn. The tail from the cast on should end up on the same side of the work as the working yarn from the bind off.

With the same working yarn that you just bound off with, pick up stitches, knitwise, along the side of the strip, creating one stitch in every garter "ridge." When you get all the way around the strip, join and knit in the round. Knit about an inch plain, and then begin shaping the crown.

I do this weird crown shaping that I wrote about here 15 years ago (!!) that I adapted from the toe shaping of a sock. I think it works better for hats than socks, and it's great when you don't want to figure out how to evenly divide into 4 or 5 "spokes" and have a spiral decrease. Conveniently, it also structures the decreases so that you switch to double points relatively late in the process. It goes something like this: repeat "knit eight stitches, decrease once (e.g. knit two together)" all the way around a single round, and then knit 8 rounds plain. Then replace 8 with 7: knit 7 stitches and decrease, repeating around, then knit 7 rounds plain. Continue on in this manner, moving the decreases closer together in the decrease rounds, and moving the decrease rounds closer together. Eventually, all your stitches will be decreases, and you can just alternate decrease and plain rounds until you have 8 stitches or so, and then graft the remaining stitches together. I definitely always have the feeling of totally winging the ending: worry not.

Once I have taken care of the crown stitches, I break the yarn and weave in this end. Turning my attention back to the long tail, I sew up the cast-on and bind-off ends of the original strip, which leaves the tail ready at the lower edge of the hat. With this yarn I fuse in the remaining working yarn using a felted or sewn join, and pick up stitches along the remaining garter edge, again at a rate of one stitch for every garter ridge, all the way around.

Knit about an inch here, until the hat is your desired length: I like to have 4 or 5 inches between the lower edge of the hat and the start of the crown shaping, but this is a point of personal preference. When the hat is the proper length, purl the next row to provide a turning round, and then stop. It's important at this point to make some decisions about the lining of the hat:

  • if you plan to knit the interior hat with the same color and yarn as the exterior hat, purl a second row and continue.
  • if you want to switch colors, knit the next row with the new color, and then purl the following row in the new color, before continuing.
  • if you want to switch yarns to a different weight, be careful, but proceed as if you were changing colors (even if you're not!) and do increases or decreases as required so that the interior hat is either the same or slightly smaller than the exterior hat.
  • if you aren't changing yarn, or are changing between two colors of the same yarn, then you could omit all purl rounds, and just knit plain.

For my part, I switched colors and to a different yarn type with a substantially finer gauge, and increased rather a lot at this point. I think I probably increased a bit too much, though the hat still works fine. I think I'd probably tend to keep things more simple in the future.

Finally, knit the interior hat plain until the distance between the lower edge (the purl round(s)) and the beginning of the shaping is the same as on the exterior hat, then repeat the shaping for the interior hat, and finish it off. Fold the inner hat into the outer hat and place on head.

Observations:

  • the hat will be quite warm, so knitting with finer yarn is probably better. Also, because the hat is so heavy, it's viable to knit a bit looser than you might if it were a single layer.
  • making sure that the inner hat's total length from the brim to the crown is the same as the interior measurement of the outer hat can be a bit tricky, but getting it right avoids flaring in either direction. Avoiding the purl/turning round entirely gives you a bit of wiggle room, if you like.
  • this is a weird hat: while it looks great on my head, it doesn't quite lie flat. While I could have re-knit the crown to have a less aggressive decrease sequence (e.g. start with k9 k2tog, etc.) over more rows, I kind of like the flatter top look. Hats are super forgiving, and they don't really need to lie flat anyway because heads are three dimensional.

Current Work

I know it seems like I write a lot about knitting, and while it is the case that knitting covers a lot of the "stuff I do," it's certainly not the only thing I'm doing, and I thought it'd be fun to quickly review a bunch of things:

  • As of this week, I've been working at Interchain GmbH on Tendermint Core, which is a consensus engine for state machine replication. After spending a huge part of my career on projects that were either "enterprise technology" (e.g. writing documentation for database engines), or technical operations (e.g. systems administration), or mostly internal facing (e.g. developer tools,) it's been really interesting to work on something that is definitely core product engineering, with a great team. My work has mostly focused on what I think of as "platform concerns": service construction, networking, workload management, and test architecture. These are the things I really enjoy, so that's been great.
  • More recently I've begun rebalancing my time at work to spend some of it (intentionally) on what I think of as "engineering issues" rather than "software issues." I'm still writing basically the same amount of code as I ever did, but I'm also thinking about how to support teams as they grow and function. At basically every organization and team that I've worked in, the main constraining factor in shipping features has been coordination with other engineering projects, not really "how quickly can I write code" (I'm pretty fast, all told.) I've always thought that the challenge of how people coordinate their labor and organize their efforts in distributed (conceptually, temporally, geographically) environments is one of the cool/hard problems.
  • I've been cooking a lot more. I'm getting better at making simple dishes that I enjoy eating and that last for a few meals. I've been really getting into beans and lentils, and have been making a lot of white bean and sausage dishes, lentil dishes, pasta sauces, and roast veggies.
  • I'm knitting a lot! Most of the time I have 2 or 3 projects going: a sweater, a pair of socks that is easily portable for travel or times when I'm not at home, and more recently a series of plain white socks. I'm quite enjoying all of this. As a backdrop to knitting, I've been watching Poirot recently, which has been fun.
  • I have a couple of "writing about knitting" projects that I'm drafting and preparing. These are mostly book-length (though on the short side) projects; one needs more editing (which I'm hoping to hire someone to help with,) and one is roughly half way through a first draft. The idea is to provide a lot of technical depth about the craft of knitting--techniques, skills, and design--combined with discussions of projects (mostly from a process perspective,) with some personal reflections and anecdotes sprinkled in. It's been a fun exercise, both because writing about things you understand well is fun, and because (as weird as this sounds) it's been nice to sort of explore the boundary between technical writing and more creative writing.
  • I've been doing a bit more of the things that I think of as "general personal care/growth": reading more books just for fun, and because reading is good for inspiration generally; doing Duolingo every day (Russian, which I studied as a kid in school); and upgrading a bunch of my personal computing practices (new laptop, better remote editing environments, staying on top of my email, switching to tmux, etc.). I definitely go in cycles of paying greater and lesser attention to all of these sorts of things, but I think it's worthwhile to dedicate time and attention to them.

I want to find more ways of writing little things quickly. There's that old quip, "sorry for writing a 10 page letter, I didn't have enough time to write a one page letter." I also do most of my writing in the morning and tend not to do it on days when I'm working, but this seems like a tractable thing to reorganize, and a way to do more writing (and other projects!) throughout the week.

Emacs Stability

A while ago I packaged up my emacs configuration for the world to see/use, and I'm pretty proud of this thing: it works well out of the box, it's super minimal and speedy, and it has all of the features. I don't think it's the right solution for everyone, but I think there's a class of users for whom this configuration makes sense. I've definitely also benefited a lot from thinking about this "configuration" as a software project, at least in terms of keeping things organized and polished and reasonably well tested. It's a good exercise.

Historically, I've used my emacs configuration as a sort of "fun side project," and while I tried to avoid spending too much time tweaking various things, it did feel like the kind of thing that was valuable (given how much time I spend in a text editor,) without being too distracting, particularly early in the pandemic, or during periods over the summer when I was between jobs.

Then I put the configuration in a public repo, and I basically haven't made any meaningful changes since. One part of this is clearly that I put a lot of time into polishing things in the initial push to get it released, and there haven't been many bugs that have inspired any kind of major development effort. Another part is that the way I use an editor isn't really changing. I'm writing code and English and using a couple of applications (e.g. email and org-mode) within emacs, but I'm not really (often) adding new or different kinds of work, and while this isn't exciting from a blogging perspective, it is exciting from a "things just work" perspective.


I have referred to myself as a degenerate emacs user. I've sometimes said unrepentant, but I think it's basically the same. I've also realized that, given that I've basically been using emacs the same way since 2008 or so, I'm kind of an old timer, even if it doesn't much feel like that, and there are lots of folks with longer histories.

I used to care more about what tools other people used to edit text. Even a couple of years ago, I thought that having good initial configuration and better out-of-the-box experiences for emacs would lead to more people using emacs, which would be cool because they'd get to use a cool piece of software and we'd get more emacs users.

Increasingly, however, while I think emacs is great and people should use it, I'm less concerned: people should use what they want, and I think there will always be enough people here and there who want to use emacs, and that's good enough for me. I think having good out-of-the-box experiences is important, but it's not a one-size-fits-all kind of situation. I also think that VS Code is pretty great software, and I like a lot of the implications for remote editing, even if I'm not particularly interested in it for myself.


Enjoy the repo, and let me know if there's anything terrible about it. I've been getting back into blogging recently, and have started tweaking a few things about the ways I use computers/emacs, mostly in terms of exploring tmux (hah!) and also considering avoiding GUI emacs entirely. Stay tuned if you're interested!

The Emacs Daemon GTK Bug, A Parable

There's this relatively minor Emacs bug that I've been aware of for a long time--years. The basic drift is that on Linux systems, when Emacs is built with GTK and running as a daemon, if the X11 session terminates for any reason, the Emacs daemon terminates with it. Emacs daemons are great: you start Emacs once, and it keeps running independently of whatever windows you have open. You can leave files open in Emacs buffers and move between different projects with minimal context-switching costs.

First of all, emacs's daemon mode is weird. I can't think of another application that starts as a daemon (in the conventional UNIX double-forking manner,) and then has a client process spawn (potentially) GUI windows. If there are other applications that work this way, there aren't many.

Nevertheless, being able to restart the window manager without losing the current state of your Emacs session is one of the chief reasons to run Emacs in daemon mode, so this bug has always been a bit irksome. Also, since it's real, and for sure a thing, why has it taken so long to address? Let's dig a little bit deeper.


There are two GNOME bugs related to this.

What's happening isn't interesting or complicated: Emacs calls an API, which behaves differently than Emacs expects and needs, but not (particularly) differently than GNOME expects or needs. Which means GNOME has little incentive to fix the bug--if they even could without breaking other users of this API.

Emacs can't fix the problem on its own without writing a big hack around GNOME components, which wouldn't be particularly desirable or viable, and because this works fine with the other toolkit (and the crash is only possible in some situations,) it doesn't feel like an Emacs bug.

We have something of a stalemate. Each party thinks the other is at fault, no one is particularly incentivized to fix the problem in their own code, and there is a workaround, [1] albeit a kind of gnarly one.

This kind of issue feels, if not common, then incredibly easy for a project--even one like emacs--to stumble into, and quite easy to just never resolve. This kind of thing happens, in some form, very often, and boundaries between libraries make it even more likely.

On the positive side, it does seem like there's recent progress on the issue, so it probably won't be another 10 years before it gets fixed, but who knows.

[1] To avoid this problem, either don't use GUI emacs windows and just use the terminal (fairly common, and more tractable as terminal emulators have improved a bunch in the past few years,) or use the Lucid GUI toolkit, which doesn't depend on GTK at all. The Lucid build is ugly (as the widgets don't interact with GTK settings,) but it's lightweight and doesn't suffer from this bug.

Tips for Casting On a Sweater

Having recently started knitting a new sweater, I realized that there are a lot of little things that I do that are worth collecting in one place:

  • Do not tie a slip knot to start; simply twist the yarn around the needle as a basis for casting on the first stitch. This twist looks like a stitch but isn't: decrease it at the end of the row, together with the last stitch, to complete the join.

  • Always use the German Twisted long tail cast on variant, which makes things a bit more elastic and just looks great, particularly when knitting ribbing, which I often do at the beginning of a sweater.

  • Wrap the yarn around the needle once for each stitch that you need to cast on to estimate the length of the long tail. I find this overestimates a bit, but I've rarely regretted having too long a tail, rather than one that's too short. While you can start again from the beginning if you run out of tail, you can also splice in a second yarn.

  • If you do run out of yarn while casting on for the sweater, and you've been using the long tail as the finger yarn (the loops around the needle,) you can sometimes get a few extra stitches by swapping the two strands, since they're consumed at different rates.

  • Place markers periodically to make it easier to count: roughly every 20 stitches or so, and I try to make sure that one of the markers gets placed half way through the round. For example, to cast on 228 stitches, I placed 12 markers, one every 19 stitches, and the 6th marker was the "half way" point.

    I did one sweater where I put markers every 32 stitches and one where I put markers every 16, and found that I spent far more time casting on the one with fewer markers because I had to double-check my counts more. They really help.

  • Cast on to a needle that's a bit bigger than the size you intend to use. I've been quite happy using a US 2.5 to cast on for a US 0 sweater. I've been using interchangeable needles, and being able to swap the larger needle for the smaller one before beginning to knit has made the first row much easier. It's also an option to hold two needles together for the cast on.

    I also have to think about not pulling on the "thumb yarn" at all, as this will also make things tighter.

  • While it's good to be careful to avoid twisting the first row, if you do accidentally twist, undo the twist between the last and first stitch, which will hardly be noticeable.