Why Email Still Matters

There are so many sexy topics in computing and information technology these days. In light of all this potential excitement, I’m going to write about email. Which isn’t sexy or exciting.

This isn’t, we should be clear, to say that email doesn’t matter; email still matters a great deal. Rather, it’s to say that email is still a relevant and useful paradigm. What’s more, the email system (i.e. SMTP and associated tools) remains in many ways superior to all of the technologies and platforms that have attempted to replace it.

The Good

Email works. The servers (e.g. Postfix, Exim, Sendmail, but mostly Postfix) are stable, known, and very functional. While there are flaws in a lot of email clients, there are a lot of tools for processing and dealing with email, and that makes it possible for everyone to interact with their email on their own terms, in a variety of contexts that make sense to them. And email is such that we can all use it and interact with each other without requiring that we all participate in some restrictive platform or interface.

In short, email is open, decentralized, standard, lightweight, push-based, and multi-modal.

Compare this to the systems that threaten to replace email: Facebook and social networking utilities, Twitter, text messaging, real-time chat (e.g. IRC, IM, and Jabber). The advantages of email along these dimensions, which I think are crucial, are pretty clear.

The Bad

The problem, of course, with email is that it’s terribly difficult to keep current with one’s email. Part of this problem is spam, part of the problem is “bacon,” or legitimate (usually automated) email that doesn’t require attention or is difficult to process, and it’s undeniable that a big part of it is that most end user email clients are inefficient to use. And there’s the user error factor: most people aren’t very good at using email effectively.

It Gets Better

No, really, it does. But I don’t think we can wait for a new technology to swoop in and replace email. That’s not going to happen. While I’m not going to write a book on the subject, I think there are some simple things that most people can do to make email better:

1. Do use search tools to make the organization of email matter less. Why file things carefully, when you can quickly search all of your email to find exactly what you need?

2. Filter your email within an inch of its life. Drop everything you can bear to. Put email lists into their own mailboxes. Dump “work” or “client” email into its own folders. Successful filtering means that almost nothing gets to your “inbox.”

3. Use your inbox as a hotlist of things that need attention. Move email that needs responses to your inbox, and move anything that got through your filters to where it ought to be.

4. Use multiple email addresses that all redirect to a single email box. You only want to ever have to check one email system, but you probably want multiple people in multiple contexts to be able to reach you via email. This makes email filtering easier, and means that you just spend time working rather than time switching between email systems and wondering where messages are.

5. When writing emails, be brief and do your damnedest to give the people you’re writing with something concrete to respond to. Emails that expect responses but are hard to respond to are among the worst there are, because you have to say something when there’s nothing worth saying.

6. Avoid top posting (i.e. responding to an email with the quoted material from previous exchanges below your response). When appropriate, interleave your responses with the quoted message to increase clarity and context without needing to be overly verbose.

7. Email isn’t real time. If you need real time communication use some other medium. Don’t feel like you need to respond to everything immediately. Managing expectations around email is a key to success.

That addresses most of the human problem. The technological problem will be solved by addressing spam and by building simpler tools that are easier to use effectively and that support the best kinds of email behaviors.

Why Email will Improve

1. Email is great in the mobile context. It’s not dependent upon having a constant net connection, which is good when you depend on wireless.

2. Email is a given. Having email is part of being a digital citizen, and we mostly assume that everyone has an email address. The largest burden with most new technologies is often building sufficient market share to reach “critical mass,” rather than crossing some threshold of innovation.

3. Email is both push-based (and delivery times are pretty fast) and asynchronous. Though this doesn’t sound sexy, there aren’t very many other contemporary technologies that share these properties.

Onward and Upward!

Persistent SSH Tunnels with AutoSSH

Rather than authenticate to an SMTP server to send email, which is fraught with potential security issues and hassles, I use an SSH tunnel to the machine running my mail server. This is automatic, easy to configure both for the mail server and mail client, and incredibly secure. It’s good stuff.

The downside, if there is one, is that the tunnel has to be active to be able to send email messages, and SSH tunnels sometimes disconnect a bit too silently, particularly on unstable (wireless) connections. I (and others, I suspect) have had some success with integrating the tunnel connection with pre- and post-connection hooks, so that the network manager automatically creates a tunnel after connecting to the network, but this is a flawed solution that produces uneven results.

Recently I’ve discovered this program called “AutoSSH,” which creates an SSH tunnel and tests it regularly to ensure that the tunnel is functional. If it isn’t, AutoSSH recreates the tunnel. Great!

First, start off by getting a copy of the program. It’s not part of the OpenSSH package, so you’ll need to download it separately. It’s in every package management repository that I’ve tried to get it from. So installation will probably involve one of the following commands at your system’s command line:

apt-get install autossh
pacman -S autossh
yum install autossh
port install autossh

When that’s done, you’ll issue a command that resembles the following:

autossh -M 20000 -f -N -L 25:127.0.0.1:25 tychoish@foucault.cyborginstitute.net

Really, the important part here is the “autossh -M 20000” portion of the command. This tells autossh to use local port 20000 (and the port above it) as a monitoring channel that it uses to test that the tunnel is alive; any unused pair of local ports will do. The rest of the command (e.g. “-f -N -L 25:127.0.0.1:25 tychoish@foucault.cyborginstitute.net”) is just a typical call to the ssh program.

Things to remember:

  • If you need to create a tunnel on a local port numbered lower than 1024, you’ll need to run the autossh command as root.
  • SSH port forwarding only forwards traffic from a local port to a remote port, through an SSH connection. All traffic is transmitted over the wire on port 22. Unless you establish multiple tunnels, only traffic sent to the specific local port will be forwarded.
  • Perhaps it’s obvious, but there has to be some service listening on the specified remote end of the tunnel, or else the tunnel won’t do anything.
  • In a lot of ways, depending on your use case, autossh can obviate the need for much more complex VPN setups for a lot of deployments. Put an autossh command in an @reboot cronjob under an account with SSH keys configured, and just forget about it for encrypting things like database traffic and the like.
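As one sketch of that last idea (the hostname, account, and ports here are illustrative, not from my setup; AUTOSSH_GATETIME=0 tells autossh to keep retrying even if the very first connection attempt fails, which matters at boot when the network may not be up yet):

```
@reboot AUTOSSH_GATETIME=0 autossh -M 20000 -f -N -L 5432:127.0.0.1:5432 tunnels@db.example.com
```

With a line like this in the crontab of an account that has passwordless SSH keys, the tunnel to the (hypothetical) database host comes up on its own after every reboot.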

Onward and Upward!

On Romance

I don’t read romance literature.

It’s not my thing, which isn’t saying much: there’s a lot of literature that I don’t tend to consider “my thing,” for one reason or another. I don’t really read fantasy, or horror, and I’m even picky within science fiction. There are enough books out there and there is only so much time. At least that’s what I tell myself.

Nevertheless, Susan Groppi wrote a great post about coming out as a reader of romance that I found useful. I’m also reminded of comments that N. K. Jemisin made about the in-progress merging of the fantasy and romance genres (sorry if I’ve mis-cited this), and I’ve been thinking about how I view Romance fiction, and perhaps a bit more generally about genre fiction ghettos.

In general, I think Romance has merit, both because it’s entrancing and because I think fiction which captures people’s imaginations and interest is worthwhile and important not to dismiss because it’s commercial, or because the readership/writers are largely women. There are potential problems with romance, at least insofar as we typically envision it: strong hetero tendencies, an idealization of monogamy as a social practice and marriage as an institution, and the potential to accept a very conventional conceptualization of gender. I’m sure some romance literature has been able to engage and trouble these tropes productively, but I think it’s a potential concern.

Having said that, I’m not sure that Romance has a lot of future as a genre. This is to say that I think many of the elements of romance--female characters, and an engagement with sexuality and relationships--will increasingly merge into other genres. Romance as an independent genre will linger on, but I think the “cool stuff happening in the Romance field,” will probably eventually move out into corners of other genres: thriller, fantasy, maybe science fiction.

Actually, as I think about this, it’s probably backwards. I think it’s less that Romance doesn’t have a future, and more that the future of most popular literature lies in engaging with romance elements and other aspects of romance stories in the context of non-romance-specific styles. This kind of thing is happening, and I think it’ll probably continue to happen.

I wish I could speak with greater certainty about the reasons why romance literature enjoys such high readership, or about what elements of romance stories can be transplanted to other genres, but these are probably questions beyond the scope of this post. Thanks for reading!

Knitting Resumed

So I’ve started knitting again.

Shocking.

I’ve not been knitting very much in recent months, because I’ve had less time, because I’ve been focusing my energy on other projects: keeping my head above water, writing, dancing, singing, etc. It’s a shame too, because knitting is a great deal of fun, it’s pretty rewarding, and it’s something that I’m incredibly good at.

I suppose at one point there were a lot of knitters who read this blog, but I suspect many of them don’t so much any more. Anyway, I hope this post won’t alienate everyone who reads this.

First up, for some project review:

  • I have a cabled sweater in progress; I’m working on the first sleeve at the moment. I’m afraid I’m going to run out of yarn (but I have a plan!) and frankly cables have never really been “my thing,” but this is the only thing that remains from what I was working on when I left the Midwest, so there’s nostalgia, and I do want to finish it. The biggest problem, I think, is that unless I move a lot farther north, I’m not going to live in a place where I can really wear something like this.
  • I have a sweater in jumper-weight Shetland (i.e. fingering weight) wool that I’m knitting for a friend. I have the collar and the sleeves to do, but it’s very plain and a very straightforward knit. I just need to do it. This is probably the next thing on my list. It’s been on hiatus since May.
  • I’m making socks. As I write this, I have just completed a pair of socks that I started in May. They’re simple and plain (which is how my socks tend to be), using Dyebolical Yarn. I also got to use a set of Blackthorn Needles that my mother got me for my birthday. Both are quite wonderful. I think I’ve discovered how much knitting can be done on a commute, and I do expect to do a lot more commute knitting, but I need to find a way to balance knitting with writing and reading on the train. Perhaps some sort of morning/night split.

Aside from finishing the socks and immediately casting on another pair, I’ve been doing a lot of “yarn stash” reorganization and trimming. This last week or so I’ve gotten inspired to reevaluate all of the stuff that I have, to see what I really need in my life and what I’m just keeping because it’s there. I’ve been through my clothing, the book collection, and the yarn.

Although I’ve done “stash culls” before, I felt like my collection of yarn had a lot of stuff in it that I got without any intention of a project, or for any reason other than “I might like to make something with it some day.” I’ve never really been a knitter when I’ve had a real budget for hobbies and entertainment, nor have I ever knitted at such a moderate pace. So I made the decision not to keep yarn around just for insulation, and to just get the yarn that I really want to knit with rather than what I feel like I ought to knit with because it’s in the bin. It’s been quite liberating.

I’m also, as I sit knitting, thinking about the overlap between what I do professionally (documenting technical solutions and systems administration practices) and pattern writing for knitters.

There’s a lot of overlap in how I write and think about both, enough to inspire me to think about doing more knitting-related writing. As if I didn’t have enough projects already.

In any case, I don’t know that I’ll blog regularly about knitting as I continue to knit more, but it might come up from time to time. You have been warned.

Onward and Upward!

Phone Torched

I mentioned in a recent update post that I had recently gotten a new cell phone, which, given who I am and how I interact with technology, means that I’ve been thinking about things like the shifting role of cell phones in the world, the way we actually use mobile technology, the ways that the technology has failed to live up to our expectations, and of course some thoughts on the current state of the “smart-phone” market. Of course.


I think even two years ago quasi-general purpose mobile computers (e.g. smart phones) were not nearly as ubiquitous as they are today. The rising tide of the iPhone has, I think without a doubt, raised the boat of general smart phone adoption. Which is to say that the technology reached a point where these kinds of devices--computers--are of enough use to most people that widespread adoption makes sense. We’ve reached a tipping point, and the iPhone was there at the right moment and has become the primary exemplar of this moment.

That’s probably neither here nor there.

With more and more people connected in an independent and mobile way to cyberspace, via either simple phones (which more clearly match Gibson’s original intentions for the term) or via smart phones, I think we might begin to think about the cultural impact of having so many people so connected. Cellphone numbers become not just convenient, but in many ways complete markers of identity and personhood. Texting in most situations overtakes phone calls as the main way people interact with each other in cyberspace, so even where phone calls may be irrelevant, SMS has become the unified instant messaging platform.

As you start to add things like data to the equation, I think the potential impact is huge. I spent a couple weeks with my primary personal Internet connection active through my phone, and while it wasn’t ideal, the truth is that it didn’t fail too much. SSH on Blackberries isn’t ideal, particularly if you need a lot from your console sessions, but it’s passable. That jump from “I really can’t cut this on my phone” to “almost passable” is probably the biggest jump of all. The successive jumps over the next few years will be easier.

Lest you think I’m all sunshine and optimism, I think there are some definite shortcomings with contemporary cell phone technology. In brief:

  • There are things I’d like to be able to do with my phone that I really can’t do effectively, notably seamlessly syncing files and notes between my phone and my desktop computer/server. There aren’t even really passable note-taking applications.
  • There is a class of really fundamental computer functionality that could theoretically work on the phone, but doesn’t, because the software doesn’t exist or is of particularly poor quality. I’m thinking of SSH and of note-taking, but also of things like non-Gmail Jabber/XMPP functionality.
  • Some functionality which really ought to be more mature than it is (e.g. music playing) is still really awkward on phones, and better suited to dedicated devices (e.g. iPods) or to regular computers.

The central feature in all of these complaints is software: they are issues of software design, and of the ability to really design for this kind of form factor. There are some limitations: undesirable input methods, small displays, limited bandwidth, unreliable connectivity, and so forth. And while some of these may improve (e.g. connectivity, display size), it is also true that we need to get better at designing applications and useful functionality in this context.

My answer to the problem of designing applications for the mobile context will seem familiar if you know me.

I’d argue that we need applications that are less dependent upon a connection and have a greater ability to cache content locally. I think the Kindle is a great example of this kind of design. The Kindle is very much dependent upon having a data connection, but if the device falls offline for a few moments, in most cases no functionality is lost. Sure, you can do really awesome things if you assume that everyone has a really fat pipe going to their phone, but that’s not realistic, and the less you depend on a connection the better the user experience is.

Secondly, give users as much control as possible over the display, rendering, and interaction model that their software/data uses. This, if implemented very consistently (difficult, admittedly), means that users can have global control over their experience, and users won’t be confused by different interaction models between applications.

Although the future is already here, I think it’s also fair to say that it’ll be really quite interesting to see what happens next. I’d like a chance to think a bit about the place of open source on mobile devices, and also about the interaction between the kind of software that we see on mobile devices and what’s happening in the so-called “cloud computing” world. In the meantime…

Onward and Upward!

Running Multiple Emacs Daemons on a Single System

Surely I’m not the only person who’s wanted to run multiple distinct instances of the emacs daemon at once. Here’s the use case:

I use one laptop, but I work on a number of very distinct projects many of which involve having a number of different buffers open, most of which don’t overlap with each other at all. This wouldn’t be a huge problem except that I’ve easily gotten up to two hundred buffers open at once. It can get a bit confusing. Particularly since I never really need to touch my work related stuff when I’m writing blags, and my blogging and website design buffers never intersect with fiction writing.

If I weren’t using emacs in daemon mode (that is, invoked with the “emacs --daemon” command) I’d just open separate instances of emacs. The problem with that is, when X11 crashes (as it is so wont to do) the emacs instances crash too and that’s no good. Under normal conditions if you start emacs as a daemon, you can only run one at a time, because it grabs a socket and the emacsclient program isn’t smart enough to be able to decide which instance of emacs you want. So it’s a big ball of failure.

Except I figured out a way to make this work.

In your .emacs file, at the very beginning, put the following line:

(setq server-use-tcp t)

In the default configuration, the emacs daemon listens on a UNIX socket. However, with the above option set, emacs can also listen for connections over TCP. I’ve not yet figured out how to create the required SSH tunnel to make this particularly cool, but it makes this use case possible.

Now, when you start emacs, use commands in the following form:

emacs --daemon=tychoish
emacs --daemon=work

Each server process creates a state file in the “~/.emacs.d/server/” folder. If you keep your emacs configuration under version control, you may want to consider explicitly ignoring this folder to avoid confusion.

To open an emacs client (i.e. an emacs frame attached to the emacs daemon), use commands in the following form:

emacsclient --server-file=tychoish -c -n
emacsclient --server-file=work -c -n

You may append a file name to open a specific client with one of these emacsclient invocations, or use any of the other emacsclient options. Although these commands are long, I have integrated them into my default zsh configuration as aliases, and as key shortcuts in my window manager. So opening a frame on a specific emacs instance isn’t particularly difficult.
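For reference, the aliases look something like this (the alias names here are just illustrative):

```shell
# ~/.zshrc -- open a new frame on a specific named emacs daemon
alias e-personal='emacsclient --server-file=tychoish -c -n'
alias e-work='emacsclient --server-file=work -c -n'
```

With these defined, “e-work somefile.txt” opens a frame (and a file) on the “work” daemon without any typing-intensive flags.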

And that’s it. It just works. I have the following two lines in my user’s crontab file:

@reboot    emacs --daemon=tychoish
@reboot    emacs --daemon=work

These lines ensure that the usual (and expected) named emacs daemons are started following reboot. More generally, the @reboot cronjob is great for making the “my computer just rebooted, and now I have to fuss over it for ten minutes before I can work” problem seem much less daunting.

In conclusion, I’d like to present one piece of unsolicited advice, and ask a question of the assembled.

  • Advice: Even though it’s possible to create a large number of emacs instances, and on modern systems the required RAM is pretty low, avoid this temptation. The more emacs instances you have to juggle the greater the chance that you’ll forget what buffers are open in what instance. Undesirable.
  • Question: Is there a way to get the value of server-name into emacs lisp so that I can test things against it? Haven’t figured this one out yet, but it seems like it would be nice for conditionally loading buffers and things like the org-mode agenda. Any ideas?
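The kind of thing I have in mind, assuming server.el does expose the daemon’s name as the variable server-name (an untested sketch; the file path is a placeholder):

```elisp
;; In an init file: load the org-mode agenda setup only in the
;; daemon that was started with "emacs --daemon=work".
(require 'server)
(when (and (boundp 'server-name)
           (string= server-name "work"))
  (setq org-agenda-files '("~/work/agenda.org")))
```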

Onward and Upward!

Sing the Shapes

I wrote a bit about sacred harp singing for a few months about a year ago, when I was really starting to get into it, and then I mostly stopped. I’ve had a few singing-related experiences recently that I think are worth recounting, even if they’re a bit disjointed. So I’ll just hop in and hope that it adds up to something in the end. Also, if you’re not familiar with Sacred Harp singing, I’m sorry if there isn’t a lot of subtitling. Thanks for reading!


I was hanging out with R.F. and we were flipping through my copy of the Sacred Harp, and he was trying to get how the relative pitching thing works (having more formal experience singing with choirs and whatnot, and a sense of pitch that’s way more closely tied to a piano than mine), and he said something like “so this one would start here?” I think it was 300, and I have no clue how “right” I was or what inspired this, but his pitch was about a step and a half (I think) high, and so I gave something that was more or less where I thought the song was supposed to sit. We sang through a little bit of it, and it seemed to work.

I’ve never really had a lot of interest in being able to offer pitches to a class of Sacred Harp singers, beyond the very selfish ability to lead singings without needing to make sure that someone who can offer keys is in attendance.


I’m working on memorizing the book--strategically, of course--as I can. This makes singings more fun because you can look at people while singing, rather than having your nose in a book the whole time. While there aren’t songs for which I can safely leave the book closed for the shapes, I know the tunes (mostly bass parts) and words to most of the common ones (e.g. 178, 155, 89, 312b, 355, 300, 146, 148, 153, 112, 422, 209, 189, 186), save a few middle verses that are sung rarely. I don’t think of my memory as being particularly good for this kind of information, but it’s nice to have reality prove you wrong.


One of the things that made Sacred Harp “click” for me when I really started to get into it was that I had the good sense to sing bass. My voice is pretty low, so this seems to fit, and I think staying in one section for a long time helped solidify my sense of the music.

Since March/April, or thereabouts, I’ve started singing tenor (the lead/melody) a bit. It’s a stretch for my voice, and I’m slightly more prone to losing track of the key when singing higher notes (a not uncommon problem), but it’s good for my brain, and I think it makes me a better singer and leader. I’ve mostly done this at local singings, and smaller singings when there are enough basses, or for a few songs at a bigger singing when the mood strikes.

I’m thinking of doing this more often, and at more singings, as part of an effort to become a better singer.


I think it’s easy (at least for me), particularly in accounts like this, to focus on the singing, the technical aspects of the music, and the texts used. And all of these components contribute to what makes singing so great: it’s a gestalt experience. But I think it’s easy to gloss over the best part of being a singer. Which is, of course, all the other singers.

Being a “community guy,” I think it might be easy for me to wax poetic about how great sacred harp singings are--and they are--but I think there’s something deeper and specific about singing communities that makes them more accepting, more engaged, more inclusive than other communities (dancing, writing, professional) that I’ve been involved in.

Maybe it’s that singing is a more transcendent experience than the focal points of other communities to begin with, so people are willing to connect a bit more. Maybe the fact that singings are sometimes (often?) held in people’s homes is a factor. Maybe the extreme inclusiveness combined with the somewhat substantial learning curve creates the right environment to foster a strong and self-selecting community. Perhaps all of the travel to all-day singings and conventions, combined with the effort to arrange socials, unifies the community.

I’m not sure, but I’ve met a bunch of great people singing, people with whom I share more than just sufficient common interest in a shared activity. I’m not sure every singing community is like this, but the conversations and connections I’ve had with other singers have been deep and interesting, and have expanded beyond singing.

Key Git Concepts

Git is a very… different kind of software. It’s explicitly designed against the paradigm of other programs like it (version control/source management) and, to make matters worse, most of its innovations and eccentricities are very difficult to create metaphors and analogies around. This is likely because it takes a non-prescriptive approach to workflow (you can work with your collaborators in any way that makes sense for you) and, more importantly, it lets people do away with linearity. Git makes it possible, and perhaps even encourages, creators to give up an idea of a singular or linear authorship process.

That sounds great (or is at least passable) in conversation, but it is practically unclear. Even when you sit down and can interact with a “real” git repository, it can still be fairly difficult to “grok.” And to make matters worse, there are a number of very key concepts that regular users of git acclimate to but that are still difficult to grasp from the outside. This post, then, attempts to illuminate a couple of these concepts more practically, in hopes of making future interactions with git less painful. Consider the following:

The Staging Area

HEAD refers to the last commit: the state of every committed object (i.e. file) as of that commit. Every commit has a unique identifying hash that you can see when you run git log.

The working tree, or checkout, is the set of files you interact with inside of the local repository. You can check out different branches, so that you’re not working in the “master” (default or trunk) branch of the repository, which is mostly an issue when collaborating with other people.

If you want to commit something to the repository, it must first be “staged,” or added, with the git add command. Use git status to see what files are staged and what files are not. The output of git diff is the difference between the index (that is, HEAD plus all staged changes) and your working tree; in other words, the unstaged changes. To see the difference between all staged changes and HEAD, use “git diff --cached”.

The staging area makes it possible to construct commits in very granular sorts of ways. It makes it possible to use commits less like “snapshots” of the entire tree of a repository, and more as discrete objects that contain a single atomic change set. This relationship to commits is enhanced by the ability to do “squash merges” and to squash a series of commits in a rebase, but it starts with the staging area.

If you’ve staged files incorrectly, you can use the git reset command to reset this process. Used alone, reset is a non-destructive operation.
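A quick illustration of the whole staging cycle, in a throwaway repository (the file name and messages are arbitrary):

```shell
set -e
repo=$(mktemp -d)                 # work in a scratch repository
cd "$repo"
git init -q .
git config user.email "you@example.com"   # fresh repos need an identity
git config user.name "You"

echo "first draft" > notes.txt
git add notes.txt                 # stage the new file
git commit -qm "add notes"

echo "second draft" > notes.txt   # modify the file, but don't stage it
git diff                          # unstaged changes: shows the edit
git diff --cached                 # staged changes vs. HEAD: empty

git add notes.txt                 # stage the edit
git diff --cached                 # now shows the edit
git reset -q notes.txt            # unstage it; the file itself is untouched
git status --short                # " M notes.txt": modified, not staged
```

Note that the git reset at the end only moves the change out of the staging area; the second draft is still sitting in the working tree.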

Branches

The ability to work effectively in branches is the fundamental function of git, and probably also the easiest to be confused by. A branch in git, fundamentally, is just a different HEAD in the same repository. Branches within a single repository allow you to work on specific sets of changes (e.g. “topics”) and track other people’s changes, without needing to make modifications to the “master” or default branch of the repository.

The major confusion of branches springs from git’s ability to treat every branch of every potentially related repository as a branch of each other. Therefore it’s possible to push to and pull from multiple remote branches from a single remote repository and to push to and pull from multiple repositories. Ignore this for a moment (or several) and remember:

A branch just means your local repository has more than one “HEAD” against which you can create commits and “diff” your working checkout. When something happens in one of these branches that’s worth saving or preserving or sharing, you can either publish this branch or merge it into the “master” branch, which publishes these changes.

The goal of git is to construct a sequence of commits that represent the progress of a project. Branches are a tool that allow you to isolate changes within trees until you’re ready to merge them together. When the difference between HEAD and your working copy becomes too difficult to manage using git add and git reset, create a branch and go from there.
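Here’s that branch-and-merge workflow in miniature (again in a scratch repository; since git’s default branch name varies by version, the sketch reads it rather than assuming “master”):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "you@example.com"
git config user.name "You"
default=$(git symbolic-ref --short HEAD)   # "master" or "main"

echo "base" > file.txt
git add file.txt
git commit -qm "initial commit"

git checkout -q -b topic          # a second HEAD to commit against
echo "experiment" >> file.txt
git commit -qam "try something"

git checkout -q "$default"        # here, file.txt has no "experiment" line
git merge -q topic                # fold the topic work into the default branch
```

Until the merge, the experiment only exists on the topic branch; the default branch stays clean the whole time.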

Rebase

Rebasing git repositories is scary, because the operation forces you to rewrite the history of a repository to “cherry pick” and reorder commits in a way that leads to a useful progression and collection of atomic moments in a project’s history. As opposed to the tools that git replaces, “the git way” suggests that one ought to “commit often,” because all commits are local operations, and this makes it possible to use the commit history to facilitate experimentation and very small change sets that the author of a body of code (or text!) can revert or amend over time.

Rebasing allows you to take past history objects, presumably created frequently during the process of working (i.e. to save a current state), and compress this history into a set of changes (patches) that reflect a more usable history once the focus of work has moved on. I’ve read and heard objections to git on the basis that it allows developers to “rewrite history,” and that individuals shouldn’t be able to perform destructive operations on the history of a repository. The answer to this is twofold:

  • Git, and version control generally, isn’t necessarily supposed to provide a consistently reliable history of a project’s code. It’s supposed to manage the code, and provide useful tools for managing and using the history of a project. Because of the way the staging area works, sometimes commits are made out of order, or a “logical history object” is split into two actual objects. Rebasing makes these non-issues.
  • Features like rebasing are really intended to happen before commits are published, in most cases. Developers will make a series of commits and then, while still working locally, rebase the repository to build a useful history, and then publish those changes to their collaborators. So it’s not so much that rebasing allows or encourages revisionist histories as that it allows developers to control the state of their present or the relative near future.
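One concrete, non-interactive way to see this kind of history compression is git’s --fixup/--autosquash convenience. Setting GIT_SEQUENCE_EDITOR=true accepts the generated instruction list as-is, so no editor ever opens (the repository and messages below are illustrative):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "you@example.com"
git config user.name "You"

echo "v1" > code.txt
git add code.txt
git commit -qm "initial commit"

echo "v2" > code.txt
git commit -qam "implement feature"

echo "v3" > code.txt              # a small correction that logically
git commit -qa --fixup=HEAD       # belongs in the previous commit

# Rewrite the last two commits: the fixup folds into its target.
GIT_SEQUENCE_EDITOR=true git rebase -q -i --autosquash HEAD~2

git log --oneline                 # two commits; "implement feature" now has v3
```

The “fixup! implement feature” commit disappears from the published history, and the logical change lands where it belongs.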

Bonus: The git stash

The git stash isn’t so much a concept that’s difficult to grasp as a tool, for interacting with the features described above, that is pretty easy to get. Imagine one of the following cases:

You’re making changes to a repository. You’re not ready to commit, but someone writes you an email and says that they need you to quickly change 10 or 12 strings in a couple of files (some of which you’re in the middle of editing), and they need this change published very soon. You can’t commit what you’ve edited, as that might break something you’re unwilling to risk breaking. How do you make the changes you need to make without committing your in-progress changes?

You’re working in a topic branch, you’ve changed a number of files, and you suddenly realize that you need to be working in a different branch. You can’t commit your changes and merge them into the branch you need to be using; that would disrupt the history of that branch. How do you save current changes and then import them to another branch without committing?

Basically, invoke git stash, which saves the difference between HEAD and the current state of the working directory (including staged changes). Then do whatever you need to do (change branches, pull new changes, do some other work), and then invoke git stash pop, and everything that was included in your stash will be applied to the new working copy. It’s sort of like a temporary commit. There’s a lot of additional functionality within git stash, but that’s an entirely distinct bag of worms.
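The first scenario looks something like this in practice (scratch repository, illustrative file names):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "you@example.com"
git config user.name "You"

echo "stable" > app.txt
git add app.txt
git commit -qm "initial commit"

echo "half-finished work" >> app.txt   # in progress, not ready to commit

git stash push -q                 # working directory is clean again
cat app.txt                       # just "stable"; the edit is shelved

# ...make the urgent change, commit it, publish it...

git stash pop -q                  # the in-progress edit comes back
cat app.txt                       # "stable" plus "half-finished work"
```

Between the push and the pop, the repository behaves as if the half-finished edit never happened, which is exactly what you want for the quick detour.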

Onward and Upward!