Welcome to Coda

Sometimes the biggest jobs are the easiest. Last night I got inspired to make a sort of major change to my website: give up the “plain old blog” look and build a more intense “full featured-type” site. I thought this would be a good afternoon project for the weekend, so I made a list, mocked something up, and went to bed.

And then I got up this morning, and in several hours I was able to concoct what you see here. It’s “beta” in the tradition of web 2.0 (rough around the edges, but fully deployed). I’m still not quite sure what WordPress is thinking on the tag archive pages, but maybe I’ll figure something out.

Here’s the larger plan: the regular daily blog posts continue, though I’m now calling them “essays” in the sense of “an attempt,” not a particular form. The new kind of post will be shorter and more “bloggy,” somewhere between the rest of the world’s typical blog post and a tweet.

There are also new “static” pages, and separate syndication feeds if you want to have a little bit more control over how tychoish is syndicated for you.

Also, to readers in livejournal land: if you want all of everything, subscribe to the old tealart syndication feed. Otherwise, my livejournal (which had previously just cross-posted all entries) will now have a more cherry-picked selection of entries.

I’ll have a more coherent post together on Monday. I swear :)

Otherwise tell me what you think.

Future Knitting Project Ideas

So, once upon a time I had an idea for a knitting column where I’d write about a sweater, giving a brief description of a style, relevant techniques, and a construction method, but no pattern per se. I don’t follow patterns so much as elaborate on ideas, and I think it would be fun to read and a good inspiration for people who are comfortable making sweaters but want more ideas/fodder for the design process--which seems particularly relevant as many people who started knitting in the last few years are becoming more and more proficient.

Needless to say, no one else thought that this was a great idea, and I dropped it for other projects.

But I’ve gotten back to knitting recently after a few months of slow knitting progress, and I’m starting to think about what I’m going to be knitting next (in part because I’m going to have three sweaters that I will probably finish at about the same time), so I thought I’d resurrect the idea for the old column. Because this is at least theoretically a list of my future projects, I don’t think that all of these designs are going to be completely original. Also, just sweaters. Here goes nothing:

  1. I’ve made a version of Henry VIII and I think I’d like to make another. The main problem with my existing sweater is that the yarn pills, and that I followed the pattern too closely and in doing so neglected to make a number of the customizations that I like to make for sweaters. For instance, I’d like some sort of open/V neck, I tend to like a deeper yoke/shoulder section, and I haven’t met a Starmore sleeve that wasn’t 5 inches too short. So it would be nice to knit the sweater again (because it was fun) but to modify it. While the two-color look works, it might be fun to have black in the foreground and a couple of different grays in the background.
  2. I have recently gotten re-engaged with this fingering weight sweater that has some light cable work/accents, and I’m thinking about starting another such sweater. I have my eye on some blue-grey Shetland, and I think it might be fun to do a sweater with some more complex cable work. In order to forestall making any great plans (at all) about this sweater, I began by knitting a roughly three-inch strip around the middle section of the sweater and then picking up and knitting the body down (and up) from this strip. I never did a gauge swatch, and I don’t think that I ever really counted the number of stitches.
  3. I really like how some cabled sweaters start at the bottom hem without any ribbing, and similarly I like sweaters where some (but not all) of the cables are continued down into the ribbing. I think it’s a cool look, and I’d like to take some of my color work designing habits/notions into designing a cabled sweater, and see how it goes. I’m thinking something with a few wider panels with a nine stitch braid cable running between the panels.
  4. I have a load of DK weight Navy yarn that I think would make for a great Gansey style sweater, with simple cables and texture patterns on the yoke and a fun shoulder strap.
  5. Most of my colorwork projects of the last few years have been escalating in terms of the size of the repeat (both in rows and in stitches,) and I think it might be fun to go back and do a sweater with a larger number of smaller repeats, potentially where the patterns didn’t all synchronize after every repeat. Because that’s the kind of knitting that screams “fun” to me at the moment.
  6. Actually I think my next project will be a medium/heavy weight jacket made from my own handspun, in a garter stitch design that I plan to… elaborate from something that will hopefully be appearing in Wool Gathering sometime this year. The pattern is for a vest, but I’d put sleeves on it, and probably do the yoke/shoulders in a totally different way, but that’s the method to my madness.

And so forth…

Anyone else have a sweater nagging inside of them? Are any of these something you want more elaboration on?

Onward and Upward!

You Can't Hack your Way to Freedom

Subtitle: Or, Why Open Source isn’t about Freedom.

There is a major segment of the open source/free software movement that believes that open source is important because having non-proprietary software is a key to individual liberation and freedom.

This “camp” has done a lot for the open source movement, and in some respects they’re right: an educated user can deal with his own bugs, tweak the code, and verify that the software is secure. Free software also makes it possible for everyone, not just the very wealthy, to use very powerful tools. Money is still an issue around hardware, but free software helps. These features of free software are indeed powerful, and likely make the undertaking worthwhile in its own right. So I don’t want to dismiss the political importance of this idea or faction, but I’d like to offer another theory of why open source is so powerful and important.

The markers of success for a proprietary piece of software and for an open source program are completely different.

Proprietary software is successful if people1 buy it. And when people buy anything really, for the most part they do a cost-benefit analysis, usually between features and cost. Does this do what I need it to? Will I have to buy something else to finish the task at hand? In this environment the most successful programs will be the best programs that do the most for the least amount of money.

So I guess I’m being an armchair economist in this, but I think that it makes a lot of sense for both developers and purchasers to keep the overall number of discrete programs down. Why develop and support (and buy, on the other end) an address book program, a mail reading program, a mail composing program, a calendar program, and a task manager, when you could just get Outlook? Hell, why buy business/office software a la carte when you can get it as a suite?

Open source doesn’t need to operate like that,2 and historically a good piece of open source software did one thing well. There are a lot of reasons for this. Unix works best when everything is a modular widget, but getting a bunch of hackers to agree on how to accomplish more than one thing is sufficiently non-trivial to have had a great impact on the methods of the movement. Perhaps most importantly, there’s no need for any single piece of software to do everything, because open source software doesn’t exist in a vacuum.

And this is the strength of the model. You could never market or sell, on any large scale, an application that did one thing really well but whose developer said “not my problem” when you asked it to do something else related.

The classic example is mutt, an email client that just reads email. While recently (after much “not my job” protesting) mutt has added support for connecting to the servers that send and receive email, it historically didn’t, and I suspect most users still don’t use these features. Imagine if Outlook said “nope, sending email is someone else’s problem, I’m just a mail reader.” Mutt succeeds because it’s really good at reading email, but also because there are a lot of really great tools for doing other email-related tasks. Fetchmail is a great and reliable program, but it only downloads email; for sending email I’ve never had a problem with msmtp, though I think there are a number of popular mail-sending options.
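
To make the division of labor concrete, here’s a rough sketch of how those widgets might be wired together. The host names, addresses, and paths are placeholders, and any real configuration needs more than this (passwords, TLS settings, and so on):

    # ~/.fetchmailrc -- fetchmail only downloads mail, handing each message to procmail
    poll mail.example.com protocol pop3 user "tycho" ssl mda "/usr/bin/procmail"

    # ~/.msmtprc -- msmtp only sends mail
    account default
    host smtp.example.com
    from tycho@example.com
    auth on
    user tycho

    # ~/.muttrc -- mutt only reads mail, and hands sending off to msmtp
    set folder=~/mail
    set mbox_type=Maildir
    set sendmail="/usr/bin/msmtp"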

So you get that open source makes a more widget- or ecosystem-based computing environment viable and stable, but you’re still wondering why this is a good thing, since it sounds like this kind of open source just makes things more complex? Right. Here’s why I think the ecosystem is the way to go:

  • It’s easier to build programs that only do one thing. A program either is really good at downloading your email or it isn’t, and it’s pretty easy to tell which is the case. More complex programs can’t be as reliable or consistent.
  • This system is more responsive to technological development. If there’s a new revolution in email downloading, it’s easy enough to take fetchmail out of the picture and put some other widget in place that works better. Non-modular systems put you at the whim of someone else.
  • Your data is (more likely to be) accessible and open. The real reason open source hackers are interested in open standards and formats is that, if we rely on an ecosystem of widgets, our data has to be readable by all of the different widgets. The only way to ensure that this is the case is to use open and standard data formats. This is good for the user, and good for the programmer as a creative constraint.
  • This model more closely reflects the way we actually think. Our minds are made up of a bunch of smaller abilities. The ability to recognize written words, the ability to parse those words for sounds and meaning, the ability to take what we read and relate it to things that we’ve seen and read in other contexts. In both the software and cognition the really cool things happen with novel collections of different ideas and tools.

But wait, you say, OpenOffice and Ubuntu Linux aren’t widgets, and they have very high feature counts. This is very true, and to be honest projects like GNOME/KDE and OpenOffice mystify me because they fail so amazingly: they’re too disjointed to really compete with desktop environments from proprietary makers, and they try to do too many things to really work ideally on their own.3

In the case of Ubuntu--like all Linux distributions--the “product” is a specific and tailored collection of widgets. And this is a pretty good open source business model: take a bunch of tools, customize them, make sure they install and work together, and then package them as some sort of suite. While I think this software ecosystem thing is pretty cool, it’s not easy to get started with, and unless users really know what they’re doing it requires more than a bit of fiddling. Distributions solve this problem and make a space for individuals and groups to monetize this “free” software, which is of course good for (almost) everyone involved.

Anyway, I’ve been going on for way too long. I’m not--by a long shot--done thinking (and writing) about these issues, so expect a continuation of this soon.

Onward and Upward!


  1. In the case of software, I think people should be understood as corporate IT divisions, more than individuals ↩︎

  2. There are clearly a lot of exceptions, particularly at this particular moment, when we’re seeing projects that in this respect look more like proprietary software. This is, I think, in part intentional, as a means of competing with proprietary software. And I’m building to the other “in part.” So let’s wait for it, shall we? ↩︎

  3. Clearly this is just my opinion. Every so often I want to try and like GNOME, but I always find it to be a less than pleasurable experience. The GUIs don’t make a lot of sense unless you know what shell commands they’re wrapping, to say nothing of the really poor use of space (that’s tangential, but probably my largest gripe with GNOME). Interestingly, I started using a different window manager (awesome), which accesses GTK, and I was very surprised to find that some of the GNOME apps were actually pretty decent. Who knew! ↩︎

Knitting Update

I’ve hinted in the last couple of weeks that I’ve been knitting more. While usually the summer heat doesn’t deter me from knitting very much, between the extra time spent writing, working a lot, and the heat, I hadn’t knit much since the spring. While I’ve certainly been more obsessed with knitting in the past, I think I’m back.

Though I stopped/slowed my knitting progress for several months, I still had unfinished projects and stash/project plans. So maybe it wasn’t so much that I stopped knitting as it was that the bottom fell out of my productivity. In any case, as I return, my primary project is to finish all of my outstanding projects. They are:

  1. The gray sweater. I figure I started it in late 2005, and I have been working on it sporadically since then. It’s fingering weight, and I’m knitting it with US 1 needles. It’s slow going, and mostly stocking stitch, though there are some cable accents, including a cable running from the collar to the cuff (via shoulder strap). I’m done with everything except the sleeves, and I’m making progress on the first sleeve. I expect to finish this in the next month to six weeks.
  2. In preparation for knitting camp I started knitting a sweater. I had planned for it to be an EPS-system yoke sweater, something that I could knit at camp. It turns out that I knit other things while I was at camp, and after camp I was largely uninterested in knitting a yoke-style sweater. So I ripped out a couple of inches of yoke and started knitting a sweater with shaped armholes. I’ve finished the body, but I need to knit the collar. Also the sleeves, and I’m totally undecided about how to shape the upper sleeve. I don’t really like capped sleeves, but I think I need to do something, so we’ll see.
  3. The Latvian Dreaming sweater, which is almost to the shoulders, and I don’t have a plan for those yet. I’m waiting, in part, until I’m done with the plain yoke sweater before I figure out what I’m doing on this one.

(And, cough, two shawls, but they don’t count.)

What are you working on?

Git Mail

“So basically, what I’m doing is sorting the email on the server into a git repository, and then pushing and pulling to that as needed, or just working there over ssh,” I explained.

“So what you’re saying, is that you’ve basically reinvented IMAP,” Chris said.

“Yeah, pretty much, except that this works,” I said.

“If you say so.”


I do say so. So this is what it comes down to:

IMAP got one thing right: we need a way of accessing our email that works on public computers, is machine independent, works offline and online, and keeps all these mail reading environments synchronized.

The problem is that IMAP is incredibly flakey and inconsistent. Messages that you’ve read suddenly become unread, messages that you’ve moved suddenly pop back into your inbox, it’s slow, and if you don’t have the right mail client and a server that’s tweaked in the right way, it might not really work at all.

If IMAP worked as well as it could, we’d all use it, because ideally it’s the best way to manage email: everything is stored remotely but cached locally for offline use and backup, and you can use multiple machines without worry. Instead, I suspect most people who need multi-machine email accounts use webmail,[^webmail] and that’s great, if it works for you. Gmail and the like are great pieces of software, and I don’t mean to begrudge webmail; I just don’t enjoy the experience of the browser, and seek to avoid it except for browsing.

So while I’d given up on email that synced really well in the last couple of months (preferring to use mutt and procmail locally), I still longed for this kind of email setup.

So I thought a bit and figured some things out, and here’s what I came up with (a step-by-step explanation of how I get email):

1. A bunch of email addresses (for different contexts) are forwarded to a Gmail account, which despamifies my email and uses its own filters to sort and forward mail (using +address aliases) to a secret email account on the web server.

2. The webserver uses procmail to sort and deliver the email into a non-public (obviously) folder/git repository.

3. I have a series of scripts to manage the git/mail interaction, depending on whether I’m on an SSH connection or not. Mostly this is straightforward, but here’s how the sync works (commands shown from the local perspective):

  1. Via ssh, I add and commit to the repository on the server any new mail that may have arrived since the last sync:

    # the quoting keeps both the add and the commit running on the server
    ssh <foo> 'git add * && git commit -m "new mail"'
    
  2. Locally, I commit any changes that have been made to my mail directory since the last sync, and pull down the new mail from the server:

    git add * && git commit && git pull
    
  3. I push my changes up, everything merges, and the repositories now look the same; then I reset the index of the remote server to reflect the changes:1

    # push from the local repository, then reset the server's working copy
    git push && ssh <foo> 'git reset --hard'
    
  4. Rejoice.2

  5. Set up a cronjob or a launchd daemon to do this automatically every, say, 15 minutes (there’s a rough sketch of such a script after this list), and then you get the illusion of having “push” email.

    If you get Growl to notify you of the results of the pull (end of step 2), you’ll be able to see whether there’s new mail or not.

    I think you could probably set much of this up in post-commit/post-push hooks. And I think it goes without saying that if there is more than one “client” repository in play, you’ll want to pull changes more often than you sync.
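
Putting those steps together, here’s a rough sketch of the kind of script you could point cron or launchd at. The host name (mailhost), the ~/mail path, and the commit messages are all placeholders; note also that newer versions of git refuse pushes into a checked-out branch unless you relax receive.denyCurrentBranch on the server:

    #!/bin/bash
    # mailsync.sh -- a sketch of the sync sequence described above
    set -e
    cd ~/mail

    # 1. on the server, commit any mail that has arrived since the last sync
    ssh mailhost 'cd ~/mail && git add * && git commit -m "new mail" || true'

    # 2. locally, commit any changes and pull down the new mail
    git add *
    git commit -m "local changes" || true
    git pull

    # 3. push the merged result back up, then reset the server's working copy
    git push
    ssh mailhost 'cd ~/mail && git reset --hard'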

So why would I (and you?) want to do this? Easy:

  • It’s quick. Git is fast, by design, and since email files have a lot of redundancy in the headers and what not, git can save a lot of space in the “tube” and speed up your email downloads.

  • It’s robust and flexible. If my server stopped working for some reason, I could set up another, or simply start using fetchmail again. If the server came back up, I’d just push the changes up and be back in business.

  • Git handles renames implicitly, and this is a really cool feature that I don’t think gets quite enough mention. Basically, if I move or rename a file and add the new file to git (git add * does this), git realizes that I’ve moved the file. So as files move and get named other things, git realizes what’s happened, deals with it, and moves on. (There’s a small demonstration of this after the list.)

  • Understandably, if you delete a file locally, git won’t delete it from the repository “head” unless you tell it to. This bash script takes care of that:

    # git ls-files --deleted lists tracked files missing from the working tree
    for i in $(git ls-files --deleted); do
        git rm --quiet "$i"
    done
    
  • It’s secure, or at least it is if you do it right. All the email gets transferred over SSH, so there are no clear-text passwords (if you’re using public key authentication), and the data transmission is encrypted.

  • Oh yeah, and it’s not flakey like IMAP. Totally worthwhile.

  • If there is some sort of syncing problem, you know where it is, as opposed to IMAP making an executive decision without telling you and un-sorting all your email.
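
For the curious, here’s a tiny, hypothetical demonstration of that rename handling; the paths are made up, and I’m using git add -A here so that the removal of the old path gets staged along with the new one:

    # move a message to another folder, then stage everything
    mv inbox/cur/1234.msg archive/cur/1234.msg
    git add -A
    git status                       # reports: renamed: inbox/cur/1234.msg -> archive/cur/1234.msg
    git commit -m "filed a message"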

Any questions?

Onward and Upward!


  1. Git is really good at merging, and because it can track renaming implicitly, it does pretty well in this situation. There are some things you can do to basically ensure that you never have a conflicted merge (because, after all, you’re never really editing these files, and rarely concurrently, for the most part). Remember: after pushing to the server, always reset its index. Under normal conditions we’d not push to a repository that had an index, so this usually isn’t an issue, but here, where it’s important that the server’s repository have its files checked out, keeping that index “right” means you won’t undo your changes in successive commits. ↩︎

  2. My first instinct was to focus on pushing changes up early, and it turns out that this is the wrong thing to do, as it means that the state of the upstream repository could potentially be wonky when you want to pull. Pulling before you push, locally, prevents having more than one kind of change in any one commit, and makes the whole situation a bit more error-proof. ↩︎

Book Recommendations

Sorry about the posting confusion this week. I’ll be out on Monday here and on Critical Futures, but everything will be back to normal on Tuesday.

This is a post that I’ve been working on for a while, and I’m not sure that it’s done but I’m fresh out of ideas for more book recommendations. But then, this is a blog, and there’s a comment form for a reason, so if I’ve been remiss and forgotten something important, please do remind me. Enjoy!

I have perpetual fears about not being well read enough. I think this is mostly an existential problem, as I read a bunch, and I’ve read a lot of stuff in my day. But there’s always more, particularly as I think about shifting my academic/intellectual specialty. In any case, I found myself a few weeks ago recommending a science fiction novel that I had read,1 and I thought it might be good to post a list of recommendations. Not just “books I like,” but “books I’d tell you to read if you were looking for something specific.”

For people who like military SF and some people who like Heinlein, John Scalzi’s Old Man’s War is a great deal of fun.

For a great novella with a tightly wound plot, Samuel R. Delany’s Empire Star; and for linguistic sf, the other half of the book, Babel-17. I’m a huge fan. For anyone interested in urban systems and community, Times Square Red, Times Square Blue is a great read, and is incidentally the most cited source of my college career.

For a very smart, but also very comforting and enjoyable space opera trilogy, Melissa Scott’s Five-Twelfths of Heaven Trilogy can’t be beat. Her Trouble and her Friends is also a great example of what cyberpunk can be when it’s not trying too hard to be the next New Thing.

I don’t find myself in a position to recommend mainstream fiction very much, but I am a big fan of Anne Lamott’s All New People, and Barbara Kingsolver’s Prodigal Summer, which are both clever and fun, and are unabashedly delightful.

Books about writing? Five or more years ago, I would have had a list of books about writing, but as a genre I’m not particularly convinced of their worth/utility or, frankly, their interesting-ness. Anne Lamott’s Bird by Bird is simply amazing, though not particularly for the specific writing-related hints. I’ve always thought that Stephen King’s writing book is pompous and sort of unfocused, but admittedly I couldn’t even force myself to finish it.

That’s what comes to mind; do you all have any good recommendations?


  1. I’m not above recommending a book on reputation alone, though I try to disclose this. ↩︎

Writing and Research

Ed: So, I totally meant to post this on Monday instead of `this post <http://tychoish.com/posts/upcoming/>`_, but the wires got crossed, and Monday’s post got out early. Sorry. --ty

The one thing that really undermines my argument against preemptive rewriting is the fact that I’m a compulsive outliner, at least for some things.

Since I’m currently in the “planning a new story/project” phase,1 I’ve been thinking about the outlining process a bit.

The difference between blogging and writing is that for a blog post I maybe scribble something on paper ahead of time, but the rhetorical thread comes out of my fingers and onto your screens pretty much straight through.2 Blogging is very much like the research process of fiction writing, in that it’s a process of taking an idea and figuring out what it means, and a lot of the possibilities in whatever thing or situation (blog and fiction, respectively). In blogging, you pray someone’s interested or that you’ve written something that gets folks irked enough to respond; in fiction, you take all this thought work and then write about only the things that really need writing about.

And then I realized the other day that I write the outlines of my stories--indeed my entire personal wiki notepad--to you, dear blog readers. At some point years ago I got a pretty good idea of who “you” are, and as a result my blog posts are generally all addressed to the same positional reader (even though that reader/group has changed a bit, and my conception of “the reader” has always been pure fantasy on my part). Even though I’m the only one who reads this document, all the parts are written to the same person. It’s all very strange.

But I digress. Research.

I think the stock advice to the new writer doing research is “don’t get too caught up in planning; writing is about writing, not planning to write,” and while I do think that reminder--to always keep the writing in mind--is worth heeding, at the same time there’s something really important and intense about the planning process that shouldn’t be ignored. It’s a lot of work; there are a lot of words that go into getting a story to the point where you can even think about starting to write in earnest. In the end I think it’s not so much about what you outline or what you read up on as the act of doing it: the time spent in your head thinking about this world, and this idea.

Speaking of thought work, I have some to do.

Onward and Upward!


  1. It’s true that I’m not exactly finished with any of the existing stories/projects. I’ve realized that my slate of open projects is: Station Keeping (which is always ongoing, and I’m 8 episodes away from being done with the second season), Trailing Edge (which I thought was going to be a novel-scope project, but I was so wrong, and it turns out that I’m close to being done and taking a break), and another Knowing Mars story or two that I’m not particularly interested in writing just yet. And since I’m my own writing boss, I can work on a new story. ↩︎

  2. I write non-time-sensitive entries in batches and post them later, mostly because I want to get blog posts going live in the morning and don’t want to have to cram to get them done every night, and I like this rhythm. But it’s still very off the cuff, and minimally edited. But you knew that. ↩︎

10 Shell Tips

10 pearls of collected wisdom for the aspiring terminal/console/command line user. This assumes UNIX but little actual command line experience.

1. Customize your .bash_profile or .bashrc file, and continue to tweak it as you learn more about how you use the shell. I even have a line in my profile that makes it easier for me to edit the file itself.
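
   For instance, a few illustrative lines (the alias names are made up, not any kind of canonical set):

       export EDITOR=vim                                  # the editor other programs will call
       alias ll='ls -lh'                                  # human-readable long listings
       alias rc='$EDITOR ~/.bashrc && source ~/.bashrc'   # edit this file and reload it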

2. Never open files unless you have to. Tools like less, more, cat, and grep should be enough to keep you going for most routine checks.
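
   A few examples of the sort of routine checks I mean (the file names are placeholders):

       less /var/log/syslog              # page through a log without opening an editor
       grep -i error /var/log/syslog     # show only the lines you care about
       tail -n 20 ~/notes.txt            # just the end of a file
       grep -v '^#' config.ini           # a quick look with the comments stripped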

3. Having said that, getting to know your text editor really well should be at the top of your list of things to do. There’s something to be said for learning how to use vim, though I can understand if you might want to use something a little less sharp around the edges.

4. It’s good to be able to hack your way through bash/shell scripting and at least one other general-purpose scripting/programming language, like Perl or Python, though Ruby and PHP would work too. Power users don’t necessarily need to be able to write brilliant programs; they just need to figure out how to glue other programs together.
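
   As a small example of what I mean by “glue” (the path is hypothetical, and this one happens to be shell rather than Perl or Python, but the idea is the same):

       #!/bin/bash
       # total the word count across a directory of drafts,
       # so you can see how much you've written lately
       find ~/writing/posts -name '*.txt' -print0 | xargs -0 cat | wc -w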

5. Familiarize yourself with your operating system’s package manager, or get MacPorts if you’re on a Mac. Or get an operating system with a better package manager. To my tastes this means getting a Debian-based Linux distribution, but there are others if this won’t work for you. These package installers make it so much easier to install software and have it work, because other people do the testing. Compiling things on your own is ok, but package managers are better. Learning how to use the CPAN shell and Ruby gems falls under this imperative.
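
   On a Debian-based system the everyday commands look something like this (the package name is just an example; MacPorts’ port command works along similar lines):

       apt-cache search mutt                          # find a package by keyword
       sudo apt-get install mutt                      # install it, dependencies and all
       sudo apt-get update && sudo apt-get upgrade    # keep everything current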

6. Do first and script second. While you may be tempted to write nifty little scripts for all the things you think you’re going to do, don’t. Work first, figure out what your habits are, and then write the scripts/macros/shortcuts that will best serve you. That way you’ll use them.

7. Figure out how to schedule tasks/automate background tasks. If there’s an internet connection, and probably even if there isn’t, my computer checks my email every 8 minutes or so, because I have a little “check email and tell tycho if there’s anything new” script set to run by a launchd daemon. You could use cron if you’re not using a Mac, but the general idea is that if there are things that you know need to be done regularly, you just tell the computer to do them, and then you don’t have to worry about it.
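
   The cron version of that idea is a single line in your crontab (crontab -e); checkmail.sh here is a stand-in for whatever your own check-and-notify script happens to be:

       */8 * * * * $HOME/bin/checkmail.sh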

8. Read the manuals and google for help, but also relax. The terminal lets you do a lot of new things, and it saves you a lot of time. It’s also hard to learn, and a lot of die-hard terminal users are, to be blunt, assholes. I don’t want to recount the number of times that I’ve seen people rant on about the proper forum, and how new folk ask too many questions. This is dumb; new people always ask questions, and the truth is that some things aren’t well documented. Also, a lot of terminal assholeishness comes from a period when certain operations took a lot of CPU power, and CPU power was more at a premium than it is today. Most contemporary computers, even ones that are a few years old, are fast enough that even inefficient terminal applications run incredibly quickly and can outperform the best GUIs. You’ll learn later.

9. Remember the Unix Philosophy: basically, that programs should do one thing well, and not complicate themselves with doing more than one thing. If you know that this is how things can and should work and you can learn how to work with it, then you’re in good shape. (It’s also ok to bend it a little bit, from time to time.) Also, if you’re an oddball like me--using the command line for something like writing fiction--slowly get a sense of what existing tools do, and figure out whether they’re useful to you. Know that just because something’s cool, it doesn’t mean it’s going to be useful for you.
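
   The payoff of that philosophy is the pipe: small programs chained together do things none of them does alone. A toy example (the mail folder path is made up):

       # the five most frequent senders in a mail folder
       grep '^From: ' ~/mail/archive | sort | uniq -c | sort -rn | head -5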

10. Customize the appearance of your console window. Apple stocks everything with an ugly black-text-on-white-background thing, which makes my head hurt from the squinting. Readably sized fonts, good coding fonts, anti-aliasing, colorizing your prompt, light text on dark backgrounds, and some transparency all make the terminal more functional and elegant.
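
   Prompt colors, at least, are a one-line change; something like this in your .bashrc gives you a green user@host and a blue working directory (tweak the color codes to taste):

       PS1='\[\e[0;32m\]\u@\h\[\e[0m\]:\[\e[0;34m\]\w\[\e[0m\]\$ '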