In Favor of Simple Software

I’ve spent a bit of time in the past few weeks addressing some organizational and workflow angst, and one thing I’ve been focusing on is updating and fine-tuning my emacs (text editor) and irssi (irc/chat) configurations. Part of my goal had been to use irssi-xmpp to move all of my chat/synchronous communication into one program; unfortunately I’ve not been able to get irssi-xmpp to build and function in a fully stable way. This is probably because I’m hard on software and not because of anything specific to the software itself.

In any case, this led me to the following conclusion about these programs, as they are probably the two most central and most heavily used applications in my arsenal, and without a doubt the applications that I enjoy using the most. I scribbled the following note a few days ago in preparation for this entry:

In many ways the greatest advance that these programs provide isn’t some killer feature; it’s a simple but more powerful abstraction that allows users to interact with their problem domain. Emacs is basically a text-editing application framework: it provides users with some basic fundamentals for interacting with textual information, and a design that allows users to create text-editing modalities or paradigms which bridge the divide between full-blown applications and custom configurations. By the same token, irssi is really a rather simple program that’s easy to script, and contains a number of metaphors that are useful for synchronous communication (chat).

And we might be able to expand this even further: these are two applications that are not only supremely functional, but are so usable because they are software projects that really only make sense in the context of free software.

I want to be very careful here: I don’t want to make the argument that free software isn’t or can’t be commercial, because that’s obviously not the case. At the same time, free software like these applications needn’t justify itself in terms of “commercial features” or a particular target market in order to remain viable. It’s not that these programs don’t have features; it’s that they have every feature, or the potential for every feature, and are thus hard to comprehend and hard to sell, even though it only takes a short period of use for users to find them incredibly compelling.

The underlying core extensibility that both of these “programs” have is probably also something that is only likely to happen in the context of open source or free software. This isn’t to suggest that proprietary software developers don’t recognize the power or utility of extensible software, but from a quality control perspective, giving users so much control over a given application doesn’t make sense. Giving users the power to modify their experience of software in an open-ended fashion also gives them the power to break things horribly, and that just doesn’t make sense from a commercial perspective.

There’s probably also some hubris at play: free software applications, primarily these two, are written by hackers, with a target audience of other hackers. Who needs a flexible text-editing application framework (e.g. emacs) but other programmers? And the primary users of IRC for the past 8-10 years have largely been hackers, developers, and other “geek” types; irssi is very much written for these kinds of users. To a great extent, I think it’s safe to suggest that when hackers write software for themselves, this is what it looks like.

The questions that linger are: why isn’t other software like this? (Or is it, and am I missing it in my snobbishness?) And where is the happy medium between writing software for non-hackers and using great software (like these) to “make more hackers”?

Onward and Upward!

Focus and Context Switching

This post is inspired by Cory Doctorow’s Inventory about the technology he uses for his work, and by ongoing personal angst and gadget lust.

For the past six months or so I have collapsed my entire computing existence into one single computer. It’s a nifty Thinkpad x200, a small laptop with just the right balance of usable screen space, sufficient computing power, and great build quality. And when I say “everything,” I mean it. I hook it up to a monitor and keyboard at the office and do my day job (technical writing/sysadmin stuff) on this system, I have a similar “desktop” situation at home, and I write fiction, do all of my email, and write blog posts on this system. I even have development web servers running here.

In a number of ways it’s great, and I wouldn’t trade this for the world. Everything just works the way I want it to, and I never have to worry that I’ve left some important edits to a file on a system elsewhere. Everything is always with me.

Now to be fair, I have additional computers. My old(er) desktop at home keeps backups of files, plays music, and does a number of other tasks. I have (and use) the server that this and other websites run on for some tasks, and I have another instance at the office that manages some work functions, but despite their varying physical distance from me at any given point, my interactions with these computers are always as if they’re remote. When I use a computer, it’s this one.

Now there isn’t a real problem here, except that from a workspace and mindspace perspective the context switching can be somewhat complicated and frustrating. While I’ve got most of the kinks worked out of the docking (monitors and keyboards) process, it still takes me a few moments to settle into or out of “laptop-mode” or “workdesk-mode” or “homedesk-mode.” And while I don’t have to worry whether my files are up to date, it’s also somewhat distracting for all my different projects to be open all the time. The article I’m working on for work is always open and a few key presses away from the novel I’m working on or the latest in-progress blog post.

Again, this isn’t a really huge issue, but it means that when I get somewhere and want to begin working I have to take a deep breath, and spend a moment or two getting going again. These do fit into the category of “first world problems,” and I’m not sure if there’s a really easy solution. I’ve toyed with a number of resolutions to this angst:

  • Distribute my existing machines such that I have a machine that lives at work and a machine that lives at home so that I can just sit down at a desk and start working without shenanigans. Given the available hardware, this might mean that I’d spend most of my time using systems that I’m not particularly fond of.
  • I’ve toyed with having a “tycho writing laptop” that wouldn’t have a web browser installed, for more distraction-free writing. I’ve got my old laptop set up to do this, but I’m not likely to take it anywhere (i.e. for the commute), so it might make sense to get a little netbook for the train for this function, but that seems like overkill.
  • I might redistribute more of my workload to servers rather than doing everything on the laptop. I’m thinking about having the terminal sessions that I use for email primarily live in screen sessions elsewhere (see the sketch after this list).
  • I’ve thought about getting a second laptop, (like my main system at the moment,) both for redundancy and to help reduce the cost of switching between various contexts.
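
For the screen-sessions-on-a-server option mentioned above, the moving parts are pretty small. Here’s a minimal sketch, assuming a hypothetical remote host named mail-server, a session named mail, and mutt as the mail client (all placeholders): the mail client keeps running on the server, and any machine can attach to the same session.

#!/bin/bash
# Attach to a persistent "mail" screen session on a remote host,
# creating it (and starting the mail client) if it doesn't exist yet.
# "mail-server" and the mutt invocation are placeholders for whatever
# host and mail client you actually use.

ssh -t mail-server 'screen -dRR -S mail mutt'

The -dRR flags tell screen to reattach the named session (detaching it from anywhere else it happens to be attached) or create it if necessary, and ssh’s -t forces terminal allocation so screen can run interactively.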

As I’ve been toying with this, I’ve been making a number of tweaks to my workflow to help address the pain of context switching, most of which are too trivial and too specific to outline here. Mostly I’ve been tweaking some customizations and improving how I use virtual desktops. While these tweaks have improved things greatly, better internal system management doesn’t solve the underlying issues: it takes time to reconnect to networks, to close tabs in the web browser, to get the relevant files open in emacs, and to navigate to the proper desktop. All the while other contexts (other files, other virtual desktops) lurk nearby.

And figuring out how to solve this problem involves a certain amount of “head game” for me: avoiding having “the old laptop” be the primary computer for a given task, making sure that I don’t need a network connection for essential tasks, and assorted other weirdnesses.

If anyone recognizes features of this angst that they’ve managed to resolve in their own work, I’d love to hear about your setup.

Fan Fiction is Criticism

Thanks to Shaun Duke (http://skiffyandfanty.wordpress.com) for inspiring this little rant.

I must confess that I’m mostly uninvolved in the world of fan fiction these days, though I have traveled in “fannish” circles at various points in my past. It’s not because I don’t think fans have interesting things to say about literature and media, or that I don’t think what’s happening in fandom is important and fascinating. No, I’m mostly withdrawn because I have too much on my plate, and participating in fandom doesn’t really contribute to the specific goals I have at this moment. But I sometimes feel that way about social science, too.

In any case, I’d like to put forth the following arguments for viewing fan fiction as a form of literary criticism rather than a literary attempt in its own right:

  • Fan fiction is a form of literary criticism. Sure it’s casual, sure it’s written in the form of a story, but the fan fictioner and the critic both write from the same core interest in interpreting texts and using varying readings of texts to create larger understandings of our world.
  • The fact that fan fiction looks like a story is mostly a distraction from what’s happening in these texts. Fan fiction has always been written in communities. The people who read fan fiction are largely the people who write fan fiction, and fan fiction inspires more fan fiction in turn.
  • The quality of fan fiction is also largely irrelevant to the question of whether fan fiction is worthwhile. More so than other forms of writing, fan fiction is less about the technical merits of the text and more about the discursive process under which the texts are created. Better quality writing makes better fan fiction, certainly, but I don’t think fan fiction centers on those kinds of values.
  • Copyright, and the “intellectual property” status of fan fiction, is also sort of moot. It’s true that, if we’re being honest, fan fiction impinges upon the copyright of the original author. At the same time, fan fiction doesn’t really hurt creators: people aren’t confused that fan fiction is “real fiction,” fan fiction by and large doesn’t divert sales from “real fiction,” and so forth. Sure, it’s a bit weird for some authors to find other people playing in their sandboxes, but the truth is that authors have never had a great deal of control over what happens to their work post-publication, so it’s fair.

Additionally, I think that fan fiction accomplishes something incredibly powerful and worthwhile that “normal” fiction cannot: writing fan fiction can be, I’d wager, an incredibly effective educational experience for new writers, particularly genre fiction writers, by providing a very fast feedback loop with an audience of readers and writers (and lovers of literature and storytelling). Not to mention the fact that, because fan fiction tends to be somewhat ephemeral and there’s a wealth of inspiration and impetus for fiction, fan writers can write a lot, and if they choose, in a very productive sort of way.

And that is almost certainly a good thing.

The Overhead of Management

Every resource, every person, every project, every machine you have to manage comes with an ongoing cost. This is just as true of servers as it is of people who work on projects that you’re in charge of or have some responsibility for, and while servers and teammates present very different kinds of management challenges, working effectively and managing management costs across contexts is (I would propose) similar. Or at least similar enough to merit some synthetic discussion.

There’s basically only one approach to managing “systems administration costs,” and that’s to avoid them as much as possible. This isn’t to say that sysadmins avoid admining, but rather that we work very hard to ensure that systems don’t need administration. We write operating systems that administer themselves, we script procedures to automate as many tasks as possible (the Perl programming language was developed and popularized for easing the administration of UNIX systems), and we use tools to manage larger systems more effectively. (A trivial sketch of what this looks like follows.)
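
As a minimal sketch of what “scripting away administration” looks like in practice (the path and the 30-day retention window here are hypothetical placeholders, not recommendations):

#!/bin/bash
# Prune old application logs so nobody has to remember to do it by
# hand. Scheduled from cron (e.g. daily), the task stops being
# "management" at all. /var/log/myapp and 30 days are placeholders.

find /var/log/myapp -name '*.log' -mtime +30 -delete

The point isn’t the specific task; it’s that once a procedure is encoded and scheduled, its ongoing management cost rounds to zero.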

People, time, and other resources cannot be so easily automated, and I think in response there are two major approaches (if we can create a somewhat false dichotomy for a moment):

On the one hand there’s the school of thought that says “admit and assess management costs early, and pay them up front.” This is the corporate model in many ways: have (layers upon layers of) resources dedicated to managing management costs, and then let this “middle management” make sure that things get done in spite of the management burden. On servers, this means spending a lot of time choosing tools, configuring the base system, organizing the file system proactively, and constructing a healthy collection of “best practices.”

By contrast, the other perspective suggests that management costs should only be paid when absolutely necessary: make things, get something working and extant, and then if something needs to be managed later, do it then and only as you need to. On some level this is the philosophy behind the frequent favoring of “working code” over “great ideas” in the open source world.1 Though I think they phrase it differently, this is the basic approach that many hacker-oriented startups have taken, and it seems to work for them. On the server, this is the “get it working” approach; these administrators aren’t bothered by having to go in every so often to “redo” how things are configured, and I think on some level this kind of approach to “management overhead” grows out of the agile world and its avoidance of “premature optimization.”

But like all “somewhat false dichotomies,” there are flaws in the above formulation. Mostly, the “late management” camp is able to delay management most effectively by anticipating its future needs early (whether by smarts or by dumb luck) and planning around them. And the “early management” camp has to delay some management needs, or else it would drown in overhead before it started: and besides, the MBA union isn’t that strong.

We might even cast the “early management” approach as “top down,” and the “late management” camp as “bottom up.” You know, if we were into that kind of thing. It’s always tempting, particularly in the contemporary moment, to look at the bottom-up approach and say “that’s really innovative and awesome, that’s better,” and to view “top-down” organizations as stodgy and old-world, when neither label does a very good job of explaining what’s going on, and there isn’t inherent radicalism or stodginess in either form of organization. But it is interesting. At least mildly.

Thoughts? Onward and Upward!


  1. Alan Cox’s Cathedrals, Bazaars and the Town Council ↩︎

Jekyll Publishing

I wrote about my efforts to automate my publishing workflow a couple of weeks ago (egad!), and I wanted to follow that up with a somewhat more useful elucidation of how all of the gears work around here.

At first I had a horrible scheme set up that depended on regular builds triggered by cron, which is a functional, if inelegant, solution. There are a lot of tasks where you can give the appearance of “real time” responsiveness by scheduling brute-force tasks regularly enough. The truth is, however, that it’s not quite the same, and I knew that there was a better way.

Basically, the “right way” to solve this problem is to use the “hooks” provided by the git repositories that I use to store the source of the website. Hooks, in this context, are scripts which are optionally run before or after various operations in the repository, and which allow you to attach actions to the operations you perform on your git repositories. In effect, you can say “when I git push, do these other things,” or “before I git commit, check for these conditions, and if they’re not met, reject the commit,” and so forth. The possibilities can be a bit staggering.
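
To make the mechanism concrete before diving into my setup: hooks live in the .git/hooks/ directory of a repository, and an executable file with the right name is all it takes to enable one. A minimal post-commit hook for a small site might be nothing more than this (a sketch, matching the “git push origin master” case mentioned below):

#!/bin/bash
# .git/hooks/post-commit -- git runs this after every successful
# commit. Publishing a small site can be as simple as pushing.
# Remember to "chmod +x .git/hooks/post-commit" to enable it.

git push origin master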

In this case what happens is: when I commit to the tychoish.com repository, a script runs that synchronizes the appropriate local branches and publishes the changes to the server. It then sends me an xmpp message saying that this operation is in progress. This runs as the post-commit hook, and for smaller sites it could simply be “git push origin master”. Because tychoish is a large site, and I don’t want to be rebuilding it constantly, I do the following:

#!/bin/bash

# This script runs as a post-commit hook to perform a rebuild of the
# slim contents of a jekyll site.
#
# It can also be run several times an hour from cron to greatly
# simplify the publishing routine of a jekyll site.

cd ~/sites/tychoish.com/

# Saving and Fetching Remote Updates from tychoish.com
git pull >/dev/null &&

# Local Adding and Committing
git checkout master >/dev/null 2>&1
git add .
git commit -a -q -m "$HOSTNAME: changes prior to a slim rebuild" >/dev/null 2>&1

# Local "full-build" Branch Mangling
git checkout full-build >/dev/null 2>&1 &&
git merge master &&

# Local "slim-build" Branch Mangling and Publishing
git checkout slim-build >/dev/null 2>&1 &&
git merge master &&
git checkout master >/dev/null 2>&1 &&
git push --all

# echo done

Then on the server, once the copy of the repo on the server is current with the changes pushed to it (i.e. in the post-update hook), the following code runs:

#!/bin/bash
#
# A post-update hook: when new changes are pushed to this repository,
# pull them into the build copy of the site and kick off a rebuild.
#
# To enable this hook, make this file executable by "chmod +x post-update".

# git sets GIT_DIR (and possibly GIT_WORK_TREE) in the hook's
# environment; unset them so the commands below operate on the
# repository we cd into, not the one that was just pushed to.
unset GIT_DIR
unset GIT_WORK_TREE

cd /path/to/build/tychoish.com
git pull origin

/path/to/scripts/jekyll-rebuild-tychoish-auto-slim &

exit

When the post-update hook runs, it runs in the context of the repository that you just pushed to, and unless you do the magic (technical term, it seems) with unset above, the GIT_DIR and GIT_WORK_TREE variables are stuck in the environment and the commands you run will fail. So basically this is a fancy git pull in a third repository (the one that the site is built from). The script jekyll-rebuild-tychoish-auto-slim looks like this:

#!/bin/bash
# to be run on the server

# setting the variables
SRCDIR=/path/to/build/tychoish.com/
DSTDIR=/path/to/public/tychoish/
SITENAME=tychoish
BUILDTYPE=slim
DEFAULTBUILD=slim

build-site(){
    cd ${SRCDIR}
    git checkout ${BUILDTYPE}-build >/dev/null 2>&1
    git pull source >/dev/null 2>&1

    /var/lib/gems/1.8/bin/jekyll ${SRCDIR} ${DSTDIR} >/dev/null 2>&1
    echo \<jekyll\> completed \*${BUILDTYPE}\* build of ${SITENAME} | xmppipe garen@tychoish.com

    git checkout ${DEFAULTBUILD}-build >/dev/null 2>&1
}

build-site

This sends me an xmpp message when the build has completed, and does the needful site rebuilding. The xmppipe command I use is really the following script:

#!/usr/bin/perl
# pipes standard input to an xmpp message, sent to the JIDs on the command line
#
# usage: echo "message body" | xmppipe garen@tychoish.com
#
# code shamelessly stolen from:
# http://stackoverflow.com/questions/170503/commandline-jabber-client/170564#170564

use strict;
use warnings;

use Net::Jabber qw(Client);

my $server = "tychoish.com";
my $port = "5222";
my $username = "bot";
my $password = "";
my $resource = "xmppipe";
my @recipients = @ARGV;

my $clnt = new Net::Jabber::Client;

my $status = $clnt->Connect(hostname=>$server, port=>$port);

if (!defined($status)) {
  die "Jabber connect error ($!)\n";
}

my @result = $clnt->AuthSend(username=>$username,
                             password=>$password,
                             resource=>$resource);

if ($result[0] ne "ok") {
  die "Jabber auth error: @result\n";
}

my $body = '';
while (<STDIN>) {
  $body .= $_;
}
chomp($body);

foreach my $to (@recipients) {
  $clnt->MessageSend(to=>$to,
                     subject=>"",
                     body=>$body,
                     type=>"chat",
                     priority=>10);
}

$clnt->Disconnect();

Mark the above as executable and put it in your path somewhere. You’ll want to install the Net::Jabber Perl module, if you haven’t already.

One final note: if you’re using a tool like gitosis to manage your git repositories, all of the hooks will be executed by the gitosis user. This means that this user will need write access to the “build” copy of the repository and to the public directory as well. You may be able to finesse this with the setuid (+s) bit, or with some clever use of the gitosis user’s group (see the sketch below).
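
As an illustration of the group-based approach (a sketch only; the paths are the placeholders used above, and the group name on your system may differ):

#!/bin/bash
# Give the gitosis user's group write access to the build repo and the
# public directory. The setgid bit (g+s) on the directories makes new
# files inherit the group, so future builds keep working.

chgrp -R gitosis /path/to/build/tychoish.com /path/to/public/tychoish
chmod -R g+w /path/to/build/tychoish.com /path/to/public/tychoish
find /path/to/build/tychoish.com /path/to/public/tychoish -type d -exec chmod g+s {} +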

The End.

The Meaning of Work

I’ve started to realize that, fundamentally, the questions I’m asking of the world, and that I’m trying to address by learning more about technology, center on work and the meaning and process of working. Work lies at the intersection of all the things that I seem to revisit endlessly: interfaces, collaboration technology, cooperatives and economic institutions, and open source software development. I’m not sure if I’m interested in work because it’s the unifying theme of a bunch of different interests, or if this is the base from which other interests spring.

I realize that this makes me an incredibly weird geek.

I was talking to Caroline about our respective work environments, specifically about how we (and our coworkers) relocated (or didn’t) for our jobs, and I was chagrined to realize that this novel that I’ve been working at (or not) for way too long at this point spends some time revolving around these questions:

  • How does being stuck in a single place and time constrain one’s agency to affect the world around them?
  • What does labor look like in a mostly/quasi post-scarcity world?

Perhaps the most worrying thing about this project is that I started writing this story in late August of 2008. This was of course before the American/Financial Services economic crash that got me blogging and really thinking about issues outside of technology.

It’s perhaps outside the scope of this post, but I think it’s interesting how, since graduating from college, my “research” interests, such as they are, all work their way into my fiction (intentionally or otherwise). I suppose I haven’t written fiction about Free Software/open source, exactly, but I think there’s a good enough reason for that.1

I’m left with two realizations. First, that this novel has been sitting on my plate for far too long, and there’s no reason why I can’t write the last 10/20 thousand words in the next few months and be done with the sucker. Second, I’m interested in thinking about how “being an academic” (or not) affects the way I (we?) approach learning more about the world and the process/rigor that I bring to those projects.

But we’ll get to that later, I have writing to do.


  1. I write fiction as open source, in a lot of ways, so it doesn’t seem too important to put it in the story as well. ↩︎

There's Always Something New to Learn

Now that I’m fairly confident in my ability to do basic Linux systems administration tasks (manage web and email servers, maintain most Linux systems, convince desktop systems that they really do want to work the way they’re supposed to), I’m embarking on a new learning process. I’ve been playing around with “real” virtualization on my desktop, and I’ve been reading a bunch about systems administration topics that are generally beyond the scope of what I’ve dealt with until now. Here is a selection of the projects I’m playing with:

  • Getting a working xen setup at home. This requires learning a bit more about building working operating systems, and also, in the not too distant future, buying a new (server) computer.
  • Installing xen on the laptop, because it’ll support it, I have the resources to make it go, and it’ll be awesome.
  • Learning everything I can about LVM, which is a new (to me) way of managing partitions and disk images that makes backups, disk snapshots, and other awesomeness much easier. It also means some system migration work that I have yet to tinker with, as none of my systems currently use LVM. (See the sketch after this list.)
  • Doing package development for Arch Linux, because I think that’s probably within the scope of my ability, because I think it would add to my skill set, and because I appreciate the community and want to be able to give back. Also, I should spend some time editing the wiki, because I’m really lazy with that.
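
For the LVM item above, the basic flow is short enough to sketch here. This assumes a hypothetical spare partition at /dev/sdb1 and made-up volume names; it’s the shape of the commands, not a migration plan:

#!/bin/bash
# Create a physical volume, a volume group, and a logical volume on a
# spare partition, then take a snapshot of the logical volume. The
# device and the names (vg0, home, home-snap) are all placeholders.

pvcreate /dev/sdb1                    # mark the partition for LVM use
vgcreate vg0 /dev/sdb1                # build a volume group on it
lvcreate -L 20G -n home vg0           # carve out a 20GB logical volume
lvcreate -s -L 2G -n home-snap /dev/vg0/home  # snapshot, e.g. for backups

The snapshot is the killer feature for backups: it gives you a frozen, consistent view of the volume to copy from while the original stays in use.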

I guess the overriding lesson of all these projects is a firmer grasp of how incredibly awesome, powerful, and frankly stable Arch Linux is (or can be). I mean, there are flaws of course, but given how I use systems, I’ve yet to run into anything show-stopping. That’s pretty cool.

The Old Projects Project

Before a road trip, by now a couple of months ago, I installed a copy of nginx on my laptop in the hope of doing some web development and working on other projects while I was in the car. For the uninitiated (you mean you don’t all write technical documentation for web developers and systems administrators?!?), nginx is an incredibly powerful web server; as of June 11th it also runs on foucault, the server that hosts the Cyborg Institute and tychoish.

This is, almost always, I think, a losing proposition.

I never get any sort of substantial (or insubstantial) work done during my road trips up and down the Northeast Corridor. Not that that’s a bad thing, but I also expect that there’ll be more awake-time when I’m not driving or gossiping.

And there never is.

So the web server sat unused on my laptop for a long time, but recently I’ve been playing with it a bit and I’ve finally gotten a number of cool things set up. I have a local gitweb instance, which makes it easier to track progress on local and private projects that are stored in git. Perhaps more importantly, I have set up quick local ikiwiki instances for a number of projects (sketched below). They’re easy to configure and quick to set up, and while I suppose I could hack something together for myself, there’s something nifty about being able to take an alternate view of some content, and also being able to really preview changes to your work before publishing them.
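
For flavor, here’s roughly what standing up one of those local ikiwiki instances looks like. This is a sketch rather than my exact configuration: the directories are hypothetical, and I’m leaving out the nginx configuration that serves the rendered output.

#!/bin/bash
# Render a local ikiwiki instance from a directory of source pages.
# ~/wiki/src holds the pages; ~/sites/wiki is where the rendered HTML
# lands (and where the local web server points). Both paths are made up.

mkdir -p ~/wiki/src ~/sites/wiki
ikiwiki --verbose ~/wiki/src ~/sites/wiki --url=http://localhost/wiki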

The real reason for this post, though, is that by virtue of this development I have revisited a few projects that had been lingering in the home directory of my computer for far too long. Which has been a powerful and useful exercise.

By which I mean, it’s been painful.

Besides “the novel,” which has been the lingering and dragging front-burner project for a year, there are a number of quasi-serial stories that have lingered in some state of incompleteness for a couple of years now. I’m kind of amazed both at how foreign these stories seem to me in terms of the style (good to know that I’m a better writer than I was a few years ago) and at how quickly I can fall right back into the story and tell you every little thing about the world, situation, and moment where I left off.

The mind is, indeed, an amazing thing.

Where my strategy for the past year has been to “plow through and finish the novel,” I think my tactic this summer will be to move all of my projects forward in some way. Small daily writing goals for the novel, combined with somewhat less regular (but more specific) goals with regards to other projects. In the next two months I want to have a fairly active and varied writing schedule worked out that isn’t based around the monthly (or so) weekend binges that I’ve been using for most of the last year.

That’s the plan at any rate.