distribution habits and change

Here's another one for the "new workstation series."

Until now my linux-usage has been very Debian based. It's good, it's stable, and the package management system is really intensely wonderful. I was happy. And then I was somewhat less happy with Ubuntu, the distribution I'd been using on my desktop.

Don't get me wrong, Ubuntu is great and I'd gladly recommend it to other people, but... with time, I've found that the system feels clunky. This is hard to describe, but a growing portion of the software I run isn't included in the normal Ubuntu repositories, some services are hard to turn off/manage correctly (the display manager, various superfluous gnome-parts,) and I've had some ubuntu-related kernel instability.

My server, of course, runs Debian Lenny without incident. There's something really beautiful about that whole stability thing that Debian does. Because of the wonderful experience I'd had with Lenny, I considered running Debian (lenny/testing) on my desktops, and I tried various flavors of it, but I found that, for the software I wanted to run, things were either too stale or too unstable. This is totally to be expected, as Debian's singular goal is stability, and getting a fresher/less stable operating system that's based on Debian is always going to be a difficult proposition.

In the past I've dragged my feet with regards to upgrading operating systems because I take a "don't fix it if it ain't broke" approach to maintaining computers, and all my systems worked. So until I was faced with this work computer--despite my dissatisfaction--I'd never really seriously considered the mechanics of changing distributions, much less the prospect of having to interact with a linux distribution without the pleasures and joys of apt-get.

But then I got this new work station and...

I switched. At least for one system. So far. ArchLinux uses a system called "pacman" for package management. Pacman is really nifty, but it's different from apt: its output is a bit clearer, it's just as fast and "smart," and the packages are fresh.

And then there's the whole business of the way Arch approaches management; Arch uses a "rolling release" system: rather than releasing versions of an operating system, each with a given set of packages frozen at a given moment, Arch releases packages when they're "ready," on a package-by-package basis (with an awareness of interactions between packages). Pacman also has a system for easily installing software that isn't in the main repositories as packages, which makes upgrading and removing that software much easier later.
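For the apt-accustomed, a rough phrasebook helps when getting started with pacman. This is just a sketch of the most common operations; the package names are placeholders:

```shell
# A rough pacman-to-apt phrasebook (package and search names are placeholders).
pacman -Syu          # sync repos and upgrade everything: apt-get update && apt-get upgrade
pacman -S somepkg    # install:    apt-get install somepkg
pacman -Ss term      # search:     apt-cache search term
pacman -R somepkg    # remove:     apt-get remove somepkg
pacman -Qi somepkg   # query info about an installed package: dpkg -s somepkg
```

The capital letters are pacman's "operations" (Sync, Remove, Query) and the lowercase letters modify them, which takes some getting used to.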

This sounds complex, maybe too complex, but somehow it's not. When I started thinking about writing this post, I thought, "how do I convey how totally strange and different this is from the Debian way?" By the time I got around to actually writing it, I'd settled into a place of stability, and I must confess I don't notice it very much. It's wild, and it just works.

I was going to go on this whole spiel about how, even though the functional differences between one "flavor" of GNU/Linux and another are pretty minimal, it's interesting to see how different the systems can "feel" in practice. Here's a brief list of what I've noticed:

  • I find the flags that control operations with pacman non-intuitive, and frankly I'm a bit annoyed that they're case sensitive: pacman -S is different from pacman -s, which leads to a lot of typos.

  • I've yet to set up a machine in Arch that uses wireless. I'm just wary, mostly, of having to set up the network stuff "by hand" in Arch, given how finicky these things can be in general.

  • The ABS (arch build system, for installing packages that aren't in the main arch repositories,) took some getting used to, but I think that's more about learning how to use a new program/tool, and the fact that the commands are a bit weird.

    Having said that, I really like the way the package building scripts just work and pull from upstream sources, and even, say, use git to download the source.

  • I'm impressed with how complete the system is. Debian advertises a huge number of packages and prides itself on its completeness (and it is complete, I'm not quibbling,) but I've run into programs where I had a hard time getting the right version, or the software plain old wasn't in the repository. In Arch, I've yet to find something that isn't in the repositories. (Well, getting a version of mutt with the sidebar patch was a bit rough, and I haven't installed the urlview package yet, but that's minor.)

    I think this is roughly analogous to the discussions that python/ruby people have with perl people about the actual vs. advertised worth of CPAN (e.g., CPAN is great and has a lot of stuff, but it suffers from being unedited, and its huge number of modules is more an artifact of time than of actual value). So while Debian (and CPAN) have more "stuff" than their competitors, in many cases the competitors can still succeed with less "stuff," because their "stuff" is more edited: they can choose the 80% of stuff that satisfies 94% of need, rather than having 90% of the stuff that satisfies 98% of need. Diminishing returns and all that.

  • It's lightweight and smooth. While the hardware of my work computer is indeed impressive, particularly by my own standards, the hardware of the "virtual machine" isn't particularly impressive. And it's still incredibly peppy. Lightweight for the win.
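To make the ABS bit above concrete: an Arch package is described by a PKGBUILD, a small bash file that the build tooling (makepkg) consumes to fetch upstream source and produce an installable package. This is an illustrative sketch only; the package name, URL, and version are made up:

```shell
# A minimal, hypothetical PKGBUILD (every name and URL here is a placeholder).
pkgname=hello-sketch
pkgver=1.0
pkgrel=1
pkgdesc="An illustrative example package"
arch=('any')
url="http://example.com/hello-sketch"
license=('GPL')
source=("http://example.com/$pkgname-$pkgver.tar.gz")
md5sums=('SKIP')

build() {
  # makepkg unpacks the source into $srcdir before calling this
  cd "$srcdir/$pkgname-$pkgver"
  ./configure --prefix=/usr
  make
}

package() {
  # install into $pkgdir; makepkg wraps the result up as a package file
  cd "$srcdir/$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Running makepkg in the directory containing such a file downloads the source, builds it, and produces a package that pacman can install and later upgrade or remove cleanly, which is why out-of-repository software stays manageable.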

Next up? More reasons Arch Rocks.

See you later!

Multiple Computers and Singular Systems

Here's another episode in my "work workstation" series of posts about setting up my new computer for work, and related thoughts on evolving computer setups. First, some history:

My tendency and leading desire is to make my working environment as consistent as possible. For a long time I was a one-laptop kind of guy. I had a PowerBook, it did everything just the way I wanted it to, and whenever I needed to do something digitally, I had my computer with me, and I didn't have to worry about other people's computers being configured wrong. It meant that I worked better/smarter/more effectively, and I was happy.

When the PowerBook died, particularly as "my work" became intertwined with my computer, it became clear that I needed a bit more: a computer I could stretch out on, both in terms of things like media (music/video/etc) and in terms of screen space. Concurrently, I also discovered/became addicted to the Awesome window manager, and this has been a great thing for how I use computers, but the end result of this transition was that I had to manage (and needed to use) a couple of machines on a fairly regular basis.

Basically I have a set of applications and tools that all of my systems have installed; either their configurations are all standard, or I store a copy of the configuration file in a git repository that I link all of the machines to. My work is all stored in git repositories that I sync between machines as needed. It works pretty well, and it means that, aside from hardware constraints, it's not so much that I have multiple machines as that I have different instances of the same machine.
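That sync habit can be sketched in miniature, with a bare repository standing in for the server copy and two clones standing in for two machines (all names and paths here are made up):

```shell
# A minimal sketch of keeping configs in sync with git.
# "configs.git" plays the server; "machine1"/"machine2" play two workstations.
set -e
work=$(mktemp -d)
git init -q --bare "$work/configs.git"
# Machine one clones the repo, adds a config file, and pushes:
git clone -q "$work/configs.git" "$work/machine1" 2>/dev/null
cd "$work/machine1"
git config user.email you@example.com
git config user.name You
echo 'alias ll="ls -l"' > bashrc.common
git add bashrc.common && git commit -qm "add common bashrc"
git push -q origin HEAD
# Machine two clones once, and afterwards just pulls to stay current:
git clone -q "$work/configs.git" "$work/machine2"
grep -q 'alias ll' "$work/machine2/bashrc.common" && echo synced
```

Each machine then symlinks its dotfiles into the working copy, so a `git pull` is all it takes when a configuration changes.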

Next: the implications...

I think above all, I'm a Unix guy. UNIX is built around a certain kind of modularity, and I've worked out practices for myself that allow me to keep my application configurations synced between machines. Most of the time configurations don't change, but sometimes they do, and when that happens all I have to do is sync up a git repository.

The second implication is that I set up and work with my systems with some notion of stability. While I must confess that I'm not entirely pleased with the way Ubuntu has my desktop and laptop running, it is stable and reliable, and I'm wary of changing things around for a setup that would be functionally more or less the same, but a bit more parsimonious on the back end. I may be a huge geek and a hacker type, but I'm a writer and reader first, and although while I'm blathering on about my setup it might seem like all I do is tweak my systems, the writing and reading are really more "my thing."

My Workstation Choices

I've been talking in fairly abstract terms about this new workstation that I've been setting up, and about how this fits into the general ethos of my existing hardware setup, but I think, it's probably the right time to interject and clarify some of the choices and base-assumptions that I've made during this process.

  • My current systems (hardware):
    • A moderately powered dual-monitor workstation (Ubuntu-from-Dell, circa October 2008) that's running Ubuntu for the moment (I'll probably move it to Arch or Debian in the next few months). This was my work computer during the freelance period.
    • A thinkpad x41t (vintage 2005); 1.4 GHZ pentium M (?); 1.5 gigs of ram; 60gb hard drive, running ubuntu with the lenny kernel. This is my main personal computer at the moment, as I haven't gotten the desktop set up yet. It's a great machine, but I do feel a bit cramped on it for heavy day-to-day usage; it's great for distraction-free writing and portability.
    • (The work computer) A contemporary vintage iMac running OS X 10.5 latest, and also running Arch Linux in Sun's VirtualBox system.
    • (The infrastructure) Debian based virtual server(s), to provide my own personal cloud (web hosting, git hosting, file syncing, remote shell access, email).
  • My current systems (software; but application centered):
    • Window Management: awesome. I run slim as a display manager on the laptop, and just use startx/xinit on the desktop/virtual box sessions.
    • Email: I use mutt for reading email, compose emails in emacs, sort email using procmail, download email using fetchmail (if necessary), but mostly keep mail synchronized using my own git-mail scripts. For sending email and smtp connectivity I use msmtp, and I suppose I'm using postfix on the server as well.
    • Text Editing: I use emacs23 (still the CVS/development/snapshot branch of emacs; stable is v22). I use 23 because I like the emacs-daemon functionality, and it's pretty damn stable. I have aquamacs installed under OS X for the moment, but I'll probably install 23 there soon, because aquamacs is quirky.
    • Personal Organization: org-mode, which is technically included in emacs (and I use whatever the stock version in 23 is, these days.) I use org-mode for managing my todo lists, shopping lists, project planning and appointments.
    • Shell/Terminal: bash and urxvt(cd) under linux, and terminal.app on Leopard. And GNU Screen. I live in screen.
    • Web Browsing: I use firefox with hit-a-hint, and emacs-key-bindings (firemacs) on linux systems, as I wait for the day when good functional web-kit based browsers begin to become a possibility.
    • IM/IRC/Chat: mcabber for IM (running ejabberd on my server with the pyaimt transport), and irssi for IRC.
    • Linux Distribution: Debian stable on servers; Ubuntu-mostly on desktops, with a desire to move to ArchLinux for desktop use. I love Debian, but for my desktop-use purposes I can't find a setup that I'm comfortable with, and while Ubuntu is great (and I'm glad it works so well with my laptop) it's a bit heavy and makes assumptions that I'm not comfortable with. Alas.

That's what I'm working with. Just so you know. The mocking can begin now.

New Workstation Trials

Rather than bore you with the minutiae of my move (e.g., shit, I need to get a shower curtain; why is all my toilet paper on the moving truck; I'm getting really sick of sitting on the floor while I wait for the moving truck) I thought I'd talk a little bit about something that seems to occupy a bunch of my thinking these days: how I'm setting up my work-desktop computer. It's trivial, an ongoing process, and of minimal interest to most other people.

So I think I'll make a series of it. There's this post, another that I've written, and a few other thoughts that I think fit in. And, truth be told, I've been spending so much time recently packing things, attending to chores, and dealing with other crap that it'll be good to have an excuse to do some writing again.


Here's the setup: I have an iMac of contemporary vintage for a workstation, which is of course running OS X 10.5 Leopard, by default. Here are the challenges I found myself facing:

  • OS X is a great operating system, but I'm not a mac guy anymore, much to my surprise. I installed quicksilver pretty early on, but the truth is that my head isn't shaped that way any more. The mouse frustrates me, and all the things that make OS X great I don't really use.
  • All my other machines and working environments for the past while have been linux based, and I've basically come to the conclusion that having one environment that's shared between a large number of boxes is preferable to having different and unique operating environments.

For a long time, I got the unified operating environment by virtue of only using one portable computer; now I just keep my laptop and desktop the same (more or less) and get a very similar result. This is probably worth its own post, but I was very wary of having a work machine that was too radically different from what I'm used to and what I use personally.

So if I need to run Linux, there are really only three options:

  1. Level OS X and just run ubuntu/debian/etc.
  2. Set up a dual boot OS X/Linux system, and switch as I need to.
  3. Run Linux in some sort of virtual machine environment like VMware or VirtualBox.

Option one is simple, and I liked the thought of it, but it seems like such a waste, and what if there were some mac-specific app that I wanted to try? And it removes the option of using OS X as a backup... While I'm no longer a mac guy, I do think that OS X's desktop environment is superior to any non-tiling window manager for X11/Linux.

Option two works, certainly, but I hate rebooting, and in a lot of ways option two would be functionally like option one, except I'd have less accessible hard drive space, and I'd have to reboot somewhat frequently if it turned out I actually needed to use both. Best and worst of both worlds.

Option three is frustrating: virtual machines take some sort of minor performance hit, integration between host and guest is sometimes difficult and confusing, and guest operating system stability is largely dependent upon host operating system stability.

Ultimately I chose option three, and as I've used the machine, the performance hit has been totally unnoticeable, and though it took a bunch of futzing, I think I've finally settled into something that shows a lot of promise of working. (Ed. note that I've got a bit of a production lag on the blog.)


I think that's sufficient introduction. There'll be more, don't worry.

Org Mode Pitfalls

I'm writing this post for all of the wrong reasons. I've had this "write a post about pitfalls of org-mode," on my org-agenda for weeks, with a list of "ways I'm not doing things right in org-mode." One of those pitfalls, the main one in fact, was "you're living too much in the agenda view, and not thinking of your org-files as working documents and outlines onto themselves."

And because I'm living too much in the agenda view, I'm writing a post (that I need to write, but have been hesitant to write for a while) mostly to get it off my todo list.

This is certainly an acceptable way to work, and I think todo lists mostly exist in order for their items to be completed and checked off. At the same time, I've said (and I keep saying) the beautiful thing about org-mode is that it allows you to plan and process your projects in a way that makes sense for project planning without centering your process on "actionable items," which is good for doing things but less good for planning things.

And so I've been failing at keeping the "planning" and the "doing" as separate thought processes. Note to self: do better with this.

The second pitfall is in the "org-refile" functionality (C-c C-w), which allows you to send items and subtrees to other parts of your org-agenda files. I think part of the problem is that I don't really get how it was intended to be used, and as a result when I try to use it, it doesn't work. (After I wrote this, I tooled around in customize and found that the following bit (in custom-set-variables) helps, bunches:)

'(org-refile-use-outline-path (quote file))

When I want to refile something, I think to myself "it should go to x file, under which heading, hrm... let's see what's there..." And my options are presented to me in [Heading]/ (filename.org) format. The problem is that org is thinking backwards from me, and as a result I end up mis-filing things, or not using refile as much as I should, because it doesn't really work for me. Hrm. Not sure how to hack this.

In any case, back to working.

things I hate about the internet

In light of my otherwise fried state of mind, I would like to present a list of things that I dislike. Because I'm snarky like that.

  • HTML emails. I've yet to send or receive an email that really requires rich text formatting provided by HTML emails. While multi-part emails (which send multiple copies of the same email in rich and plain text) are a good thing, it's a huge pain in the ass to get an email (particularly a long email) three times, just for the pleasure.
  • Sites that recreate twitter without adding any useful features or discussion. It's as if the dimwitted internet people said "holy shit, if we give people 140 characters to say banal things on our site maybe we'll get traffic like twitter," except this isn't how the internet has ever worked (or worked well.)

    Facebook is coming out with "usernames," I've gotten an invitation to microblog on a niche social-networking site, and everyone seems hard set on reimplementing this whole "status" thing a la twitter in the beginning, without any thought of interpretation (a la laconica) or doing something cool like jaiku-style threads, let alone the next big thing.

  • Malformed emails. Dudes. Sending a plain text email is really simple, there's no excuse for it to look like your cat took a nap on the tab key. I'm not chiding anyone for neglecting to test every email "blast" they send (because I'd be that lazy) but I am chiding folks for not testing it once. Writing a text file and sending it isn't that hard.
  • Reimplementation of email. I really hate getting facebook messages, and direct messages on [microblogging service], and each and every other fucking social networking site. Just send me email. Real email. It works, I have a good process for dealing with it, and I don't have to screw around with anything. Thanks.
  • The Twitter Fail Whale. Dudes. There was a while about a year ago, when a bunch of geeks were sitting around and thinking, "you know this twitter technology is going to be really cool, and there are a lot of possibilities here," and there were, and I suppose there still are, but the truth is that I see the fail whale several times every day, and most of the cool things that I wanted to see in twitter two years ago and then a year ago (real xmpp support, track, federation, custom filtered feeds (a la LJ-style friends' filters),) still haven't materialized. I think the addition of OAuth is a great thing, but it's a baby step.
  • The continued prevalence of IRC. Dudes, discover jabber/xmpp. Thanks. A while back, I had a lot of nostalgia for IRC, and it's true that IRC has a lot of history and is a standard to be reckoned with, but jabber is so much more elegant and secure, and provides features (persistence, logging, independence, etc.) without net-splits and complicated ad hoc registration schemes.

That's all for now. What do you hate about the internet?

on git: in two parts

A post about the distributed version control system "git" in two parts.

Part One: Git Puns

My identi.ca buddy and frequent commenter here, madalu, posted the following notice a few weeks ago:

#ubuntu-one... No thanks! I'll stick with my home-brewed git + server + usb drive solution. My git repos breed like rabbits!

Which basically sums up my opinion on ubuntuone. But I thought that the "my git repos breed like rabbits" was both accurate (git repositories are designed to be replicated in their entirety), and a sort of funny way to put it. And being the kind of person that I am, I decided to see what other (potentially dirty) puns I could make about git. Here's what I came up with:

what did one git repo say to another git repo? pull my diff

what did mama git say when she found her remote in his room making new branches? octopus merge this instant!

what did one git remote say to entice another remote to branch? it's ok we can just tell them we were cherry picking later.

what did dr. git say when a repo complained of bloating? git gc

I should point out that these four puns all demonstrate factual features of git, though "pull my diff" isn't exactly what happens.

"Octopus Merge" is the strategy git uses by default when you merge more than one branch into the current branch at once, producing a merge commit with three or more parents. Similarly, "cherry picking" is a way to manually select which changes get merged if you're not ready to do full merges, and git gc is the cleanup command that re-compresses and prunes the object database so that your repo works faster and with less disk space.
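Two of those mechanics are easy to watch in a throwaway repo (all branch and file names below are made up for the demonstration):

```shell
# A throwaway repo demonstrating an octopus merge, plus git gc.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name You
main=$(git symbolic-ref --short HEAD)      # "master" or "main", depending on git
echo base > base.txt
git add base.txt && git commit -qm base
for b in topic1 topic2; do                 # two branches diverging from base
  git checkout -qb "$b" "$main"
  echo "$b" > "$b.txt"
  git add "$b.txt" && git commit -qm "$b"
done
git checkout -q "$main"
echo mainline > main.txt
git add main.txt && git commit -qm mainline
# Merging two branches at once yields one commit with three parents:
git merge -q -m "octopus merge" topic1 topic2
git cat-file -p HEAD | grep -c '^parent'   # prints: 3
git gc --quiet                             # repack and prune the object database
```

The parent count is the whole trick: a normal merge commit has two parents, and an octopus has as many as you hand to `git merge`, plus one.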

Anyway, I'm out of puns, you all are welcome to join in.

Part Two: Atypical uses of Git.

I'm sure I've written a bunch here about how I'm not really a programmer, and while this is true, I do use git a lot. In part I think this is because git is really mostly an ad-hoc file system, and in part because, given how I write, the kind of writing I do isn't that different from programming.

So aside from storing my writing projects and my org-mode files, I do things like store all of my mail directories in git. Which you might think is kind of weird, but the truth is that it makes keeping lots of computers in sync a rather simple proposition, and it's damn fast.

I also have a directory I call "garen" (but used to call "main") that is basically my home directory. It has all my emacs lisp files, most of my non-mail-related scripts, various configuration files, and so forth. It started out as a backup and workspace for smaller projects, but it's since morphed into "that one thing I need to have on my computer in order to actually work." When I was setting up the server, it took a thousand things that might have been huge headaches and made them non-issues. Here's what this repo looks like:

  • emacs/ This is where my emacs-lisp files all live. I have an 'init.el' file, which is basically the standard .emacs file, and a 'gui-init.el' file for code that I only want to run if I'm on a desktop where I'll be running non-console emacs frames. As a result, on my machines my .emacs file looks like this:

    (load "~/garen/emacs/gui-init.el")
    (load "~/garen/emacs/init.el")
    

    With the first line commented out if needed. End result, emacs loads the same everywhere, no thinking.

  • scripts/ I add this to my path, so that any little bit of bash script that I want to be able to use is accessible and the same on all my machines.

  • configs/ Generally my format is to have config_file.machine_name, for example: bashrc.leibniz. In the case of the bashrc, I have a ".common" file that has everything that all my machines need, while the machine specific files have everything that's... well specific, and a source statement for the common file. So my "real" .bashrc looks like this:

    source /home/tychoish/garen/configs/bashrc.leibniz
    

    And everything stays in sync between the machines. How cool is that.

That's sort of the most important thing. The great thing is that this makes setting up a new user account on a server, or a new box itself, a piece of cake.
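The configs/ convention above can be sketched end to end; the host name "leibniz" comes from the post, but the repo path and the settings themselves are hypothetical:

```shell
# Sketch of the configs/ convention: machine-specific files source a shared one.
set -e
repo=$(mktemp -d)/garen
mkdir -p "$repo/configs"
# Shared settings that every machine gets:
cat > "$repo/configs/bashrc.common" <<'EOF'
export EDITOR=emacsclient
alias gs='git status'
EOF
# A machine-specific file, here for the host "leibniz"; it sources the
# common file and then adds its own bits:
cat > "$repo/configs/bashrc.leibniz" <<EOF
. $repo/configs/bashrc.common
export MACHINE=leibniz
EOF
# The real ~/.bashrc on that machine would then be just one line, e.g.:
#   source /home/tychoish/garen/configs/bashrc.leibniz
. "$repo/configs/bashrc.leibniz"
echo "$EDITOR on $MACHINE"   # prints: emacsclient on leibniz
```

One git repo, one sourced line per machine, and every box picks up shared changes on the next pull.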

Food for thought!

dweebishness of linux users

I ran across this smear piece on Ubuntu users, written from the perspective of a seasoned Linux user, which I think resonates both with the problem of treating your users like idiots and, differently, with the kerfuffle over Ubuntu One, though this post is a direct sequel to neither.

The article in question makes the critique (sort of) that a little bit of knowledge is a terrible thing, and that by making Linux/Unix open to a less technical swath of users, the quality of the discourse around the linux world has taken a nose dive. It's a sort of "grumble grumble get off my lawn, kid" argument, and while the elitist approach is off-putting (though totally par for the course in hacker communities,) I think the post does resonate with a couple of very real phenomena:

1. Ubuntu has led the way for Linux to become a viable option for advanced beginner and intermediate computer users, particularly since the beginning of 2008 (e.g., the 8.04 release). Ubuntu just works, and a lot of people who know their way around a keyboard and a mouse are and can be comfortable using Linux for most of their computing tasks. This necessarily changes the makeup of the typical "Linux User" quite a bit, and I think welcoming these people into the fold can be a challenge, particularly for the more advanced users who have come to expect something very different from the "Linux Community."

2. This is mostly Microsoft's fault, but for people who started using--likely Windows-powered--computers in the nineties (which is a huge portion of people out there), being an 'intermediate' user means a much different kind of understanding than the one "old school" Linux users have.

Using a Windows machine effectively, and knowing how to use one of these systems, revolves around knowing what controls are where in the control panel, around being able to "guess" where various settings are within applications, knowing how to keep track of windows that aren't visible, understanding the hierarchy of the file system, and knowing to reboot early and often. By contrast, using a Linux Machine effectively revolves around understanding the user/group/file permissions system, understanding the architecture of the system/desktop stack, knowing your way around a command line window, and the package manager, and knowing how to edit configuration files if needed.
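As a tiny illustration of the user/group/file permission model mentioned above (the file itself is a throwaway temp file):

```shell
# The owner/group/other permission bits in action.
set -e
f=$(mktemp)
chmod 640 "$f"             # owner: read+write; group: read; others: nothing
stat -c '%a' "$f"          # prints: 640   (GNU coreutils stat)
chmod u+x "$f"             # grant execute to the owner only
perms=$(stat -c '%a' "$f")
echo "$perms"              # prints: 740
rm -f "$f"
```

Three digits, three audiences; internalizing that triad (and commands like chmod, chown, and ls -l that manipulate and display it) is exactly the kind of skill that doesn't transfer from the control-panel world.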

In short, skills aren't as transferable between operating systems as they may have once been.

Ubuntu, for its flaws (tenuous relationship with the Debian Project, peculiar release cycle), seems to know what it takes to make a system usable with very little upfront cost: how the installer needs to work, how to provide and organize the graphical configuration tools, and how to provide a base installation that is familiar and functional to a broad swath of potential users.

While this does change the dynamic of the community, it's also the only way that linux on the desktop is going to grow. The transition from windows power user to linux user is not a direct one. (While arguably the transition between OS X and Linux is reasonably straightforward.) The new people who come to the linux desktop are by and large going to be users who are quite different from the folks who have historically used Linux.

At the same time, one of the magical things about free software is that the very act of using it educates users about how their software and their machines work. This is partly intentional, partly by virtue of the fact that much free software is designed to be used by the people who wrote it, and partly because of free software's adoptive home of UNIX-like systems. Regardless of the reason, however, we can expect even the most "n00bish" of users to eventually become more skilled and knowledgeable.


Having said that, in direct response to the article in question: even though I'm a huge devotee of a "real" text editor, might it be the case that the era of the "do everything text editor" is coming to an end? My thought is not that emacs and vi are no longer applicable, but that building specialized, domain-specific editing applications is now easy enough that building such applications inside of vi/emacs doesn't make the same sort of sense that it made twenty or thirty years ago. Sure, a class of programmers will probably always use emacs, or something like it, but the prospect of emacs being supplanted by things-that-aren't-editors is something that isn't too difficult to imagine.

If the singularity doesn't come first, that is.