My Workstation Choices

I’ve been talking in fairly abstract terms about this new workstation that I’ve been setting up, and about how it fits into the general ethos of my existing hardware setup, but I think it’s probably the right time to interject and clarify some of the choices and base assumptions that I’ve made during this process.

  • My current systems (hardware):
    • A moderately powered dual-monitor workstation (ubuntu-from-Dell, circa October 2008) that’s running Ubuntu for the moment (I’ll probably move to Arch or Debian in the next few months). This was my work computer during the freelance period.
    • A ThinkPad X41t (vintage 2005): 1.4 GHz Pentium M (I think), 1.5 GB of RAM, and a 60 GB hard drive, running Ubuntu with the Lenny kernel. This is my main personal computer at the moment, as I haven’t gotten the desktop set up yet. It’s a great machine for distraction-free writing and portability, but I do feel a bit cramped on it for heavy day-to-day usage.
    • (The work computer) An iMac of contemporary vintage running the latest OS X 10.5, and also running Arch Linux in Sun’s VirtualBox.
    • (The infrastructure) Debian-based virtual server(s) that provide my own personal cloud: web hosting, git hosting, file syncing, remote shell access, and email.
  • My current systems (software; but application centered):
    • Window Management: awesome. I run SLiM as a display manager on the laptop, and just use startx/xinit for the desktop and VirtualBox sessions.
    • Email: I read mail in mutt, compose in emacs, sort with procmail, and download with fetchmail (when necessary), but mostly I keep mail synchronized using my own git-mail scripts. For sending mail and SMTP connectivity I use msmtp, and I suppose I’m using postfix on the server as well. (There’s a sketch of this pipeline after the list.)
    • Text Editing: I use emacs23 (still the CVS/development/snapshot branch; stable is v22). I use 23 because I like the emacs-daemon functionality (see the example after the list), and it’s pretty damn stable. I have Aquamacs installed under OS X for the moment, but it’s quirky, so I’ll probably install 23 there soon.
    • Personal Organization: org-mode, which is technically included in emacs (and I use whatever the stock version in 23 is, these days). I use org-mode for managing my todo lists, shopping lists, project planning, and appointments; there’s a small example after the list as well.
    • Shell/Terminal: bash and urxvt(cd) under Linux, and Terminal.app on Leopard. And GNU Screen. I live in screen.
    • Web Browsing: I use Firefox with Hit-a-Hint and emacs keybindings (Firemacs) on Linux systems, as I wait for the day when good, functional WebKit-based browsers become a possibility.
    • IM/IRC/Chat: mcabber for IM (running ejabberd on my server with the pyaimt transport), and irssi for IRC.
    • Linux Distribution: Debian stable on servers; mostly Ubuntu on desktops, with a desire to move to Arch Linux for desktop use. I love Debian, but for my desktop purposes I can’t find a setup that I’m comfortable with; and while Ubuntu is great (and I’m glad it works so well with my laptop), it’s a bit heavy and makes assumptions that I’m not comfortable with. Alas.
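
Since the email setup is the most idiosyncratic piece, here’s a minimal sketch of the pipeline. The server addresses, account names, and the filter rule are hypothetical stand-ins, not my actual configuration::

    # ~/.fetchmailrc -- pull mail down when the git-mail sync isn't in play
    poll mail.example.com protocol IMAP
        user "tycho" password "not-really" ssl

    # ~/.procmailrc -- sort incoming mail into maildirs
    MAILDIR=$HOME/mail
    :0
    * ^List-Id:.*awesome
    lists/awesome/

    # ~/.msmtprc -- hand outgoing mail to the smarthost
    account default
    host smtp.example.com
    from tycho@example.com
    auth on
    tls on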
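
The emacs-daemon workflow that keeps me on 23 fits in a couple of lines::

    # start one long-running emacs server per session
    emacs --daemon

    # attach a lightweight client in the terminal...
    emacsclient -t notes.org

    # ...or as a fresh X11 frame (new in 23)
    emacsclient -c

The win is that the client frames open instantly and all share one set of buffers, so closing a frame never costs you any state.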
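
And for the curious, org-mode files are just plain text with a lightweight outline syntax. An illustrative fragment (not one of my actual files)::

    * Projects
    ** TODO draft the workstation post
       DEADLINE: <2009-06-05 Fri>
    ** DONE unpack the books
    * Errands
    ** TODO buy a shower curtain
       SCHEDULED: <2009-06-01 Mon>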

That’s what I’m working with. Just so you know. The mocking can begin now.

New Workstation Trials

Rather than bore you with the minutiae of my move (e.g., shit, I need to get a shower curtain; why is all my toilet paper on the moving truck; I’m getting really sick of sitting on the floor while I wait for the moving truck), I thought I’d talk a little bit about something that seems to occupy a bunch of my thinking these days: how I’m setting up my work-desktop computer. It’s trivial, an ongoing process, and of minimal interest to most other people.

So I think I’ll make a series of it. There’s this post, another that I’ve already written, and a few other thoughts that I think fit in. And, truth be told, I’ve been spending so much time recently packing things, attending to chores, and dealing with other crap that it’ll be good to have an excuse to do some writing again.


Here’s the setup: I have an iMac of contemporary vintage for a workstation, which is, of course, running OS X 10.5 Leopard by default. Here are the challenges I found myself facing:

  • OS X is a great operating system, but, much to my surprise, I’m not a mac guy anymore. I installed Quicksilver pretty early on, but the truth is that my head isn’t shaped that way any more. The mouse frustrates me, and I don’t really use any of the things that make OS X great.
  • All my other machines and working environments for the past while have been Linux-based, and I’ve basically come to the conclusion that having one environment shared between a large number of boxes is preferable to having different and unique operating environments.

For a long time, I got the unified operating environment by virtue of only using one portable computer; now I just keep my laptop and desktop the same (more or less) and get a very similar result. This is probably worth its own post, but I was very wary of having a work machine that was too radically different from what I’m used to and what I use personally.

So if I need to run Linux, there are really only three options:

  1. Level OS X and just run ubuntu/debian/etc.
  2. Set up a dual boot OS X/Linux system, and switch as I need to.
  3. Run Linux in some sort of virtual machine environment like VMware or VirtualBox.

Option one is simple, and I liked the thought of it, but it seems like such a waste: what if there’s some mac-specific app that I want to try? And it removes the option of using OS X as a backup… While I’m no longer a mac guy, I do think that OS X’s desktop environment is superior to any non-tiling window manager for X11/Linux.

Option two works, certainly, but I hate rebooting, and in a lot of ways option two would be functionally like option one, except that I’d have less accessible hard drive space and I’d have to reboot somewhat frequently if it turned out I actually needed both systems. The best and worst of both worlds.

Option three is frustrating: virtual machines exact some minor performance hit, integration between host and guest is sometimes difficult and confusing, and guest operating system stability is largely dependent on host operating system stability.

Ultimately I chose option three. As I’ve used the machine, the performance hit has been totally unnoticeable, and though it took a bunch of futzing, I think I’ve finally settled into something that shows a lot of promise of working. (Ed. note: I’ve got a bit of a production lag on the blog.)
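
For reference, the VirtualBox side can be driven entirely from the terminal. This is a sketch rather than my exact invocation: the VM name is made up, and VBoxManage option syntax has shifted a bit between releases, so check against the version you have::

    # create and register the guest
    VBoxManage createvm --name "arch" --register

    # give it memory and identify the guest OS type
    VBoxManage modifyvm "arch" --memory 1024 --ostype Linux26

    # boot it
    VBoxManage startvm "arch"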


I think that’s sufficient introduction. There’ll be more, don’t worry.

dweebishness of linux users

I ran across this smear piece about Ubuntu users, written from the perspective of a seasoned Linux user, which I think resonates both with the problem of treating your users like idiots and, differently, with the kerfuffle over Ubuntu One, though this post is a direct sequel to neither.

The article in question makes the critique (sort of) that a little bit of knowledge is a terrible thing, and that by making Linux/Unix open to a less technical swath of users, the quality of the discourse around the Linux world has taken a nose dive. It’s a “grumble grumble, get off my lawn, kid” sort of argument, and while the elitist approach is off-putting (but totally par for the course in hacker communities), I think the post does resonate with a couple of very real phenomena:

1. Ubuntu has led the way for Linux to become a viable option for advanced-beginner and intermediate computer users, particularly since the beginning of 2008 (i.e., the 8.04 release). Ubuntu just works, and a lot of people who know their way around a keyboard and a mouse are, and can be, comfortable using Linux for most of their computing tasks. This necessarily changes the makeup of the typical “Linux user” quite a bit, and I think welcoming these people into the fold can be a challenge, particularly for the more advanced users who have come to expect something very different from the “Linux community.”

2. This is mostly Microsoft’s fault, but for people who started using computers--likely Windows-powered--in the nineties (which is a huge portion of the people out there), being an “intermediate” user means a much different kind of understanding than the one “old school” Linux users have.

Using a Windows machine effectively revolves around knowing what controls are where in the control panel, being able to “guess” where various settings live within applications, knowing how to keep track of windows that aren’t visible, understanding the hierarchy of the file system, and knowing to reboot early and often. By contrast, using a Linux machine effectively revolves around understanding the user/group/file permissions system, understanding the architecture of the system/desktop stack, knowing your way around a command-line window and the package manager, and knowing how to edit configuration files when needed.
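
To make that contrast concrete, the Linux side of “intermediate” looks something like this at a prompt (the paths and package names here are just examples)::

    # the permissions system: who owns what, and who may touch it
    ls -l /etc/X11/xorg.conf
    chmod g+w shared-notes.txt

    # software comes through the package manager, not a downloaded installer
    sudo apt-get install irssi

    # configuration lives (mostly) in plain-text files you edit directly
    sudoedit /etc/apt/sources.list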

In short, skills aren’t as transferable between operating systems as they may have once been.

Ubuntu, for all its flaws (a tenuous relationship with the Debian Project, a peculiar release cycle), seems to know what it takes to make a system usable with very little upfront cost: how the installer needs to work, how to provide and organize the graphical configuration tools, and how to provide a base installation that is familiar and functional for a broad swath of potential users.

While this does change the dynamic of the community, it’s also the only way that Linux on the desktop is going to grow. The transition from Windows power user to Linux user is not a direct one (while, arguably, the transition between OS X and Linux is reasonably straightforward). The new people who come to the Linux desktop are, by and large, going to be users who are quite different from the folks who have historically used Linux.

At the same time, one of the magical things about free software is that the very act of using it educates users about how their software and their machines work. This is partly intentional, partly by virtue of the fact that much free software is designed to be used by the people who wrote it, and partly because of free software’s adoptive home of UNIX-like systems. Regardless of the reason, however, we can expect even the most “n00bish” of users to eventually become more skilled and knowledgeable.


Having said that, and in direct response to the article in question: even though I’m a huge devotee of a “real” text editor, might it be the case that the era of the “do everything text editor” is coming to an end? My thought is not that emacs and vi are no longer applicable, but that building specialized, domain-specific editing applications is now easy enough that building such applications inside of vi/emacs doesn’t make the same sort of sense it made twenty or thirty years ago. Sure, a class of programmers will probably always use emacs, or something like it, but the prospect of emacs being supplanted by things-that-aren’t-editors isn’t too difficult to imagine.

If the singularity doesn’t come first, that is.

The Hard (GNU/Linux) Truth

Backstory: Chris is something of an operating system junkie and likes to play around with things. I think he’s weird, but whatever. Also, he bought a netbook several months ago, and after much persistence on my part (and some epic failures of Ubuntu installations), he finally installed Arch Linux, and it worked amazingly well. Here’s a funny (OK, mildly amusing?) conversation about his latest system plan, with only minor editing for understandability and comedic value:

Chris: I was thinking that I’d move some stuff off of my second internal hard drive and install the alpha version of Ubuntu to see how it works.

tycho: How it works? Like crap. It’s Ubuntu, so it’s meant to be easy to install and usable, not fresh, robust, and hardened. Besides, it’s an alpha; if you want stability, just install Arch and be done with it.

Chris: [silence and pause]

tycho: Now that we’ve resolved this quandary, what’s next?

Chris: [sighs and laughs] Nothing, really. [pause] I’m downloading Arch now, asshole.

tycho: [laughs] You’re welcome.

I don’t actually use Arch, because Ubuntu has been simple and I’ve yet to have a problem with it, but I would use Arch if I needed it, and I (seem to) recommend it to all my friends who are really geeky and are having problems with Debian/Ubuntu.

shrug

Linux Emergence

Here’s a little bit about emergence/systems theory and open source, as promised in my `change process <http://tychoish.com/posts/theories-of-change>`_ post a while back.

I was reading this article about Linux and complexity theory last week, which I think is a pretty good frame for any discussion of open source and Linux from a systems approach. There are a few things I take issue with, though. First, the article is 3+ years old, so it doesn’t have the benefit of seeing what’s happened with Ubuntu, netbooks, the AGPL, Rails, and Web 2.0, let alone things like Drupal and the last two major iterations of Firefox, which for all their assorted faults have really changed the face of open source.

Secondly, the empirical study focuses on kernel development and takes the kernel to represent the entire Linux ecosystem, which it doesn’t do very well. Kernel development is really high-level, really niche, and, despite its size, represents very little of what people think about when they talk about “Linux.” GNOME, KDE, the GNU toolchain, X11, and Python/Ruby/Perl, let alone the superstructural elements that projects like Debian, Gentoo, and Arch represent, are really more important than the kernel. Debian and Gentoo more or less work with various BSD kernels already, and given enough money--to inspire interest and caring--full BSD-based releases wouldn’t be technologically difficult. The dominance of the Linux kernel in the free-operating-system space is, I think, largely the result of momentum, plus the fact that the kernel is damn good and there’s not a lot of need for another kernel option.

In any case, the paper is in a lot of ways a 21st-century review of Eric S. Raymond’s “The Cathedral and the Bazaar.” Raymond’s argument is taken as a statement in favor of bottom-up organization (Linux, and the bazaar) in open source projects, and the author uses this insight to explore kernel development with some good old systems theory.

While it’s very true that there isn’t a monolithic organization behind kernel development, it’s not exactly a free-for-all. Linus (or whoever is at the top of a given project) provides a measure of top-down structure, which makes it easier for contributions to bubble up from the bottom. I think it’s no mistake that many open source projects have pretty consistent “dictator”-type leaders.

This says nothing of the barriers to entry for most kinds of development in the open source world, which aren’t trivial (commit access on projects that use non-distributed version control, e.g. Drupal), let alone the burden of engineering knowledge for kernel-level development and other lower-level projects. These largely informal standards nonetheless constrain what happens in kernel development.

Even if a lot of the day-to-day work on the kernel isn’t done by Linus himself, his presence, and the importance of his branch/tree, gives the project structure--particularly before the advent of git, but even now. And I’m not saying that this is a bad thing--quite the contrary, top-down forces are often a good thing--but I do think that overly romantic depictions of open source development as bottom-up/anarchical aren’t productive.
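
That structure is visible in the workflow itself. Here’s a sketch of the pull-based model git encourages; the contributor URL is made up, and the kernel.org path is from memory, so treat both as illustrative::

    # everyone starts from Linus's tree
    git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git

    # contributors do bottom-up work on their own branches...
    git checkout -b fix-foo-driver

    # ...while the top of the hierarchy pulls from trees it trusts
    git pull git://git.example.org/chris/linux-2.6.git fix-foo-driver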

Does this mean that Linux is a Cathedral? Or that it’s headed that way? Not at all. But I don’t think that the Bazaar is necessarily as bottom-up as Raymond (and those who have followed) thought it was. Open source is also much more commercial now than it was ten or twenty years ago, and that can’t not have an impact.

Just thoughts…

Onward and Upward!