Production Ready

Chris and I were talking about our Linux usage the other day, and we both came to the conclusion that, for better or for worse, our main production machines were Linux machines. He still has a Vista machine and I still have my MacBook, but our main desktop machines are Ubuntu boxes. I’ve been rolling over a few questions: what it means for an operating system to be suitable for production, and what it means that Chris and I are both using Linux systems for our day-to-day heavy lifting. Here they are, in order:

Production

Given the nature of my work (both vocational and avocational), I use and rely on computers extensively. While I’ve done a lot of things to back up my computer in the last few months and days, I cannot abide a system that won’t do what I need it to when I need it. While most computers are pretty reliable these days, the understanding that a computer is going to be there and ready with the programs and the data is as much a matter of trust as it is of technical capability. Users need to be able to trust their production systems to keep their data, to run as expected, and to not fail them.

Another factor is user comfort. While I’m not 100% comfortable with my new computer yet, I know that this is something that comes with time. As we use a system more, we all learn quicker ways of accomplishing common tasks; it becomes easier to perform our most important computing tasks, and the “price” of converting a project from your mind/speech/analog source goes way down. That’s a good thing.

A lot of my own musing on this site about productivity and technical usage could be classified as being about making systems and users more production ready. While I think hacking on technology is really interesting, and technological development is really important, doing things with technology is always the more important thing.

Linux

Chris and I are pretty technical users, admittedly, but I think we also have a pretty low tolerance for stuff that just doesn’t work. Which says something really fundamental about the status of Linux in 2008. While there are rough spots, the applications are pretty much right where they need to be. For instance, even before I began to seriously consider getting a Linux desktop set up for my own purposes, the vast majority of the software that I use on OS X had very viable Linux ports. While the general usability of Linux-based systems has gotten much better in the last couple of years (thanks, Ubuntu), the ecosystem is very vibrant, and that’s incredibly important.

Having said that, I think we’re probably still a few years away from seeing an Ubuntu/Linux Mint that’s ready for the general public. There are a few things that need to happen before that, such as:

  • Hardware makers need to continue to make and build computers with Linuxes pre-installed. Ubuntu’s installer isn’t more painful than Windows’ or OS X’s, but convincing average users to switch for ideological reasons after they’ve just bought a new computer is difficult. Also, given hardware compatibility issues, having companies like Dell and HP make sure that there’s support in the OS for the hardware is a great service.
  • The interface needs to get a lot better. This is mostly a “wait and see” issue, but I think GNOME needs work, and without a really good and fun UI, Linux is sunk.
  • X11, the primary graphics/interaction layer for all (?) unix/unix-like operating system GUIs (other than OS X), needs some work. Dual monitor support is lackluster, support for laptop displays is tenuous, and while I don’t think we should throw all of X away, a lot of the UI problems are rooted in X’s limitations. Of all the parts of the UI in most Linux systems, X is the weakest link. While this is a pretty low level concern, making X better will make the whole experience better. And that’s what counts. I may be able to get really impressive system uptimes, but unless I can get impressive uptimes for X, the former isn’t worth much.
  • End user distributions (Ubuntu, etc.) and bare-bones distributions (Arch, Gentoo, etc.) need to become even more distinct. Ubuntu should probably attempt a more “rolling release” approach to package inclusion and should attempt to cover up command line access the same way that OS X does, say; the bare-bones distributions should probably avoid delusions that they’re going to capture the end user market, and focus on being even more awesome bare-bones distributions. The great thing about Linux distributions is that they don’t really compete with each other, and while the geeks might know this, I’m not sure the general public does in the same way.

That’s what I have for now. Do any of you have ideas about what more Linux needs before it’s production ready for the general user?

Freedom in Source

There are two schools of thought on why software developers should release their projects as free/open source software. There’s the thought that open source equals software freedom from large companies who might seek undue influence over your computing; then there’s the opinion that open source equals the freedom to tinker and use your software as you see fit.1

Which is a really interesting argument, I suppose, if you’re living in the 1980s (or before, really). In the earlier days of computing and open source, having unencumbered access to source code meant something very different. Most computer users “back in the day” had a stronger programming background, and computer systems (software and hardware) were less reliable and required more tinkering. Open access to source code had a functional meaning that was fundamentally different from what it means today.

Today, most computer users and users of open source software don’t have a particularly strong background in programming. My desktop, with the exception of a few encumbered media codecs and a closed source video driver, is all open source. While I write shell scripts that do cool things, and I can dabble in PHP when needed, I’ve never tinkered with C code, and never really done anything that could be rightly considered a “program.” And this says nothing of all the people who use open source programs like Firefox, Open Office, and Pidgin.

While I am a fierce proponent of open source (as traditionally defined) in a strictly pragmatic sense, the fact that I can download the source code of software is largely irrelevant to me on a day-to-day level. This is to say that the “source” in “open source” is as much a symbolic identifier as it is a meaningful technological feature.

So what does open source symbolize and signify in the contemporary moment? This is a huge question that I think requires a non-trivial amount of attention. Is open source really about larger freedoms in our society? Is open source software about smaller/more concrete freedoms in terms of flexible and customizable systems? Is open source just the only viable way to practice the UNIX philosophy of small modular tools, rather than large monolithic tools?

There are also other angles that we can run with on this question. Is open source the only way to gain a large enough user base? (Cite the prevalence of the LAM(P/P/P/R/J) stack versus Microsoft’s server technologies.) Given current economic instabilities, might open source be a more viable way of generating wealth and participating in an authentic economy?

I expect that I’ll probably be tossing this question around, in various ways, for years to come, but you have to start somewhere.

Onward and Upward!


  1. The conventional wisdom is that this divide is represented by the division between the Free Software Foundation (in the “freedom from” corner) and the BSD/Apache Software Foundation (in the “freedom to” corner). This of course simplifies the position of both of these institutions in the community, as both BSD folks and FSF folks advocate the “opposite” argument. For example, RMS’ pro-hacker arguments are very much “freedom to,” and I think the inspiration for BSD-style projects is often very much a “freedom from” kind of proposition. ↩︎

In Real Time

So in the past couple of weeks we’ve seen the proliferation of a couple of new “real time services” for various kinds of data. Enjit brings real time data from friendfeed (which itself aggregates a lot of data pretty close to real time), and then there’s tweet.im, which finally brings something approaching real time twitter interaction back to those of us who have been begging for a real time/xmpp twitter interface for a while.

Though to be honest, I think that the lag is a bit more than 30 seconds; I’m not sure, and I’m not going to quibble for now. Actually, I’m not convinced that this redeems twitter, given the number of other features that they’ve turned off (you can’t delete posts anymore, can’t elect to not receive updates from people you follow, not to mention track), but it’s a start. When they get OAuth and Open Micro Blogging implemented,1 I won’t worry. But in the meantime, there are people on twitter that I want to be able to talk to, and this is a much appreciated move.

In any case, what this week has taught us is that real time services are here, and that companies and developers are beginning to realize this and provide services based on that. The man said “you don’t need a weatherman to know which way the wind is blowing,” and I don’t think you need an ubergeek to know that realtime is on the way.


Which means it’s my turn--as a resident uberworkflow/user interest geek--to parse out what this means. You might think that this means that there are geeks who want as much data as possible as quickly as possible. But I don’t think that’s the case. Really, I think it’s about having as much control over that data as possible.

Ken Sheppardson, one of the folks behind Enjit, talked about how this isn’t about consuming as much data as he can. He said: “I only want a notice every hour or so when somebody’s talking about something I care about, but I want it in time to participate.” (Edit Note: I totally flubbed up the reference and introduction to this section and have edited to make me seem like less of a dip. Apologies.)

The secret is that real time means push, and the truth is, I think that I read less content and spend less time reading content that comes at me in real time than I do reading the same content that I have to check on in a special client or on a web page. Why?

Because the time/energy spent on checking disappears. So if twitter is coming at me in an IM, I can trust that there’s no reason to visit twitter.com, unless it’s to look at someone new to follow. And it’s easy to tell if I’ve seen something before, and avoid reading the same content that people blast all over the internet again and again. (Ping.fm, how I hate you). And when you get your data real-time, it’s easier to make filtering decisions, which is a good thing.

Converging these data streams in real time/xmpp (ff, twitter, laconica, etc.) means that your data comes to you, not that you get more of it. So from a usage/workflow perspective, I think this is wonderful.


  1. So you’re probably thinking: how, then, would twitter make money? I’m not, for the record, making this argument out of some ideological Open Culture position, though I’m sympathetic. Rather, I think that OAuth and OMB are features that twitter’s userbase might value. I’d totally be willing to pay nominal fees for services like IM and track and text messaging. And the ability to filter that stream? Totally worth a few bucks a month. And twitter could totally have special features (like their election coverage) be ad supported (which would be the most logical solution anyway), and that might be really effective. So the next person to say “but twitter has to make money somehow, they can’t give everything away for free,” gets branded an uncreative apologist. ↩︎

Wiki Completion

Insofar as it’s been a loose series, this post is a continuation of my thoughts on wikis and hypertext. My leading questions are “Are wikis ever completed?” and “If so, how do we know and decide?”

This is a question that I find myself wondering about a fair deal, and I think the answer--which I haven’t come to a firm conclusion on--has a lot to do with the potentials of the wiki medium.

I should jump in and say that, while wikipedia is a great reference, a great tool, and an important project, because it’s the example of “what a wiki is,” it has shaped how we think of the medium in a way that I’m not sure is particularly useful. The biggest wikia projects are encyclopedic studies of Star Wars and Star Trek, and while their material isn’t quite suitable for wikipedia, it is certainly in the same vein and tone.

The encyclopedia form has been revitalized by the wiki--by decentralizing the review process, democratizing (more or less) the focus and the creation of articles, but most importantly by removing the “space limitation” on content. Nevertheless, I continue to be convinced that wikis as a form are capable of so much more.

On the one hand, big projects, like the kind that might be recorded in a wiki, are never really completed as much as they are eventually abandoned. That sounds pessimistic, but I think it’s ultimately productive: eventually a project has done what it needs to do, and what with perfection being unattainable, the productive thing to do is move on. The decision of when to do that is perhaps one of the most important decisions that a creator/artist can make about a work.

But who makes that kind of decision about a wiki? Is there a point where people just abandon a wiki? While wikis are collaborative, that’s not to say that they don’t have leadership (wikipedia’s leadership organization is epic, for example), but who makes these kinds of decisions?

While I’m prone to taking an entire wiki as a single document, the fact that a wiki is really a network of tightly connected texts surely has bearing on the answer to the question.

Software projects use the concept of “stable releases” and a “release cycle” to ensure that a project can both continue to develop and exist as finished cycles. The Debian project has its own procedure for encouraging ongoing development of their system/packages while creating rock solid stable systems.

Additionally, while most wikis have semi-sophisticated version control systems, they for the most part don’t have a concept of “branches,” which might be helpful for implementing a stable wiki/wiki branch system. Even ikiwiki, which can use systems like git to store history, doesn’t have a good display system for switching between branches/revisions.
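Since a git-backed wiki is just a git repository underneath, the underlying plumbing for a “stable branch” already exists; what’s missing is the display layer. Here’s a rough sandbox sketch (the repo, page, and the branch name “stable” are all my own invention, not an ikiwiki convention):

```shell
# Throwaway git repo standing in for a git-backed wiki source directory.
cd "$(mktemp -d)"
git init -q .
git config user.email wiki@example.com
git config user.name  wiki

echo 'Welcome' > index.mdwn
git add index.mdwn
git commit -qm 'initial page'

git branch stable                  # snapshot the current state as "stable"
echo 'Welcome, revised' > index.mdwn
git commit -qam 'ongoing edits'    # development continues on the default branch

git checkout -q stable             # a "released" build could point here
cat index.mdwn                     # prints the snapshot: "Welcome"
```

The missing piece is exactly what the post says: a wiki engine that can render and switch between these branches for readers, rather than making you do it by hand.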

I’m not sure--of course--if there are really good answers to these questions. While I haven’t begun to post any of them, there are a number of projects that I’ve been playing with in my mind (and locally on my own computer) that are wikis, but I’ve been hesitant to let them out into the wild in part because of issues like the one discussed above. And above all, if the wiki format is going to grow away from and independently of the encyclopedia format, I think we need to begin discussing questions like these.

So there. Onward and Upward!

The Siege

So I said I would, in honor of NaNoWriMo, write about writing on tychoish a bit during the month of November. So here I am. I thought for the first in this occasional series, I’d touch on the project that I’m currently working on.

I’m working on a long novel/novella that fits loosely into the series of “historiographical science fiction” stories that I’ve been working on for a while now. It’s a totally new world, and deals with a couple of different groups of characters active in the same--singular--moment of time, but who all have very different historical perspectives and lineages.

Some of the characters live on a human-populated outpost dozens of light years away from Earth (and have lived on this world for generations), other characters have never left the Earth system, and the main characters belong to the spacefaring class and have spent most of their adult lives going between worlds, which (due to relativity) makes them hundreds upon hundreds of years “older” than everyone else.

It also has elements of military SF and political drama, and the story is all about living on the cusp of great social change, which I think is pretty relevant. In all, I’m pleased.

A lot of people say that ideas for novels are cheap and bountiful, and that writing a novel isn’t as much about having a good idea as it is about having the stick-to-itness to finish writing a (pragmatically) 80-100 thousand word document. Indeed, NaNoWriMo is founded on this kind of idea. While I don’t disagree that stubbornness is a much needed skill in a novelist, nor do I disagree that ideas are bountiful, I’m not sure that good ideas are a dime a dozen, nor do I think that flawed conceptual work can be entirely compensated for by skilled execution (or, inversely, that brilliant conceptual work can hide less than perfect execution). These are two factors with a sliding and dynamic relationship, and that’s part of the reason why fiction writing is an art and not a science.

So with that out of the way, allow me the indulgence of a little introspection. My previous stories have been interesting enough, and I’m pretty sure that my execution has improved in the last six years, but my largest regret as I go over my older stories is that there is often some huge conceptual failure. The tensions are too simple. The characters don’t feel/read as being distinct enough. The plots are simplistic and a bit improbable, and there’s a point in a story where I always seem to lose the forward drive--not in writing momentum, but in the plot--where the characters are sort of looking at each other saying “hrm, what next,” and that’s bad.

For this project, I decided to focus on these conceptual issues. Not because I’m satisfied with my technical ability at writing fiction, but because that’s something that I can a) fix later, and that b) will improve gradually with time, as long as I’m attentive to that development.

The last time I planned out a novel, I concentrated on getting the “what happens next and next and next” details of the plot worked out. I have a stack of note cards in my desk drawer that outline all of the scenes (settings, present characters, plot goals, etc.) and as I began to write the story, I realized that I didn’t have a clue who the characters were, or any sort of deep understanding of the world outside of what the characters were doing. That was a problem.

This time, I opened up a new page in my personal wiki and I just started writing. Not the story, but stuff about the story: the major characters, the big political groups and institutions that I’d be dealing with, stuff about the technology as it related to the plot, and the customs of the worlds I knew I’d be dealing with. And after a few weeks and several thousand words, I realized that I needed not just more details so that I could write a stronger outline, but more things going on, more tension.

About this time I listened to something Cory Doctorow said in an interview about how the key to dramatic tension was “making it more difficult for the key characters to get what they wanted on every page, and as long as that happens, you’re doing your job.” Which you can’t do unless your story is very short (which presents its own dramatic challenges), or you have a lot going on in your story. Given the advances of digital technology, we (or I) can sometimes lose track of the fact that even though novels are long, their length needs to be worthwhile and justified.

So I added stuff until it felt full, and I was excited to start. Not just because new projects are exciting to start, but because there was so much going on. And in the end? My notes directory has almost 7,000 words, which is about half of what the story itself has these days.

Oh, and you’re wondering about the title? When I started the story I thought it was going to begin in medias res with a warship laying siege to a colony world, and it would be about the Siege--hence the title of this post, and the working title of “The Siege of Al-Edarian”--but it doesn’t begin in the middle of that story, and there isn’t really a Siege anymore. So I need a new title. There are worse things to be in need of, though.

Onward and Upward!

Deep Computing

So here’s another report on my Linux usage:

For some unknown reason I tried to upgrade to Ubuntu Intrepid (8.10) last weekend. Which failed epically. So I reinstalled, which has gone… less well?

Explaining the problems I’m having is incredibly complicated. Everything works well except the dual monitor support, which is just bothersome. I have a workaround that seems to work pretty well, but I’m not sure if pretty well = production ready. When I’m using it, I’ve taken to running all of my important windows in screen, so that if the X server panics and I have to kill it, I can pick up right where I left off. I think I mostly have the problem kicked, but I’m not quite to the point where I trust it.

I’ve also traced down about 80% of the problem, but I don’t have quite enough to file a bug report.

And the truth is that I’m adjusting pretty well to the linux world. Emacs is a giant ball of confusing, but that’s to be expected, and I have it rigged to read my normal text file format and do all the right highlighting. The scope of what my fingers know how to do is pretty limited, but it’s a start, and I’m purposely going slow so that I can learn things the right way. The last time I mentioned something about emacs on the blog, Jack emailed me something about emacsclient and emacs server mode, which I haven’t totally absorbed yet.

My current conclusion is that I’m going to have to find some sort of new way of managing and interacting with my text editors. Rather than have a bunch of different instances of the editor open (as I might do with vim or TextMate) I’ll probably figure out some way to work with two instances of emacs open, one for each screen and just move things between them. This is subject to change.

Here’s the rundown:

  • Other advances made recently:
    • I have figured out a cool way to implement multiple mail profiles using mutt. I have a lot of different email addresses/identities that I need to send email from (real life contacts, professional contacts, work contacts, etc.), and being able to switch automatically? A divine thing.
  • Advances yet to be made:
    • I need to figure out/use a news reader on the new computer. This requires segmenting my current OPML file into “laptop reading” and “desktop reading.”
    • I need to figure out a web-browsing solution that really works. Vimperator seems to be the Awesome default, and while it’s the best Firefox setup around, it’s still Firefox, and FF doesn’t impress me on a personal level. (I use a browser incidentally, and mostly for viewing static pages, not as an application platform.)
    • Still don’t have my calendars in a place where I can start accessing them on linux.
    • I’m still using the laptop to serve my personal wiki/notebook(s), though I have a clone of the repository on the linux machine, which is really the inverse of how it should work.
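The mutt profile switching in the list above can be done with mutt’s hook mechanism; here is a minimal sketch of the idea (the addresses, names, and folder layout are placeholders, not my actual configuration):

```
# ~/.muttrc fragment: pick a sending identity based on the open folder.
folder-hook  work/.*      'set from="me@work.example.com" realname="Work Me"'
folder-hook  personal/.*  'set from="me@personal.example.org" realname="Me"'

# Or switch on the recipient while composing:
send-hook '~t boss@work\.example\.com' 'set from="me@work.example.com"'
```

`folder-hook` fires when you change into a matching mailbox, and `send-hook` re-evaluates the identity per message, so the right address goes out without any manual fiddling.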

And a thousand other things… The truth is, I think it’s all going pretty well. I will, of course, keep you up to date.

Window Management

OS X has been a really innovative force in personal computing. It’s highly usable: lots of different kinds of users are able to work with it. It’s compatible with lots of different standards, and it provides a lot of tools to developers that make even the sucky third party software pretty nice. I think if you look at Windows Vista, and the latest versions of KDE and GNOME and some of the other open source user interfaces, it’s pretty easy to see some resonances of OS X.

More importantly, probably, it proved that Unix and Unix-like operating systems were viable and usable for desktop use cases. While we’ve been able to run BSD and Linux on home computers for years, I don’t think we’ve thought of Linux as being something that anyone could run without needing a lot of technical background.

Ubuntu Linux followed this trend pretty persuasively. Ubuntu makes a desktop unix-like experience possible, which is a really big thing. Chris and I are both using Ubuntu these days for our primary desktop computers, and it’s been really interesting to compare notes. One thing that we keep coming back to is that despite the fact that the core of the OS is great, the user interface (UI) is tragic. OS X proves that it’s not only theoretically possible to have a nice UI, but that it’s possible to do it on a unix-like system.

As an aside, I’d bet good money that Apple has an in house version of Aqua/Cocoa/Carbon/CoreServices (all the UI and application frameworks that make OS X, OS X) running on the Linux Kernel. Betcha.

And by tragic, I don’t mean that GNOME and KDE are unusable, but they’re flawed. GNOME doesn’t use space efficiently, its applications are functional but not exceptional (and because of the way the GNOME project is structured there aren’t many ‘third party’ alternatives), and it feels sort of behind the curve. It works, and it does everything that you might want in a graphical user interface (GUI), but it’s not exceptional.

Thankfully, KDE fails for completely different reasons. It’s attractive and usable where GNOME isn’t, and the interface is unique and exceptional where GNOME feels stale and aged. But the applications aren’t nearly as compelling, and it suffers from having an interface/look that’s too flexible, such that it’s pretty easy to get a setup that looks like crap. Not to mention the fact that the kind of rich GUIs that KDE emblematizes don’t mesh particularly well with the mostly hacker audience that Linux (and it) attracts. But that’s a larger critique of the GUI paradigm, which isn’t quite on topic.

So where does this leave us?

I’d say the biggest shortcoming of Linux systems is the window management options. I like Awesome, and I think there are a bunch of people who might really like it--but it’s not for everyone. I’m admittedly not up to date with enough of the other options to provide a really clear analysis, but I know that this is the next big issue for open source operating systems.

I’m not sure that I know enough to say more than that, yet.

Is there any there there?

I learned the other day that some (a lot?) of the big box retail corporations--Costco/Sams/etc.--don’t turn a profit by selling things to people.

This shouldn’t be particularly large surprise, they sell goods at prices that undercut all of the competition, and probably aren’t that much above the core cost of the goods (if that).

And yet the companies are profitable. How? My understanding is that they take their gross income and invest it in short term instruments--bonds, stocks, and the like--which generate enough income that the entire operation can turn a profit. In the meantime, to make sure that the trains run on time (i.e., that they make payroll, keep the lights on at the retail locations, etc.), they borrow against their non-liquid assets, which are busy earning the profit.
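To make the float model concrete, here’s a toy back-of-the-envelope calculation. Every figure below is made up for illustration--these are not any retailer’s actual numbers, and 3% is just an assumed short term return:

```shell
# Toy figures, whole dollars. Retail alone loses money;
# the return on the invested float makes the whole operation profitable.
revenue=1000000        # gross income from selling goods
cost_of_goods=980000   # near-wholesale pricing undercuts the competition
operating=30000        # payroll, keeping the lights on, etc.

retail_profit=$((revenue - cost_of_goods - operating))
float_return=$((revenue * 3 / 100))       # assumed 3% earned on the float
total=$((retail_profit + float_return))

echo "retail: $retail_profit, float: $float_return, total: $total"
# prints: retail: -10000, float: 30000, total: 20000
```

The point of the toy numbers is the sign flip: the retail side runs at a loss, and the business only turns a profit because the gross income spends time invested before the bills come due.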

Depending on how widespread this is (and I’d be inclined to think that it’s pretty rampant), we can explain a couple of phenomena with this understanding. First off, it explains why the market tends to move as a whole: not because investors panic when they see the numbers dropping on the trading floor, but because they know that if company B doesn’t perform at a certain level, company A can’t perform either.

That’s pretty straightforward.

The more disturbing realization is that the entire basis of our economy isn’t about the exchange of money for goods and services, but rather the exchange of money for other money. The hope being that by exchanging money a lot, it will somehow turn into more money. Which, given the legal fictions of the banking industry, it does. More or less. Until it doesn’t. Enter the present.

In the banking industry--the one that was recently bailed out by the federal government--we know this. Banks make money by charging interest on certain kinds of transactions, and while this is kind of creepy and odd when you think about it, it’s not surprising. Applying the same logic to the exchange of material goods, on the other hand, is completely absurd and troubling.

The conventional wisdom for the last hundred years or so is that big corporations are able to be more successful because of economies of scale and standardization. The largest lesson that I’m taking away from this right now is that while big corporations might be more efficient, they might not--in a concrete sense--be more successful/profitable, outside of the profits made by riding the money-holding financial game.

One of the reasons why I’m interested in open source software is that it proposes and requires a very different sort of “economic” (in the generous sense) perspective. Open source is very business centered, but the exchange of money is all centered around wealth-for-services, rather than wealth-for-money-holding, say. These kinds of alternate (and it’s sad that they’re the alternate) means of generating and distributing wealth are the inevitable conclusion to the current economic crisis. It’s unclear how long the current system will linger and limp, but eventually something better/different will emerge.1

This isn’t the kind of subject matter that I typically write about on tychoish, and I don’t want you to worry that I’m going to turn this into some sort of political blog. Except insofar as I’ve always had a (lower case p) political side focus, I think it’s interesting to think through these kinds of issues to try and figure out what’s going on. While I think change is afoot, and a more authentic economy is on the horizon, this is a systemic change that will not come easily.1

Onward and cautiously Upward!


  1. There is a minor movement in some circles of people who are attempting to reduce their ecological footprint, buying locally produced goods, opting for organic foods, and so forth. While there are a lot of reasons to do this--quality/freshness, etc.--this kind of “individualistic economic activism” requires an absurd amount of privilege (money/time) and access to economic resources. While this economy is more authentic in some ways, it’s not independent or self-sufficient, and that’s totally crucial here. ↩︎