Industry, Community, Open Source

In "Radicalism in Free Software, Open Source" I contemplated the discourse of and around radicalism in and about Free Software and Open Source software. This post is a loose sequel to that one, and I want to use it to think about the role that corporations play in the free software world.

I suppose the ongoing acquisition of Sun Microsystems by Oracle, particularly with regard to the MySQL database engine, weighs heavily on many of our minds.

There are a number of companies, fairly large companies, that have taken a significant leadership role in open source and free software: Red Hat. Sun Microsystems. IBM. Novell. And so forth. While I'm certainly not arguing against the adoption of open source methodologies in the enterprise/corporate space, I don't think that we can totally ignore the impact that these companies have on the open source community.

A lot of people--mistakenly, I think--fear that Free Software works against commercialism [1] in the software industry. People wonder: "How can we make money off of software if we give it away for free?" [2] Now it is true that free software (and its adherents) prefer businesses that look different from proprietary software businesses. They're smaller, more sustainable, and tend to focus on much more custom deployments for specific users and groups. This is in stark contrast to the "general releases" for large audiences that a lot of proprietary software companies strive for.

In any case, there is a whole nexus of issues related to free software projects and their communities that are affected by the commercial interests and "powers" that sponsor, support, and have instigated some of the largest free software projects around. The key issues and questions include:

  • How do new software projects of consequence begin in an era when most projects of any notable size have significant corporate backing?
  • What happens to communities when the corporations that sponsor free software are sold or change directions?
  • Do people contribute to free software outside of their jobs? Particularly for big "enterprise" applications like Nagios or Jboss?
  • Is the "hobbyist hacker" a relevant and/or useful archetype? Can we intuit which projects attract hobbyists and which projects survive because businesses sponsor their development, rather than because hobbyists contribute energy to them? For example: desktop applications, niche window managers, games, and the like are more likely to be the province of hobbyists, while we might expect hardware drivers, application frameworks, and database engines to be the kinds of things where development is mostly sponsored by corporations.
  • Is free software (or Open Source, which may be the more apropos terminology at the moment) just the contemporary form of industry group cooperation? Is open source how we standardize our nuts and bolts in the 21st century?
  • How does "not invented here syndrome" play out in light of the genesis of open source?
  • In a similar vein, how do free software projects get started in today's world? Can someone say "I want to do this thing" and people will follow? Do you need a business and some initial capital to get started? Must the niche be clear and undeveloped?
  • I'm sort of surprised that there haven't been any Lucid-style forks of free software projects since, well, Lucid Emacs. While I'm not exactly arguing that the Lucid Emacs Fork was a good thing, it's surprising that similar sorts of splits don't happen any more.

That's the train of thought. I'd be more than happy to start to hash out any of these ideas with you. Onward and Upward!

[1]People actually say things like "free software is too communist for me," which is sort of comically absurd, and displays a fundamental misunderstanding of both communism/capitalism and the radical elements of the Free Software movement. So let's avoid this, shall we?
[2]To be totally honest I don't have a lot of sympathy for capitalists who say "you're doing something that makes it hard for me to make money in the way that I've grown used to making money." Capitalists' lack of creativity is not a flaw in the Free Software movement.

Radicalism in Free Software, Open Source

The background:


In light of this debate I've been thinking about the role and manifestations of radicalism in the free software and open source world. I think a lot of people (unfairly, in many cases) equate dedication to the "Cause of Free Software" with the refusal to use anything but free software, and the admonishment of those who do use "impure" software. To my mind this is unfair both to Free Software and to the radicals who work on free software projects and advocate for Free Software.

First, let's back up and talk about RMS [1]. RMS is often held up as the straw man for "free software radicals." RMS apparently (and I'd believe it) refuses to use software that isn't free software. This is seen as being somewhat "monkish," because it doesn't just involve using GNU/Linux on the desktop; it also involves refusing to use the non-free software written for GNU/Linux, including Adobe's Flash player and various drivers. In short, using the "free-only" stack of software is a somewhat archaic experience. The moderates say "who wants to use a computer which has been willfully broken because the software's license is ideologically incompatible," and the moderates come out looking rational and pragmatic.

Except that, as near as I can tell, while the refusal to use non-free software might be a bit traumatic for a new convert from the proprietary operating system world, for someone like RMS it's not a huge personal sacrifice. I don't think I'm particularly "monkish" about my free software habits, and the only non-free software I use is the Adobe Flash player and the non-open-source extensions to Sun's VirtualBox. I'm pretty sure I don't even need the binary blob stuff in the kernel. For me--and likely for RMS, and those of our ilk--sticking to the pure "free software" stuff works better and suits the way I enjoy working. [2]

In short, our ability to use free software exclusively depends upon our habits and on the ways in which we use and interact with technology.

To my mind, the process by which the pragmatic approach to free software and open source paints people like RMS as radicals is terribly unproductive. While the moderates come away from this encounter looking more reasonable to the more conventional types in the software world, this is not a useful discussion to entertain.

In any case, I think there are a number of dimensions to the free software (and open source) world that focusing on "how free your software is" distracts us from. Might it not be useful to think about a few other issues? They are as follows:

1. Free software is about education, and ensuring that the users of technology can and do understand the implications of the technology that they use.

At least theoretically, one of the leading reasons why having "complete and corresponding source code" is so crucial to free software is that with the source code, users will be able to understand how their technology works.

Contemporary software is considerably more complex than the 70s-vintage software that spurred the Free Software movement. Where one might once have imagined being able to see, use, and helpfully modify an early version of a program like Emacs, today the source code for Emacs is eighty megabytes, to say nothing of the entire Linux kernel. I think it's absurd to suggest that "just looking at the source code" for a program will be educational in and of itself.

Having said that, I think free software can (and does) teach people a great deal about technology and software. People who use free software know more about technology. And it's not just because people who are given to use free software are more computer literate; rather, using free software teaches people about technology. Arch Linux is a great example of this at a fairly high level, but I think there's a way that OpenOffice and Firefox play a similar role for a more general audience.

2. There are a number of cases around free software where freedom--despite licensing choices--can be ambiguous. In these cases, particularly, it is important to think about the economics of software, not simply the state of the "ownership" of software.

I'm thinking about situations like the "re-licensing" employed by MySQL AB/Sun/Oracle over the MySQL database. In these cases contributors assign copyright to the commercial owner of the software project on the condition that the code also be licensed under the terms of a license like the GPL. This way the owning company has the ability to sell licenses to the project under terms that would be incompatible with the GPL. This includes adding proprietary features to the open source code that don't get reincorporated into the mainline.

This "hybrid model" gives the company who owns the copyright a lot of power over the code base, that normal contributors simply don't have. While this isn't a tragedy, I think the current lack of certainty over the MySQL project should give people at least some pause before adopting this sort of business model.

While it might have once been possible to "judge a project by the license," I think the issue of "Software Freedom" is in today's world so much more complex, and I'm pretty sure that having some sort of economic understanding of the industry is crucial to figuring this out.

3. The success of free software may not be directly connected to the size of the userbase of free software

One place where I think Zonker's argument falls apart is the idea that free software will only be successful if the entire world is using it. Wrong.

Let's take a project like Awesome. It's a highly niche window manager for X11 that isn't part of a Desktop Environment (e.g. GNOME/KDE/XFCE), and you have to know a thing or two about scripting and programming in order to get it to be usable. If it had many more than a thousand users in the world I'd be surprised. This accounts for a minuscule share of the desktop window management market. Despite this, I think the Awesome project is wildly successful.

So what marks a successful free software project? A product that creates value in the world by making people's jobs easier and more efficient. A community that supports the developers and users of the software equally. Size helps, for sure, particularly in that it disperses responsibility for the development of a project among a number of capable folks. However, the size of a project's userbase (or developer base) should not be the sole or even the most important quality by which we judge success.

There are other issues which are important to think about and debate in the free software world, and there are other instances where the "hard line" is painted as over-radical by a more moderate element. Nevertheless, I think this is a good place to stop for today, and I'm interested in getting some feedback from you all before I continue with this idea.

Onward and Upward!

[1]Richard Stallman, founder of the Free Software Foundation and the GNU Project, original author of the GNU GPL (perhaps the most prevalent free software license), as well as the ever popular gcc and emacs.
[2]Arguably, it's easier for software developers and hacker types like myself to use "just free software," because hackers tend to make free software to satisfy their own needs (the "scratch your own itch" phenomenon), and so there's a lot of free software that supports "working like a hacker," but less for more mainstream audiences. Indeed one could argue that "mainstream computer-using audiences" as a class are largely the product of the proprietary software and technology industry.

Package Management and Why Your Platform Needs an App Store

When I want to install an application on a computer that I use, I open a terminal and type something to the effect of:

apt-get install rxvt-unicode

Which is a great little terminal emulator. I recommend it. Assuming I have a live internet connection, and the application I'm installing isn't too large, a minute or less later I have whatever it is I asked for installed and ready to use (in most cases).

Indeed this is the major feature of most Linux distributions: their core technology and enterprise is to take all of the awesome software that's out there (and there's a lot of it), make it possible to install easily, figure out what it depends on, and get it to compile safely and run on a whole host of machines. Although this isn't the kind of thing that most people think about when they're choosing a distribution of Linux, it is one of the biggest differentiating features between distributions. But I digress.

I've written about package management here before, but to summarize:

  1. We use package managers because many programs share dependencies that we wouldn't want to install twice or three times, but that we might not want to install by default with every installation of an operating system. Making sure that everything gets installed is important. This is, I think, a fairly unique-to-open-source problem, because in the proprietary world the dependencies are installed by default (as in the more monolithic development environments, like .NET, Cocoa, and Java [1], and other older non-managed options).
  2. One of the defining characteristics of open source software is the fact that it's meant to be redistributed. Package management makes it easy to redistribute software, and provides real value for both the users of the operating system and for the upstream developers. Or so I'm lead to believe.
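To make the first point concrete, here's a rough sketch of what shared-dependency management looks like on a Debian-style system, reusing the rxvt-unicode example from above (the exact dependency list and package names will vary by distribution and release):

```shell
# Ask the package manager what rxvt-unicode depends on.
# apt-cache queries the local package index; no root needed.
apt-cache depends rxvt-unicode

# Install it; apt resolves and installs any shared dependencies
# (shared libraries, fonts, etc.) exactly once, system-wide.
sudo apt-get install rxvt-unicode

# Later, one pair of commands keeps every installed package --
# and every shared dependency -- up to date at once.
sudo apt-get update && sudo apt-get upgrade
```

The point is that the distribution, not each individual upstream developer, is doing the work of deciding what depends on what and keeping it all consistent.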

In the end, we're back at the beginning where you can install just about anything in the world if you know what the package is named, and the operating system will blithely keep everything up to date and maintained. [2]

While GNU/Linux systems get flack for not being as usable as proprietary operating systems, I see package management as the huge "killer feature" that open source systems have over proprietary systems. We'll never see something like apt-get for Windows, not because it isn't good technology, but because it's impossible to manage every component of the system and all of the software with a single tool. [3]


And then all these "App Store" things started popping up.

As I've thought about it, "app stores" do the same thing for application delivery on non-GNU/* systems that package management does for open source systems. We're seeing this sort of thing for various platforms, from cell phones like the iPhone/BlackBerry/Android to Intuit's QuickBooks and even more conventional platforms like Java.

Technically it's a bit less interesting. App stores generally just receive and distribute compiled code, [4] but from a social and user-centric perspective, the app store experience is really quite similar to the package management experience.

I'm surely not the only one to make this connection, but it'd be interesting to move past this and think about the kinds of technological progress that stem from it. App stores clearly provide value to users by making applications easier to find, and to developers, who can spend less time distributing their software. Are there greater advancements to be made here? Is this always going to be platform specific, or might there be some other sort of curatorial mechanism that might add even more value in this space? And of course, how does Free Software persist and survive in this kind of environment?

I look forward to hearing from you.

[1]Let's not bicker about this, because the argument breaks down here a bit, admittedly, given that Java is now mostly open. But it wasn't designed as an open system, and all of these solve the dependency problem by--I think--providing a big "standard runtime," and statically compiling anything extra into the program's executable. Or something.
[2]Package management sometimes breaks things, it's true, but I've never really had a problem that I haven't been able to recover from in reasonably short order. I mean, things have broken, and I will be the first to admit that my systems are far from pristine, but everything works, and that's good enough for me.
[3]While it's possible to use more than one package manager at once, and there are cases even on Linux where this is common (i.e. the CPAN shell, system packages (apt/deb, yum/rpm), Ruby gems, Haskell's cabal, and so forth), it's not preferable: sometimes a package will be installed by a language-specific package manager and then the system package manager will install a newer or older version over it. If you're lucky you might not notice, or something will just act a bit funny on some systems. Usually, stuff breaks.
[4]Which means, I think, that we're indirectly seeing a move away from shared dependencies and toward statically linking and bundling frameworks into a single binary or bundle. This is one of the advancements of OS X: applications are delivered in "bundles," which are just directories that contain everything an application needs to run. Apple addressed the dependency problem by removing all dependencies. And this works in the contemporary world because if an app has to be a few extra megs to include its dependencies? No big deal. Same with RAM usage.

Links on Technology, Blogging, and Emacs

A mostly technology-centric collection of links:

  • Emacs starter configuration scripts. I can't, for the life of me, recall why I went looking for this, but last week I ended up with a whole host of basic configuration files that people have published. I've thought about doing this for my own files, but I've not had them properly cleaned up and working in a non-embarrassing way in a while. Most of these are on github, which is a phenomenon that could tolerate some investigation, but no matter. Here they are, linked to by screen name: ki, elq, jonshea, larrywright, defmacro (har, just got it), jmhodges, technomancy, markhepburn, and al3x. I'd love to collect more of these, so maybe comments or the cyborg wiki.

  • Adjunct to that, a few more cool emacs and related links and points: first, paredit, which is a little tool that makes editing lisp easier, as well as an org-mode tip from Nathan Yergler about using org-remember with firefox and ubiquity, which might be of interest to some of you. I also have on file this link about yet another lisp dialect (yald?) called Lysp, but I don't have much more than that. I will have more to say about this in the coming few weeks.

  • My **friend Chris Fletcher discusses his experience with contemporary blogging services** in this post. I'm not sure. Right? I mean blogging is so different today than it was when I got into it. I remember when you handed FTP credentials to blogger so they could publish your blog with their system to your site. Surely people don't do that anymore. One of the things that I noticed at Podcamp (more on that on another post) that, frankly horrified me a bit, was that there was a whole class of bloggers who wanted to do "this thing," but they had no interest in running their own website or making that investment of time and energy.

    And maybe that's what blogging has become. In a lot of ways doing a blog is something anyone can do pretty easily, and having a website is no longer a big part of participating in this discourse. While I'm a big fan of independence, I don't think the technological burden is that high; "doing websites" very much made me the geek I am today, so I'm not sure. Having said that, LiveJournal has never easily fit into a niche: It was blogging before there was blogging. It was social networking before we said that. It was subculture/niche before that became the thing. If I had more time in my life I'd figure out some way to study and capture that history.

  • For all of you OS X Desktop User Interaction Geeks, there's this thing that lets you hide unused windows, baked into the window manager. I think. I have access to OS X, but I don't really use it enough to give this a try. GNU Screen and lots (and lots) of Emacs buffers make it possible to keep a lot of irons in the fire without getting distracted.

  • A **good example of a zshrc** file if that's your thing. I think it's my thing. Alas. I'll write more about this once I get more used to it and figure some things out. Mostly, I'm finding that one can use it as a pure superset of bash without ill effect.

The End of Reusable Software

I wrote a post for the Cyborg Institute several weeks ago about the idea of "Reusable Software", and I've thought for a while that it deserved a bit more attention. The first time around, I concentrated a lot on the idea of reusable software in the context of the kinds of computing that people like you and me do on a day-to-day basis. I was trying to think about the way we interact with computers, how this has changed in the last 30 years (or so), and how we might expect it to change soon.

Well that was the attempt at any rate. I'm not sure how close I got to that.

More recently, I've been thinking about the plight of reusable software in the context of "bigger scale" information technology. I'd say this fits into my larger series of technology futurism posts, except that this is very much a work of presentism. So be it.

To back up for a moment I think we can summarize the argument against reusable software, which boils down to a couple of points:

1. With widely reusable software, most of the people who use computers on a regular basis can pretty much avoid ever having to write software. While it's probably true that most people end up doing a very small amount of programming without realizing it, gone are the days when using a computer meant that you had to know how to program it. While more people can slip into using computers than ever before, the argument is that people aren't as good at using computers because they don't know as well how they work.

Arguably this trend is one of the harbingers of the singularity, but that's an aside.

2. Widely reusable software is often less good software than the single-use, or single-purpose stuff. When software doesn't need to be reused, it only needs to do the exact things you need it to do well and can be optimized, tuned, and designed to fit into a single person's or organization's work-flow. When developers know that they're developing a reusable application, they have to take into account possible variances in the environments where it will be deployed, a host of possible supported and unsupported uses. They have to design a feature set for a normalized population, and the end result is simply lower quality software.

So with the above rattling around in my head, I've been asking:

  • Are web applications, which are deployed centrally and often on only one machine (or a small cluster of machines), the beginning of a return to single-use applications? Particularly since the specific economic goals of the sponsoring organization/company are often quite tightly coupled with the code itself?
  • One of the leading reasons that people give for avoiding open source release is embarrassment at the code base. While many would argue that this is avoidance of one sort or another, and it might be, I think it's probably also true more often than not. I'm interested in thinking about what the impact of the open source movement's focus on source code has had on the development of single use code versus multi use code in the larger scope.
  • What do we see people doing with web application frameworks in terms of code reuse? For starters, the frameworks themselves are all about code reuse and about providing some basic tools to prevent developers from recreating the wheel over and over again. But then, the applications are (within some basic limitations) wildly different from each other and highly un-reusable.

Having said that, Rails/Django/Drupal sites suffer from poor performance in particularly high-volume situations for two reasons: firstly, it's possible to strangle yourself in database queries in the attempt to do something that you'd never do if you had to write the queries yourself. Secondly, the frameworks are optimized to save developer time rather than to run blindingly fast on very little memory.

I suppose if I had the answers I wouldn't be writing this here blog, but I think the questions are more interesting anyways, and besides, I bet you all know what I think about this stuff. Do be in touch with your questions and answers.

Onwards and Upwards!

The Tiling Window Manager Story

As I said in "The Odd Cyborg Out," I'm thinking of giving StumpWM a run. So I did some musing about tiling window managers, because I am who I am. Here goes:

So, like I said, I've been tinkering a very little with StumpWM, and I thought some background might be useful. For those of you who aren't familiar, StumpWM is another tiling window manager, like my old standard Awesome, except Stump is written in Common Lisp and descends from different origins than Awesome. Here's the history as I understand it.

The History of Tiling Window Managers

There was (and is) this very minimalist tiling window manager called dwm, which is written in less than 2000 lines of code and is only configurable by modifying the original C code and then recompiling. It's intentionally elitist, and targeted at a very high level of user. While this is ok, particularly given the niche of users likely to want a tiling window manager, there were a lot of people who wanted very different things from dwm. In a familiar story to those of us who follow free software and open source development, lots of people started maintaining and sharing patch-sets for dwm. These added functionality like easier configuration tools, integration with menus, notification libraries, theming support, API hooks, and the rest is history.

Fast-forwarding a bit, these patch-sets inspired a number of forks, clones, and child projects. dwm was great (so I hear) if you were into it, but I think the consensus is that even if you were geeky/dweeby enough for it, it required a lot of attention and work to get it to be really usable in a day-to-day sort of way. As a result we see things like Awesome, which began life as a fork of dwm with some configuration options and has grown into its own project "in the tradition of dwm." dwm is also a leading inspiration for projects like Xmonad, which is a re-implementation of dwm in the Haskell programming language with some added features around extension and configuration options.

This default configuration problem is something of an issue in the tiling window manager space, that I might need to return to in a later post. In any case...

Stump, by contrast, has nothing (really) to do with dwm, except that they take a similar approach to the "window management" problem, which is to say that window behavior in both is highly structured and efficient. They tile windows to use the whole screen and focus on a highly keyboard-driven user experience. Stump, like Xmonad, is designed to use one language exclusively for the core program, the configuration, and the extension of the environment.

And, as I touched on in my last post on the subject, I'm kind of enamored with lisp; it clicks in my head. I don't think that I "chose wrong" with regards to Awesome, or that I've wasted a bunch of time with it. Frankly, I think I'm pretty likely to remain involved with the project, but I'm a very different computer user--cyborg--today than I was back then, and among the things that I've discovered since I started using Awesome are emacs and Lisp.

My History with Awesome

Let's talk a little bit more about Awesome, though. Awesome is the thing that set me along the path to being a full-time GNU/Linux user. I found the tiling window manager paradigm the perfect thing to let me concentrate on the parts of my projects that are important, without getting hung up on the distractions of organizing windows and all of the "mouse stuff" that took too much of my brain time. I started playing around in a VM on my old MacBook and I found that I just got things accomplished there somehow. And the more I played with things the more I got into it, and the rest is history.

When I finally gave up the Mac, however, I realized that my flirtation with vim wasn't going to cut it, and I sort of fell down the emacs rabbit hole, which makes sense--in retrospect--given my temperament and the kind of work that I do, but nonetheless here I am. While Awesome is something that I'm comfortable with and that has served me quite well, there are a number of inspirations for my switch. Some of them have to do with Awesome itself, but most of them have to do with me:

  • I want to learn Common Lisp. While I know that emacs lisp and Common Lisp aren't the same, there are similarities, and Lua is something that I've put up with and avoided a lot while using Awesome. It's not that Lua is hard, quite the opposite; it's just that I don't have much use for it in any other context, and while I know enough to make Awesome really work for me, my configuration is incredibly boring.

    Not that I think Common Lisp is exactly the kind of thing that is going to be incredibly useful to me in my career, but like I said: I like the way Lisp makes me think. It's a language that can be used for production-grade things, it's a standard, it's not usually explained from a math-centric [1] perspective, and reading lisp code makes sense to me. Go figure.

  • There are several quirks with Awesome which get to me:

  • If you change your configuration, you have to restart the window manager. Which wouldn't be a big problem except...

  • When you restart, if you have a window that appears in more than one tag, the window only appears on one tag.

  • The commands for Awesome are by default pretty "vimmy," and while my current config has been properly "emacsified," you have to do a lot of ugliness to get emacs-style chords (e.g. "C-x C-r o a f", or Control-x, Control-r, followed by o, a, and f), which I kind of like.

  • Because one of my primary environments is a virtual machine (in VirtualBox) on an OS X host, I've run into some problems around using the Command/Windows/Mod4 key, and there's no really good way to get around this in Awesome.

So that's my beef, along with the change in usage pattern that I talked about last time, which is probably the biggest single factor. I'm not terribly familiar with Stump yet, so I don't have a lot to offer in terms of thoughts, but I've been tinkering on the laptop, and it fits my brain, which is rather nice. I'll post more as I progress. For now I think I'd better cut this off.

[1]This is my major problem with Haskell. It looks awesome, and I sort of understand it when people talk about it, but every "here's how to use Haskell" guide I read is full of what I think are meant to be "simple" math examples of how it works. I have a hard time tracking the math in the examples, so I have a hard time grasping the code and the programming lessons, because the examples are too hard for me. This is the problem of having geeked out on 20th-century continental philosophy in college rather than math/programming, I think.

Desktop Virtualization and Operating Systems

So what's the answer to all this operating system and hardware driver angst?

I'm going to make the argument that the answer, insofar as there is one, is probably virtualization.

But wait, tycho, this virtualization stuff is all about servers. Right?

Heretofore, virtualization technology--the stuff that lets us take a single very powerful piece of hardware and run multiple instances of an operating system that, in most ways, "think of themselves" as being actual physical computers--has been used in the server world as a way of "consolidating" and utilizing the potential of given hardware. This is largely because hardware has become so powerful that it's hard to write software that really leverages it effectively, and there are some other benefits that make managing physical servers "virtually" a generally good thing. There aren't a lot of people who would be skeptical of this assertion, I think.

But on desktops? On servers, where users access the computer over a network connection, it makes sense to put a number of "logical machines" on one physical machine. On a desktop machine this doesn't seem to make a lot of sense: after all, we generally interact with the physicality of the machine, so having multiple, concurrently running operating systems on your desk (or in your lap!) doesn't seem to provide a great benefit. I'd suggest the following two possibilities:

  • Hypervisors (i.e. the technology that sits between the hardware and the operating system instances running on it) abstract away the driver problem. The hypervisor's real job is to talk to the actual hardware and provide a hardware-like interface to the "guest operating systems." It turns out this technology is 80-90% of where it needs to be for desktop usage. This makes the driver problem a little easier to solve.
  • Application-specific operating systems. One of the problems with desktop usability in recent years is that we've been building interfaces that need to do everything, because people use computers for everything. This makes operating systems and stacks difficult to design and support, and there are all sorts of unforeseen interactions between the different things we do, which doesn't help. Desktop virtualization might allow us to develop very slim operating systems that are exceedingly reliable and portable, but also very limited in what they can accomplish. Which is okay, because we could have any number of them on a given computer.

I only need one instance of an operating system on my computer, why do you want me to have more?

See above for a couple of ways desktop hypervisors may promote the growth of the technology. But there are a number of other benefits that desktop virtualization would convey to users, and they mostly boil down to "easier management and backup."

If the "machine" is running in a container on top of a hypervisor, it's relatively easy to move it to a different physical machine (the worst thing that could happen is that the virtual machine would have to be rebooted, and even then, not always). It's easy to snapshot known working states. It's easy to redeploy a base image of an operating system in moments. These are all things that are, when we live "on the metal," quite difficult at the moment.

For the record, I don't think anyone is ever really going to have more than five (or so) instances running on their machine, but it seems like there's a lot of room for some useful applications within those five machines.

And let's face it, TCP/IP is the mode of inter-process communication these days, so I don't think application architectures would change all that much.
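To illustrate the point: a service that speaks TCP/IP over a socket is indifferent to whether its client lives in the same operating system instance or in a sibling virtual machine on the same desk; only the address it dials changes. A minimal Python sketch (the echo service is, of course, a made-up stand-in for a real application):

```python
import socket
import threading

# A service bound to a TCP socket doesn't care where its client runs:
# same OS instance, a sibling VM, or across the network. Only the
# address differs (here, loopback; between VMs, a bridge address).
HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    # Accept a single connection and echo back what we receive.
    conn, _ = server.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)

t = threading.Thread(target=serve_once)
t.start()

# The "client" side: from its point of view, nothing about this code
# would change if the server moved into a separate virtual machine.
with socket.create_connection((HOST, port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

t.join()
server.close()
print(reply.decode())  # prints "echo: hello"
```

Because the transport is already the network stack, splitting the two halves across virtual machines is a configuration change, not an architectural one.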

Won't desktop hypervisors have the same sorts of problems that "conventional operating systems" have today? You're just moving the problem around.

If you're talking about the drivers problem discussed earlier, then in a manner of speaking, yes. Hypervisors would need to support all kinds of hardware that (in many cases) they don't already support. The argument for giving this job to hypervisor developers is that they're already working very closely with the "metal" (a great deal of hardware today has some support for virtualization baked in), and hypervisors are, on the whole, much simpler projects.

It's true that I'm mostly suggesting that we move things around a bit, and that isn't guaranteed to fix any specific problem, but I think there's some benefit in rearranging our efforts in this space. As it were.

Don't some of the leading hypervisors, like KVM, use parts or all of the Linux kernel? Wouldn't this just recreate all of the problems of contemporary Linux anew?

I'll confess that I'm a huge fan of the Xen hypervisor, which takes a much thinner approach to the hypervisor problem, precisely because I'm worried about this very issue; I think Xen is more parsimonious. KVM might be able to offer some slight edge in some contexts in the next few years, like the ability to operate more intelligently inside the guest operating system, but that's a ways down the road and subject to the same problems that Linux has today.


So there you have it. Thoughts?

Operating Systems and the Driver Issue

I made a quip the other day about the UNIX epoch problem (Unix timestamps are measured in seconds since Jan 1, 1970, and on most systems stored as a signed 32-bit integer; early in 2038 that counter will overflow, and there's no really good way to fix that). Someone responded: "whatever, we won't be using UNIX in thirty years!"

Famous last words.
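For the record, the 2038 problem comes from time_t being a signed 32-bit integer on many systems, so the counter runs out 2**31 - 1 seconds after the epoch. The exact date is easy to compute:

```python
from datetime import datetime, timezone

# On most 32-bit Unix systems, time_t is a signed 32-bit integer,
# so the clock runs out 2**31 - 1 seconds after Jan 1, 1970 (UTC).
last_second = 2**31 - 1
rollover = datetime.fromtimestamp(last_second, tz=timezone.utc)

print(last_second)           # 2147483647
print(rollover.isoformat())  # 2038-01-19T03:14:07+00:00
```

One second later, a signed 32-bit counter wraps around to a large negative number, which naive code will interpret as a date in 1901.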

People were saying this about UNIX itself years ago. Indeed, before Linux had even begun to be a "thing," Bell Labs had moved on to "Plan 9," which was to be the successor to UNIX. It wasn't. Unix came back. Hell, in the late eighties and early nineties we even thought that the monolithic kernel as a model of operating system design was dead, and here we are. Funny, that.

While it's probably the case that we're not going to be using the same technology in thirty years that we are today (i.e. UNIX and GNU/Linux), it's probably also true that UNIX as we've come to know it is not going to disappear, given UNIX's stubborn history in this space. More interesting, I think, is to contemplate the ways that UNIX and Linux will resonate in the future. This post is an exploration of one of those possibilities.


I suppose my title has forced me to tip my hand slightly, but let's ignore that for a moment and instead present the leading problem with personal computing technology today: hardware drivers.

"Operating system geeks," of which we all know one or two, love to debate the various merits of Windows/OS X/Linux: "such and such works better than everything else," "such and such is more stable than this," "such and such feels bloated compared to that," and so on and so forth. The truth is that if we take a step back, we can see that the core problem for all of these operating systems is pretty simple: it's the drivers, stupid.

Let's take desktop Linux as an example. I'd argue that there are two large barriers to its widespread adoption. First, it's not immediately familiar to people who are used to Windows. This is pretty easily addressed with some training, and I think Microsoft's willingness to change its interface in the last few years (i.e. the Office "Ribbon," and so forth) is a great testimony to the adaptability of the user base. The second, and slightly more thorny, issue is hardware drivers: the part of any operating system that allows the software to talk to hardware like video, sound, and networking (including, of course, wireless) adapters. The Linux kernel has gotten much better in this regard in the past few years (partly by adding support for devices without requiring their drivers to be open source), but the leading cause of an install "just not working" is almost always something related to the drivers.

"Linux people" avoid this problem by buying hardware that they know is well supported. In my world that means "Intel everything, particularly if you want wireless to work, and Nvidia graphics if you need something peppy, which I never really do," but I know people who take other approaches.

In a weird way, this "geek's approach to Linux" is pretty much the same way that Apple responds to the driver problem in OS X. By constraining their operating system to run only on a very limited selection of hardware, they're able to make sure that the drivers work. Try adding a third-party wireless card to OS X. It's not pretty.

Windows is probably the largest victim of the driver problem: Microsoft has to support every piece of consumer hardware, and its hands are more or less tied. The famous Blue Screen of Death? Driver errors. System bloat (really, for all operating systems) tends to be about device drivers. Random lockups? Drivers. Could Microsoft build better solutions for these driver problems, or push equipment manufacturers to use hardware with "good drivers"? Probably; but as much as it pains me, I don't really think it would make a whole lot of business sense for them to do that at the moment.


More on this tomorrow...