Xen and KVM: Failing Differently Together

When I bought what is now my primary laptop, I had intended to use the extra flexibility to learn the prevailing (industrial-grade) virtualization technology. While that project would have been edifying on its own, I also hoped to use the flexibility to do some more consistent testing and development work.

That intention spawned a Xen laptop project, but the truth is that Xen is incredibly difficult to get working, and eventually the "new laptop" just became the "everyday laptop," and I let go of the laptop Xen project. In fact, until very recently I'd pretty much given up on doing virtualization things entirely, but for various reasons beyond the scope of this post I've been inspired to begin tinkering with virtualization solutions again.

As a matter of course, I found myself trying KVM in a serious way for the first time. This experience both generated a new list of annoyances and reminded me of all the things I didn't like about Xen. I've collected these annoyances and thoughts into the following post. I hope that these thoughts will be helpful for people thinking about virtualization pragmatically, and will also help identify some of the larger pain points with the current solutions.

Xen Hardships: It's all about the Kernel

Xen is, without a doubt, the more elegant solution from a design perspective, and it has a history of being the more robust and usable tool. Performance is great, and Xen hosts can have uptimes in excess of a year or two.

The problem is that dom0 support has, for the past 2-3 years, been in shambles, and the situation isn't improving very rapidly. For years, the only way to run a Xen box was to use an ancient kernel with a frightening set of patches, or a more recent kernel with ancient patches forward-ported. Or you could use cutting-edge kernel builds, with reasonably unstable Xen support.

A mess, in other words.

Now that Debian Squeeze (6.0) ships a pv-ops dom0 kernel, things might look up, but other than that kernel (which I've not had any success with, though that may be me), basically the only way to run Xen is to pay Citrix [1] or to build your own kernel from scratch. Again, results will be mixed (particularly given the nearly non-existent documentation), maintenance costs are high, and a lot of energy will be duplicated.

What to do? Write documentation and work with the distributions so that if someone says "I want to try using Xen," they'll be able to get something that works.

KVM Struggles: It's all about the User Experience

The great thing about KVM is that it just works. "sudo modprobe kvm kvm-intel" is basically the only thing between most people and a KVM host. No reboot required. To be completely frank, the prospect of doing industrial-scale virtualization on top of nothing but the Linux kernel with a wild module loaded into it gives me the willies; it's inelegant as hell. For now, though, it's pretty much the best we have.

The problem is that it really only half works, which is to say that while you can have hypervisor functionality and a booted virtual machine with a few commands, it's not incredibly functional as a practical system. There aren't really good management tools, getting even basic networking configured off the bat is a chore, and qemu as the "front end" for KVM leaves me writhing in anger and frustration. [2]
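To make that concrete, booting a single guest by hand tends to mean an incantation along these lines (a hypothetical example: the image name, tap device, and memory size are made up, and the exact flags vary by qemu-kvm version and setup):

    # load the modules (Intel shown; AMD hardware uses kvm-amd instead)
    sudo modprobe kvm kvm-intel

    # boot a guest from a disk image, with a pre-made tap interface and a VNC console
    sudo qemu-kvm -m 512 -smp 1 \
        -drive file=guest.img,if=virtio \
        -net nic,model=virtio -net tap,ifname=tap0,script=no \
        -vnc :1 -daemonize

And that's before you've dealt with the bridge that tap0 has to hang off of, which is where most of the frustration lives.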

Xen is also subject to these concerns, particularly around networking. At the same time, Xen's basic administrative tools make more sense, and domUs can be configured outside of interminable, non-paradigmatic command line switches.

The core of this problem is that KVM isn't very Unix-like. It's a problem that pervades the entire tool, and it's probably rooted in the history of its development.

What to do? First, KVM does a wretched job of anticipating actual real-world use cases, and it needs to do better at that. For instance, it sets up networking in a way that's pretty much only good for software testing and GUI interfaces, while sticking the kernel on the inside of the VM makes it horrible for kernel testing. Sort out the use cases, and there ought to be associated tooling that makes common networking configurations easy.
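For a sense of what a "common networking configuration" currently involves, bridging a guest onto the local network by hand looks something like the following (a hypothetical sketch: the interface names are assumptions, and moving the host's own IP configuration onto the bridge is glossed over entirely):

    # create a bridge and attach the physical interface to it (bridge-utils)
    sudo brctl addbr br0
    sudo brctl addif br0 eth0

    # create a tap device for the guest, owned by the current user (uml-utilities)
    sudo tunctl -t tap0 -u "$USER"
    sudo brctl addif br0 tap0

    # bring the pieces up; the host's address and routes then need to be
    # reconfigured on br0 rather than eth0
    sudo ifconfig eth0 0.0.0.0 up
    sudo ifconfig tap0 0.0.0.0 up
    sudo ifconfig br0 up

None of this is hard, exactly, but none of it is discoverable either, and it's the kind of thing the tooling should just do for the common cases.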

Second, KVM needs to at least pretend to be Unix-like. I want config files with sane configurations, and I want disk images that can be easily mounted by the host.
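For context, mounting a guest's raw image on the host is already possible with stock tools, something like the following (a hypothetical sketch: the image name, loop device, and mount point are made up, and qcow2 images need an extra step such as qemu-nbd):

    # attach the image and expose its partitions
    sudo losetup /dev/loop0 guest.img
    sudo kpartx -av /dev/loop0

    # mount the first partition read-only on the host
    sudo mount -o ro /dev/mapper/loop0p1 /mnt/guest

    # ...and tear it all down afterwards
    sudo umount /mnt/guest
    sudo kpartx -dv /dev/loop0
    sudo losetup -d /dev/loop0

The point is less that this is impossible today and more that it ought to be a first-class, obvious operation.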

Easy, right?

[1]The commercial vendor behind Xen, under whose stewardship the project seems to have mostly stalled. And I suspect that the commercial distribution is Red Hat 5-based, which is pretty dead-end. Citrix doesn't seem to be very keen on using "open source" to generate a sales channel, and also seems somewhat hesitant to put energy into making Xen easier to run for existing Linux/Unix users.
[2]libvirtd and Virt Manager work pretty well, though they're not particularly flexible, and they're not a simple command line interface plus a configuration file system.

Minimalism Versus Simplicity

A couple of people, cwebber and Rodrigo, have (comparatively recently) switched to using StumpWM as their primary window manager. Perhaps there are more outside of the circle of people I watch, but it's happened enough to get me to think about what constitutes software minimalism.

StumpWM is a minimal program in terms of design and function; in terms of RAM usage or binary size, however, it's not particularly lightweight. Because of the way Common Lisp works, its "binary" and RAM footprint are in the range of 30-40 megs. Not big by contemporary standards, but the really lightweight window managers can get by with far less RAM.

In some senses this is entirely theoretical: even a few years ago, when it wasn't uncommon for desktop systems to have only a gig of RAM, the difference would hardly have been noticeable. Now? Much less so. Until 2006 or so, RAM was the limited resource that most affected performance on desktop systems; since then, even laptops have had more than enough for most uses. Although Firefox challenges this daily.

Regardless, while there may be some link between binary size and minimalism, I think it's probably harmful to reduce minimalism and simplicity to what amounts to an implementation detail. Let's think about minimalism more complexly. For example:

Write a simple (enough) script in Python/Perl and in C. It should scan a file system and change the permissions of files so that they match the permissions of the enclosing folder, but not change the permissions of a folder if it differs from its parent. Think of it as "chmod -R" except from the bottom up. This is a conceptually simple task and it wouldn't be too hard to implement, but I'm not aware of any tool that does this, and it's not exactly trivial (to implement, or in terms of its resource requirements).
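To make the intended behavior concrete, here is a rough shell approximation (not one of the Python/Perl or C implementations the comparison below imagines; it assumes GNU stat and ignores symlinks and awkwardly named files):

    #!/bin/sh
    # For every directory under the target, make the regular files directly
    # inside it take on that directory's permission bits; the directories
    # themselves are left alone. Note that this also copies a directory's
    # execute bits onto its files, which a real tool would probably mask out.
    target="${1:-.}"
    find "$target" -type d | while read -r dir; do
        mode=$(stat -c '%a' "$dir")
        find "$dir" -maxdepth 1 -type f -exec chmod "$mode" {} +
    done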

While the C program will be much more "lightweight" and use less RAM while running, chances are that the Python/Perl version will be easier to understand and use much more straightforward logic. The Python/Perl version will probably take longer to run, and there will be some greater overhead for the Python/Perl runtime. Is the C version more minimal because it uses less RAM? Is the Perl/Python program more minimal because its interface and design are more streamlined, simpler, and easier to use?

I'm not sure what the answer is, but let's add the following factor to our analysis: does the "internal" design and architecture of software affect the minimalism or maximalism of the software?

I think the answer is clearly yes, qualified by "it depends" and "probably not as much as you'd think initially." As a corollary, as computing power increases, minimalist implementations matter less in general, but more in cases of extremely large scale, which are always already edge cases.

Returning for a moment to the question of the window manager, in this case I think it's pretty clear: StumpWM is among the most minimal window managers around, even though its RAM footprint is pretty big. But I'd love to hear your thoughts on this specifically, or on technological minimalism generally.

Constraints for Mobile Software

This post is mostly just an overview of Epistle by Matteo Villa, which is--to my mind--the best Android note-taking application around. By the time you read this I will have an Android tablet, though it's still in transit as I write, and that's a topic that deserves its own post.

Epistle is a simple notes application with two features that sealed the deal:

1. It knows markdown, and by default provides a rendered rich text view of notes before dropping into a simple editing interface. While syntax highlighting would be nice, we'll take what we can get.

2. It's a nice, simple application. There's nothing clever or fancy going on. This simplicity means that the interface is clean and it just edits text.

For those on the other side, there's Paragraft, which seems similar. In my heart of hearts, I'm probably still holding out for the tablet equivalent [1] of emacs. In the meantime, I think developing a text editing application that provides a number of paradigmatic text editing features and advances for the touch screen would be an incredibly welcome development.

In the end there's much work to be done, and the tools are good enough to get started.

[1]I want to be clear and say equivalent, not replacement, because while I'd like to be able to use emacs and have that kind of slipstream writing experience on an embedded device, what I really want is something that is flexible, can be customized, and lets me do all the work I need to do without hopping between programs or breaking focus, and that makes inputting and manipulating text a joy. And an application that we can trust (i.e., open source, by a reputable developer), in a format we can trust (i.e., plain text). It doesn't need to be emacs and doesn't need lisp, but I wouldn't complain about the lisp.

Remote Accessibility/Reverse Tunneling/Super Dynamic DNS

I have a question for my system administrator readers. And maybe the rest of you as well.

I run a web server on my laptop that hosts about 8 test sites. Nothing special: mostly test and development sites for various public sites, but from time to time I think, "shit, wouldn't it be nice if I could just give someone a link to this." My solution is generally to copy whatever it is that I'm working on up to the server that runs this website, and while that generally works just fine, it could be better.

So here's what I'm thinking:

I'd like to be able to hook my laptop up to the internet and let people access (some of) the content running on this web server. I don't want it to be automatic, or to open my entire machine to the world (though... I could secure it, I suppose). The options I've considered:

  • I set up a VPN so that I can connect from the laptop to the public server (that hosts this website), and I have a virtual host (or set of virtual hosts) there that proxies requests to the laptop. Wherever I am, it works. I'm not worried about the bandwidth or the strain on the server given the usage pattern I'm expecting.

    • Pros: Simple to use once it's running, secure, and works even if I'm on a weird local network. Potentially useful for other kinds of nifty hacking, including tunneling all traffic through the VPN on insecure connections.
    • Cons: Way complex to set up, and I'm not sure if it will work. I'll need to set up VPN software. And it's total overkill.
  • Some sort of scripted dynamic DNS solution, probably involving running my own DNS server.

    • Pros: less proxy madness. Pretty simple.
    • Cons: running a DNS server. Won't work on some (most) local networks.
  • There has to be some sort of alternate approach using a minimalist tunneling solution. There are a few of these, I think they're nifty, and one would probably be perfect here; I'm just not sure which (one plausible candidate is sketched just after this list).
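One plausible candidate: an SSH reverse forward from the laptop, plus a proxy rule on the public host. A hypothetical sketch, with the host name, path, and port made up:

    # from the laptop: expose the local web server on port 8080 of the public host
    ssh -N -R 8080:localhost:80 user@public.example.com

    # then, on the public host, a virtual host can proxy to that port,
    # e.g. with Apache's mod_proxy:
    #   ProxyPass        /preview/ http://localhost:8080/
    #   ProxyPassReverse /preview/ http://localhost:8080/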

...and then half the night later, I finished deploying the VPN. I have to say that I'm really pleased with it:

  • It can replace (and has replaced) my ssh tunnels for sending email. That's pretty great.

  • The web server stuff works, though I don't have anything really up there yet. I feel like I need some sort of access restriction method, but I don't really like any of the options: HTTP auth is annoying rather than protective, SSL is terribly imperfect and fussy, and host-based control isn't very tight.

  • I think I will finally be able to sacrifice a laptop to the "home server" role, because aside from dis/re-enabling the "sleep on laptop close" function, it'll be dead simple to convert a server laptop back into a mobile laptop if needed.

Thoughts?

Tablet Interfaces and Intuition

I've been using FBReaderJ to read .epub files on my tablet recently, and I discovered a nifty feature: you can adjust the screen's brightness by dragging your finger up or down the left side of the screen. It immediately felt like discovering a new keybinding or a new function in emacs that I'd been wishing for, for a while. Why, I thought, aren't there more tricks like this?

The iPhone (and the iPad by extension) as well as Android make two major advances over previous iterations of mobile technology. First, they're robust enough to run "real" programs written in conventional programming environments. Better development tools make for better applications and more eager developers (which also makes for better applications). Second, the interfaces are designed to be used with fingers rather than a stylus (thanks to capacitive touch screens), and the design aesthetic generally reflects minimalist values and simplicity. The mobile applications of today's app stores would not work if they were visually complex and had multi-tiered menus and hard-to-activate buttons.

The tension between these two features in these platforms makes it difficult to slip nifty features into applications. Furthermore, the economy of application marketplaces does not create incentives for developers to build tools with enduring functionality. The .epub reader I mentioned above is actually free software. [1] I wrote a couple of posts a while back on innovation (one and two) that address the relationship between free software and technological development, but that's beside the point.

Given this, there are two major directions that I see tablet interfaces moving toward:

1. Tablet interfaces will slowly begin to acquire a more complete gestural shorthand and cross-app vocabulary that will allow us to become more effective users of this technology. Things like Swype are part of this, but I think there's more.

2. There will be general purpose systems for tablets that partially or wholly expect a keyboard, and then some sort of key-command system will emerge. This follows from my thoughts in the "Is Android the Future of Linux?" post.

I fully expect that both lines of development can expand in parallel.

[1]I also found the base configuration of FBReader (for the tablet, at least) to be horrible, but with some tweaking, it's a great app.

Interfaces in Enterprise Software

This post is a continuation of my series on human solutions to IT and IT policy issues. It discusses a couple of ideas about "enterprise" software and its importance to the kind of overall analysis of technology that this post (and others on this site) engage in. In many ways this is a different angle on some of the same questions addressed in my "Caring about Java" post: boring technologies are important, if not outright interesting.

There are two likely truths about software that are a bit weird at first glance, but make sense upon reflection:

  1. The majority of software is used by a small minority of users. This includes software that's written for and used by other software developers, infrastructure, and applications written for "internal use": the various database, CRM, and administrative tools, and other portals and tools that enterprises use.
  2. Beautiful and intuitive interfaces are only worth constructing if your software has a large prospective userbase or if you're writing software where a couple of competing products share a set of common features. Otherwise there's no real point to designing a really swanky user interface.

I'm pretty sure that these theories hold up pretty well, and are reasonably logical. The following conclusions are, I think, particularly interesting:

  • People, even non-technical users, adjust to really horrible user interfaces that are non-intuitive all the time.

  • We think that graphical user interfaces are required for technological intelligibility, while the people who design software use GUIs as minimally as possible, and for the vast majority of software the user interface is the weakest point.

The obvious question, then, is: why don't we trust non-technical users with command lines? Thoughts?

Packaging Technology Creates Value

By eliminating the artificial scarcity of software, open source software forces businesses and technology developers to think differently about their business models. There are a few ways that people have traditionally built businesses around free and open source software. There are pros and cons to every business model, but to review, the basic ideas are:

  • Using open source software as a core and building a thin layer of proprietary technology on top of the open source core. Sometimes this works well enough (e.g. SugarCRM, OS X), and sometimes this doesn't seem to work as well (e.g. MySQL).
  • Selling services around open source software. This includes support contracts, training services, and infrastructure provisioning. Enterprises and other organizations and projects need expertise to make technology work, and the fact that open source doesn't bundle licensing fees with support contracts doesn't make the support (and other services) any less useful or needed.
  • Custom development services. Often open source projects provide a pretty good framework for a technology but require some level of customization to fit the needs and requirements of the "business case." The work can be a bit uneven, as with all consulting, but the need and the service are both quite real. While the custom code may end up back upstream, sometimes this doesn't quite happen, for a number of reasons. Custom development obviously overlaps with services and thin-proprietarization, but it is distinct: it doesn't revolve around selling proprietary software, and it doesn't involve user support or systems administration. These distinctions can get blurry in some cases.

In truth, when you consider how proprietary software actually conveys value, it's really the same basic idea as the three models above. There's just this minor mystification around software licenses, but other than that, the business of selling software and services around software doesn't vary that much.

James Governor of RedMonk suggests a fourth option: packaging technology.

The packaging model is likely just an extension of the "services" model, but it draws attention to the ways that companies can create real value not just by providing services and not just by providing a layer of customization, but by spending time attending to the whole experience, rather than the base technology. It also draws some attention to the notion that reputation matters.

I suppose it makes sense: when businesses (and end users) pay for proprietary software, the exchange is nominally "money" for "license" usage rights, but in reality there are services and other sources of value changing hands. Thus it is incumbent upon open source developers and users to find all of the real sources of value that can be conveyed in the exchange of money for software, and to find ways to support themselves and the software. How hard can it be?

Leadership in Distributed Social Networks

Let us understand "social networks" to mean networks of people interacting in a common group or substrate, rather than the phenomenon that exists on a certain class of websites (like Facebook): we can think of this as the "conventional sense" of the term.

As I watch Anonymous, the "hacktivist" group, to say nothing of the movements in Egypt and Tunisia, I'm fascinated by the way that such an ad hoc group can appear organized and coherent from the outside without appearing to have real leadership. External appearance and reality are different, of course, and I'm not drawing a direct parallel between what Anonymous is doing and what happened in Egypt, but there is a parallel. I think we're living in a very interesting moment. New modes of political and social organizing do not manifest themselves often.

We still have a lot to learn about what's happened recently in Egypt/Tunisia/Libya and just as much to learn about Anonymous. In a matter of months or years, we could very easily look back on this post and laugh at its naivete. Nevertheless, at least for the moment, there are a couple of big things that I think are interesting and important to think about:

  • We have movements that are led, effectively, by groups rather than individuals, and if individuals are actually doing leadership work, they are not taking credit for that work. In other words, movements that are not led by egos.
  • These are movements that are obviously technologically very aware, but not in a mainstream sort of way. Anonymous uses (small?) IRC networks and other collaborative tools that aren't quite mainstream yet. The Egyptian protesters in the very beginning had UStream feeds of Tahrir Square, and I'd love to know what they were using for internal coordination and communication.
  • I think the way that these movements "do ideology" is somewhat unique and unconventional. I usually think of ideology as a guiding strategy from which practice springs. I'm not sure that's what's happening here.
  • The member activists who are doing the work in these movements are not professional politicians or political workers.

The more I ponder this, the more I realize how improbable these organizations are, and the more I am impressed by the ability of these groups to be so effective. In addition to all of my other questions, I'm left wondering: how will this kind of leadership (or non-leadership) method and style influence other kinds of movements and projects?