Against Open Stacks

I have misgivings about OpenStack. OpenStack is an open source “cloud,” or infrastructure/virtualization platform, that allows providers to create on-demand computing instances, as if “in the cloud,” but running on their own systems. This kind of thing is generally referred to as a “private cloud,” though, as with all things in the “cloud space,” it’s a relatively nebulous concept.

To disclose: I am employed by a company that does work in this space, though not the company that is responsible for OpenStack. I hope this provides a useful perspective, but I am aware that my judgment is very likely clouded. As it were.

Let us start from the beginning, and talk generally about what’s on the table here. Recently, the technology that allows us to virtualize multiple instances on a single piece of hardware has gotten a lot more robust, easy to use, and performant. At the same time, the (open source) “industrial-grade” virtualization technology isn’t, for the most part, particularly easy to use or configure. It can be done, of course, but it’s nontrivial. These configurations and the automation that glues it all together--and the quality thereof--are how cloud providers are able to differentiate themselves.

On some level, “the Cloud” as a phenomenon is about the complete conversion of hardware into a commodity. Not only is hardware cheap, but it’s so cheap that we can do most of what hardware does in software. The open sourcing of OpenStack pushes this barrier one step further and says that the software is a commodity as well.

It was bound to happen at some point, it’s just a curious move and probably one that’s indicative of something else in the works.

The OpenStack phenomenon is intensely interesting for a couple of reasons. First, it has a lot of aspects of some contemporary commercial uses of open source: the project has a single (corporate) contributor, and initial development grew out of the work of one company that developed the software for internal use and then said “hrm, I guess we can open source it.” Second, if I understand correctly, there is little in OpenStack that isn’t already open source software (aside from a bunch of glue and scripts), which is abnormal.

I’m not sure where this leads us. I’ve been mulling over what this all means for a while, and have largely ended up here: it’s an interesting move, if an incredibly weird one, and it’s hard to really understand what’s going on.

The Meaning of Work

I’ve started to realize that, fundamentally, the questions I’m asking of the world, and that I’m trying to address by learning more about technology, center on work and the meaning and process of working. Work lies at the intersection of all the things that I seem to revisit endlessly: interfaces, collaboration technology, cooperatives and economic institutions, and open source software development. I’m not sure if I’m interested in work because it’s the unifying theme of a bunch of different interests, or whether it’s the base from which those other interests spring.

I realize that this makes me an incredibly weird geek.

I was talking to Caroline about our respective work environments, specifically about how we (and our coworkers) relocated (or didn’t) for our jobs, and I was chagrined to realize that the novel I’ve been working on (or not) for way too long at this point spends some time revolving around these questions:

  • How does being stuck in a single place and time constrain one’s agency to affect the world around them?
  • What does labor look like in a mostly/quasi post-scarcity world?

Perhaps the most worrying thing about this project is that I started writing this story in late August of 2008. This was, of course, before the American financial services crash that got me blogging and really thinking about issues outside of technology.

It’s perhaps outside the scope of this post, but I think it’s interesting how, since graduating from college, my “research” interests, such as they were, have all worked their way into my fiction (intentionally or otherwise). I suppose I haven’t written fiction about Free Software/open source, exactly, but I think there’s a good enough reason for that.1

I’m left with two realizations. First, this novel has been sitting on my plate for far too long, and there’s no reason why I can’t write the last ten or twenty thousand words in the next few months and be done with the sucker. Second, I’m interested in thinking about how “being an academic” (or not) affects the way I (we?) approach learning more about the world, and the process and rigor that I bring to those projects.

But we’ll get to that later, I have writing to do.


  1. I write fiction as open source, in a lot of ways, so it doesn’t seem too important to put it in the story as well. ↩︎

Jekyll and Automation

As this blog ambles forward, albeit haltingly, I find that the process of generating the site has become a much more complicated proposition. I suppose that’s the price of success, or at least the price of verbosity.

Here’s the problem: I really cannot abide dynamically generated publication systems: there are more things that can go wrong, they can be somewhat inflexible, they don’t always scale very well, and they seem like horrible overkill for what I do. At the same time, I have a huge quantity of static content on this site, and it needs to be generated and managed in some way. It’s an evolving problem, and perhaps not one of great specific interest to the blog, but I’ve learned some things in the process, and I think it’s worthwhile to do a little bit of rehashing and extrapolating.

The fundamental problem is that rebuilding tychoish.com takes a long time, mostly because of the time it takes to convert the Markdown text to HTML: a couple of minutes for the full build. There are a couple of solutions. The first would be to pass the build script some information about when files were modified and have it rebuild only those files. This is effective but ends up being complicated: version control systems don’t tend to version mtime and, importantly, there are pages in the site--like archives--which can become stale without some sort of metadata cache between builds. The second solution is to provide very limited automatically generated archives, regenerate only the last 100 or so posts, and supplement the limited archive with more manual archives. That’s what I’ve chosen to do.
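To make the tradeoff concrete, here’s a minimal sketch of the mtime-based approach. The `render` function is a stub standing in for the real Markdown-to-HTML converter, and all the paths are hypothetical; note how the stamp-file approach only catches edited sources, so generated pages like archives never register as “changed”:

```shell
# Sketch: rebuild only sources modified since the last build.
# `render` is a placeholder for the real Markdown-to-HTML step.
set -e
mkdir -p src out
render() { cp "$1" "out/$(basename "${1%.md}").html"; }

echo 'old post' > src/old.md
touch .last-build             # stamp left by the previous build
sleep 1
echo 'new post' > src/new.md  # edited since that build

# Only sources strictly newer than the stamp get rebuilt.
find src -name '*.md' -newer .last-build | while IFS= read -r f; do
    render "$f"
done
touch .last-build
ls out   # only new.html gets rebuilt
```

This is exactly why a metadata cache is needed for archive pages: nothing in `src/` changes when an archive merely needs to list a new post.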

The problem is that even the last 100 or so entries take a dozen seconds or more to regenerate. This might not seem like a lot to you, but the truth is that at an interactive terminal, 10-20 seconds feels interminable. I’d spent a lot of time trying to fix the underlying problem--the time it took to regenerate the HTML--when I realized that the problem wasn’t really that the rebuilds took forever; it was that I had to wait for them to finish. The solution: background the task and send a message to my IM client when the rebuild completed.
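That workflow can be sketched in a few lines of shell. The `build` and `notify` functions here are placeholders for the real build script and IM messaging command (this demo just sleeps and appends to a log):

```shell
# Sketch: background the slow build and get pinged when it finishes.
build()  { sleep 1; }                                    # stand-in for the slow rebuild
notify() { echo "rebuild finished at $(date)" >> build.log; }  # stand-in for the IM ping

( build && notify ) &   # the terminal comes back immediately
wait                    # only here so this demo has output to check
cat build.log
```

The point is the subshell-and-ampersand: the prompt returns instantly, and the notification arrives whenever the build actually completes.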

The lesson: don’t optimize anything that you don’t have to optimize, and if it annoys you, find a better way to ignore it.

At the same time, I’ve purchased a new domain, and I would kind of like to be able to publish something more or less instantly, without hacking on it like crazy. But I’m an edge case. I wish there were a static site generator, like my beloved Jekyll, that provided great flexibility and generated static content in a smart and efficient manner. Most of these site compilers, however, are crude tools with very little logic for smart rebuilding--and really, given the profiles of most sites that they are used to build, this makes total sense.


I realize that this post comes off as a lot of complaining, and even so, I’m firmly of the opinion that this way of producing content for the web is the sanest method that exists. I’ve been talking with a friend for a little while about developing a way to build websites, and we’ve more or less come upon a similar model. Even my day job project uses a system that runs on the same premise.

Since I started writing this post, I’ve taken this one step further. In the beginning I had to watch the build run. Then I kicked off the build process, sent it to the background, and had it send me a message when it was done. Now, I have rebuilds scheduled in cron, so that the site does an automatic full rebuild (the long process) a few times a day, and quick rebuilds a few times an hour.
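As a sketch, the crontab for this kind of schedule might look something like the following; the times, paths, and make targets are hypothetical stand-ins for whatever build commands you actually use:

```
# m    h     dom mon dow   command
# full rebuild (the long process) a few times a day
15   */8     *   *   *     cd $HOME/tychoish && make rebuild-full  >/dev/null 2>&1
# quick rebuild of recent posts a few times an hour
*/20  *      *   *   *     cd $HOME/tychoish && make rebuild-quick >/dev/null 2>&1
```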

Is this less efficient in the long run? Without a doubt. But processor cycles are cheap, and the builds are only long in the subjective sense. In the end I’d rather not even think that builds are going on, and let the software do all of the thinking and worrying.

Why Open Source Matters

A reader (hi grandma!) asked me to write a post about why I’m so interested in open source, and who am I to refuse. In fact, I tend to do requests pretty well, so if there’s a subject you’d like to see me cover here, just ask and I’ll see what I can do. In any case, I’ve been involved, for varying definitions of involved, in Free Software and open source for a few years now. On a personal level, I use this software (almost exclusively) because I can make it do exactly what I need it to do, because it’s very stable, and because from an architecture perspective I understand how these systems work and that’s useful for me. Having said that, I think open source is important and worth considering for reasons beyond the fact that I (and people like me) find it to be the most important tool for the work we do.

When folks get together and say “I’m going to work on an open source project,” I think some interesting things happen. First, they’re making a number of interesting economic decisions about their work. There are business models around open source, but they are more complex than “I make software, you give me money for software,” and thus require people to think a little bit more widely about the economic impact of their work. I think the way that people view the implications of their labor is incredibly important, and free software presents an interesting context to think about these questions.

The second, and perhaps larger, reason I’m interested in open source is the community. Open source developers often know that the things they want to create are beyond the scope of their free time and personal ability, so they collaborate with other people to make something of value and worth. How this collaboration happens--what motivates developers, how they create tools and technologies to support this kind of workflow, how the “intellectual property” is negotiated (particularly in projects that don’t use the GNU GPL), how leaders are selected and appointed, how decisions are made as a community, and how teams are organized and organize themselves--is intensely fascinating.

And these phenomena matter, both in and of themselves and as they impact and connect with other questions and phenomena in the world. For instance:

  • I think that the decision making process in free software projects is instructive for thinking about how all sorts of communities can reach a “decision making fork,” resolve it somehow, and then continue with their work. Some open source projects have formal structures, which are easier to understand from the outside, but most make decisions in an informal way, and that decision making process is potentially novel, I’d argue. In what other context do people have to construct projects outside of work?
  • While leaders in the open source community are rarely elected (aside from a number of notable examples; the Debian Project Leader springs instantly to mind), most projects are very democratic. But this requires that we keep in mind a fairly broad definition of democracy. Because there isn’t a lot of voting, and sometimes decisions aren’t discussed thoroughly before people start doing things, it doesn’t look democratic. But everything is volunteer based, and leaders, I think, have a sense of responsibility to their constituencies, which is meaningful.
  • The tools that open source developers use are, unsurprisingly, open source, and are often picked up and used by teams that aren’t making free software. I’m interested in thinking about how “ways of working” proliferate out of open source and into other spheres. Is non-open source software developed differently if the developers are familiar with and use open source tools?
  • Similarly, I’m interested in thinking about how the architecture of Linux and Unix gives rise to a way of thinking about APIs and open standards that doesn’t necessarily happen on closed platforms. After a certain point, I’m forced to ask: is GNU/Linux the leading free software/open source platform because it just happens to be, or because it’s UNIX? Is there something special about the design of UNIX that leads to openness, and the practices of openness? To what extent do the limitations of the environment (the operating system, here) shape the social conventions that are built on top of it?

And then beyond the specific questions--which are terribly important in and of themselves--open source presents a terribly exciting subject for the study of these issues. There is so much data on the ground concerning open source: version control history, email logs, IRC logs, and so forth. Not only are the issues important, but the data is rich, and I think it has a lot to tell us if we (I?) can bother to spend some time with it.

Enterprise Linux Community

Ok. I can’t be the only one.1

I look at open source projects like OpenSolaris, Alfresco, Resin, Magento, OpenSuSE, Fedora, and MySQL, among others, and I wonder, “What’s this community around these projects that people are always talking about?” Sure, I can download the source code under licenses that I’m comfortable with, and sure, they talk about a community, but what does that mean?

What, as a company, does it mean to say that the software you develop (and likely own all the rights to) is “open source” and “supported by a community”?

If I were sensible, I’d probably stop writing this post here. From the perspective of the users of and participants in open source software, this is the core question, both because it dictates what we can expect from free software and open source, and, more importantly, because it has historically been ill defined.

There are two additional, but related, questions that lurk around this question, at least in my mind:

1. Why are new open source projects only seen as legitimate if the developers are able to build a business around the project?

2. What does it mean to be a contributor to open source in this world, and what do contributors in “the community,” get from contributing to commercial projects?

There are of course exceptions to this rule: the Debian Project, the Linux kernel itself, GNU packages, and most open source programming languages, among others. I’d love to know if I’ve missed a class of software in this list--and there’s one exception that I’ll touch on in a moment--but the commonality here is that these projects are so low level that it seems too hard to build businesses around them directly.

When “less technical” free software projects began to take off, I think a lot of people said “I don’t know if this open source thing will work when the users of the software aren’t hackers,” because, after all, what does open source code do for non-hackers? While it’s true that there are fringe benefits that go beyond the simple “free as in beer” quality of open source for non-hacker users, these benefits are not always obvious. In a lot of ways the commercialization around open source software helps add a buffer between upstreams and end users. This is why I included Debian in the list above: Debian is very much a usable operating system, but in practice it’s often an upstream of other distributions (Ubuntu, Maemo, etc.).

The exception that I mentioned is, to my mind, projects like Drupal and web development frameworks like Ruby on Rails and Django. These communities aren’t sponsored or driven by venture capital funded companies (though the leader of the Drupal community has taken VC money for a Drupal-related startup). I think the difference here is that the economic activity around these projects is consulting based: people use Drupal/Django/Rails to build websites (which largely aren’t open source) for clients. In a lot of ways these are much closer to the “traditional free software business model,” as envisioned in the eighties and nineties, than what seems to prevail at the moment.

So to summarize the questions:

  • What, as a company, does it mean to say that the software you develop (and likely own all the rights to) is “open source” and “supported by a community”?
  • What does it mean to participate in and contribute to a community around a commercial product that you don’t have any real stake in?
  • How does the free software community, which is largely technical and hacker centered, move beyond those roots to deal with and serve end users?
  • How do we legitimize projects that aren’t funded with venture capital money?

Onward and Upward!


  1. I think and hope this is the post I meant to write when I started writing this post on the work of open source ↩︎

Analyzing the Work of Open Source

This post covers the role and purpose (and utility!) of analysts and spectators in the software development world, particularly the open source subset of it. My inspiration for this post comes from:


In the video, Coté says (basically) that open source projects need to be able to justify the “business case” for their project--to explain what innovation the project seeks to provide the world. This is undoubtedly a good thing, and I think we should all be able to explore, clearly explain, and even justify the projects we care about and work on in terms of their external worth.

Project leaders and developers should be able to explain and justify the greater utility of their software clearly. Without question. At the same time, problems arise when worth is all we focus on. People become oblivious to how things work, and become unable to participate in informed decisions about the technology that they use. Users who don’t understand how a piece of technology functions are less able to take full advantage of that technology.

As an aside: one of the things that took me forever to get used to about working with developers is the terms in which they describe their future projects. They use the future tense with much more ease than I would ever consider: “the product will have this feature,” “it will be architected in such a way.” From the outside this kind of talk seems unrealistic and grandiose, but I’ve learned that programmers tend to see their projects evolving in real time, and so this kind of language is really more representative of their current state of mind than of their intentions or a lack of communication skills.

Returning for a moment to the importance of being able to communicate the business case for the technology we create: as we force the developers of technology to focus on the business case, we also make it so that the only people who are capable of understanding how software works, or how software is created, are the people who develop software. And while I’m all in favor of specialization, I do think that the returns diminish quickly.

And beyond the fact that this leads to technology that simply isn’t as good or as useful in the long run, it also strongly limits the ability of observers and outsiders (“analysts”) to provide a service for the developers of the technology beyond simply communicating their business case to the outside world. It restricts all understanding of technology to journalism, rather than the sort of “rich and chewy” (anthropological?) understanding that might be possible if we worked to understand the technology itself.

I clearly need to work a bit more to develop this idea, but I think it connects with a couple of arguments that I’ve previously put forth in these pages: one regarding Whorfism in Programming, and another about constructing rich arguments.

I look forward to your input as I develop this project. Onward and Upward!

If Open Source is Big Business Then Whither the Community?

I’ve been thinking recently about the relationship and dynamic between the corporations and “enterprises” which participate in and reap benefits from open source/free software and the quasi-mythic “communities” that are responsible for the creation and maintenance of the software. Additionally this post may be considered part of my ongoing series on cooperative economics.

When people--ranging from business types, to IT professionals, to programmers, and beyond--talk about open source software, they talk about a community: often small to medium sized groups of people who all contribute small amounts of time to creating software. And we’re not just talking about dinky little scripts that make publishing blogs easier (or some such); we’re talking about a massive amount of software: entire operating systems, widely used implementations of nearly all relevant programming languages, and so forth. On some level the core of this question is: who are these people, and how do they produce software?

On the surface the answer to these questions is straightforward. The people who work on open source software are professional programmers, students, geeks, and hacker/tinkerer types who need their computers to do something novel, and so they write software. This works as a model for thinking about who participates in open source if we assume that the reason people contribute to open source projects is that their individual contributions are too small to build business models around. This might explain some portion of open source contributions, but it feels incomplete to me.

There are a number of software projects that use open source/free software licenses, with accessible source code, supported by “communities,” which are nonetheless developed almost entirely by single companies. MySQL, Alfresco, and Resin, among others, serve as examples of these kinds of projects, which are open source by many definitions and yet don’t particularly strike me as “community” projects. Is the fact that this software provides source code meaningful or important?

Other questions…

1. If there are companies making money from open source code bases, particularly big companies in a business directly related to software, does this affect the participation of people who are not employed by that company?

In my mind I draw a distinction between technology businesses that use/sell/support open source software (e.g. Red Hat, the late MySQL AB, etc.) and businesses that do something else but use open source software (i.e. everyone with a Linux server in the basement, every business with a website that runs on Apache, etc.).

2. Does corporate personhood extend to the open source community? Are corporate developers contributing as people, or as representatives of their company?

I largely expect that it’s the former; however, I’d be interested in learning more about the various factors that affect the way these contributors are perceived.

3. Do people participate in open source because it is fun, or for the enjoyment of programming?

4. Has software become so generic that open source is the current evolution of industry standards groups? Do we write open source software for the same reason that industries standardized the size and threading of bolts?

5. Are potential contributors disinclined to contribute to software that is controlled by a single entity?

6. Is the cost of forking a software project too high to make that a realistic outcome of releasing open source software?

Conversely, were forks ever effective?

7. Do communities actually form around software targeted at “enterprise” users, and if so, in what ways are those communities different from the communities that form around niche window managers, or even community projects like Debian?

I don’t of course have answers yet, but I think these questions are important, and I’d love to hear if you have any ideas about finding answers to these questions, or additional related questions that I’ve missed.

Window Sizes in Tiling Window Managers

There’s an issue in tiling window managers that I think a lot of folks who are used to floating window managers never expect. I wrote a post to the Awesome listserv a while back explaining this to someone, and it seems to have struck a chord (I saw the post linked to last week). I thought I’d write a brief post here to explain what’s going on in a more clear and general way.

The Problem

When tiled, windows don’t seem to take up all the space that’s available to them. This creates weird “gaps” between windows--but only for some windows: Firefox is immune to this problem, while terminal emulators like xterm and urxvt, as well as gVim and Emacs, get all funky.

What’s Happening

The applications that are affected by this draw their windows based upon a number of fixed-width columns. Note that terminal emulators, as well as GUI versions of programmers’ text editors like Vim and Emacs, all use fixed-width fonts and often let you set window sizes based on the number of columns (of characters).

As a result, these applications are only able to use space on the screen in increments of full characters. Most of the time, in floating window managers, we never really notice this limitation.

In tiling window managers you do notice, because the window manager forces the windows to use all available space, but with some windows it leaves these weird gaps at the bottom and right. Sometimes the gaps end up in the window, as unusable buffers, and sometimes they end up between windows. It looks funny, pretty much no matter how you slice it.

What You Can Do About It

The truth? Not much.

The Awesome window manager, by default, shows the gaps between the windows. I always found this to be the “more ugly” option. You can alter this behavior by searching your configuration file for size_hints_honor and making the line look like this:

c.size_hints_honor = false

This tells Awesome to ignore clients when they say “I want to have these dimensions.” It doesn’t fix the problem, but it does get rid of the gaps.
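For context, that line lives in an Awesome rc.lua. In rules-based configurations of that era, the same setting is typically applied to every client through awful.rules; this fragment is a sketch to adapt to your own config, not a drop-in replacement:

```lua
-- Apply to all clients: ignore their size hints so tiles fill the
-- whole allotted area instead of leaving gaps between windows.
awful.rules.rules = {
    { rule = { },
      properties = { size_hints_honor = false } },
}
```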

The real solution is to tweak text sizes, fonts, and any buffering elements (like a status bar, mode line, or widget box), and window borders so that the windows aren’t left with extra space that they don’t know how to cope with.

By “real solution,” I really mean “only option”: it’s effectively impossible to get all of your fixed-width applications to have exactly the right number of pixels. You can get close in a lot of situations, and I’ve always found this to be much less annoying than using floating window managers.

The Original Post

Just for giggles, I’ve included a quoted portion of what I originally posted to the listserv on the topic.

The one bit of information that might be important: the urxvt terminal emulator, when not “honoring size hints,” is unable to properly draw the “extra space” with the proper background. I suspect this is a bug in the pseudo-transparency system it uses. As a result there are often a few pixels with the background in an inverted color scheme. Same problem as above, but it looks funny if you’re not used to it.

What’s happening is that urxvt (like many terminal emulators) can only draw windows of specific sizes, based on the size of the characters (e.g. x number of rows and y number of columns). So while you may be given a larger window--the equivalent of, say, 80.4x20.1 characters--urxvt can’t do anything with the extra space.

If you honor size hints, the windows will end wherever they can, and use as much space as they can, but leave gaps between windows if the total space isn’t properly divisible. If you don’t honor size hints, the windows themselves take up the extra room (but they can’t do anything with the extra room, so they just leave it blank, and sometimes the transparency is a bit wonky in those “buffers”).
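The arithmetic behind that explanation is easy to sketch: a terminal can only use whole character cells, so whatever pixels are left over become the gap. The cell and window sizes below are made-up example values:

```shell
# How much of a tile can a fixed-cell application actually use?
cell_w=9;  cell_h=18      # pixels per character cell (example values)
win_w=904; win_h=362      # pixels the window manager offers the client

cols=$(( win_w / cell_w )); rows=$(( win_h / cell_h ))
gap_w=$(( win_w - cols * cell_w )); gap_h=$(( win_h - rows * cell_h ))
echo "usable: ${cols}x${rows} cells, leftover: ${gap_w}x${gap_h} pixels"
```

With these numbers the client gets a clean 100x20 grid and a 4x2-pixel remainder, which is exactly the sliver that shows up either between windows (honoring size hints) or as a blank buffer inside them (ignoring size hints).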

So there you have it. I hope this helps!