Industry, Community, Open Source

In "Radicalism in Free Software, Open Source" I contemplated the discourse of and around radicalism in and about Free Software and Open Source software. I think this post is a loose sequel to that post, and I want to use it to think about the role.

I suppose the ongoing acquisition of Sun Microsystems by Oracle, particularly with regard to the MySQL database engine, weighs heavily on many of our minds.

There are a number of fairly large companies that have taken significant leadership roles in open source and free software: Red Hat. Sun Microsystems. IBM. Novell. And so forth. While I'm certainly not arguing against the adoption of open source methodologies in the enterprise/corporate space, I don't think that we can totally ignore the impact that these companies have on the open source community.

A lot of people--mistakenly, I think--fear that Free Software works against commercialism [1] in the software industry. People wonder: "How can we make money off of software if we give it away for free?" [2] Now it is true that free software (and its adherents) prefer businesses that look different from proprietary software businesses. They're smaller, more sustainable, and tend to focus on much more custom deployments for specific users and groups. This is in stark contrast to the "general releases" for large audiences that a lot of proprietary vendors strive for.

In any case, there is a whole nexus of issues related to free software projects and their communities that are affected by the commercial interests and "powers" that sponsor, support, and have instigated some of the largest free software projects around. The key issues and questions include:

  • How do new software projects of consequence begin in an era when most projects of any notable size have significant corporate backing?
  • What happens to communities when the corporations that sponsor free software are sold or change directions?
  • Do people contribute to free software outside of their jobs? Particularly for big "enterprise" applications like Nagios or JBoss?
  • Is the "hobbyist hacker" a relevant and/or useful arch-type? Can we intuit which projects attract hobbyists and which projects survive because businesses sponsor their development, rather than because hobbyists contribute energy to them. For example: desktop stuff, niche window managers, games, etc. are more likely to be the province of hobbyists and we might expect stuff like hardware drivers, application frameworks, and database engines might be the kind of thing where development is mostly sponsored by corporations.
  • Is free software (or, Open Source may be the more apropos terminology at the moment) just the contemporary form of industry group cooperation? Is open source how we standardize our nuts and bolts in the 21st century?
  • How does "not invented here syndrome" play out in light of the genesis of open source?
  • In a similar vein, how do free software projects get started in today's world? Can someone say "I want to do this thing" and people will follow? Do you need a business and some initial capital to get started? Must the niche be clear and undeveloped?
  • I'm sort of surprised that there haven't been any Lucid-style forks of free software projects since, well, Lucid Emacs. While I'm not exactly arguing that the Lucid Emacs Fork was a good thing, it's surprising that similar sorts of splits don't happen any more.

That's the train of thought. I'd be more than happy to start to hash out any of these ideas with you. Onward and Upward!

[1]People actually say things like "free software is too communist for me," which is sort of comically absurd and displays a fundamental misunderstanding of both communism/capitalism and the radical elements of the Free Software movement. So let's avoid this, shall we?
[2]To be totally honest I don't have a lot of sympathy for capitalists who say "you're doing something that makes it hard for me to make money in the way that I've grown used to making money." Capitalists' lack of creativity is not a flaw in the Free Software movement.

Radicalism in Free Software, Open Source

The background:


In light of this debate I've been thinking about the role and manifestations of radicalism in the free software and open source world. I think a lot of people (in many cases unfairly) equate dedication to the "Cause of Free Software" with the refusal to use anything but free software, and with the admonishment of those who do use "unpure" software. To my mind this is unfair both to Free Software and to the radicals who work on free software projects and advocate for Free Software.

First, let's back up and talk about RMS [1]. RMS is often held up as the straw man for "free software radicals." RMS apparently (and I'd believe it) refuses to use software that isn't free software. This is seen as being somewhat "monkish," because it doesn't just involve using GNU/Linux on the desktop; it also involves things like refusing to use the non-free software written for GNU/Linux, including Adobe's Flash player and various drivers. In short, using the "free-only" stack of software is a somewhat archaic experience. The moderates say "who wants to use a computer which has been willfully broken because the software's license is ideologically incompatible," and the moderates come out looking rational and pragmatic.

Except that, as near as I can tell, while the refusal to use non-free software might be a bit traumatic for a new convert from the proprietary operating system world, for someone like RMS it's not a huge personal sacrifice. I don't think I'm particularly "monkish" about my free software habits, and the only non-free software I use is the Adobe Flash player and the non-open-source extensions to Sun's VirtualBox. I'm pretty sure I don't even need the binary blob stuff in the kernel. For me--and likely for RMS, and those of our ilk--sticking to the pure "free software" stuff works better and suits the way I enjoy working. [2]

In short, our ability to use free software exclusively depends upon our habits and on the ways in which we use and interact with technology.

To my mind, the process by which the pragmatic wing of the free software and open source world casts people like RMS as radicals is terribly unproductive. While the moderates come away from this encounter looking more reasonable to the more conventional types in the software world, this is not a productive or useful discussion to entertain.

In any case, I think there are a number of dimensions to the free software (and open source) world that focusing on "how free your software is" distracts us from. Might it not be useful to think about a few other issues? They are as follows:

1. Free software is about education, and ensuring that the users of technology can and do understand the implications of the technology that they use.

At least theoretically, one of the leading reasons why having "complete and corresponding source code" is so crucial to free software is that with the source code, users will be able to understand how their technology works.

Contemporary software is considerably more complex than the 70s-vintage software that spurred the Free Software movement. Where one might have imagined being able to see, use, and helpfully modify an early version of a program like Emacs, today the source code for Emacs is eighty megabytes, to say nothing of the entire Linux kernel. I think it's absurd to suggest that "just looking at the source code" for a program will be educational in and of itself.

Having said that, I think free software can (and does) teach people a great deal about technology and software. People who use free software know more about technology. And it's not just because people who are given to use free software are more computer literate, but rather that using free software teaches people about technology. Arch Linux is a great example of this at a fairly high level, but I think there's a way that OpenOffice and Firefox play a similar role for a more general audience.

2. There are a number of cases around free software where freedom--despite licensing choices--can be ambiguous. In these cases, particularly, it is important to think about the economics of software, not simply the state of the "ownership" of software.

I'm thinking about situations like the "re-licensing" employed by MySQL AB/Sun/Oracle over the MySQL database. In these cases contributors assign copyright to the commercial owner of the software project on the condition that the code also be licensed under the terms of a license like the GPL. This way the owning company has the ability to sell licenses to the project under terms that would be incompatible with the GPL. This includes adding proprietary features to the open source code that don't get reincorporated into the mainline.

This "hybrid model" gives the company who owns the copyright a lot of power over the code base, that normal contributors simply don't have. While this isn't a tragedy, I think the current lack of certainty over the MySQL project should give people at least some pause before adopting this sort of business model.

While it might once have been possible to "judge a project by its license," the issue of "Software Freedom" is much more complex in today's world, and I'm pretty sure that having some sort of economic understanding of the industry is crucial to figuring this out.

3. The success of free software may not be directly connected to the size of the userbase of free software.

One place where I think Zonker's argument falls apart is the idea that free software will only be successful if the entire world is using it. Wrong.

Let's take a project like Awesome. It's a highly niche window manager for X11 that isn't part of a Desktop Environment (e.g. GNOME/KDE/XFCE), and you have to know a thing or two about scripting and programming in order to get it to be usable. If there were many more than a thousand users in the world I'd be surprised. This accounts for a minuscule amount of the desktop window management market. Despite this, I think the Awesome project is wildly successful.

So what marks a successful free software project? A product that creates value in the world, by making people's jobs easier and more efficient. A community that supports the developers and users of the software equally. Size helps for sure, particularly in that it disperses responsibility for the development of a project among a number of capable folks. However, the size of a project's userbase (or developer base) should not be the sole or even the most important quality by which we judge success.

There are other issues that are important to think about and debate in the free software world, and there are other instances where the "hard line" is painted as more radical than it is by a more moderate element. Nevertheless, I think this is a good place to stop for today, and I'm interested in getting some feedback from you all before I continue with this idea.

Onward and Upward!

[1]Richard Stallman, founder of the Free Software Foundation and the GNU Project, original author of the GNU GPL (perhaps the most prevalent free software license), as well as the ever-popular GCC and Emacs.
[2]Arguably, it's easier for software developers and hacker types like myself to use "just free software," because hackers tend to make free software to satisfy their own needs (the "scratch your own itch" phenomenon), and so there's a lot of free software that supports "working like a hacker," but less for more mainstream audiences. Indeed, one could argue that "mainstream computer-using audiences" as a class are largely the product of the proprietary software and technology industry.

Package Management and Why Your Platform Needs an App Store

When I want to install an application on a computer that I use, I open a terminal and type something to the effect of:

apt-get install rxvt-unicode

Which is a great little terminal emulator. I recommend it. Assuming I have a live internet connection, and the application I'm installing isn't too large, a minute or less later I have whatever it is I asked for, installed and ready to use (in most cases).

Indeed this is the major feature of most Linux distributions: their core technology and enterprise is to take all of the awesome software that's out there (and there's a lot of it), make it possible to install easily, figure out what it depends on, and get it to compile safely and run on a whole host of machines. Although this isn't the kind of thing most people think about when they're choosing a Linux distribution, it's one of the biggest differentiating features between distributions. But I digress.

I've written about package management here before, but to summarize:

  1. We use package managers because many programs share dependencies that we wouldn't want to install twice or three times, but that we might not want to install by default with every installation of an operating system. Making sure that everything gets installed is important (see the example just after this list). This is, I think, a fairly unique-to-open-source problem, because in the proprietary world the dependencies are installed by default (as in the more monolithic development environments, like .NET, Cocoa, and Java [1], and other older non-managed options).
  2. One of the defining characteristics of open source software is the fact that it's meant to be redistributed. Package management makes it easy to redistribute software, and provides real value for both the users of the operating system and the upstream developers. Or so I'm led to believe.
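
To make the first point concrete: on a Debian-style system (assuming the apt tools; rxvt-unicode is just the example from above), you can ask the package manager what a program depends on:

apt-cache depends rxvt-unicode

This prints the package's dependencies--shared libraries like libc6 and libx11-6--most of which dozens of other packages on the system also rely on, which is exactly why you want them installed once and tracked centrally.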

In the end, we're back at the beginning: you can install just about anything in the world if you know what the package is named, and the operating system will blithely keep everything up to date and maintained. [2]
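
On a Debian-style system, for example, that blithe maintenance boils down to two commands (run as root; other distributions have their own equivalents):

apt-get update    # refresh the database of available packages
apt-get upgrade   # bring everything already installed up to its latest version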

While GNU/Linux systems get flack for not being as usable as proprietary operating systems, I see package management as a huge "killer feature" that open source systems have on top of proprietary systems. We'll never see something like apt-get for Windows, not because it isn't good technology, but because it's impossible to manage every component of the system and all of the software with a single tool. [3]


And then all these "App Store" things started popping up.

As I've thought about it, "app stores" do the same thing for application delivery on non-GNU/* systems that package management does for open source systems. We're seeing this sort of thing for various platforms, from cell phones like the iPhone/Blackberry/Android to Intuit's QuickBooks, and even for more conventional platforms like Java.

Technically it's a bit less interesting. App stores generally just receive and distribute compiled code [4], but from a social and user-centric perspective, the app store experience is really quite similar to the package management experience.

I'm surely not the only one to make this connection, but I'd be interested to move past this and think about the kinds of technological progress that stem from it. App stores clearly provide value to users by making applications easier to find, and to developers, who can spend less time distributing their software. Are there greater advancements to be made here? Is this always going to be platform-specific, or might there be some other sort of curatorial mechanism that could add even more value in this space? And of course, how does Free Software persist and survive in this kind of environment?

I look forward to hearing from you.

[1]Let's not bicker about this, because the argument breaks down here a bit, admittedly, given that Java is now mostly open. But it wasn't designed as an open system, and all of these solve the dependency problem by--I think--providing a big "standard runtime" and statically compiling anything extra into the program's executable. Or something.
[2]Package management sometimes breaks things, it's true, but I've never really had a problem that I haven't been able to recover from in reasonably short order. I mean, things have broken, and I will be the first to admit that my systems are far from pristine, but everything works, and that's good enough for me.
[3]While it's possible to use more than one package manager at once, and there are cases even on Linux where this is common (e.g. the CPAN shell, system packages (apt/deb, yum/rpm), Ruby gems, Haskell's cabal, and so forth), it's not preferable: sometimes a package will be installed by a language-specific package manager and then the system package manager will install (over it) a newer or older version of the package, which you might not notice, or which might just cause something to act a bit funny on some systems, if you're lucky. Usually, stuff breaks.
[4]Which means, I think, that indirectly we're seeing a move toward static linking and the bundling of frameworks into a single binary or bundle. This is one of the advancements of OS X: all applications are delivered in these "bundles," which are just directories that contain everything an application needs to run. Apple addressed the dependency problem by removing all dependencies. And this works in the contemporary world, because if an app has to be a few extra megs to include its dependencies? No big deal. Same with RAM usage.
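
For illustration, a typical bundle is laid out something like this (the names here are illustrative, not taken from any particular application):

Example.app/
  Contents/
    Info.plist       # metadata describing the application
    MacOS/Example    # the executable itself
    Frameworks/      # bundled libraries, so nothing external is required
    Resources/       # icons and other assets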

The End of Reusable Software

I wrote a post for the Cyborg Institute several weeks ago about the idea of "Reusable Software," and I've thought for a while that it deserved a bit more attention. The first time around, I concentrated a lot on the idea of reusable software in the context of the kinds of computing that people like you and me do on a day-to-day basis. I was trying to think about the way we interact with computers, how this has changed in the last 30 years (or so), and how we might expect it to change soon.

Well that was the attempt at any rate. I'm not sure how close I got to that.

More recently, I've been thinking about the plight of reusable software in the context of "bigger scale" information technology. I'd say this fits into my larger series of technology futurism posts, except that this is very much a work of presentism. So be it.

To back up for a moment I think we can summarize the argument against reusable software, which boils down to a couple of points:

1. With widely reusable software, most of the people who use computers on a regular basis can pretty much avoid ever having to write software. While it's probably true most people end up doing a very small amount of programming without realizing it, gone are the days when using a computer meant that you had to know how to program it. While more people can slip into using computers than ever before, the argument is that people aren't as good at using computers because they don't know how they work as well.

Arguably this trend is one of the harbingers of the singularity, but that's an aside.

2. Widely reusable software is often not as good as the single-use, or single-purpose, stuff. When software doesn't need to be reused, it only needs to do the exact things you need it to do, and it can be optimized, tuned, and designed to fit into a single person's or organization's workflow. When developers know that they're developing a reusable application, they have to take into account possible variances in the environments where it will be deployed and a host of possible supported and unsupported uses. They have to design a feature set for a normalized population, and the end result is simply lower quality software.

So with the above rattling around in my head, I've been asking:

  • Are web applications, which are deployed centrally and often only on one machine (or a small cluster of machines), the beginning of a return to single-use applications? Particularly since the specific economic goals of the sponsoring organization/company are often quite tightly coupled with the code itself?
  • One of the leading reasons that people give for avoiding open source release is embarrassment at the code base. While many would argue that this is avoidance of one sort or another, and it might be, I think it's probably also true more often than not. I'm interested in thinking about what impact the open source movement's focus on source code has had on the development of single-use code versus multi-use code in the larger scope.
  • What do we see people doing with web application frameworks in terms of code reuse? For starters, the frameworks themselves are all about code reuse and about providing some basic tools to keep developers from reinventing the wheel over and over again. But then, the applications are (within some basic limitations) wildly different from each other and highly un-reusable.

Having said that, Rails/Django/Drupal sites suffer from poor performance in particularly high-volume situations for two reasons: first, it's possible to strangle yourself with database queries in the attempt to do something that you'd never do if you had to write the queries yourself; second, the frameworks are optimized to save developers time rather than to run blindingly fast on very little memory.

I suppose if I had the answers I wouldn't be writing this here blog, but I think the questions are more interesting anyways, and besides, I bet you all know what I think about this stuff. Do be in touch with your questions and answers.

Onwards and Upwards!

The Tiling Window Manager Story

As I said in "The Odd Cyborg Out," I'm thinking of giving StumpWM a run. So I did some musing about tiling window managers, because I am who I am. Here goes,

So, like I said, I've been tinkering a little with StumpWM, and I thought some background might be useful. For those of you who aren't familiar, StumpWM is another tiling window manager, like my old standard Awesome, except Stump is written in Common Lisp and descends from different origins than Awesome. Here's the history as I understand it.

The History of Tiling Window Managers

There was (and is) this very minimalist tiling window manager called dwm, which is written in less than 2000 lines of code and is only configurable by modifying the original C code and then recompiling. It's intentionally elitist and targeted at a very high level of user. While this is ok, particularly given the niche of users likely to want tiling window managers, there were a lot of people who wanted very different things from dwm. In a familiar story for those of us who follow free software and open source development, lots of people started maintaining and sharing patch-sets for dwm. These added functionality like easier configuration tools, integration with menus, notification libraries, theming support, and API hooks; and the rest is history.

Fast-forwarding a bit, these patch-sets inspired a number of forks, clones, and child projects. dwm was great (so I hear) if you were into it, but I think the consensus is that even if you were geeky/dweeby enough for it, it required a lot of attention and work to get it to be really usable in a day-to-day sort of way. As a result we see things like Awesome, which began life as a fork of dwm with some configuration options and has grown into its own project "in the tradition of dwm." dwm is also a leading inspiration for projects like Xmonad, which is a re-implementation of dwm in the Haskell programming language with some added features around extension and configuration options.

This default configuration problem is something of an issue in the tiling window manager space, one that I might need to return to in a later post. In any case...

Stump, by contrast, has nothing (really) to do with dwm, except that they take a similar approach to the "window management" problem, which is to say that window behavior in both is highly structured and efficient. They tile windows to use the whole screen and focus on a highly keyboard-driven user experience. Stump, like Xmonad, is designed to use one language for the core program, its configuration, and the extension of the environment.

And, as I touched on in my last post on the subject, I'm kind of enamored with Lisp, and it clicks in my head. I don't think that I "chose wrong" with regards to Awesome, or that I've wasted a bunch of time with Awesome. Frankly, I think I'm pretty likely to remain involved with the project, but I think I'm a very different computer user--Cyborg--today than I was back then, and one of the things that I've discovered since I started using Awesome has been emacs and Lisp.

My History with Awesome

Let's talk a little bit more about Awesome, though. Awesome is the thing that set me along the path to being a full-time GNU/Linux user. I found the tiling window manager paradigm to be the perfect thing to let me concentrate on the parts of my projects that are important and not get hung up on the distractions of organizing windows and all of the "mouse stuff" that took too much of my brain time. I started playing around in a VM on my old Macbook and I found that I just got things accomplished there somehow. And the more I played with things the more I got into it, and the rest is history.

When I finally gave up the mac, however, I realized that my flirtation with vim wasn't going to cut it, and I sort of fell down the emacs rabbit hole, which makes sense--in retrospect--given my temperament and the kind of work that I do, but nonetheless here I am. While Awesome is something that I'm comfortable with and that has served me quite well, there are a number of inspirations for my switch. Some of them have to do with Awesome itself, but most of them have to do with me:

  • I want to learn Common Lisp. While I know that Emacs Lisp and Common Lisp aren't the same, there are similarities, and Lua was something that I've put up with and avoided a lot while using Awesome. It's not that Lua is hard, quite the opposite; it's just that I don't have much use for it in any other context, and while I know enough to make Awesome really work for me, my configuration is incredibly boring.

    Not that I think Common Lisp is exactly the kind of thing that's going to be incredibly useful to me in my career in the future, but like I said: I like the way Lisp makes me think, it's a language that can be used for production-grade things, it's a standard, it's not explained from a math-centric [1] perspective, and, like I said, reading Lisp code makes sense to me. Go figure.

  • There are several quirks with Awesome which get to me:

  • If you change your configuration, you have to restart the window manager. Which wouldn't be a big problem except...

  • When you restart, if you have a window that appears in more than one tag, the window only appears on one tag.

  • The commands for Awesome are by default pretty "vimmy," and while my current config has been properly "emacsified," you have to do a lot of ugliness to get the emacs-style chords (e.g. "C-x C-r o a f", or Control-x, Control-r, followed by o, a, and f) that I kind of like.

  • Because one of my primary environments is running a virtual machine (in VirtualBox) on an OS X host, I've run into some problems around using the Command/Windows/Mod4 key, and there's no really good way to get around this in Awesome.

So that's my beef, along with the change in usage pattern that I talked about last time, which is probably the biggest single factor. I'm not terribly familiar with Stump yet, so I don't have a lot to offer in terms of thoughts, but I've been tinkering on the laptop, and it fits my brain, which is rather nice. I'll post more as I progress. For now I think I'd better cut this off.

[1]This is my major problem with Haskell. It looks awesome, I sort of understand it when people talk about it, but every "here's how to use Haskell" guide I read is full of what I think are "simple" math examples of how it works, and I have a hard time tracking the math in the examples, so I have a hard time grasping the code and programming lessons, because the examples are too hard for me. This is the problem of having geeked out on 20th-century continental philosophy in college rather than math/programming, I think.

microsoft reconsidered

I've been thinking about Microsoft recently, and thinking about how the trajectory of Microsoft fits in with the trajectory of information technology in general.

A lot of people in the free software world are very anti-Microsoft, given some of the egregious anti-competitive activities they've engaged in, and the general crappiness of their software. And while I agree that MS is no great gift to computing, it's always seemed to me that they're a johnny-come-lately to the non-free software world (comparatively speaking, AT&T and the telecom industry have done way more to limit and obstruct software and digital freedom than Microsoft, I'm thinking). But this is an awkward argument, because there's no real lost love between me and Microsoft, and to be honest my disagreement with Microsoft is mostly technological: Microsoft technology presents a poor solution to technical problems. But I digress.

One thing that I think is difficult to convey when talking about Microsoft is that "The Microsoft We See" is not "The Core Business of Microsoft," which is to say the lion's share of Microsoft's business is in licensing things like Exchange servers (their email and groupware stack) to big organizations, and then there's the whole ASP.NET+SQL-Server stack which a lot of technology is built upon. And Microsoft structures licensing in ways that are absurd to those of us who don't live in that world. A dinky instance (ten users?) of Windows Server+Exchange for a small corporation easily starts at a grand (per year? bi-annually?) and goes up from there depending on the size of the user base. I would, by contrast, be surprised if Microsoft saw more than 50 or 60 dollars per desktop installation of Windows that consumers buy. [1] And I suspect a given installation of Windows lasts three to five years.

I don't think it's going to happen tomorrow or even next year, but I think netbooks--and the fact that Microsoft won't put anything other than XP on them--the continued development of Linux on embedded devices, and the growing market share of Apple in the laptop market (along with the slow death of the desktop computing market as we know it) all serve to make any attention we give to the market share of Windows on the desktop increasingly less worthwhile. This isn't to say that I think people will flock in great numbers to other platforms, but...

I think what's happening, with the emergence of all these web-based technologies, with Mono, with Flash/Flex/Silverlight/Moonlight, with web apps, with Qt running cross-platform, with native GTK+ ports to Windows and OS X, is that what you run on your desktop is (and will continue to become) more and more irrelevant. There won't be a "next Microsoft," because whatever you think of the future of IT, there isn't going to be a future where quality software is more scarce, or harder to produce, than it is today.


So this brings us back to server licensing, and something that I realized only recently. In the Linux world, we buy commodity hardware, sometimes really beefy systems, and if you have a scaling problem you just set up a new server and do some sort of clustered or distributed setup, which definitely falls under the heading of "advanced sysadmining," but it's not complex. With virtualization it's even easier to fully utilize hardware and create really effective distributed environments. At the end of the day, what servers do is not particularly complex work in terms of number crunching, but it is massively parallel. And here's the catch about Windows: developers are disincentivized to run more than one server, because as soon as you do that, your costs increase disproportionately with regard to the hardware. Say the cost of a production server (hardware) is 4k and you pay 2k-3k for the software. If at some point this server isn't big enough for your needs, do you buy an almost-twice-as-good 8k dollar server with a single license, or just shell out another 6k-7k and have a second instance? Now let's multiply this by 10. Or more. (I should point out that I'm almost certainly low-balling software licensing costs.)
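
To make that arithmetic explicit (using the ballpark figures above, which are guesses rather than quotes):

scale up:  one $8k server + one $3k license   = $11k
scale out: two $4k servers + two $3k licenses = $14k

At ten instances the scale-out path costs 10 x $3k = $30k in licenses alone, before you buy any hardware, which is the disincentive in a nutshell.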

At some point you do have to cave and pay for an extra Microsoft license, but it makes a lot of sense from an operations perspective to throw money at hardware rather than distributed architectures, because not only is it quicker, but it's actually cheaper to avoid clusters.

Microsoft, the company that made its money in microcomputer software, has backed itself into the "big iron" computing business. Which is risky for them, as it would be for anyone. Sun Microsystems couldn't make it work; IBM kills in this space (and Linux mainframes are in the 50k-100k range, which doesn't look as absurd in light of the calculations above).

Anyway, this post has been all over the place, and I'm not sure I can tie it all together in a neat bow, but I think it's safe to say that we live in interesting times, and that this whole "cloud thing," combined with the rapidly falling price of very high-powered equipment, changes all of the assumptions that we've had about software for the past twenty or thirty years. For free software as well as proprietary software...

[1]There's a line in the Windows EULA that says if you don't agree with the terms and aren't going to use the Windows that comes installed on your computer, you can get a refund if you call the right people at your machine's distributor. I've heard reports of people getting ~130 USD back for this, but it's unclear how much of that goes to Microsoft, or to the support for MS products that OEMs have to provide.

open source competition

I've been flitting about the realm of political economics, technological infrastructure, and cyborg-related topics for a number of weeks, maybe months, and I haven't written very much about open source. This post is hopefully a bit of a return to that kind of topic, mostly because I've been staring at a blog post for weeks, and I finally have something that's nearly cogent to say about an article that kind of pissed me off. Here goes.

The article in question seeks to inform would-be software entrepreneurs how they ought to compete against open source software, and to my mind makes a huge mess of the whole debate. Let's do some more in-depth analysis.

"Open source is only cheap if you don't care about time," is an interesting argument that sort of addresses the constant complaint that open source is "fussy." Which it is, right? Right. One of the best open source business models is to provide services around open-source that make it less fussy. Also I think Free Software is often "a work in progress," and is thus only occasionally "fully polished," and is often best thought of as a base component that can be used to build something that's fully customized to a specific contextual set of requirements. That's part of the value and importance of free software.

I don't think we can have our cake and eat it too on this one, (the cake is a lie!) and in a lot of ways I think this is really a positive attribute of free software.

The complaints regarding open source software seem to boil down to: "open source software doesn't come with support services and installation polish." (We're working on it, and this is also a commercial opportunity to provide support around open source products in general.)

So to consolidate the argument, the author seems to suggest that: "in order to beat open source software, which sucks because it's not polished enough and doesn't have support, I'm going to write a totally different code base, that I'll then have to polish and support."

My only real response is: "Have fun with that."


Before I lay this to rest, I want to give potential "Commercial Software Vendors" (proprietary software vendors?) the following qualifications on the advice in the original article.

1. Save your users time: Sound advice. Though I think the best way to save users time is probably to integrate your product with other related tools. Make your product usable and valuable. Provide support, and take advantage of skilled interaction designers to provide intuitive interfaces. Don't, however, treat your users like idiots, or assume that because your product might have a learning curve it's flawed. The best software not only helps us solve the problems we know we have, but also solves problems we didn't know we had, and in the process creates tremendous value. Don't be afraid to innovate.

Also, save yourself time: you can create more value for your customers by not reinventing the proverbial wheel. Use open source software to bootstrap your process, and if the value you create is (as it always is) in support and polish, you can do that to open source just as well as you can to your own software.

2. Market hard: might work, but it's all hit and miss. Open source might not be able to advertise, or send people on sales calls to enterprises, but open source has communities that support it, including communities of people who are often pretty involved in IT departments. Not always, mind you, but sometimes.

If you're a "Commercial Software Vendor" you're going to have a hell of a time building a community around your product. True fact. And word of mouth, which is the most effective way to predict sales, is killer hard without a community.

4. Focus on features for people who are likely to buy your product: a great suggestion and, really, sort of the point of commercial software, as far as I can see. Custom development and consulting around open source, if you can provide it, achieves the same goal. At the same time, I think a lot of open source enterprise software exists and succeeds on the absence of licensing fees, so would-be software vendors should be really wary of thinking of the enterprise as a "cash cow," particularly in the long run.

So in summary:

  • Create value, real enduring value. Not ephemeral profitability, or in-the-moment utility.
  • Be honest about what your business/endeavor really centers on, and do that as best you can.
  • Understand the social dynamics of open source, not simply the technological constraints of the user experience.

And.... done.

on package management

I was writing my post on distribution habits and change, and I realized that some elaboration on the concept of package management was probably in order. This is that elaboration.

Most Linux--and indeed UNIX, at this point--systems have some kind of package management:

Rather than provide an operating system as one monolithic and unchanging set of files, distributions with package management provide systems with some sort of database and a common binary file format that allows users to install (and uninstall) all software in a clear/standardized/common manner. All software in a Linux system (generally) is thus covered by these package managers, which also do things like tracking the way that some packages depend on other packages and making sure that the latest versions of a package are installed.
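
In Debian-ish terms (other package managers have equivalent commands), that database is what makes questions like the following answerable:

dpkg -l rxvt-unicode     # what version, if any, is installed?
dpkg -L rxvt-unicode     # which files did this package put on my system?
dpkg -S /usr/bin/urxvt   # which package owns this file?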

The issue is that there are lots of different ways to address the above "problem space," and a lot of different goals that operating system designers have when designing package management and selecting packages. For instance: how do we integrate programs into the rest of our system? Should we err on the side of the cutting edge, or on the side of stability? Do we edit software to tailor it to our system and users, or provide more faithful copies of "upstream sources"? These are all questions that operating system/distribution/package managers must address in some way, and figuring out how a given Linux distribution deals with them is, I think, key to figuring out which system is best for you--though, to be fair, it's an incredibly hard set of questions to answer.

The thing about package management is that whatever ideologies you choose with regard to which tools you use, which packages to include, and how to maintain packages, the following is true: all software should be managed by the package management tools, without exception. Otherwise, it becomes frighteningly easy for a new version of a piece of software to "break" an old non-managed version: by overwriting or deleting old files, by loading one version of a program when you mean to load another, by making it nearly impossible to remove all remnants of the old software, or just by making it hard to know when a piece of software needs to be updated for security fixes or some such.
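
One sketch of how to honor that rule even for software you compile by hand: on Debian-style systems, the checkinstall tool wraps the conventional "make install" step and registers the result with the package database, so those files can later be upgraded or removed cleanly (assuming the software uses a conventional build system):

./configure
make
sudo checkinstall   # runs "make install" but builds and registers a package,
                    # rather than scattering unmanaged files around the system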

I hope that helps.