dweebishness of linux users

I ran across this smear piece about Ubuntu users, written from the perspective of a seasoned Linux user. I think it resonates both with the problem of treating your users like idiots and, in a different way, with the kerfuffle over Ubuntu One, though this post is a direct sequel to neither.

The article in question makes the (sort of) critique that a little bit of knowledge is a terrible thing, and that by making Linux/Unix open to a less technical swath of users, the quality of the discourse around the Linux world has taken a nosedive. It’s a “grumble grumble, get off my lawn, kid” sort of argument, and while the elitist approach is off-putting (though totally par for the course in hacker communities), I think the post does resonate with a couple of very real phenomena:

1. Ubuntu has led the way for Linux to become a viable option for advanced-beginner and intermediate computer users, particularly since the beginning of 2008 (eg. the 8.04 release). Ubuntu just works, and a lot of people who know their way around a keyboard and a mouse can be, and are, comfortable using Linux for most of their computing tasks. This necessarily changes the makeup of the typical “Linux User” quite a bit, and I think welcoming these people into the fold can be a challenge, particularly for the more advanced users who have come to expect something very different from the “Linux Community.”

2. This is mostly Microsoft’s fault, but for people who started using computers--likely Windows-powered ones--in the nineties (which is a huge portion of people out there), being an “intermediate” user means a much different kind of understanding than the one “old school” Linux users have.

Using a Windows machine effectively revolves around knowing what controls are where in the Control Panel, being able to “guess” where various settings live within applications, knowing how to keep track of windows that aren’t visible, understanding the hierarchy of the file system, and knowing to reboot early and often. By contrast, using a Linux machine effectively revolves around understanding the user/group/file permissions system, understanding the architecture of the system/desktop stack, knowing your way around a command-line window and the package manager, and knowing how to edit configuration files when needed.

In short, skills aren’t as transferable between operating systems as they may have once been.

Ubuntu, for all its flaws (a tenuous relationship with the Debian Project, a peculiar release cycle), seems to know what it takes to make a system usable with very little upfront cost: how the installer needs to work, how to provide and organize the graphical configuration tools, and how to provide a base installation that is familiar and functional to a broad swath of potential users.

While this does change the dynamic of the community, it’s also the only way that Linux on the desktop is going to grow. The transition from Windows power user to Linux user is not a direct one (while, arguably, the transition from OS X to Linux is reasonably straightforward). The new people who come to the Linux desktop are, by and large, going to be users who are quite different from the folks who have historically used Linux.

At the same time, one of the magical things about free software is that the very act of using it educates users about how their software works and how their machines work. This is partly intentional, partly by virtue of the fact that much free software is designed to be used by the people who wrote it, and partly because of free software’s adoptive home on UNIX-like systems. Regardless of the reason, however, we can expect even the most “n00bish” of users to eventually become more skilled and knowledgeable.


Having said that, in direct response to the article in question: even though I’m a huge devotee of a “real” text editor, might it be the case that the era of the “do everything text editor” is coming to an end? My thought is not that emacs and vi are no longer applicable, but that building specialized, domain-specific editing applications is now easy enough that building such applications inside of vi/emacs doesn’t make the same sort of sense it made twenty or thirty years ago. Sure, a class of programmers will probably always use emacs, or something like it, but the prospect of emacs being supplanted by things-that-aren’t-editors isn’t too difficult to imagine.

If the singularity doesn’t come first, that is.

canonical freedom and ubuntu one

Recently, Canonical Ltd., the company which sponsors the Ubuntu family of GNU/Linux distributions, announced the UbuntuOne service, which is at its core a service that allows users to synchronize files between multiple Ubuntu-based machines. Having your files sync between multiple machines is a huge feature, and the truth is that there aren’t really good solutions that accomplish this task, for any operating system. At the same time there’s been a lot of hubbub in the community over this release. It’s complicated, but the core complaints are:

  1. UbuntuOne is a non-free project, in that the software powering the service (on the servers) is not being distributed (in source or binary form) to the users of the service. While the client is being open sourced, the server component is crucially important to users' autonomy.
  2. Ubuntu, if we are to believe what Canonical says, is the name of a community-developed Linux distribution based on Debian. Canonical is a for-profit organization, and it is using the Ubuntu name (a trademark it owns) for a non-free software project.
  3. Canonical has also gone back on a promise to release the software that powers LaunchPad under the AGPL. While this isn’t directly related to the flap surrounding Ubuntu One, it allows us to (potentially) contextualize the ongoing actions of Canonical with regards to network services.

My response comes in three parts.

Part One, the Technology

File syncing services are technologically pretty simple, and easy to create for yourself. I use ssh and git to synchronize all of my files, data, and settings between machines. I keep the sync manual, but I could automate it pretty easily. It’s not free--I pay a small monthly fee for some space on a server--but it works, and I have total control over the process.
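For the curious, here’s a minimal sketch of what that kind of git-based sync can look like, written in Python around the git command line. The repository path and the “origin” remote are hypothetical placeholders, and this is an illustration of the approach rather than my exact setup:

```python
#!/usr/bin/env python
# Sketch: sync a working directory through a bare git repository on a remote
# server. "/home/me/sync" and the "origin" remote are hypothetical; this
# illustrates the approach rather than any particular setup.
import subprocess

REPO_DIR = "/home/me/sync"  # hypothetical local working copy

def run(*cmd):
    """Run a git command in the sync directory, failing loudly on error."""
    subprocess.check_call(cmd, cwd=REPO_DIR)

def sync():
    run("git", "add", "-A")  # stage everything that changed locally
    # Only commit if something actually changed.
    status = subprocess.check_output(["git", "status", "--porcelain"], cwd=REPO_DIR)
    if status.strip():
        run("git", "commit", "-m", "sync")
    run("git", "pull", "--rebase", "origin", "master")  # fold in other machines' changes
    run("git", "push", "origin", "master")              # publish local changes

if __name__ == "__main__":
    sync()
```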

Granted, my solution is a bit technical, requires some babying along, and works because 95% of my files are text files. If I had more binary files that I needed to sync, I’d probably use something like rsync, which is a great tool for keeping large groups of files synchronized.

In fact, rsync is so good that you can probably bet UbuntuOne is using rsync or some rsync variant (because it’s GNU GPL software, and it’s good). If you’re running OS X, or any GNU/Linux-based operating system, chances are you’ve already got rsync installed. Pulling together something to keep your files synced between more than one machine just requires a few pieces:

  • something that runs on your computer in the background and keeps track of when files change, so that it can send the changes to the server. Alternatively, this component can just run on a timer and send changes every so often (every five minutes, say, if the computer isn’t idle).
  • something that runs on the server and can send changes to other computers when those computers ask (“has anything changed?”).

Done. I’m no programmer--as I’m quick to attest--but I think that I could probably (with some help) pull together a tutorial on how to get this working in a few hours.
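To give a sense of how small the client half of this really is, here’s a naive sketch in Python that just shells out to rsync on a timer. The server address and directories are made up, and a real service would need proper change detection and conflict handling, but the core loop is about this big:

```python
#!/usr/bin/env python
# Sketch of a naive sync client: every five minutes, pull down changes from a
# server over ssh, then push local changes back up. The host and directories
# are hypothetical placeholders.
import subprocess
import time

LOCAL_DIR = "/home/me/Documents/"           # trailing slash: sync contents, not the dir itself
REMOTE_DIR = "me@server.example.com:sync/"  # any ssh-accessible host works
INTERVAL = 300                              # seconds between sync passes

def rsync(src, dest):
    # -a preserves metadata, -z compresses, -u skips files that are newer on
    # the receiving side (a crude stand-in for real conflict handling)
    subprocess.check_call(["rsync", "-azu", src, dest])

while True:
    rsync(REMOTE_DIR, LOCAL_DIR)  # bring down changes made on other machines
    rsync(LOCAL_DIR, REMOTE_DIR)  # send local changes up to the server
    time.sleep(INTERVAL)
```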

Part Two, Trademarks, Centralization and Community

I think a lot of people feel betrayed by the blurring of this “thing” that a community has built (Ubuntu) with Canonical Ltd.

Which is totally reasonable, but this is largely orthogonal to the problem with UbuntuOne, and I think it’s a much larger problem within the free software/open source/crowdsourcing world. This is one of the problems with entrusting trademarks and copyrights to single entities. In a very real way, Canonical--by using UbuntuOne--is trading on the social capital of the Ubuntu community, and that leaves a sour taste in a lot of people’s mouths.

But ceding control over a name or a product to a centralized group is something we have a lot of experience with, with varying results. Some thoughts and examples:

Here’s one example: there’s a huge “open source” community that has built up around the commercial/proprietary text editor TextMate for OS X. While I think TextMate is really great software, and the TextMate community is made up of really great people, TextMate is largely valuable because of the value created by the community, and it exists (tenuously) on the good graces of the owner of the TextMate intellectual property. While Alan is a great guy, for whom I have a great deal of respect, if anything were to happen to TextMate a lot of people would find that they had nothing to show for their energy and efforts in the TextMate community.

Similarly, MySQL AB (and later Sun Microsystems, and now Oracle) owns the entire copyright to the MySQL database, which wasn’t a major issue for most developers in the early days, but the sale of that company (and its copyright holdings) now puts the development of that code base into some doubt. I’ve seen much greater buzz around the PostgreSQL project as a result, and I think this kind of fallout serves as a good example of what can happen to a community when the centralized body fails to act in the community’s interest, or even threatens to.

This is a huge issue in the whole “web 2.0”/mashup/social networking/social media space. The logic for the proprietors of these sites and services is: build something, attract users, create a REST API that makes it easy for people to develop applications that add value to the service, attract more users, stomp out competition in the space, profit. This is basically the Twitter/Facebook/Ning business model, and while it works to some degree, it’s all built upon stable APIs and the enduring good will of the community toward the proprietors of the service. Both are difficult to maintain; from what I’ve seen, the business model isn’t very coherent, and it requires the proprietors to balance their own self-interest, their community’s interests, and some way to profit on top of an unstable foundation. It’s tough.

Part Three, Business and Free Network Services.

I’ve been treading over ideas related to free network businesses, cooperatives, and software freedom for weeks now, but I swear it all fell into my lap here. Some basic thoughts, as a conclusion to this already-too-lengthy essay:

  • The UbuntuOne service, like most free network services, is at its core providing a systems administration service rather than some sort of software product. The software is relatively trivial compared to making sure the servers are running, accessible, and secure.
  • The way to offer users autonomy is to develop easy, free systems administration tools, and to educate users on how to run these systems.
  • Corporations, while important contributors to the free software community, also inevitably serve their own interests. While it’s disappointing to see Canonical go down the proprietary track, it’s neither surprising nor a betrayal. Canonical has put on a good show and accomplished a great deal, but in retrospect we can imagine a number of things it could have done differently from way back that would have changed the current situation (eg. working within the Debian Project, developing a tighter business model, etc.).
  • Free software is very pro-business, but it’s not very pro-big-business, as “native free software business models” are built on personal reputations rather than tangible products. That translates to making an honest living pretty well, but it doesn’t convert very well into making a lot of money quickly.

Anyway, I better get going. Food for thought.

adventures in systems administration

I’m beginning to write this in the evening after a long day of system administration work. For my birthday (though, really, it’s been on my todo list for a long time), I ordered and set up a server from these fine folks to serve as the technological hub of my activities. While I’ve been quite fond of those fine folks for quite a long time, there was a growing list of things that I always wished worked better with Dreamhost, and it was finally starting to limit the kinds of projects I could undertake. So I bit the bullet, ordered the server, and spent some time getting everything straightened out.

For some background: the server is just an instance running inside of a Xen hypervisor, which runs other servers together on the same hardware. This is good: I couldn’t really use a server that powerful all by my lonesome (and I wouldn’t want to have to pay for it all either). It’s also way more powerful than what I had before, and the virtualization allows me to act with impunity, because it’s as if I’m running my own server, really. I’ve been doing computer administration and web development for a long time, but I’ve never had to do anything like this, so it’s been an experience to learn how DNS records really work, how all the different kinds of server applications work, and how cool the Apache web server really is. It’s a great learning experience, and I think it would be prudent (and potentially helpful for you) to reflect on it. So here are my notes on the adventure:

  • I’ve been running Ubuntu on my personal/desktop machines since the great Linux switch, and I’ve been pretty pleased with it. Not totally wowed by it: it works, but my tendencies are toward more minimal/lightweight systems. More than anything, though, I’m drawn to systems that just work rather than systems that work perfectly, and I’m pretty good at keeping systems working. In five years of OS X usage I installed the OS twice, and since I got things stable and running, the only installations I’ve done have been to put Ubuntu on new machines.

    In any case, this server was a chance for me to really explore Debian stable (lenny), which I hadn’t ever done before. It’s so cool. It’s not sexy or daring or anything, but in a server you don’t want that; it just works. It probably also helps that lenny was released only a few months ago, rather than nearly two years ago, but in any case I’m quite enamored of how well it works.

  • Email is much more complicated than I think any of us really give it credit for. There’s all sorts of complicated mess around how DNS records identify mail servers to help fight spam, then there’s all the filtering and sorting business, and even the “modern” email servers are a bit long in the tooth. I hope it’s a “set it and forget it” sort of thing, though to be truthful I’ve only just gotten it running and set up initially; there’s a lot of further setup to do before I move it all over.

  • I’m pretty proud of the fact that, as I was going through the setup process, I got to the point where it said “ok, now set up the FTP server,” and I said “meh” and moved on. Turns out that I can do everything I need to do in terms of getting files onto the server with git/scp/ssh/rsync, and FTP is just lame and old. Welcome to the new world, where file transfers are shorter, versioned, and more secure.

    This isn’t particularly new--I couldn’t tell you the last time I used FTP--but I think it represents both the utility of moving to a “real server” and a larger shift in the way we think about webservers. FTP assumes that the main purpose of the webserver is to store and serve files. The ssh/rsync/git model assumes that your webserver exists to be “your computer in the sky.” Which it is. We interact with the computers on our desks in really complex ways; there’s no reason to interact with our computers in the sky by just copying files to and from them.

  • I’m convinced that systems-administration work will increasingly be the “hard currency” (basis for exchange) of the networked computing age. It’s sort of onerous work: it requires skills and knowledge that most people who need network services don’t have and don’t need to have, there are actual costs, the need is ongoing, and success is quantifiable.

    There’s definitely space here for people (like me, and others) to provide these kinds of services to people. Sort of “boutique” style. Clearly I have more to learn and more thinking to do on the subject, but it’s a start.

  • Ejabberd is peculiar, and the Debian package is… less than ideal. I knew going in that there was a “web administration” interface, which sounds cool until you realize that it’s not an administration panel so much as a sort of “web dashboard.” You still have to tweak the configuration file, which is written in Erlang, and wow. That’s pain right there.

Having said that, it seems to work just fine without much fussing, and I only want the jabber server to do a very limited set of things: host my own IM account and transports, host muc-chats (created by me), and that’s about it. I’m a bit worried that it might be a bit too heavyweight for this.


That’s about all. More to come, I’m sure.

free project xanadu

It’s my hope that this post will combine the following ideas:

1. The concept of “General Information,” as posited in Samuel Delany’s 1984 novel Stars in My Pocket Like Grains of Sand.

2. The hypertext system, Project Xanadu, as described by Theodor Holm Nelson in his book Literary Machines (and elsewhere) which I’ve discussed on this blog recently.

3. The contemporary idea of distributed network services, as described in the Franklin Street Statement and enacted by technologies like git, xmpp, laconi.ca, open microblogging, and others.


We value the Internet--really the “web”--as it is today because it’s diverse and flexible. Web pages can look like anything and can do virtually anything, from presenting the text of a book or newspaper to fulfilling most of your desktop computing needs. What’s more, all of this is indexed and made accessible with Google search. That’s pretty cool.

While the web’s ad-hoc and disorganized structure has made many things possible, there’s no reason to assume that the future development of the web will continue in the current direction. Microformats, the proliferation of RSS in “Web 2.0,” and human-generated portals like Mahalo (or Google Knol, and even various Wikimedia Foundation projects) all seem to point to a larger trend toward more structured, hand-curated information.

As an aside, I think it’s interesting that hand-curation means more human involvement in information networks, while structured data means less human involvement in those same networks.

I should also clarify that by “more structured” I basically mean an end to web design as we know it now. Rather than allowing designers--and, well, people like me--to have a say in how pages are organized, information would be collected in containers with specific structures (headings, lists, tables, metadata, etc.), and the design or display would happen on the client side in the form of specialized browsers: site-specific browsers, but also domain-specific browsers (eg. use this program to view blags and microblog pages, this program for reading pages from the news services, and this program to view some new class of sites). In short, adding structure to content wouldn’t limit the realm of possibility, but it would separate content from presentation.
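To make that concrete, here’s a toy sketch of the sort of thing I mean: the “container” is just structured data, and the presentation lives entirely in whichever domain-specific browser happens to render it. All of the field names and rendering choices below are invented for illustration:

```python
# A toy "structured container" for a blog post: pure content and metadata, no
# markup or layout. Every field name here is invented for the example.
post = {
    "type": "blog-post",
    "title": "free project xanadu",
    "author": "someone",
    "tags": ["hypertext", "xanadu", "federation"],
    "sections": [
        {"heading": "background", "paragraphs": ["Some introductory text."]},
        {"heading": "the idea", "paragraphs": ["The argument itself.", "A second paragraph."]},
    ],
}

def render_plain(doc):
    """One possible client-side 'design': a bare-bones text rendering.
    A different domain-specific browser could present the same container
    completely differently without ever touching the content."""
    lines = [doc["title"].upper(), "by " + doc["author"], ""]
    for section in doc["sections"]:
        lines.append("## " + section["heading"])
        lines.extend(section["paragraphs"])
        lines.append("")
    return "\n".join(lines)

print(render_plain(post))
```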

Structure is one part of the Xanadu model of hypertext/content, and perhaps the part most lamented by those of us who are… professionally frustrated by the lack of structure in the contemporary web, but I think its distribution and federation concepts are too often overlooked, and they are quickly becoming relevant to contemporary technology.

Federation, to put a subtitle on it, is the set of technologies that allow network services to function without an always-on, real-time network. Federation also avoids two other technical problems with distributed network services: first, it removes the need for centralized servers that provide canonical versions of content; second, in a distributed environment, it removes the need for local nodes to contain complete copies of the entire network. Xanadu had provisions for the first aspect but not the second, while the Internet (more or less) has provisions for the second but not the first, and free network services--in some senses--attempt to bring the first form of federation to the web and to the Internet.

Federation, for free network services, means finding ways of communicating data between websites so that networks of information can be built in the same way that networks of computers have already been built.


In Stars in My Pocket Like Grains of Sand, Delany’s “Internet” is a service called “General Information,” or GI, which exists in a neural link for some of the characters. GI isn’t always complete or accessible in its most up-to-date form--and its users know this and accept it as a price of living in an interstellar society--but it is accessible on an interstellar level. GI, like free network services, is built (either implicitly or explicitly) with the notion that a node on the network can go offline, continue to develop and be useful, and then go back online later and “sync” with its peer nodes, thus creating some measure of resilience in the network.

The contemporary network uses a resilient routing system to “get around” nodes that drop offline, whereas a truly federated system would store diffs across time and use this “temporal” information to maintain a consistent network. This sort of consistency is going to be really useful--not only because it would allow individuals and small groups to provide their own networked computing services locally, but also because data connectivity that is free, always accessible, fault tolerant, and high speed is unlikely to appear universally… ever, and certainly not for a long time.
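As a hand-wavy sketch of what “storing diffs across time” might look like: each node keeps an append-only, timestamped log of its changes, and when it comes back online it asks its peers for everything it missed. The classes and names below are invented for illustration; a real system would also need conflict resolution and identity:

```python
import time

class Node:
    """A toy federated node: it records changes in a timestamped, append-only
    log and can catch up from a peer after being offline. Purely illustrative."""

    def __init__(self, name):
        self.name = name
        self.log = []        # list of (timestamp, change) tuples
        self.last_seen = {}  # peer name -> timestamp of the last sync with it

    def record(self, change):
        self.log.append((time.time(), change))

    def changes_since(self, timestamp):
        return [entry for entry in self.log if entry[0] > timestamp]

    def sync_from(self, peer):
        # Ask the peer only for what happened since we last talked to it.
        since = self.last_seen.get(peer.name, 0.0)
        self.log.extend(peer.changes_since(since))
        self.last_seen[peer.name] = time.time()

# node_b goes "offline", node_a keeps working, then node_b catches up later.
node_a, node_b = Node("a"), Node("b")
node_a.record("edit page 1")
node_a.record("edit page 2")
node_b.sync_from(node_a)
print(len(node_b.log))  # 2: node_b now has the history it missed
```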


I suppose the next step in this train of thought is to include some discussion of my friend joe’s project called “haven,” which would tie this to the discussions I’ve been having with regards to databases. But that’s a problem for another time.

database market

This post is the spiritual sequel to my (slight) diatribe against database-powered websites of a few weeks ago, and a continuation of my thoughts regarding the acquisition of Sun Microsystems by Oracle. Just to add a quick subtitle: Oracle is a huge vendor of database software, and about 18 months ago (or so) Sun acquired MySQL, the largest and most successful open-source competitor to Oracle’s products.

With all this swirling around in my head I’ve been thinking about the future of database technology. Like ya do…

For many years, 15 at least, relational database systems (RDBMSes) have ruled without much opposition. This is where Oracle has succeeded, MySQL is an example of this kind of system, and on the whole they accomplish what they set out to do very well.

The issue, and this is what I touched on the last time around, is that these kinds of systems don’t “bend” well, which is to say, if you have a system that needs flexibility, or that is storing a lot of dissimilar sorts of data, the relational database model stops making a lot of sense. Relational databases are big collections of connected tabular data and unless the data is regular and easily tabulated… it’s a big mess.

So we’re starting to see things like CouchDB, Google’s BigTable, Etoile’s CoreObject, and MonetDB that manage data in a much more flexible and potentially multi-dimensional way, which is good when you need to merge dissimilar kinds of data.
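To make the contrast concrete for the database-curious, here’s a small, contrived sketch using Python’s built-in sqlite3 module. A rigid table works fine until records stop sharing a shape; stuffing dissimilar documents into a single flexible column (JSON blobs, in this case) starts to look like what CouchDB and friends do natively. The schema and data are invented for the example:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")

# The relational approach: every row has to fit the same columns.
conn.execute("CREATE TABLE people (name TEXT, email TEXT)")
conn.execute("INSERT INTO people VALUES (?, ?)", ("alice", "alice@example.com"))
# A record with extra, unanticipated fields (a jabber ID, a list of machines)
# has nowhere to go without altering the schema for everyone.

# The document-ish approach: one flexible blob per record, schema optional.
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
records = [
    {"name": "alice", "email": "alice@example.com"},
    {"name": "bob", "jabber": "bob@example.org", "machines": ["laptop", "server"]},
]
for record in records:
    conn.execute("INSERT INTO docs (body) VALUES (?)", (json.dumps(record),))

# Each document keeps only the fields it actually has.
for (body,) in conn.execute("SELECT body FROM docs"):
    print(json.loads(body))
```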

So I can tell the winds are blowing in a new direction, but this is very much outside the boundaries of my area of expertise or familiarity. This leads me to two obvious conclusions:

1. For people in the know: what’s happening with database engines, and with the software that’s built on top of these database systems? I suspect there’s always going to be a certain measure of legacy data around, and developers who are used to developing against RDBMSes aren’t going to let go of that easily.

At the same time, there’s a lot of rumbling that suggests that something new is going to happen. Does anyone have a sense of where that’s going?

2. For people I lost when I said the word “database”: in a lot of ways, I think this has a huge impact on how we use computers and what technology is able to do in the near term. Computers are really powerful today. In the nineties the revolution in computing was that hardware became vastly more powerful than it had been before; in the aughts it became cheaper. In the teens--I’d wager--it’ll become more useful, and the evolution of database systems is an incredibly huge part of this next phase of development.

new awesome

I’ve been (slowly) upgrading to the latest version of the Awesome Window Manager. Since Awesome is a pretty new program, and there was a Debian code freeze during development of a huge chunk of the awesome3-series code… it’s been hard to install on Ubuntu. Lots of dithering about, and then compiling by hand. For the uninitiated, usually installing new software on a Debian-based system (like Ubuntu; many GNU/Linux systems are this way) is as simple as typing a single command. This hasn’t really been the case for awesome.

In any case, with the latest release candidates for awesome 3.3 in sid (Debian unstable), I added a sid repository to my Ubuntu system, updated, installed awesome, and removed the sid repository. I breathed a huge sigh of relief, and then got to setting things up again. I have the following responses to the new awesome:

  • I really like the fact that if you change something in your config file and it doesn’t parse, awesome loads the default config (at /etc/xdg/awesome/rc.lua) so that you don’t have to kill X11 manually and fix your config file from a virtual terminal.
  • If you’re considering awesome, and all this talk of unstable repositories scares you, the truth is that awesome is--at this point--not exactly adding new features to the core code base. There are some new features and reorganizations of the code, but the software is generally getting more and more stable. Also, the config file has been a moving target (and is becoming less of one), so given that it’s now pretty stable and usable, it makes sense to “buy in” with the most current version of the configuration so you’ll have less tweaking to do in general.
  • The new (default) config file is so much better than the old ones. I basically reimplemented my old config into the new default config and have been really happy with that. It’s short(er) and just yummy.
  • I did have some sort of perverse problem with xmodmap which I can’t really explain, but it’s solved.
  • If you use a display manager (like gdm) to manage your X sessions, I know you can just choose awesome from the default sessions list, but I’d still recommend triggering awesome from an .xinit/.Xsessions file so that you can load network managers and xmodmap before awesome loads. That seems to work best for me.
  • I’d never used naughty, a growl-like notification system, before; now that it’s included by default I am using it, and I quite adore it.

More later.

why tiling window managers matter

I’ve realized, much to my chagrin, that I haven’t written a post about the Awesome Window Manager in a long time. It’s funny how window managers just fade into the background, particularly when they work well and suit your needs. Why, then, does this seem so important to me, and why am I so interested in it? Funny you should ask.

Tiling window managers aren’t going to be the Next Big Thing in computing, and if they (as a whole) have an active user base of more than, say, 10,000 people, that would be really surprising. While I think that a lot of people would benefit and learn from using Awesome (or others), even that is something of a niche group.

As a result, I think something really interesting happens in the tiling window manager space. First, these projects are driven by a rather unique force that I’m not sure I can articulate well. It’s not a desire for profit, and it’s not some larger utopian political goal (as a lot of free software is). This is software that is written entirely for oneself.

That’s the way most (ultimately) free software and open source projects start. A lot of emphasis in the community (and outside it) is placed on the next stage of progress, where a project that was previously “for oneself” becomes something larger with broader appeal. Take the git version control system, which started because the kernel development team needed a new version control system but in the past couple of years has become so much more, by way of phenomena like GitHub and flashbake. The free software and open source worlds are full of similar examples: the Linux kernel, Drupal, b2/WordPress, Pidgin/Gaim, Asterisk, and so forth.

But Awesome and the other tiling window managers will, likely as not, never make this jump. There is no commercial competitor for these programs, and they’re never going to “break through” to a larger audience. This isn’t a bad thing; it just affects how we think about some rather fundamental aspects of software and software development.

First, if developers aren’t driven by obvious “us versus them” competition, how can the software improve? And aren’t there a bunch of tiling window managers that compete with each other?

I’d argue that competition, insofar as it does occur, happens within a project and within the developer, rather than between projects and between developers. Awesome developers are driven to make Awesome more awesome, because there’s no real competition to be had with Aqua (OS X’s window manager), with Kwin and Metacity (KDE’s and GNOME’s window managers), or even with alternate X11 window managers like OpenBox.

Developers are driven to “do better”: better than the people who preceded them, better than their last attempt, better than the alternate solutions provided by the community. Also, the principle of minimalism that underpins all of these window managers, pushing toward simple, clean, and lightweight code, inspires development and change (if not growth, exactly). This seems to hold true under anecdotal observation.

While there are a number of tiling window managers in this space, I’m not sure how much they actually compete with each other. I’d love to hear what some folks who use xmonad and StumpWM have to say about this, but it’s my sense that the field of tiling window managers has more to do with other interests. Xmonad makes a point about the Haskell programming language. Stump is targeted directly toward emacs users and demonstrates that Lisp/Common Lisp is still relevant. Awesome brings the notion of a framework to window management, and seems to perfectly balance customizability with lightweight design. While Awesome is a powerful player in this space, I don’t think that there’s a lot of competition.

Second, if there’s no substantive competition in this domain, and if there’s a pretty clear “cap” to the amount of growth, how are tiling window managers *not* entirely pointless?

I think there are two directions to go with this. First, we’re seeing some of the benefits of these window managers in other projects: xcb (a new library for dealing with X11) and freedesktop benefit, both directly and indirectly, from the work being done in the tiling window manager space. Similarly, Xmonad is a great boon to the Haskell community and cause (I suspect).

The other direction follows an essay I wrote here a few months ago about the importance of thinking about the capabilities of programming languages even if you’re not a programmer, because languages, like all sorts of highly technical concepts and tools, create and constrain possibilities for all computer users, not just the people who ponder and use them. In the case of tiling window managers, thinking about how the people who write computer programs work is productive, in addition to the aforementioned thoughts about competition and open source.

So there we are. I’ll be in touch.

Cooperatives, Competition, Openness

I’ve been thinking, in light of the Oracle purchase of Sun Microsystems, about the role of big companies in our economy, the role of competition, and what open source business models look like. This is a huge mess of thoughts and threads, but I have to start somewhere.

  • The Hacking Business Model isn’t so much a business model as it is an operations model for hacker-run businesses. In that light it’s a quite useful document, and it’s understandable that it mostly ignores how to obtain “revenue” (and therefore, I think, falls into the trap of assuming that new technology creates value which translates into income, when that doesn’t quite work out pragmatically).

I’m interested in seeing where this kind of thing goes, particularly in the following directions:

  • Where does capital come from in these systems? For start-up costs?
  • Where and how do non-technical (administrative, management, support, business development) staff/projects fit into these sorts of systems?
  • The conventional wisdom in proprietary software (and to a lesser extent in free software) is that, in order to develop new technology and improve existing technology, code bases need to compete with each other; I don’t really think that this is the case in open environments.

I’m not sure that the competition between Solaris, the BSDs, and Linux (augmented as they all are by GNU to various extents) pushes each UNIX/UNIX-like operating system to improve. Similarly, I don’t know that having vim and emacs around keeps pushing the development of the text-editor domain.

At the same time, competition does help regulate--after a fashion--the proprietary market. Having Oracle’s database products around helps keep Microsoft’s database products on their toes. OS X spurs development in Windows (usually). Without serious competition we get things like the ribbon interface to Microsoft Office (ugh), and telecoms.

This ties into my work and thinking on distributed version control systems, but I think in open systems (particularly where branching is supported and encouraged), the competition can happen within a team or against one’s own history. Pitting code bases against each other seems not to make a great deal of economic sense.

  • I wish I had access to demographic data, but I suspect that there are few if any open source projects with development communities bigger than ~100-150 people (Dunbar’s number), and the bigger projects (eg. Drupal, KDE, GNOME, the Linux kernel, Fedora, Debian) solve this by dividing into smaller working projects under a larger umbrella.

And yet, our culture supports the formation of companies that are many many times this big.

I’ve written before about the challenges of authenticity in economic activity, and I wonder if large non-cooperative institutions (companies) are one of the chief sources of inauthenticity, in that we can’t remain accountable and connected to the gestalt of the most basic economic unit (the corporation).

I wonder whether, as we learn from free software and open practices, cooperative-based businesses are likely to become more predominant, or how else our markets and economies will change.

This brings us back to the revenue question in the hacking business model from above. In smaller operations, we can imagine that some business opportunities would be viable that wouldn’t be viable in larger operations, and smaller co-ops can specialize more effectively. These factors combine to suggest that competition becomes an internal or “vertical” issue rather than an external/horizontal one, and in these situations generating revenue becomes easier.

Thoughts?

More to come?