free network businesses

I’ve been reading the autonomo.us blog and even lurking on their email list for a while, so I’ve been thinking about “free network services,” and what it means to have services that respect users’ freedom in the way that we’ve grown to expect and demand from “conventional” software. This post explores issues of freedom in network services, business models for networked services, and some cyborg issues related to network services. A long list indeed, so let’s dive in.

I’ve been complaining on this blog about how much web applications, the web as a whole, and networked services on the whole suck. Not the concepts, exactly--those are usually fine--but they suck for productive users of computers, and for the health of the Internet that first attracted me to cyberculture lo these many years ago. I still think that this is the case, but I’ve come to understand that a lot of the reason I have heretofore been opposed to network services as a whole is that they’re sort of brazen in their disregard for users’ freedom.

This isn’t to say that services which do respect users’ freedom are--as a result--not sucky, but it’s a big step in the right direction. The barrier to free network services is generally one of business models. Non-free network services center around the provider deriving profit or benefit from collecting users’ personal information (the reason why OpenID never caught on), from running advertising alongside user-generated content (difficult, but more effective than other forms of on-line advertising because the services themselves generally provide persuasive hooks to keep users returning), or, when all else fails, from charging a fee.

So to back up for a minute, I suppose we should cover what it means to call a network service “free.” Basically, free network services are ones where users have fundamental control over their data: they can easily import and export whatever data they need from the provider’s system, and they can choose to participate in the culture of networked computing by running the software on their own computers. There are ideas about copyleft and open source with regard to the code running on networked services that are connected to these ideas of freedom, but this is more a means to an end (as all copyleft is) rather than--I should think--an end in itself.

Basically: data independence, and network federation or distribution. Which takes all of the by-now-conventional business models and tears them to bits. If users are free to move their data to another service (or to their own servers), then advertising and leveraging personal information are both out the window. Even free software advocates look at this problem and say, “we have a right to keep network services closed,” which is understandable given that there aren’t many business models in the free world. While a lot of folks in the FNS space are working to build the pillars of free network technology, I think some theoretical work on the economics is in order. So here I am. Here are the ideas:

  • The primary “business” opportunity for free network services is in systems administration and related kinds of tasks. If the software is (mostly) open source, design and implementation can’t possibly generate enough income on their own; but keeping the servers running, keeping the software up to date, and providing support to users generates real value, and is a concrete cost that users of software can identify with and justify.
  • Subscription fees are the new advertising. In a lot of ways what a particular service provides (in addition to server resources) is a particular niche community. While federation changes this dynamic somewhat, I think often people are going to be willing to pay some fee to participate in a particular community, so between entrance fees (like meta-filter) and subscription fees (like flickr) you should be able to generate a pretty good hourly rate for the work required.
  • Enterprise Services. We could probably support free network services (and the people behind them) by using those networks as advertisements for enterprise services. See a service on the Internet, and have a company deploy it for internal use on their intranet, and have the developers behind it sell support contracts.
  • Leech money from telecoms. This is my perpetual suggestion: while most of us Internet folks and network service developers may or may not be making money from our efforts in cyberspace, the telecoms are making money in cyberspace hand over fist, largely on the merits of our work. It’s not really possible to bully Ma Bell, but I think it’s a part of the equation that we should be focusing on.
  • Your Suggestion Here. The idea behind business in the free network service space is that providers are paid for the concrete value they provide, rather than for speculation on their abstract value--and as a result, we can all think openly about business models without harming the viability of any of them.

new awesome

I’ve been (slowly) upgrading to the latest version of the Awesome Window Manager. Since Awesome is a pretty new program, and there was a Debian code freeze during development for a huge chunk of the awesome3-series code, it’s been hard to install on Ubuntu. Lots of dithering about, and then compiling by hand. For the uninitiated: usually, installing new software on a Debian-based system (like Ubuntu; many GNU/Linux systems work this way) is as simple as typing a single command. This hasn’t really been the case for awesome.

In any case, with the latest release candidates for awesome 3.3 in sid (Debian unstable), I added a sid repository to my Ubuntu system, updated, installed awesome, and removed the sid repository (roughly the dance sketched below). I breathed a huge sigh of relief, and then got to setting things up again.
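For the record, that dance amounts to something like this. A hedged sketch: the mirror URL is just an example, and exactly which package apt pulls in depends on what’s sitting in sid when you run it.

    # temporarily add sid, grab awesome, then back out
    echo 'deb http://ftp.debian.org/debian sid main' | \
        sudo tee /etc/apt/sources.list.d/sid.list
    sudo apt-get update
    sudo apt-get install awesome     # pulls the 3.3 release candidate from sid
    sudo rm /etc/apt/sources.list.d/sid.list
    sudo apt-get update              # restore a pure Ubuntu package list

With that done, I have the following responses to the new awesome: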

  • I really like the fact that if you change something in your config file and it doesn’t parse, awesome loads the default config (at /etc/xdg/awesome/rc.lua) so that you don’t have to kill X11 manually and fix your config file from a virtual terminal.
  • If you’re considering awesome, and all this talk of unstable repositories scares you: the truth is that awesome is--at this point--not exactly adding new features to the core code base. There are some new features and reorganizations of the code, but the software is generally getting more and more stable. Also, the config file format has been a moving target (though it’s becoming less of one), so given that the current format is pretty stable and usable, it makes sense to “buy in” with the most current version of the configuration so you’ll have less tweaking to do down the road.
  • The new (default) config file is so much better than the old ones. I basically reimplemented my old config into the new default config and have been really happy with that. It’s short(er) and just yummy.
  • I did have some sort of perverse problem with xmodmap which I can’t really explain, but it’s solved.
  • If you use a display manager (like gdm) to manage your X sessions, I know you can just choose awesome from the default sessions list, but I’d still recommend triggering awesome from an .xinit/.Xsessions file so that you can load network managers and xmodmap before awesome starts (see the sketch after this list); that seems to work best for me.
  • I’d never used naughty, a growl-like notification system, before; now that it’s included by default I am using it, and I quite adore it.
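Here’s the sort of session file I mean. A minimal sketch, assuming your keymap tweaks live in ~/.xmodmaprc and that nm-applet is the network manager you want; substitute whatever you actually load.

    #!/bin/sh
    # ~/.xsession -- run session prep, then hand control to awesome
    xmodmap ~/.xmodmaprc        # apply keyboard remappings first
    nm-applet &                 # network manager applet in the background
    exec awesome                # replace the shell with the window manager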

More later.

why tiling window managers matter

I’ve realized, much to my chagrin, that I haven’t written a post about the Awesome Window Manager in a long time. It’s funny how window managers just fade into the background, particularly when they work well and suit your needs. Why, then, does this seem so important to me, and why am I so interested in this? Funny you should ask.

Tiling window managers aren’t going to be the Next Big Thing in computing, and if they (as a whole) have an active user-base of more than, say, 10,000 people, that would be really surprising. While I think that a lot of people would benefit and learn from using Awesome (or others), even this is something of a niche group.

As a result, I think something really interesting happens in the tiling window manager space. First, these projects are driven by a rather unique force that I’m not sure I can articulate well. It’s not a desire for profit, and it’s not some larger Utopian political goal (as with a lot of free software). This is software that is written entirely for oneself.

That’s the way most (ultimately) free software and open source projects start. A lot of emphasis in the community (and outside of it) is placed on the next stage of the process, where a project that was previously “for oneself” becomes something larger with broader appeal. Take the git version control system, which started because the kernel development team needed a new version control system, but in the past couple of years has become so much more, by way of phenomena like GitHub and flashbake. The free software and open source worlds are full of similar examples: the Linux Kernel, Drupal, b2/WordPress, Pidgin/Gaim, Asterisk, and so forth.

But Awesome and the other tiling window managers will, likely as not, never make this jump. There is no commercial competitor for these programs; they’re never going to “break through” to a larger audience. This isn’t a bad thing, but it affects how we think about some rather fundamental aspects of software and software development.

First, if developers aren’t driven by obvious “us versus them” competition, how can the software improve? And aren’t there a bunch of tiling window managers that compete with each other?

I’d argue that competition, insofar as it does occur, happens within a project and within the developer, rather than between projects and between developers. Awesome developers are driven to make Awesome more awesome, because there’s no real competition to be had with Aqua (the OS X window manager), with KWin and Metacity (the KDE and GNOME window managers, respectively), or even with alternate X11 window managers like Openbox.

Developers are driven to “do better” than the people who preceded them, better than their last attempt, better than the alternate solutions provided by the community. Also, the principle of minimalism which underpins all of these window managers--pushing toward simple, clean, and lightweight code--inspires development and change (if not growth, exactly). This seems to hold true under anecdotal observation.

While there are a number of tiling window managers in this space, I’m not sure how much they actually compete with each other. I’d love to hear what some folks who use xmonad and StumpWM have to say about this, but it’s my sense that the field of tiling window managers has more to do with other interests. Xmonad makes a point about the Haskell programming language. Stump is targeted directly toward emacs users and demonstrates that Lisp/Common Lisp is still relevant. Awesome brings the notion of a framework to window management, and seems to perfectly balance customizability with lightweight design. While Awesome is a powerful player in this space, I don’t think that there’s a lot of competition.

Second, if there’s no substantive competition in this domain, and if there’s a pretty clear “cap” on the amount of growth, how are tiling window managers not entirely pointless?

I think there are two directions to go with this. First, we’re seeing other projects--like xcb (a new library for dealing with X11) and freedesktop.org--benefit both directly and indirectly from the work being done in the tiling window manager space. Similarly, Xmonad is a great boon to the Haskell community and cause (I suspect).

The other direction follows an essay I wrote here a few months ago about the importance of thinking about the capabilities of programming languages even if you’re not a programmer, because languages--like all sorts of highly technical concepts and tools--create and constrain possibilities for all computer users, not just the people who ponder and use them. In the case of the tiling window manager, thinking about how the people who write computer programs work for themselves is similarly productive, in addition to the aforementioned thoughts about competition and open source.

So there we are. I’ll be in touch.

The Obvious and the Novel

I’ve been working a bit--rather a lot, actually--on getting myself ready to apply to graduate school (again) in a year to eighteen months, and one of the things that I’m trying to get figured out is the “why” question. Why go? Why bother? Questions like that. For starters, I hope to have some of the youthful angst regarding education knackered by the time I go back; and second, I think I’ll be able to make the most of the experience. This post speaks to one part of this challenge: what research is productive and worthwhile (that is, novel and original), and what research is, by contrast, merely an explanation of the obvious.

This is all predicated on the assumption that there’s some sort of qualitative divide between the kind of casual observation and theoretical work that I (already) do, and “real work”--work that productively contributes to a discourse. (Too young for impostor syndrome? Unlikely!) Now, this might be an ill-conceived separation, but nevertheless the thought is on my mind.

The trains of thought:

  • There’s some fundamental difference between blogging and productive “knowledge production.” Blogging is a practice that doesn’t lead to systematic investigation, and thus, while interesting and a productive tool for the development of my thinking, it’s a lousy end in and of itself.

    As I wrote the above paragraph, I remembered that it resonated with a thought I had about this website (in its previous iterations) many years ago. Interesting.

  • Fiction writing has been (and continues to be) the most satisfying outlet for this impulse that I’ve found thus far. While I do worry that my fiction isn’t novel enough, that’s a technical (e.g. plot, setting, character) issue rather than a theoretical (e.g. the science and the historiography) concern.

    Fiction writing also has a long publication cycle. My blog posts, from inception to posting, aren’t particularly time intensive. Fiction--even, or especially, short stories--requires a bunch of extra time, and being able to immerse myself in a collection of ideas for a long time has a bunch of benefits.

    Also, there’s a credential issue that I rather enjoy with regards to science fiction: there’s no degree that I could possibly want. I mean, sure, there are popular fiction writing programs, but they’re not a requirement. I suspect that I’ll (try to) go to Viable Paradise sometime in the 2010s (or Clarion, if I am somehow, ever, able to spare six grand and take six weeks off from my life), but these would just be “for me.” There’s nothing other than the quality of my work and the merit of my ideas standing between me and acceptance as a science fiction writer. That’s really comforting, somehow.

  • Most of us read literature of some sort, and talk about literary texts of one stripe or another, but I don’t think that these activities necessarily make most of us literary critics. The art and project of literary criticism is something more. The difference between reading and talking about a text and practicing literary criticism is an issue of methodology. One of the chief reasons I want to go back to school is to develop an additional methodological tool kit, because my current one is a bit lacking. I’m pretty convinced that the difference between “thinking/doing cool things” and “doing/thinking important things,” is largely an issue of methodology.

While I don’t think this would short-circuit the grad school plans, I think it’s worth working to develop some sort of more rigorous methodological companion to the blogging process--something that goes beyond the general “so folks, I was thinking about foo, so I’m going to tell you a story” (did I just give away my formula? Eep!).

Cooperatives, Competition, Openness

I’ve been thinking, in light of the Oracle purchase of Sun Microsystems, about the role of big companies in our economy, the role of competition, and what open source business models look like. This is a huge mess of intersecting trains of thought, but I have to start somewhere.

  • The Hacking Business Model isn’t so much a business model as it is an operations model for hacker-run businesses. In that light it’s a quite useful document, and it’s understandable that it mostly ignores how to obtain “revenue” (and therefore, I think, falls into the trap of assuming that new technology creates value which translates into income, when that doesn’t quite work pragmatically).

I’m interested in seeing where this kind of thing goes, particularly in the following directions:

  • Where does capital come from in these systems? For start-up costs?
  • Where and how do non-technical (administrative, management, support, business development) staff/projects fit into these sorts of systems?
  • The conventional wisdom in proprietary software (and to a lesser extent in free software) is that, in order to develop new technology and improve existing technology, code-bases need to compete with each other; I don’t really think that this is the case in open environments.

I’m not sure that the competition between Solaris, the BSDs, and Linux (augmented as they all are by GNU to various extents) pushes each UNIX/UNIX-like operating system to improve. Similarly, I don’t know that having vim and emacs around keeps pushing the development of the text-editor domain.

At the same time, competition does help regulate--after a fashion--the proprietary market. Having Oracle’s database products around helps keep Microsoft’s database products on their toes. OS X spurs development in Windows (usually). Without serious competition we get things like the ribbon interface to Microsoft Office (ugh), and telecoms.

This ties into my work and thinking on distributed version control systems: in open systems (particularly where branching is supported and encouraged), the competition can happen among a team or with one’s own history. Pitting whole code bases against each other seems not to make a great deal of economic sense.

  • I wish I had access to demographic data, but I suspect that there are few if any open source projects with development communities bigger than ~100-150 people (Dunbar’s number), and that the bigger projects (e.g. Drupal, KDE, GNOME, the Linux Kernel, Fedora, Debian) solve this by dividing into smaller working projects under a larger umbrella.

And yet, our culture supports the formation of companies that are many many times this big.

I’ve written before about the challenges of authenticity in economic activity, and I wonder if one of the chief sources of inauthenticity in having large non-cooperative institutions (companies) is the fact that we can’t remain accountable and connected to the gestalt of the most basic economic unit (the corporation).

I wonder, as we learn from free software and open practices, whether cooperative businesses are likely to become more predominant, or how else our markets and economies will change.

This brings us back to the revenue system in the hacking business model from above. In smaller operations, we can imagine that some business opportunities would be viable that wouldn’t be viable in larger operations; also, smaller co-ops can specialize more effectively. These factors combine to make competition an internal or “vertical” issue rather than an external/horizontal one, and in these situations generating revenue becomes easier.

Thoughts?

More to come?

martian economics

I’ve been reading--and by god I hope by the time I post this, I’m done reading--Kim Stanley Robinson’s Mars Trilogy. I read (parts of) these once before, but I was busy adjusting to college at the time and didn’t retain a great deal from that experience. In any case, there’s a lot in these stories to pick apart and absorb.

And I enjoy that. I really like science fiction that both tells a good story and contributes to some sort of intellectual conversation that’s bigger than itself. Surely all literature has some theoretical conception of itself, but work that unabashedly tussles with relevant knowledge is particularly powerful.

Hell, at one point a character in Blue Mars meditates on Deleuzian philosophy. My heart goes pitter-patter at the sight of people who are willing to meditate on Deleuze and do a good job of it. (Ironically, or perhaps not, I think a lot of academics don’t quite know what to do with Deleuze.) Anyway…

One of the things that I’ve really enjoyed thinking about while reading Green and Blue Mars is that Robinson does a lot of economic theorizing and imagination. I find this an interesting playground as a lesson from fiction, and also as a productive consideration of the issues I began to talk about in my essay on co-ops, competition, and openness.

So read the books, particularly if you haven’t, or if you’re interested in thinking about economic systems and potentials--the current economy is… boggling.

Robinson posits a (Martian) system where land is collectively owned, where projects (research, farming, construction) are undertaken by ~100-person co-ops that workers have to buy into (with money earned during internships), with everything overseen by a judicial system that makes judgments mostly with regard to environmental impact.

My father, upon reading this, made the very apt judgment that the key here is that--on Mars--there’s no countryside, and that farming is attached to the cities (because of the atmosphere issue). While this is a vast oversimplification--of course--he’s right: new-age hacker-type economic models need to consider “industries” like materials engineering and food production more than they currently do.

We have a lot of thinking to do.

lessons from fiction

In the last several days, I’ve spent a lot of time writing and working on this new novel that seems to be capturing too much of my attention. It’s a nifty story--definitely the best piece of fiction that I’ve written thus far--despite all my worry, dread, and seemingly limitless self-doubt in relation to the project. Despite the gremlins on my shoulder saying “why aren’t you working on short fiction; why aren’t your characters having more sex; do you really think you can float such a disjointed/complex narrative; do you have a clue where this is going? …and so forth,” I’ve learned a few rather interesting things from this story this past week.

It’s a time travel story, stupid!

Yeah, I’m well into the 7th chapter (of about 12?) and I finally figured out that I was, at its core, telling a time travel story. No, it’s not a case of getting several tens of thousands of words on paper and realizing that you’re writing the wrong story; rather, I’ve always thought of it as a quirky space opera, and just this week I realized that what makes it quirky is that it’s fundamentally a time travel story.

Right.

My goal in this project was to write about history, and how “history” emerges from “a collection of things that happened” into something more coherent and recognizable as such.

In a weird way, my fiction (since I started writing again in early 2007, at least) has always addressed the issues at the very kernel of my academic/scholarly interests. I’m interested in how communities form, and how people negotiate individual identities amongst groups of people. Open source software, cyberculture in general, and hackers are one way of looking at this that is very much at the center of how I’m approaching these questions. Queerness is another. Same kernel.

In any case, history--however defined or used--is a key part of this community-identity-individual loop. Can you participate in the emacs/emacs-lisp community without knowing about the history of the XEmacs fork? Linux without knowing a little about the early days with Minix and UNIX? Git without knowing a little about CVS and the BitKeeper story? If you can, not for very long. There are more mainstream cultural examples as well: Americans and the Great Depression (particularly Roosevelt’s fireside chats, say)? Queers and Stonewall? Etc.

This stuff is, to my mind, an incredibly important factor in “who we are” and how we all exist in our communities and the world at large. And because I’m who I am, I’m writing a story about this.

The science-fictional effect at play is relativity: lacking fantastic superluminal (FTL, faster-than-light) space drives, our characters must endure some pretty intense time dilation during transit. It takes them t weeks to get from planet A to planet B, but meanwhile it’s t years later on both of the planets, which more or less share a common timeline.
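For reference, the bookkeeping here is just the textbook time dilation formula (nothing exotic about my version of it):

    \Delta t_{\text{planet}} = \gamma \, \Delta\tau_{\text{ship}},
    \qquad
    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

Turning shipboard weeks into planetside years means γ ≈ 52, which works out to v ≈ 0.9998c--blisteringly fast, but still sub-light.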

Now, I don’t do the math rigorously enough for it to work out at sub-light speeds (exigencies of plot; interstellar spaces are really big), but the time dilation is a huge feature of the story and of (many) main characters’ places in the world, particularly in contrast to each other.

And thus, in a manner of speaking, it’s a time travel story--albeit one where the time travel is one-way (future-bound), linear, based on Einsteinian principles, and commonplace.

And it took me half the book or more to recognize the story as such, which will--if nothing else--allow me to explain the story a bit better.

In the Future, Project Xanadu Worked

I’ll probably touch on this later, but I realized (and this might not be particularly unique to my story) that my characters were interacting with “the database” of their world: an internet-like system, only more structured, more distributed, easier to search, and easier to operate locally.

Which was basically Project Xanadu, on an interstellar scale. The features that my characters take advantage of:

  • Distribution and federation of copies: I have ansible technology in the story, but even so, given the trajectories of data storage technology, it makes more sense to store local copies of “the database” than it does to route requests to the source of the data, or even to your nearest peer, for records. Assume massive storage capability, advanced rsync (a contemporary tool for syncing huge blobs of data across a network; see the sketch after this list), and periods of, potentially, years when various ships, outposts, and systems would be out of contact with each other. Nah, store stuff locally.
  • Versioning: Having a data store that stores data along a temporal axis (versions across time) is handy when you’re working on your computer and you accidentally delete something you didn’t mean to. It’s absolutely essential if you have lots of nodes that aren’t always in constant contact: it means you don’t lose data after merges, and it solves some concurrency problems. Interstellar data would require this.
  • Structure: The contemporary world wide web (The Web) is able to function without any real structure, because we’ve imported data visualization from (more) analog formats (pamphlet layout/design; pages; index-like links, desktop metaphors), and we’ve developed some effective ad-hoc organizations (google, tags, microformats) which help ameliorate the disorganization, but the truth is that the web--as a data organization and storage tool--is a mess. My shtick about curation addresses this concern in one way. Creating a “new web” that had very strict page-structure requirements would be another. In the novel, their database grew out of the second option.
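The contemporary analogue of that first feature, for the curious, looks something like this. A hedged sketch: the host and paths are made up, but the flags are standard rsync.

    # mirror a huge blob of data locally, transferring only the deltas
    # since the last sync; resumable if the link drops mid-transfer
    rsync -az --partial --delete \
        mirror.example.org::database/ /srv/local-database/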

The future is here folks, but you knew that.

jaunty upgrade

So I’ve upgraded to the latest version of Ubuntu, 9.04 “Jaunty Jackalope,” and I thought I’d post some thoughts on the matter.

On my desktop, the upgrade (i.e., sudo apt-get dist-upgrade) was a bit touch and go, but I managed to save things (booting the old kernel into a root shell to fix the upgrade, which mysteriously held back some packages it shouldn’t have; roughly the sketch below), and here I am. The laptop was easier to upgrade, and I suspect this has something to do with the blob video card drivers I’m using on the desktop.
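For anyone in a similar bind, the rescue amounted to something like the following. A hedged sketch--your mileage will vary with what, exactly, got held back:

    # from a root shell under the old kernel:
    apt-get update
    dpkg --configure -a        # finish any half-configured packages
    apt-get -f install         # repair broken dependencies
    apt-get dist-upgrade       # retry the held-back upgrades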

On the whole, I can’t say I’ve toyed with the updates to GNOME terribly much, so I don’t know what to say, but I suspect that they are in fact newer, better, and worthwhile if you’re using intrepid and/or interested in trying out Ubuntu. It’s really a great OS, and it does a great job--in my experience--of just working with minimal fussing.

I’m not sure that I’d choose an Ubuntu distribution again knowing what I know today. At the same time, I don’t know that I’d know as much as I do about Linuxes today without it, and given that this still works, I’m not switching.

My jaunty upgrade, however, inspired a few changes to my setup, and I’m quite happy with those improvements. They are:

  • I switched to using rxvt-unicode as my terminal emulator (I had been using gnome-terminal). I really like it, because the terminal is low-resource (and can run daemonized, so you can have a lot of windows open; see the sketch after this list). It’s hard as hell to set up (in my experience), but you can learn from my .Xdefaults file if you’re interested.
  • I started (finally) using gnome-color-chooser and gtk-chtheme (debian package names) to remove gnome-settings-daemon from my list of background programs, while still having windows that don’t look like 1992.
  • I stopped using vimperator in firefox, opting instead for firemacs (to control keybindings) and LoL for keyboard navigation (hit-a-hint links). Having access to the Awesome bar is a good thing indeed.
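The daemonized setup I mention is worth spelling out. A minimal sketch--the flags are standard urxvt, though where you launch the daemon (an .xsession file, say) is up to you:

    # start the urxvt daemon once per session
    urxvtd -q -f -o     # quiet, fork to background, exit when the X server does
    # after that, every terminal window is just a lightweight client
    urxvtc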

Still on the list of things to update?

  • I need to upgrade to the latest awesome version, as I’m behind.
  • I need to actually ditch gdm, which irritates me still.