Doing Wikis Right

Wiki started as this weird idea that seemed to work against all odds. And then it seemed to really work. And now wiki is just a way to make and host a website without taking full responsibility for the generation of all the content. To say wiki is to say "collaboration" and "distributed authorship," in some vague handwaving way.

But clearly, getting a wiki "right" is more difficult than just throwing up a wiki engine and saying "have at it, people." Wikis need a lot of stewardship and care that I think people don't realize off the bat. Even wikis that seem to be organic and loosey-goosey.

I have this wiki project, the Cyborg Institute Wiki, that I've put some time into, but not, you know, a huge amount of time, particularly recently. Edits have been good, when they've happened. But all additions have come from people I have asked specifically for their contributions. I don't think this is a bad thing, but this experience does run counter to the "throw up a wiki and people will edit it" mindset.

I've started (or restarted?) [a wiki that I set up for the OuterAlliance][oa-wik]. You can find out more about the OA there (as it gets added) or on the OuterAlliance Blog. Basically, O.A. is a group of Science Fiction writers, editors, and critics (and agents? do we have agents?) who are interested in promoting the presentation and visibility of positive queer characters and themes in science fiction (literature). [1]

In any case, the group needed a wiki, and unlike the C2 Wiki, the people who are likely to contribute to this wiki are probably not hackers in the conventional sense. As I've sort of taken this wiki project upon myself, I've been trying to think of ways to ensure success in wikis.


Ideas, thoroughly untested:

Invite people to contribute at every opportunity, but not simply by saying "please add your thoughts here." Rather, write in a way that leaves spaces for other people to interject ideas and thoughts.

Create stubs and pages where people can interject their own thoughts, but "red links" (or preceding question marks in my preferred wiki engine) are just as effective as stubs in many cases. The thing is that wikis require a lot of hands-on attention. While stubs don't require a lot of attention and maintenance, they require some. My favored approach recently is to make new pages when the content in the current page grows too unwieldy, and to resist the urge to make new pages except when that happens.

Reduce hierarchy in page organization unless it's totally needed. You don't want potential collaborators to have to think very much about "where should I put this thing?" The more hierarchy there is, the greater the chance that they'll either have to think about it or fail to find a place for their contribution and then not contribute at all. This is undesirable.

Hierarchy is problematic for most organizational systems, but in most wiki systems, it is really easy and attractive to divide things into lots of layers of hierarchy because it makes sense at the time. The truth is, however, that this almost never makes sense a couple of weeks or months down the road. Some hierarchy makes sense, but it'll take you hundreds of thousands of words to really need 3 layers of hierarchy in most wikis.

Leaders and instigators of wiki projects should also know that creating and having a successful wiki represents the output of a huge amount of effort. There's a lot of figuring out what people mean to say and making sure that their words actually convey that. There's a lot of making sure people's comments really do belong on the page where they put them. And more often than not, leaders put in the effort to write a huge amount of "seed" content as an example to future contributors. It's not a bad gig, but it's also not the kind of thing you can just sit back and let happen.


Other thoughts? Onward and Upward!

[1]It's an awesome group, with a useful and powerful mission, and I think the OA has learned a lot from, and is well connected to, some of the anti-racism activity that's been lingering in science fiction over the last year to eighteen months as a result of the "RaceFail" hubbub of a year ago. The fact that there's this kind of activity in and around science fiction is one of the reasons that I love being a part of this genre.

Notifications and Computing Contexts

Maybe it's just me, but I think notifications of events on our computers suck. On OS X there's Growl, GNU/Linux desktops have the libnotify stuff, and I'm sure there's something on Windows, but I don't think this really addresses the problem at hand. Not that the software doesn't work, because it mostly does what it says it's going to do. The issue, I think, is that we need, or will very shortly need, much more from a notification system than anything around can handle.

Let's back up.

I don't know about you, but there are a lot of events and messages that I get or need to get, including: new mail, some instant messages, mentions of certain words on IRC (perhaps only in certain channels), notifications of when a collaborator has pushed something to a git repository, updates to certain RSS feeds, notifications of the completion of certain long-running commands (file copies, data transfers, etc.), and so forth. I think everyone likely has their own list of "things it would be nice if their computer could tell them about."

The existing notification systems provide a framework that enables locally running applications to present little messages in a consistent and unified manner. This is great. The issue is that for most of us, the things that we need to be notified of aren't locally running. At least in my case, instant messaging, IRC, git, and the key RSS feeds that I want to follow aren't "locally running applications." And to further complicate matters, no matter how you slice things, I use more than one computer, and in an ideal world it would be nice for one machine to know what notifications I'd seen on another computer when I sat down. In other words, my personal notification system should retain memory of what it's shown me and what I've acknowledged across a number of machines.

That doesn't happen. At least not today.

I have a few ideas about the implementation that I will probably cobble together into another post, and I'd love to hear some feedback if any of you have addressed this problem and have solutions.
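To make that concrete, here's a minimal sketch of the kind of thing I mean: a notification record that carries its own identity and acknowledgement state, so that two machines can reconcile what's already been seen. The names and fields here are hypothetical illustrations, not working software or any existing library.

    # A sketch only: a notification that knows enough about itself to be
    # compared across machines, plus a merge step so an acknowledgement on
    # one computer is honored everywhere. All names here are made up.
    from dataclasses import dataclass, field
    import time

    @dataclass
    class Notification:
        uid: str                  # globally unique id, so machines can compare notes
        source: str               # "irc", "git", "rss", "long-running-job", ...
        message: str
        created: float = field(default_factory=time.time)
        acknowledged: bool = False

    def merge_acknowledgements(local, remote):
        """Mark a local notification as acknowledged if any other machine
        has already acknowledged the same uid."""
        acked_elsewhere = {n.uid for n in remote if n.acknowledged}
        for n in local:
            if n.uid in acked_elsewhere:
                n.acknowledged = True
        return local

    # Example: the desktop learns that the laptop already saw this event.
    here = [Notification("git-42", "git", "collaborator pushed to origin/master")]
    laptop = [Notification("git-42", "git", "collaborator pushed to origin/master",
                           acknowledged=True)]
    merge_acknowledgements(here, laptop)
    print(here[0].acknowledged)  # True

The record itself is the easy part; the interesting problem is the transport: where that shared state lives, and how the machines exchange it.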

It strikes me that there are two larger themes at work here:

1. Personal computing events occur locally and remotely, and notification systems need to be able to seamlessly provide both kinds of notifications. While I think a lot of the hype around cloud computing is--frankly--absurd, it is fair to say that our computing is incredibly networked.

2. People don't have "a computer" any more, but rather several: phones, desktops, "cloud services," virtual private servers, and so forth. While we use these systems differently, and our own personal "setups" are often unique, we need to move between these setups with ease.

These two shifts, networked computing and multiple computing contexts, affect more than just the manner in which we receive notifications. Really, I think, they outline the ways our use of computing has changed in the past few years. There are a lot of buzzwords around this shift, in the web application and cloud computing space particularly, and I don't think that the "hipster"/"buzzword" experience is widely generalizable. It's my hope that these conclusions are more widely applicable and useful: both for the development of the notification system that we need, and for thinking about application development in the future.

Like I said above, I'd love to hear your thoughts on this subject, and perhaps we can work on collecting thoughts on the Cyborg Institute Wiki. Take care!

Window Sizes in Tiling Window Managers

There's an issue in tiling window managers that I think a lot of folks who are used to floating window managers never expect. I wrote a post to the Awesome listserv a while back explaining this to someone, and it seems to have struck a chord (I saw the post linked to last week). I thought I'd write a brief post here to explain what's going on in a clearer and more general way.

The Problem

When tiled, windows don't seem to take up all the space that's available to them. This creates weird "gaps" between windows. But only some windows: Firefox is immune to this problem, while terminal emulators like xterm and urxvt, along with gVim and emacs, get all funky.

What's Happening

The applications that are affected by this draw their windows based upon a number of fixed-width columns. We'll note that terminal emulators, as well as GUI versions of programmer's text editors like vim and emacs, all use fixed-width fonts and often allow you to set window sizes based on the number of columns (of characters).

As a result, these applications are only able to use space on the screen in increments of full characters. Most of the time, in floating window managers, we never really notice this limitation.

In tiling window managers you do notice, because the window manager forces the windows to use all available space, except that in some windows it leaves these weird gaps at the bottom and right of the window. Sometimes the gaps end up in the window, as unusable buffers, and sometimes they end up between windows. It looks funny, pretty much no matter how you slice it.
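To make the arithmetic concrete, here's a quick back-of-the-envelope illustration; the character cell and tile sizes are made up for the example, not measured from any particular font or screen.

    # Whatever doesn't divide evenly into whole character cells becomes the gap.
    cell_w, cell_h = 9, 18      # pixel size of one character cell (font dependent)
    tile_w, tile_h = 724, 412   # space the window manager hands the terminal

    cols = tile_w // cell_w     # 80 columns -> 720 px actually used
    rows = tile_h // cell_h     # 22 rows    -> 396 px actually used

    gap_w = tile_w - cols * cell_w   # 4 px left over horizontally
    gap_h = tile_h - rows * cell_h   # 16 px left over vertically
    print(cols, rows, gap_w, gap_h)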

What You Can Do About It

The truth? Not much.

The Awesome Window Manager, by default, shows the gaps between the windows. I always found this to be the "more ugly" option. You can alter this behavior by searching your configuration file for size_hints_honor and making the line look like this:

c.size_hints_honor = false

This tells Awesome to ignore windows (clients) when they say "I want to have these dimensions." It doesn't fix the problem, but it does get rid of the gaps.

The real solution is to tweak text sizes, fonts, and any buffering elements (like a status bar, mode line, or widget box), and window borders so that the windows aren't left with extra space that they don't know how to cope with.

By "real solution," I really mean "only option": it's nearly impossible to get all of your fixed-width applications to have exactly the right number of pixels. You can get close in a lot of situations, and I've always found this to be much less annoying than using floating window managers.

The Original Post

Just for giggles, I've included a quoted portion of what I originally posted to the listserv on the topic.

The one bit of information that might be important: the urxvt terminal emulator, when not "honoring size hints," is unable to properly draw the "extra space" with the proper background. I suspect this is a bug with the pseudo-transparency system it uses. As a result there are often a few pixels with the background in an inverted color scheme. Same problem as above, but it looks funny if you're not used to it.

What's happening is that urxvt (like many terminal emulators) can only draw windows of some specific sizes based on the size of the characters (e.g. x number of rows and y number of columns). So while you may have a larger space available, the equivalent of, say, 80.4x20.1, urxvt can't do anything with this extra space.

If you honor size hints, the windows will end wherever they can, and use as much space as they can, but leave gaps between windows if the total space isn't properly divisible. If you don't honor size hints, the windows themselves take up the extra room (but they can't do anything with the extra room, so they just leave it blank, and sometimes the transparency is a bit wonky in those "buffers").

So there you have it. I hope this helps!

Building the Argument

I was talking to my grandmother (Hi!) last week, as I do most weeks, and we discussed the blog. She's been a regular reader of the site for many years, and lately, we've enjoyed digging a little deeper into some of the things that I've written about here. She said, thinking I believe of the Owning Bits posts, and I agree, that it sort of seemed that I was building something... more.

But of course.

I don't know that I've connected all of the dots, either in my head or on the blog, but I think that the things I've been blogging about for the last year or so are all connected, interwoven, and illuminate incredibly interesting features of each other when considered as a whole. There is "something building" here. To recap, so that we're on the same page, the nexus of subjects that I've been mulling over are:

  • Free Software and Open Source Software Development.

I'm interested in how communities form around these projects, and how work is accomplished, both technically and organizationally. I'm interested in how innovation happens or is stifled, and in how the communities are maintained, started, and led. From a social and economic perspective there's something fundamentally unique happening in this domain, and I'd like to learn a lot more about what those things are.

This topic and area of thought have taken a backseat to other questions more recently, but I think it's fundamentally the core question that I'm trying to address at the moment. I think that I'm going to be making a larger point of addressing open source methodologies in the coming weeks and months as part of an attempt to pull things back together. I think.

I started writing about the IT industry because I found it really difficult to think about Free Software without really knowing about the context of free software. One really needs to understand the entire ecosystem in order to make sense of what open source means (and doesn't mean), particularly in this day and age. Initially I was particularly interested in the Oracle/Sun merger and the flap around the ownership of MySQL; but since then, I think I've branched out a little bit more.

I've tried very hard to not frame the discussion about the IT industry and open source as a "community" versus "enterprise" discussion, or as being "free" versus "non-free," or worse "free" versus "commercial." These are unhelpful lenses, as Free Software and Open Source are incredibly commercial, and incredibly enterprise-centric phenomena, once you get past the initial "what do you mean there's no cost or company behind this thing."

In the same way that thinking about the IT industry provides much-needed context for properly understanding why "open source communities" exist and persist, so does thinking about how we actually use technology, how we relate to techno-social phenomena, and how these relationships, interfaces, and work-flows are changing: both in response to technology and by changing the technology itself. It's all important, and I think the very small observations are as useful as the very large observations.

In some respects, certainly insofar as I've formulated the Cyborg Institute, the "cyborg" moment can really be seen as the framing domain, but that doesn't strike me as a distinction that is particularly worth making.

Interestingly, my discussion of cooperatives and corporate organization began as a "pro-queer rejection of gay marriage," but I've used it as an opportunity to think about the health care issue and as a starting point in my thinking regarding EconomyFail-2008/09. The economics of open source and Free Software are fascinating, very real, and quite important, and I found myself saying about six months ago that I wished I knew more about economics. Economics was one of those overly quantitative things in college that I just totally avoided because I was a hippie (basically).

While it could be that my roots are showing, more recently I've come to believe that it's really difficult to understand any social or political phenomenon without thinking about the underlying economics. While clearly I have opinions, and I'm not a consummate economic social scientist, I do think that thinking about the economics of a situation is incredibly important.

I've been blogging for a long time. And I'm a writer. And I want to write and publish fiction as a part of my "career," such as it is. As you might imagine, these factors make me incredibly interested in the future of publishing of "content," and of the entire nexus of issues that relate to the notion of "new media."

Creative Commons shows us that there has been some crossover between ideas that originated in the "open source" world and "content" (writ large). The future of publishing and media is a cyborg issue, an ultimately techno-social phenomenon, and thinking about the technology that underpins the new media is really important. And of course, understanding the economic context of the industry that's built around content is crucial.

So what's this all building to? Should I write some sort of monograph on the subject? Is there anyone out there who might want to fund a grad student to do research on these subjects in a few years?

The problem with my work here so far--to my mind--is that while I'm pretty interested in the analysis that I've been able to construct, I'm not terribly satisfied with my background, and with the way that I've been largely unable to cite the intellectual heritage of my ideas and thoughts. I never studied this stuff in school, though I have a number of books of criticism, potentially relevant philosophy, and important books in anthropology (which seems to fit my interests and perspectives pretty well). I'm pretty good at figuring things out, but I'm acutely aware that my work lacks references, methodology, and structure, as well as any sort of empirical practice.

So maybe that's my project for the next year, or the next few months at any rate: increase rigor, read more, consider new texts, pay more attention to citations, and develop some system for doing more empirical work.

We'll see how this goes. I'd certainly appreciate feedback here. Thanks!

technology task list

Though I've gotten away from it a bit in recent months, tychoish has a long history of being an outlet for lists of various things. While I'm not sure I want to post all of my lists for everyone to point and laugh at, the following might be worth exploring.

This is a list of things I need to get worked out with my new computer, and with technology in general. I post it both because I need an excuse to do a little brainstorming, and because it might be nice to get a little feedback from you all. Without further ado:

  • Get USB Mounting/Auto-mounting to work more smoothly.

I use USB mass storage devices so rarely that I'm totally oblivious as to how I should go about setting this up with Arch Linux.

  • Reformat and server-ify my desktop.

Since I'm basically not using my desktop as a desktop anymore... and there are some things that just don't work... and there are no files left on it that I don't have backed up elsewhere... I think it's time to do a system wipe. I want to put Arch on it. I had thought about putting Xen on it and using virtual machines, but I'm now in a place where the increased management burden of that would outweigh the benefits. So I think I'm just going to set it up like a server (but I suppose setting up a lightweight desktop wouldn't be a big stress). Mostly I think having a server at home will be useful for testing, development, and other such projects. In any case, it's not terribly useful as it is.

  • Reorganize my music collection (now on laptop).

I copied over my music collection and while I've had a bunch of luck with mpd, I need to spend some time reorganizing the music. It's on the list, and I shall do it.

  • Straighten out the situation with my external hard drive.

Yeah, no clue here. I hope it's alright. I'm going to try and use the Mac at work to see if I can't make it work better. I may crack the enclosure and put it in my desktop once that's in better working order.

  • Acquire accessories:

There's stuff I've had on my shopping list for a while. In no particular order:

  • A more suitable laptop sleeve.

As it turns out, I have this backpack that's great for lugging stuff around, but it's bigger than I need most of the time, and the laptop padding could handle my 15.4" PowerBook back in the day. My current laptop is quite small, so it's sort of overkill. This is lower priority.

  • Additional power adapters.

The battery on this puppy is amazing. Having said that, it's nice to have a power adapter that can just live in my bag so I don't have to fuss with repacking the power adapter every time I leave somewhere. I think one at my desk at work, one for home, and one for my bag is my usual complement, and Lenovo power adapters are a lot cheaper than Mac ones...

  • Wireless access point for home.

Somehow I don't have one. Oversight. Must procure soon. The thing is that I have an ancient 100-foot Ethernet cable that seems to do the job pretty well.

  • Sort out Sleep/Wake Cycle

I think I mostly have this one sorted out. Basically, I had problems with the new laptop freezing when waking up from suspend/sleep when the network state upon return was different than when the laptop went into suspend. A little tweak to the ACPI event script, and everything seems to be in order.

  • Write Network Management Triggers

I'm using the preferred network manager suite for Arch Linux (e.g. "netcfg") and it works great, except it's sometimes a bit bothersome to manage things when I think it ought to just work. So I think I have a solution: create shortcuts and triggers in the window manager to get network stuff working a bit more smoothly. Now I just have to make it work.

  • Tinker with StumpWM contrib packages

Once I got StumpWM working and set up, I mostly abandoned it. There are all sorts of cool Lisp things in the contrib/ directory that I haven't tinkered with. Well, except for mpd.lisp, and even then not terribly much. I think I'd get something out of playing with those, and so it's on the list.

  • Figure out what to do with the x41.

I'm not sure. The old laptop works, and I feel like I should do something with it... But what?

The Web Application Layer

This post is an attempt to ask "what next?" in the world of contemporary application development. I'm disturbed by the way applications are conveyed in this format. This is not news to regular readers, but rather than complain extensively about the state of contemporary technology, I think it would be more productive to muse on possible improvements and some of the underlying structural concerns in this space.

In No Particular Order...

Today, we routinely design and implement user interfaces in HTML and JavaScript. I'm not convinced that HTML, or any XML-based format, is really all that good for conveying well-formatted structured text, much less pixel-perfect graphic design and application interfaces.

Lightweight text markups like Markdown, reStructuredText (for all its warts), and Textile are human readable, provide structure, and convey text well. Furthermore, it's very possible to efficiently translate them into very high quality output formats, including XML formats and LaTeX.

One of the driving forces behind the convergence on "web technologies" is that JavaScript/HTML/CSS are all thought to be "cross-platform" technologies. It doesn't matter if you're running a Mac, a PC, or a UNIX system: if it has a web browser, it'll run there. The web application movement realizes the "write once, run everywhere" notion that Sun attempted with Java in the 90s (except that Java never really worked for that). Except that every browser implements JavaScript/HTML/CSS in a different way, which means that it's really "write once and tweak it to death so that IE/Firefox/WebKit don't break." There are some things (like jQuery and HTML5) that make this better, but the browser market is dirty and browser makers will never be incentivized to comply with the standards. [1]

RESTful APIs [2] are, I think, leading to more desktop applications. Or at least making them more possible. It used to be very much the case that if you wanted web-connected data you had to go to a website. Now, if you want data from the Internet, in most cases it can be gotten in an easy-to-process format (i.e. JSON or YAML) and then folded into a desktop application. In addition to "rounded corner power" and "social media," the biggest impact of "Web 2.0" has been the increasing awareness of, and interest in, APIs[^quality] among the general public.
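As a rough sketch of the pattern (the endpoint below is a placeholder, not any particular service's API), pulling JSON over HTTP and folding it into a local program takes only a few lines:

    # Hypothetical example: fetch structured data from a web service's API
    # rather than scraping its pages; any JSON-over-HTTP endpoint works the same.
    import json
    from urllib.request import urlopen

    API_URL = "https://example.com/api/statuses.json"   # placeholder URL

    def fetch_statuses(url=API_URL):
        with urlopen(url) as response:
            return json.load(response)    # parsed into plain Python lists/dicts

    # A desktop client can then render or notify with the data however it
    # likes, leaving the presentation "heavy lifting" off the server:
    # for status in fetch_statuses():
    #     print(status["user"], status["text"])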

Adobe AIR is a wonderful idea. Even smaller lightweight devices like smartphones and netbooks are so powerful that it doesn't make a lot of sense to have them operate as such "dumb" clients. Conventional web development has developers cobble together server applications that put together content and then chuck it off to the client for rendering. With APIs (see above) it doesn't make a lot of sense to leave all the heavy lifting on the server. Adobe understood this with AIR. The problem with AIR? It's wildly proprietary, applications look out of place on every platform, and performance is miserable relative to "real" native applications. It's a great idea, and I'm terribly interested to see what comes next in this space.

I'm grumpy about HTML 5 because I remain unconvinced that web standards are really a viable way of regulating sane design and development practices. It also seems too likely that HTML 5 solves the problems we were having 2 years ago, rather than the problems we'll have over the next 10 years. Also, I think this world needs a hell of a lot less XML, in any form.

What are your thoughts?

[^quality]: I'm not sure there is any singular aspect of the whole "Web 2.0 thing" that is unequivocally bad or good. I think on the whole web design is better now than it used to be, but "rounded corner power" at this point all looks the same, and it's really difficult to achieve in a clean technological sort of way. And the web has always been social; so while that's not new, it's nice that the web has caught on, even if the whole "social networking silo" phenomenon is less than desirable. The same thing goes for RESTful APIs: it's great that data is more accessible, but it sucks that APIs can be so proprietary, and I'm not convinced that HTTP is the "right" or "best" protocol for this technology. But these things happen.

[1]You may think that I'm simply being pessimistic, and you might say that IE 7 and 8 are a huge step in the right direction, and I think that might be true, but the only reasons to create and maintain a browser (to my mind) are: masochism, to get people to use your search engine, and to be able to implement special proprietary (and non-standard) features. The competitive advantage comes from the unique enhancements that a given browser is able to offer over the other browsers in the market. For a while (e.g. 1999-2007?) the more standards-compliant a browser was, the better pages looked in it. I'm not sure that will continue to hold true.
[2]This is a simplification, but let's think of this as the obligatory API that all web services provide today: from Twitter to Flickr, to YouTube, and beyond. These allow programmers to connect to the service using the HTTP protocol.

Technology Update

I fear I've been posting too many posts in the vein of "so here's what I'm up to, folks," rather than, you know, writing about things that may be interesting to folks out there in Internet land. Nevertheless, here's another slightly more technical post.

I've mentioned in passing a few times over the past few months that:

  • I've been interested in shifting to Arch Linux. While I've been running Arch in a VM on my work desktop, I have been quite slow to move additional machines over to Arch. It's not for lack of wanting, but I have a hard time disrupting something that works when it's already working.
  • Also on the software front, I've switched to using the Stump Window Manager, and while I've talked a bit about this on the blog, I've done virtually no reporting of my ongoing progress with it.
  • I miss the days when I only had one computer and it went everywhere with me. While I like having all this computing possibility around, I'm moving around enough these days that it doesn't make sense to be tied down to a desktop. I like sitting on the couch and writing, and I like being able to go off for a weekend and be able to work on the projects that I really need/want to work on. That's hard when you have a desk and an "office."

This post provides updates with regards to these subjects.

Moving to Arch Linux and StumpWM

A few weeks ago, I had this massive cascade of software issues. Mostly things were provoked by the switch to Stump. Basically, the issue was that because Stump isn't embedded in all of the desktop frameworks that are so popular these days, there were a number of system resources that just didn't work with the new window manager.

The thing was that my systems were running a terribly hacked-up version of Ubuntu. I was running weird kernels, I'd mostly given up on the display managers, and the systems were just messy. So the problem wasn't so much with Stump as it was with the way that Ubuntu packages and manages certain aspects of the system inside of desktop functionality. I'm thinking specifically of the ways that networking and sound are managed by dbus. If that didn't make sense to you, don't worry.

Since the chief problem boiled down to "this system is too complex for me to be able to manage," and it was no longer an effective use of my time to maintain the system as it was... I wiped everything and finally put Arch on the laptop.

And it went on smoothly, and everything worked. Arch is a tinkerer's distribution, there's no doubt about that, and since I did have Arch experience it wasn't a terribly traumatic process. It took a little while to figure out how to make suspend and resume work (i.e. for the laptop when the lid closes), and manually managing network connections isn't incredibly straightforward until you get the hang of it, but it all works now. And I couldn't be happier.

The Experience of StumpWM

This isn't really a full report, but more a note to say that my brain has really adapted to Stump, and I'm quite happy with the experience. Stump doesn't in and of itself increase the ways I'm able to be productive, but... I do think that I work more efficiently when using Stump.

There's still a lot left to be done with regards to the tweaking of Stump for me. I need to play some more with the MPD (music player) integration, and there are a number of other contributed Lisp packages that I really want to play with. Also, I only figured out how the key binding map works once I had gotten my basic keybinding needs taken care of, and I haven't touched it since then. Now I know how I use the system and I'm ready to tweak things again, but I haven't gotten around to it.

Additional thoughts regarding Stump, from a more "objective" perspective: it is incredibly stable, and while it's not blindingly lightweight, it lives in 20 megs of RAM and that's about it. I never need to restart the window manager or X anymore, and that's kind of nice.

So in short, the Stump WM is a great thing and I need to write a bit more about the actual using of it. But first I need to do a little more tinkering. Because I'm like that.

The Consolidation of the Gear and Laptops

What a strange heading. In any case, I gave in and bought a new laptop last week. I found a great deal on a used Lenovo x200 with great specs, and I thought that it would solve the majority of my issues with my existing technology.

First, it was considerably newer than the laptop I have been using for most of the past year: more RAM, a dual core system, a bigger and faster hard drive. Second, it had all of the qualities of the old laptop that I adored: it's a 12" laptop, which means very portable without making sacrifices, and it forgoes a trackpad for a "ThinkPad Nipple" as a mouse. Finally (and perhaps most importantly), the screen resolution is 1280x800 (up from 1024x768), which makes it possible to comfortably tile two windows next to each other at once. This is the same resolution as my 15" PowerBook G4 (and I think all of the 13 inch MacBooks). It's a good size, and I was really aching for the increased screen space.

It turns out that all of these concerns were addressed fully with this new system. The screen is perfect, and it's peppy. It's also nice to return to the modern computing world. I continue to be mightily impressed with the build quality, design, and functionality of IBM/Lenovo hardware.

My computer consolidation isn't yet complete: my desktop hasn't yet been backed up and converted to Arch, but it's getting there. I'm also not quite sure what happens with the old laptop. I'm thinking of keeping it around as a spare, but if anyone has a need for a really awesome ThinkPad x41 they should be in touch.

Onward and Upward!

On Wanting a Kindle

I have a confession to make. I really want a Kindle. Bad.

No really. I do. The DRM scares me, and I think the books are just the other side of "too expensive," and because I come from a long line of "book collecting people," I think there are a lot of books that I would want to own on paper. Furthermore, I have a great laptop for reading books (a small tablet), and I have a very long history of using small form computing devices (think Palm Pilots and Pocket PCs) to read books. And yet, I returned to paper a few years ago, and don't feel really bad about that.

I'm not going to get a Kindle, at least not yet. I want to see what the Barnes and Noble "Nook" looks like, I need to upgrade the laptop more, and I think something like the Nokia n900 might end up being a better device in this space. And even if it isn't, I think we're going to see a lot of development in the "tablet" space in the next year, and it seems premature to buy now. For me.

Given all these caveats, I think it's interesting to think about why I want the Kindle so bad. Here are some questions and answers:

So given all these caveats, why do you want a Kindle so bad?

I've held one on a number of occasions, and I've always been struck with how nice they feel. They're solid and they're thin. The text is clear and readable, the page advance buttons fall wonderfully under your thumbs. The experience, at least on these second generation devices, is really quite good.

I have, rather famously, taken an entire bag of books along with me for a long weekend trip. A weekend where I ended up reading about two and a half pages. So the fact that you can take a whole pile of books, or, more properly, the potential for having the one right book you want, is appealing in a practical way.

Is this just about the hardware, or is there more?

I think the Kindle is the ideal distribution mechanism for periodical literature. The codex is likely to be of enduring importance for quite a while, but I'm almost certain that the magazine and the newspaper aren't. While blogs are great, don't get me wrong, I think there's a need for publications that are in between the "book" and the "blog," and I think the Kindle is a great space for those kinds of texts. Practically, I'd like to read more content of that sort, and if I had a Kindle, I suspect that I'd get a lot of use out of it.

The instant distribution model is a huge plus, and I really like to read. Cory Doctorow says something to the effect of "Ebook readers will fail, because a 'good' ebook reader would need to remove distractions and malfunction possibilities as effectively as paper, and devices that 'only' read books, won't sell very well next to devices which also check your email and play games." And I think that's probably a true observation, but it looks like the Kindle does single-function pretty well. I think the next year, or so, will be really interesting as we see more tablets in the market.

You're obviously not going to get one today, so what would make you change your mind?

The DRM and the price of the books. The DRM really needs no additional condemnation. I think 10 dollars is a bit steep for books, particularly because it's such a flat rate, and while it's cheaper than the hardcover (and that's good), it's also more than a paperback in most cases. And at least with a paperback you have something on your shelf. The DRM really adds insult to injury: if they distributed the files in plain text/HTML and some weirdass XML format that would be one thing, but they give you a blob that is certain to be next to useless in a year or two. If books were 3 bucks, or 5 bucks, or even 6 or 7 bucks--even if the device was 300--or there was some sort of subscription service, I wouldn't mind the DRM, but as it is... the DRM makes the economics difficult for me to compute.

If the DRM is such an issue why have you gotten this far?

A lot of times in the paper-book world you buy a book, read a hundred pages (or maybe twenty?), and then are so disgusted by the book that you can't bear to read any more, and you set it aside. And oftentimes a trip to the bookstore (particularly in advance of a trip) means buying a number of books, when only some of those books will be worth enough (to you) to justify their expense.

These situations are less likely to happen with a Kindle. There are significant samples, and you carry the bookstore around with you. I suspect the chances are that you only really need to "buy" the books that you read, which might end up being significantly cheaper in the long run.

The Kindle is a physical manifestation of a shift away from the physicality of information, but it's only really a symptom and not a leading cause of this shift, right? If you accept this, if you accept that most information and knowledge only exist as bits and photons, then all of the rituals that we build around books (collections, libraries, shelves) are less important.

What about the *Nook*?

The Nook is a more impressive platform. For sure, it fails the Doctorow test of (potentially) being too interesting for tasks that aren't reading books.

I think I probably have some more writing to do on this subject, but, in general, I think Amazon is a better and smarter company than Barnes and Noble, and if the name of the game in ebook readers is "vendor lock-in" then I trust Amazon a bit more. In a lot of ways, I hold B&N responsible for the ongoing/impending collapse of the publishing industry. [1]

In any case, mostly, at the moment I just want to wait and see before I make any sort of decision on the subject.

Thoughts?

[1]The consolidation that B&N and Borders organized for the sale of books collapsed a lot of the niche markets that were maintained by niche booksellers, and led to the much-lamented disappearance of the midlist and backlist. The current "blockbuster supported" book industry isn't sustainable beyond the next 5 to 10 years or so.