Hypertextuality

I recently took some of my writing time to create a makefile (Novel Makefile) to manage work I hope to be doing on a new novel project. I’ve started outlining and researching the story in earnest after having spent the past few years talking about it, and I think writing will commence soon. In another post I’d like to write up some thoughts on the tooling and technology of writing non-technical/non-manual long-form work.

This post, drawing on some time spent buried deep in production, is about the state of (conceptually) longer form work in digital mediums. Or, at least, a brief commentary on same.


The tools that I use to write technical materials do all sorts of cool things, like:

  • provide instant cross referencing,
  • generate great indexes, and
  • automatically generate and link glossaries.

This is not particularly unusual, and in fact Sphinx is somewhat under-featured relative to other documentation generation systems like DocBook.1

And yet people publish ebooks that are virtually identical to paper books. Ebooks seem to say “*this electronic book is the best facsimile of a paper book that we can imagine right now,*” while totally ignoring anything more that a *hyper*text rightfully might be.

I enjoy reading ebooks, just as I have enjoyed reading paperbooks, but mostly because ebooks basically are paperbooks. I’ve written posts in the past challenging myself, and fiction writers in general, to actually do hypertext rather than recapitulating prior modalities in digital form.

At various points I’ve thought that wikis might be a good model of how to do hypertext, because the form is structurally novel. Anymore, I don’t think that this is the case: wikis are unstructured and chaotic, and I’ve come to believe that the secret to hypertext is structure. There are so many possibilities in hypertext, and I think much experimentation in hypertext has attempted to embrace the chaos of this experience. This does serve to highlight the extent to which “the future is here,” but it obscures the fact that structure makes narratives understandable. Think about how much great, new, innovative (and successful!) fiction in the past decade (or so) is not structurally experimental or chaotic. (Answer: there’s a lot of it.)

The not-so-secret of hypertext is (I suspect) tooling: without really good tools, the mechanics of producing a complex, interactive textual experience2 are difficult for a single writer, or even a small group of writers. Most tools that manage the publication and writing of text are not suited to helping the production of large, multi-page, and multi-component texts. One potential glimmer of hope is that tools for developing programs (IDEs, build systems, compilers, templating systems, introspection tools, pattern matching, etc.) are well developed and could be modified for use in text production.
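To make the borrowing concrete, here is a minimal sketch of one “compiler-style” check applied to a text project: treat each page as a source file and verify that every internal link points at a page that actually exists. The `[[...]]` link syntax and the `find_broken_links` helper are illustrative assumptions for this sketch, not a description of any real tool.

```python
import re

# Matches [[target]] and [[target|display text]] style internal links.
LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def find_broken_links(pages):
    """pages maps page names to their text; returns (page, target)
    pairs for links whose target does not exist in the project."""
    broken = []
    for name, text in pages.items():
        for target in LINK.findall(text):
            if target.strip() not in pages:
                broken.append((name, target.strip()))
    return broken

if __name__ == "__main__":
    project = {
        "index": "Start with the [[outline]] or the [[characters]] page.",
        "outline": "Chapter one introduces the cast.",
    }
    print(find_broken_links(project))  # [('index', 'characters')]
```

A build system could run a check like this on every commit, the way a compiler reports undefined symbols, which is exactly the kind of infrastructural support a complex hypertext would need.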

The second not-so-secret of hypertext is probably that hypertext is an evolution of text production and consumption, not a revolution. Which only seems reasonable. We have the technology now to produce really cool text products. While tooling needs to get better, the literature needs to do some catching up.

Let’s start making things!


  1. It’s not that Sphinx is “bad,” but it’s clearly designed for a specific kind of documentation project, and if you stray too far outside of those bounds, or need formats that aren’t quite supported, then you end up without a lot of recourse. Having said that, “normal,” well-supported projects--documentation or otherwise--will only very rarely hit upon an actual limitation of Sphinx itself. ↩︎

  2. To be clear, I’m partial to the argument that today’s computer games, particularly role-playing games, are the things that the futurists of the 1960s and 70s (e.g. Theodor Holm Nelson) called “hypertext.” ↩︎

The Death of Blogging

I think blogging died when two things happened:

1. A blog became a required component in constructing a digital identity, which happened around the time that largely-static personal websites started to disappear. Blogs always dealt in the construction of identities, but until 2004, or so, they were just one tool among many.

2. Having a blog became the best, most efficient way for people to sell things. Blogging became a tool for selling goods and services, often on the basis of the reputation of the writer.

As these shifts occurred, blogs stopped being things that individual people had, and started being things that companies created to post updates, do outreach, and “do marketing.” At about the same time, traditional media figured out, at least in part, what makes content work online. The general public has become accustomed to reading content online. The end result is that blogs are advertising and sales vectors, and this makes them much less fun to read.

When blogging was just a thing people did, mostly because it let them present and interact with a group of writers better than they could otherwise, there was vitality: people were interested in reading other people’s blogs and comment threads. This vitality makes it more interesting to write blogs than pretty much any other kind of content. The excitement of direct interaction with readers, the vitality of blogging, transcends genre, from technical writing and documentation to fiction to news analysis and current events.

The vitality of blogging is what makes blogs so attractive to traditional media and to corporations for marketing purposes, so maybe you can’t have the good without the bad.

Everyone blogs. And perhaps that’s a bit of the problem: too much content means that it’s hard to have a two-way conversation between blogs and bloggers. Who has time to read all those words anyway? Blogging is great in part because it’s so democratic: anyone can publish a blog. This isn’t without a dark side: we run the risk of blogging without an audience, or without significant interaction with the audience, as the sheer volume of content threatens the impact of that democracy. But it makes sense: new forms and media don’t solve the problem of cultural participation and engagement, they just shift the focus a little bit.

Professional Content Generation

I’m a writer. I spend most of my day sitting in front of a computer, with an open text editing program, and I write things that hopefully--after a bit of editorial work--will be useful, enlightening, and/or entertaining as appropriate. I’ve been doing this since I was a teenager, and frankly it never seemed like a particularly notable skill. The fact that I came of age with the Internet, as a member of its native participant-driven textual culture, had a profound effect, without question. This is a difficult lineage to manage and integrate.

Obviously I’m conflicted: on the one hand I think that the Internet has been great for allowing people like me to figure out how to write. I am forever thankful for the opportunities and conversations that the Internet has provided for me as a writer. At the same time, the Internet, and particularly the emergence of “Social Media” as a phenomenon, complicates what I do and how my work is valued.

Let’s be totally clear. I’m not exactly saying “Dear Internet, Leave content generation to the professionals,” but rather something closer to “Dear Internet, Let’s not distribute the responsibility of content generation too thinly, and have it come back to bite us in the ass.” Let me elaborate these fears and concerns a bit:

I’m afraid that as it becomes easier and easier to generate content, more people will start creating things, and there will be more and more text, and that will lead to all sorts of market-related problems in a vicious cycle. If we get too used to crowdsourcing content, it’s not clear to me that the idea of “paying writers for their efforts” will endure. Furthermore, I worry that as the amount of content grows, it will be harder for new content to get exposure, and the general audience will become so fragmented that it will be increasingly difficult to generate income from such niche groups.

Some of these fears are probably realistic: figuring out how we will need to work in order to do our jobs in an uncertain future is always difficult. Some are not: writing has never been a particularly profitable or economically viable project, and capturing an audience is arguably easier in the networked era.

The answer to these questions is universally: we’ll have to wait and see, and in the meantime, experiment with different and possibly better ways of working. My apologies for this rip-off, but it’s better to live and work as if we’re living in the early days of an exciting new era, rather than the dying days of a faltering regime.

Perhaps the more interesting implication of this doesn’t stem from asking “how will today’s (and yesterday’s) writers survive in the forthcoming age,” but rather “how do these changes affect writing itself.” If I don’t have an answer to the economic question, I definitely don’t have an answer to the literary question. I’m hoping some of you do.


As an interesting peek behind the curtain, this post was mostly inspired as a reaction to this piece of popular criticism that drove me batty. It’s not a bad piece, and I think my objections are largely style- and form-related rather than political. Perhaps I’m responding to the tropes of fan writing, and in retrospect my critique of this piece isn’t particularly relevant here. But that article might provide good fodder for discussion. I look forward to your thoughts in comments or on a wiki page.

Onward and Upward!

What We Learn from Wikileaks

Wikileaks, and the drama that has surrounded it for the past few months, brings forth images of the Internet as a very lawless and juvenile place, exactly the kind of thing that the cyberpunks of the 1980s were predicting. This isn’t always far from the truth, but the story of spies and international espionage, and digital attacks and counterattacks, may distract us from thinking about other issues. Obviously Wikileaks causes us to think about censorship, but issues of publishing, journalism, audience and community participation, transparency, and globalism in the digital context are also at play. Let’s take the highlights:

Censorship

In the print world, I tend to think of post-facto censorship as incredibly difficult. Once print copies of something exist, they’re there: it’s hard to get every copy, and you can’t get people to “unread” what they’ve already seen. In the digital world, it’s really difficult to get content taken down, and once there are copies in people’s hands, the cost of making additional copies is low enough that censorship stops working. Right?

I suppose the appearance of multiple mirrors and copies of Wikileaks post-takedown proves this, but it also proves that there are aspects of the world wide web that are not decentralized and it’s possible to pull a domain and site off the Internet at a single point. That’s a very scary proposition. While information survives, I think many people thought that “you couldn’t effectively censor the Internet,” and Wikileaks says “yes you can, and here’s how.” (cite)

In response I think people have started to think about the shape and structure of the network itself. The Internet is designed to be resilient to this kind of thing, and this is a very startling example that it’s not.

Publishing, Journalism and Wikis

I suppose the thing that I get most offended by “Wikileaks” for is the appropriation of the term “wiki,” because in its current (or last) form, Wikileaks wasn’t a wiki. Content was not edited or supervised by a community, the wiki process didn’t allow the site to provide consumers with a more diverse set of opinions, and it didn’t increase transparency. In this respect, Wikileaks mostly resembles a traditional “old media” publication.

Once Wikileaks stops being this radical new journalistic departure, and this community-driven site, what remains may be pretty difficult to reconcile. Is it useful for journalists to publish raw source material without analysis? What audience and purpose does that serve? The censorship of Wikileaks is problematic and requires some reflection, but there are problems with Wikileaks itself that also require some serious thinking.

And just because it’s a sore point: particularly since Wikileaks is/was a traditional publication and is not a platform for independent speech, we have to think about this as a “freedom of the press” issue rather than a “freedom of speech” issue.

Community Involvement

The involvement of a community in Wikileaks is overshadowed by the groundswell of “community” activity by Anonymous and other groups. Look to Gabriella Coleman (twitter) for more thorough analysis. I don’t have any answers or conclusions, but: the role and effectiveness of (distributed) denial of service attacks in this instance is really quite important.

Usually DoSes are quick, messy, and easily dealt-with affairs: DoS someone, get noticed, the target and attacking addresses get taken off the air, people lose interest, and things return to normal. This back and forth seems a bit unique (but not unheard of). The fact that the 4chan/Anon gang picked Wikileaks jibes with their ethos, but it is impressive that they were able to get organized to support Wikileaks. I find myself curious and more surprised that someone was able to, at least for a while, throw something in the neighborhood of 10 gigabits (an unverified number that originates with Wikileaks itself, so potentially inflated) at the original Wikileaks site. That’s huge, and I think largely unexplored.

Transparent Operations

In July, Quinn Norton wrote about transparency and Wikileaks: basically, that exposing information doesn’t solve the problem that governments don’t operate in a transparent manner, and that access to documents and transparency are the result of a more open way of doing business/government. Particularly in light of this, the fact that Wikileaks focuses on data dumps rather than more curated collections of information or actual analysis is all the more problematic.

Similarly, “wiki” as practiced in the original model (as opposed to Wikileaks) is about editing and document creation in an open and transparent manner. Thus the issue with Wikileaks is not that they have/had a professional staff, but that they didn’t disclose their process.

Onward and Upward!

A Modest Blogging Proposal

I’m going to present this post somewhat out of order. Here’s the proposal.

I want to think about moving this blog (or starting another?) to be a blog/wiki hybrid, and at the very least, moving forward I’d like the “discussion” or comments link to point to a wiki page rather than a comments thread.


I’ve been thinking about the blog recently. I really enjoy writing posts; there are days when I rely on writing a blog post to get me thinking and moving in the morning (or afternoon!), and there are kinds of projects that I don’t think I would be able to work on if it weren’t for having the space to write and the opportunity to have conversations with you all.

I’ve done some work recently to streamline and simplify the publishing process, which does a lot to make me more likely to post on the fly, but I’m not sure that the tone of this site, or the current design, or my own habits would really support a different kind of publishing schedule.

As an aside, I think the technological shift that made blogs possible was “content management systems” and website building tools that made updating a website with new content incredibly simple. While blogging has come to mean many other things and is defined by a number of different features, having the ability to publish on very short notice has a large effect on the way people write blog content.

So here’s the thing about blog comments: I don’t think that they’re used particularly well, and there are some important flaws in so many of the options around. First, the best systems, like the ones used on LiveJournal, IntenseDebate, and Disqus (which I use on this site), are all proprietary and depend on an external service to function. The worst systems all have independent authentication methods, often lack proper threading (which most commenters aren’t terribly good at using anyway), and it’s very difficult to prevent all these systems from being filled with spam.

What’s more, people don’t really comment that much, at least on most blogs. Moving comments to a wiki could mean:

  • better comment systems and better discussions,
  • catering to people who want to write a lot in comments, and
  • allowing the conversation to grow from comments in productive, rather than purely discursive, ways.

And once you’ve moved comments into a wiki, why not move the rest of the blog as well? My preferred engine, ikiwiki, has support for blog-like content, so while there would be some work involved, it wouldn’t be a major hassle to manage. And the worst-case scenario is that the old content remains in the old system, which might not be a bad thing in the end.
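For what it’s worth, the blog-like support mentioned above comes from ikiwiki’s `inline` directive, which collects a set of pages into a reverse-chronological stream. Something like the following would turn a `posts/` hierarchy into a blog with a new-post form (parameter names from my reading of ikiwiki’s documentation; check the current directive reference before relying on them):

```
[[!inline pages="posts/* and !*/Discussion" show="10" rootpage="posts"]]
```

The same mechanism could presumably aggregate the wiki-page “discussions” proposed above alongside the posts themselves.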

Anyone out there in reader land have any thoughts on the subject? While I’ll probably make some sort of revision to the way I blog/maintain tychoish.com, any such change is probably a month or two in the future.

Strategies for Organizing Wiki Content

I’ve been trying to figure out wikis for a long time. It always strikes me that the wiki is probably the first truly unique (and successful) textual form of the Internet age. And there’s a lot to figure out. The technological innovation of the wiki is actually remarkably straightforward,1 and the community-building aspects of wikis, while difficult in practice, are straightforward to describe.2 The piece of the wiki puzzle that I can’t nail down in a pithy sentence or two is how to organize information effectively on a wiki.

That’s not entirely true.

The issue, is I think that there are a number of different ways to organize content for a wiki, and no one organizational strategy seems to be absolutely perfect, and I’ve never been able to settle on a way of organizing wiki pages that I am truly happy with. The goals of a good wiki “information architecture” (if I may be so bold) are as follows:

  • Clarity: It should be immediately clear to the readers and writers of a wiki where a page should be located in the wiki. If there’s hierarchy, it needs to fit your subject area perfectly and require minimal effort to grok, because you want people to focus on the content rather than the organization, and we don’t tend to focus on organizational systems when they’re clear.
  • Simplicity: Wikis have a great number of internal links and can (and are) indexed manually as needed, so as the proprietor of a wiki you probably need to do a lot less “infrastructural work” than you think you need to. Less is probably more in this situation.
  • Intuitive: Flowing from the above, wikis ought to strive to be intuitive in their organization. Pages should answer questions that people have, and then provide additional information out from there. One shouldn’t have to dig in a wiki for pages; if there are categories or some sort of hierarchy, there shouldn’t be overlap at the tips of the various trees.

Strategies that flow from this are:

  • In general, write content on a very small number of pages, and expand outward as you have content for those pages (by chopping up existing pages as it makes sense and using this content to spur the creation of new pages).
  • Use one style of links/hierarchy (wikish and ciwiki fail at this). You don’t want people to think: should this be a CamelCase link? Should this be a regular one-word link? Should this be a multiple-word link with dash-separated words or underscore-separated words? One convention to rule them all.
  • Realize that separate hierarchies of content within a single wiki effectively create separate wikis and sites within a single wiki, and that depending on your software, it can be non-intuitive to link between different hierarchies.
  • As a result: use as little hierarchy and structure as possible. Hierarchy creates possibilities where things can go wrong and where confusion can happen. At some point you’ll probably need infrastructure to help make the navigation among pages more intuitive, but that point is always later than you think it’s going to be.
  • Avoid reflexivity. This is probably generalizable to the entire Internet, but in general people aren’t very interested in how things work and the way you’re thinking about your content organization. They’re visiting your wiki to learn something or share some information, not to think through the meta crap with you. Focus on that.
  • Have content on all pages, and have relatively few pages which only serve to point visitors at other pages. Your main index page is probably well suited as a traffic intersection without additional content, but in most cases you probably only need a very small number of these pass-through pages. In general, make it so your wikis have content everywhere.

… and other helpful suggestions which I have yet to figure out. Any suggestions from wiki maintainers?


  1. There are a number of very simple and lightweight wiki engines, including some that run in only a few lines of Perl. Once we had the tools to build dynamic websites (CGI, circa 1993/1994), the wiki became a trivial implementation. ↩︎
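As a gloss on footnote 1, the core mechanism really is tiny: store pages, and turn recognizable words into links as you render. A toy sketch in Python (illustrative only; `TinyWiki` and its CamelCase-linking convention are my own stand-ins, not any particular engine):

```python
import re

# Matches CamelCase words, the classic wiki link convention.
CAMEL = re.compile(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b")

class TinyWiki:
    """An in-memory sketch of the core wiki mechanism."""

    def __init__(self):
        self.pages = {}

    def save(self, name, text):
        self.pages[name] = text

    def render(self, name):
        text = self.pages.get(name, "")

        def link(match):
            target = match.group(1)
            if target in self.pages:
                # Existing pages become ordinary links.
                return f'<a href="/{target}">{target}</a>'
            # Missing pages get the traditional "create me" marker.
            return f'{target}<a href="/{target}?create">?</a>'

        return CAMEL.sub(link, text)
```

Persistence, edit history, and an HTTP front end are what the few-lines-of-Perl engines layered on top of exactly this kind of loop.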

  2. The general principle of building a successful community-edited wiki is basically to pay attention to the community in the early stages. Your first few contributors are very important, contributions have to be invited and nurtured, and communities don’t just happen. In the context of wikis, in addition to supporting the first few contributors, the founders also need to construct a substantive seed of content. ↩︎

Tumblr Killed the Tumblelog Star

A few years ago, highly citational link/YouTube-video blogs came back into style. This time, rather than calling them blogs, we called them “tumblelogs.” I never really got into it, though I tried; even my original inspiration for starting tychoish.com was to do a more “tumblelog-esque” blog. It never quite worked out. Then I read this post by Michael Coté, which inspired a few things:

First, it got the following title to stick in my head and refuse to get out. Second, it left me with the idea that, although successful, sites like tumblr and to a different extent posterous basically ruined the tumblelogging revival.

Here’s the thing about tumblelogs: they worked, and worked so well, because they were efficient, because the people creating tumblelogs were doing something unique and had unique voices, and because you could keep your pulse on most of a single discourse by watching only a few sites/RSS feeds. And then it became, very suddenly, trivial to make a tumblelog. And so everyone had a tumblelog, and it was like blogging was new again, except things “meme’d out” at an epic pace and it became difficult to track what anyone was saying. It was like a distributed denial of service attack on our attention spans.

And as the dust settled, tumblelogs, at least as far as I could see, became less about a delightful amalgamation of interesting content and more about templates, about piping in a firehose of content from delicious/twitter/etc. So not only were there too many tumblelogs, but the style had devolved somewhat into this weird, unedited, awkwardly templated mass of “crap” that is (in my opinion) quite hard to read or derive value from.

What Made Tumblelogs Work Originally

  • The systems that powered them were kludgy but they made it very possible to post content easily. That’s a good thing.
  • They used a unique sort of web design where design elements (tables/grids/CSS magic) reflected and accented the content type.
  • They were largely editorial functions. People followed tumblelogs because their authors were able to filter though content with exceptional speed and grace, and in the process of filtering provide real value.
  • They were multimedia, and incorporated many different kinds of content. Not just links, not just embedded youtube videos, but snippets of IM and IRC conversations, song lyrics, pictures from flickr, and so forth.
  • projectionist, one of the first and best, was a group effort: when group blogs work, they really work. The tumblelog seems like an ideal platform for group blogging.

How We Can Make Tumblelogs Work Again

  • We use publishing systems and tools that are unique and that stretch and bend the form. A tumblelog theme for WordPress will probably always reek of WordPress. Same with other popular content management systems. Tumblelogs work because they’re not just blogs; they need to distinguish themselves both visually and in terms of how their authors write the content.
  • We undertake tumblelogs as a collaborative effort. Group projects complicate things, of course, but they also create great possibilities.
  • Vary content intentionally: post quotes, chat excerpts, links, videos, lyrics, etc. Make sure that there’s a great deal of diversity of content. This is perhaps a problem to be solved in software, at least in part.
  • Emphasize and cultivate editorial voice, and create an interface that forces authors and editors to touch the data.

Thoughts? Suggestions?

ETA: I’ve started to work on this wiki page outlining a “tumble manager” tool. I also did a bit of textual refactoring on February 27, 2010.

End User RSS

I’m very close to declaring feed reader bankruptcy. And not just a simple “I don’t think I’ll ever catch up with my backlog,” but rather pulling out of the whole RSS reading game altogether. Needless to say, because of the ultimate subject matter--information collection and utilization, and cultural participation on the Internet--and my own personal interests and tendencies, this has prompted some thinking… Here goes nothing:

Problems With RSS

Web 2.0 in a lot of ways introduced the world to ubiquitous RSS. There were now feeds for everything. Awesome right?

I suppose.

My leading problem with RSS is probably a lack of good applications to read RSS with. It’s not that there aren’t some good applications for RSS; it’s that RSS is too general a format, and there are too many different kinds of feeds, so we get these generic applications that simply take the chronology of RSS items from a number of different feeds and present them as if they were emails, or one giant feed, with some basic interface niceties. RSS readers, at the moment, make it easier to consume media in a straightforward manner without unnecessary mode switching, and although RSS is accessed by way of a technological “pull,” the user experience is essentially “push.” The problem, then, is that feed reading applications don’t offer a real benefit to their users beyond a little bit of added efficiency.

Coming in a close second is the fact that the publishers of RSS sometimes have silly ideas about user behaviors with regards to RSS. For instance, there’s some delusion that if you truncate the content of posts in RSS feeds, people will click on links, visit your site, and generate ad revenue. Which is comical. I’m much more likely to stop reading a feed if full text isn’t available than I am to click through to the site. This is probably the biggest single problem that I see with RSS publication. In general, I think publishers should care as much about the presentation of their content in their feed as they do about the presentation of content on their website. While it’s true that it’s “easier” to get a good-looking feed than it is to get a good-looking website, attending to the feed is important.

The Solution

Web 2.0 has allowed (and expected) us to have RSS feeds for nearly everything on our sites. Certainly there are many more RSS feeds than anyone really cares to read. More than anything, this has emphasized the way that RSS has become the “stealth data format of the web,” and I think it’s pretty clear that, for all its warts, RSS is not a format that normal people are really meant to interact with.

Indeed, in a lot of ways the success of Facebook and Twitter has been a result of the failure of RSS-ecosystem software to present content to us in a coherent and usable way.

Personally, I still have a Google Reader account, but I’m trying to cull my collection of feeds and wean myself from consuming all feeds in one massive stew. I’ve been using notifixlite for any feed where I’m interested in getting the results in very-near-real time. Google alerts, microblogging feeds, etc.

I’m using the planet function in ikiwiki, particularly in the cyborg institute wiki, as a means of reading collections of feeds. This isn’t a lot better than a conventional feed reader, but it might be a start. I’m looking at plagger for the next step.

I hope the next “thing” in this space is a set of feed readers that add intelligence to the process of presenting the news. “Intelligent” features might include:

  • Noticing the order you read feeds/items in, and attempting to present items to you in that order.
  • Removing duplicate, or nearly duplicate, items from presentation.
  • Integrating--as appropriate--with the other ways that you typically consume information: reading email and instant messaging (in my case).
  • Providing notifications for new content in an intelligent sort of way. I don’t need an instant message every time a flickr tag that I’m interested in watching updates, but it might be nice if I could set these notifications up on a per-folder or per-feed basis. Better yet, the feed reader might be able to figure this out.
  • Integrating with feedback mechanisms in a clear and coherent way: both via commenting systems (so integration with something like Disqus might be nice, or the ability to auto-fill a comment form), and via email.

It’d be a start at any rate. I look forward to thinking about this more with you in any case. How do you read RSS? What do you wish your feed reader would do that it doesn’t?