Poems are Made out of Words

I remember having this epic fight of a conversation with a poet-friend from college about aesthetics and art and literature. I’m not sure exactly what brought it on, or particularly why I thought my side of the argument was in any way defensible, but it came back to me recently. So, as I’m wont to do, here’s a post in review of those thoughts.

Act One: Poems are Just Words

I think in the first iteration of the argument, I took the opinion that poems existed (mostly) to transcend the experience of the written word on the page. That the project of poetry was about getting past words and constructing some sort of image or transcendent experience, or something.

Did I mention that I wasn’t a poet? I’m not. Not at all. I’m not even particularly good at reading poetry. I’ve sometimes written poems, and even I am a good enough reader to tell that they’re crap.

In any case, H.S.’s argument was that poems were just words on paper (or screens)1 and that’s it. That writing itself is an act of putting words together, and of experimenting with how words come together in (quasi) fixed mediums. And nothing more.

I don’t really know what my beef in this argument was. This was certainly before I started writing again. I guess my argument was that writing was simply an imperfect means of conveying an idea, and the real work and creativity of “being a writer” was really in coming up with good ideas and practical logic that illustrates your arguments.

And while that’s true from one perspective, if you squint at things the right way, I don’t think it’s really true of writing as a whole, and certainly not of creative projects. It might be a pretty good summary of academic writing, particularly entry-level academic writing, but I’m not sure.

When I find writing that I’m impressed with, I keep coming back to the idea that it’s just “words on the page,” and somehow that makes all the difference. My skill--insofar as I have one--and the asset that makes me employable (I think) is the fact that I can turn ideas and thoughts (which are thick on the ground) into something useful and understandable by normal folk.

Act Two: Rethinking William Gibson

So, OK, let’s be honest. I don’t really like William Gibson’s work very much. I thought Neuromancer expressed a social commentary that was totally obvious almost instantly, that it hadn’t stood the test of time particularly well, and that it read like the rehab journal of an addict who hadn’t quite cleaned up entirely. This was just my reaction on reading it, and not a particularly well reasoned critique.

I mean, I will acknowledge the book’s impact, and I think I read it too late, which probably accounts for my reaction. And although I responded so poorly to it, I don’t really have much of a problem with literature that is of its time. In any case, I was thinking about Gibson recently, casually comparing him to some other writers, and I found myself saying (of another writer of the cyberpunk ilk), pretty much without realizing it:

…which is fine, except [they] didn’t have Gibson’s literary chops. I mean Gibson’s work is incredibly frustrating but his writing is superb.

And I sort of realized after I’d said the above, that I had inadvertently conceded the argument from Act One, years later. Sure there’s a lot of idealism in writing, but writers aren’t differentiated on the basis of how awesome their ideas are. It all comes down to how they put the words together.


The side effect of this transposition is that, somehow, I’ve started to be able to read (and enjoy) short stories more than I ever could before. And much to my surprise, I’ve been writing the end of this (damned) novel as a sequence of short stories--at least in how I’ve been thinking of it. I could go on with additional examples, but I think I’d better leave it at that for now. Thoughts? Anyone?


  1. Or in my case emacs buffers. ↩︎

Getting the Most from Arch Linux

Much to my surprise, my essay of a few months ago on Why Arch Linux Rocks is quickly becoming one of my most popular posts. Who would have thunk? In any case, while I’ve written a few additional bits here and there about using Arch, I thought it would be good to write some more concrete and practical reflections on what I’ve learned from Arch and hopefully someone out there will find this useful.

The thing about Arch is that it’s by nature a very minimal operating system, and it doesn’t shy away from the command line. There’s this peculiar logic amongst GNU/Linux distribution developers that says: we want people to use our operating system and to use Free Software, and since most people are scared off by command line interfaces and editing config files, let’s try to obscure the command lines and the config files as much as possible. That’s a good idea.

Right. Not really.

And for some distributions, this really works. Arch says, rather, that while command line interfaces and config files can be confusing, they’re not “bad,” and that it’s possible to teach people how to interact with the command line and edit their config files. Furthermore, while command lines and config files might be different and unfamiliar to users who are used to checkboxes and menus, they are extraordinarily simple.

Back to the minimal part.

When you install Arch, you get a prompt, the basic GNU tool chain (install base-devel during the install process; it’s worth it), the kernel (of course), the GRUB boot loader, and that’s about it. Everything else you have to install yourself. While you might think “this is a huge bother” the first time you install Arch, when you get to the end of the process you have a system that does exactly what you want it to and nothing more. Having a computer system so tailored to your needs and workflows is actually sort of a unique and wonderful experience. Rarely, I think, in this day and age do we get to work with a computer in this way.

Nevertheless, having a system that is this minimal means that setup can be a bit… intense. Once things are installed the right way it works great, but the first few times I ran an Arch install, I spent the next two weeks installing little things over and over because I couldn’t keep track of what I needed. And then I installed a second system, and it was the same thing all over again. The third time I did it, I’d wised up and managed to have a better time of it. A few weeks ago I created and redeployed a virtual machine (in VirtualBox) that I use as my primary work computer (long story), and it was painless.

What remains of this post is a collection of lessons I’ve learned and some suggestions I have for having the best Arch experience possible.

  1. Save your /home directory and copy it to the new machine. I used the following command over the local network from the home server:

    rsync -azrv /home/tychoish tychoish@192.168.1.1:/home
    

    Changing the paths, usernames, and IP address to their relevant and valid values. This way all your configuration files and data end up on the new machine, with permissions preserved. This typically takes the longest time of any setup operation.

  2. Run system updates as soon as possible. Arch is a rolling release distribution. The longest you’ll typically go between updates is the time between when the installation media was created and when you installed your machine. The longer you go without updating, the greater the chance of breakage.
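
    A full update is a single pacman invocation (run as root):

    pacman -Syu    # refresh the package databases and upgrade everything installed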

  3. Use ABS, the “Arch Build System,” to compile any software that isn’t in the main Arch repositories. Save the PKGBUILD scripts (if not the packages you’ve made with them) in your `~/abs/` directory. This makes installing all of the weird and tedious software much easier.
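
    For reference, a typical ABS round trip looks roughly like this (the package name here is hypothetical, and the ABS tree lands in /var/abs once you’ve run abs as root):

    cp -r /var/abs/extra/somepackage ~/abs/
    cd ~/abs/somepackage
    makepkg -s                            # build the package; -s pulls in build dependencies
    pacman -U somepackage-*.pkg.tar.*     # install the result (as root)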

  4. Avoid getting a machine with weird or uncommon wireless or video drivers. At this point, I’m choosy enough about my hardware that I won’t get a computer (much less install Linux on it) if it’s not “Intel everything:” wireless card, video card, chipset, etc. Sure, a fancy NVidia card might be sexier, and there are a lot of good reasons to use AMD and ATI silicon. But one can be very sure that Intel hardware is going to work with Linux; with other gear, it’s much more hit and miss. Or it can be. And my time and serenity are not without value.

  5. Maintain a list of what packages you want to install on your system (one way to let pacman keep this list for you appears after the list below):

    • X11 Dependencies

      Let’s not talk about how long it’s taken me to remember to install the X11 keyboard and mouse drivers:

      xf86-input-keyboard xf86-input-mouse
      
    • Additional package management tools

      The `abs` tool provides a BSD ports-like interface for building packages, while the `yaourt` tool makes it easy to build and install packages from the Arch User Repository (AUR). Typically I use `yaourt` to download a package and then build it in the `abs` directory. Make sure you’ve installed `base-devel` as well, because you’ll want it eventually.
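
      Installing something from AUR is then a single command (the package name is hypothetical):

      yaourt -S somepackage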

  • Music Applications

    I use the music player daemon and a number of additional applications to play music. And I install these packages as a matter of course.

    mpd mpc gmpc alsa-utils jack-audio-connection-kit
    
  • The tychoish Tool Chain

    Your specific list of “tools that you must have in order to function properly,” probably varies a bit. Here’s mine (some packages are from AUR):

    git emacs emacs-org-mode emacs-w3m-cvs rxvt-unicode
    screen wcalc sudo htop urxvtcd zsh swiftfox-i686
    

    To quickly explain the packages that aren’t obvious from their titles: wcalc is a command line calculator tool; swiftfox-i686 is an optimized build of Firefox that runs more smoothly in my experience and is compatible with the FF plugins that I depend upon; urxvtcd is a shell wrapper for urxvt that opens a client of the urxvt daemon (urxvtc) if a daemon is already running, and starts a daemon and opens a window if there isn’t; htop is a system process monitor.

  • Email Tool Chain

    I use the following packages to manage my email:

    procmail msmtp fetchmail gnupg
    

    From AUR I also use:

    lbdb mutt-sidebar
    

    lbdb is an address book database tool, and mutt-sidebar is the build I use of the best mail-reading client in the world.

  • Window Management

    Arch is great about supporting and including up-to-date packages for esoteric window managers. The sequence for installing StumpWM is a bit unintuitive, though. First, install the following packages from the normal repositories:

    sbcl nitrogen gtk-chtheme
    

    The last two are tools for changing the look and feel of the desktop and of GTK applications (respectively). Now, from AUR, install (in the following sequence):

    clx cl-ppcre stumpwm-git
    

    Put the following code in your ~/.sbcl file:

    (require 'asdf)
    (pushnew #p"/usr/share/common-lisp/systems/" asdf:*central-registry* :test #'equal)
    (asdf:operate 'asdf:load-op 'cl-ppcre)
    

    And you should be good. Configure your .xinitrc to your liking, and make sure you have a .stumpwmrc.
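
    For reference, a minimal ~/.xinitrc for this setup might be nothing more than the following (the nitrogen line assumes you’ve already picked a wallpaper with it):

    nitrogen --restore &    # restore the saved desktop background
    exec stumpwm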

  • Networking Tools

    To manage my connections and connect to my OpenVPN network I use netcfg and OpenVPN. This requires the following packages:

    netcfg netcfg-auto zsh-netcfg netcfg-openvpn openvpn
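
    netcfg reads profiles from /etc/network.d/; a wireless profile looks roughly like this (interface, network name, and key are hypothetical):

    CONNECTION='wireless'
    INTERFACE='wlan0'
    SECURITY='wpa'
    ESSID='mynetwork'
    KEY='mypassphrase'
    IP='dhcp'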
    

    There’s also a suite of network diagnostic tools that I always install, but inevitably forget about for the better part of a week.

    mtr whois dnsutils
    

    Also, I’m always befuddled by this for a few moments: ssh isn’t installed by default. As a result, I install the following tools:

    openssh sshfs
    

    And add the following line to /etc/hosts.allow to permit inbound SSH connections:

    sshd: ALL
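
    You’ll also want sshd running at boot; with Arch’s initscripts (pre-systemd) that means adding it to the DAEMONS array in /etc/rc.conf, for example (your array will differ):

    DAEMONS=(syslog-ng network netfs crond sshd)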
    
  • Spelling Correction

    This always frustrates me, though I understand the logic. Install the following three packages to get spelling support throughout the system:

    aspell ispell aspell-en
    

    If you need spelling correction for a non-English language, replace aspell-en with the package for that language, or add that package to the end of this list.
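
As promised above, pacman itself can maintain the package list for you. A sketch: record every explicitly installed package to a file, then replay that file on a new machine (pkglist.txt is whatever file you like):

    pacman -Qqe > pkglist.txt              # record explicitly installed packages, names only
    pacman -S --needed $(< pkglist.txt)    # install anything missing from the list (as root)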

Links on the Art of Technology

I have a collection of links that I’d like to share with you. I hope you enjoy them and find them as enlightening as I have. Some of these are dated, but I’ve been mulling them over for a while and I feel like they’re worth sharing. In three parts:

Computer Programming and Hacking

There’s a long tradition of computer scientists and eminent developers thinking about their software development as an Art. Donald Knuth’s major work is called “The Art of Computer Programming,” and literate programming is fundamentally an artistic rather than a technical idea. The idea of the “lazy programmer” from the Perl world has some obvious artistic implications (hell, Perl’s license is called the “Artistic License”), and the Extreme Programming (XP)/Agile Programming world is very much about addressing programming as a creative challenge rather than a purely technical one.

I particularly like the contrast of the first two articles with the third. While I’m not sure it’s particularly conclusive in such a brief overview and with such a small sample, the insights about the ways that programmers approach problems are pretty useful. Makes me want to get the book.

I may not be a programmer in the conventional sense (or even any of the unconventional senses that I’m aware of) but there are two things that I know for sure. First, functional programming makes sense to me in a way that nothing else ever really has; and second, I am fascinated by the different ways that people use and manage projects with Git.

I think functional programming makes sense to me because little blocks that “do something” match the way my brain is shaped in some useful way. Every introduction to object oriented programming I’ve ever experienced starts with like 50 pages (or equivalent) of crud about data structures and types. Which I’m sure makes sense if you know what’s coming next, but if you’re a programming n00b: less than helpful.

Also, regarding the use of git: it’s fascinating how many different ways and different work-flows people manage to squeeze out of software like this! What he’s doing makes sense, a lot of sense on paper, but most people don’t publish long-running branches in such an organized manner. Sure there are “vendor branches” for release maintenance, but branches in git tend to be much more ad hoc from what I’ve seen. Anyway, good one for the file.

I’ve been looking over weppy for the past few weeks, along with the bindings for a couple of weird databases (of the NoSQL variety, loosely), as part of a brief attempt to learn how to think like a web developer. (It’s hard; more on this later.) I find myself both incredibly enchanted with the prospect of some of these frameworks (particularly the Python ones), and at the same time very unsure of what they represent in terms of the Internet. Frameworks aren’t going anywhere, and I think by some measure they are “getting better,” but I do think they make it awfully easy to avoid putting in the time to “get it right,” which might not matter most of the time, but when it does, oh boy, does it.

Academia, Anthropology, Literature

I wrote a series of articles on the future of publishing and I found myself returning again and again to these two essays as a source of inspiration, and I haven’t quite gotten them out of my head.

I’m not sure if I agree with this consensus, but it seems pretty clear that multi-media websites, twitter, and “blogs” (by some definition of the term) are acceptable replacements for journalistic publishing (newspapers, magazines). These essays engage literary publishing, and force us (particularly the second) to think about the economics of bookselling, including branding and brick-and-mortar shops, which I think is incredibly important.

In the piece on Cyborg Anthropology, Amber Case looks at the Singularity from a much less technical perspective. In a lot of ways this reminds me of my post on the Dark Singularity from a few months back. The singularity is, of course, ultimately a cyborg problem.

Aaron Swartz’s piece on the academy… I don’t know that he’s wrong, exactly, but I think I would avoid being as unilateral as he is. On the one hand, disciplines exist mostly to organize education, not research, and if I were going to make a conjecture: we see more disciplinarity and sub-disciplining in fields with substantive funding outside the academy. Doctors, lawyers, psychologists, biologists, and chemists have teeny-tiny little sub-fields; by contrast, you see a lot more interdisciplinary activity in the academic study of anthropology, literature, mathematics, and, say, musicology. Interesting, nonetheless.

The Industry of Technology

Two posts here from James Governor of RedMonk that have guided (at least topically) my thinking in a couple of areas recently. This is particularly noticeable if you look over the recent archives. The first addresses Flash and the business of advancing the platform of the web. My trepidation with regards to Flash is mostly the same as my trepidation with regards to all web technology. When you get down to it, my grumbling all goes back to the fact that the web is developing into this thing that is all about media-rich software, and less about hypertext, which is where my heart has always been. My argument keeps going back to “take the applications off of the web as we know it, use a platform (like GTK+ or Qt) that’s designed for applications, and create a hypertext experience that really works.” But it’s nice to have articles like this one to pull my head out of (or into?) the clouds and remind me (and us?) what’s really going on in the industry.

This is an old one from the same source, about the new “patronage economy” which in many ways defines what’s going on at RedMonk. I particularly enjoy how Governor contrasts the New Patronage with other popular web 2.0-era business models. I do worry, looking at this in retrospect, about the ways in which such patronages remain stable even when the economy is in the crapper (as it were). I’m not sure if there’s an answer to that one yet, but we’ll see. I guess my questions, at this juncture, are: first, do patronage-type relationships give “start ups” a funding option that is more stable than venture capital? Second, doesn’t the kind of organization and work that’s funded by these patronages subvert the kind of work that it depends upon (i.e., big high-margin businesses)?

That’s all I have at the moment. If you have links or thoughts to share, I’d love to see them!

Email is Still Broken, but at Least People are Talking

Email is broken. I don’t think the assertion really even requires an example for everyone to concur. Spam, ongoing wars over decades-old standards and protocols, bacon (automatically generated email that you want), listservs, poorly written clients, and mixed expectations regarding replies make email frustrating to deal with under the best of circumstances.

Email is also inescapable. There’s no other technology around at the moment that pushes messages out to users as effectively (everything else is pull), provides administrators with as many tools and options, or has such ubiquitous adoption. I’d love to be able to say “don’t send me email, I won’t get it,” but the truth is that I get and send a bunch of email, and every other option for “next wave” digital communication fails to impress.

Including Google Wave.

Don’t get me wrong, Wave is nifty. Though it’s hampered by an interface that is foolish and gimmicky, I think Wave provides capabilities that we need on the Internet. Wave makes multi-user chat useful for people who have trouble wrapping their brains around IRC. Wave strikes the right balance between “real-time” and persistence (which is mighty difficult), and most importantly…

Wave gets people thinking about email, and what the ideal replacement might look like. And that’s a good thing indeed.

Maybe more importantly, Wave moves many of the trivial uses of email into a more suitable format (XMPP) so that email, or whatever replaces it, has a more easily defined task. Maybe there are a couple of XMPP applications: Jabber, Wave, and (----)1 that work in concert to alleviate the pressure on email and make it much more clear what “the email killer” will look like.

To my mind, one of the biggest problems with email is that we use it for too much, including a number of things that it’s not ideally suited for. Perhaps the way to fix email is to fix all of these other communication problems, and see what’s left.

Here’s a list of the things I think we use email for today to which it is unsuited, with a couple of examples of each:

  • Notifications. There’s been a reply to a comment on your blog, someone has commented on your Facebook status update, it’s time to renew your domain name, your payment has been processed. Etc.
  • File Transfer. Look at my resume. Here’s the paper I owe you. I was looking for your article and wondered if you might have a copy you could share? Here’s this funny picture my uncle sent me. Etc.
  • Content Review. Could you take a look at this essay or bug report? I mentioned this in an IM but didn’t want to send it in that format.

This leaves us with a relatively short list of things: genuine letters (i.e. first class mail; generally “asynchronous personal communication”) and listservs (which, like Usenet, are probably the best way to have asynchronous group communication online, from the user’s perspective). And spam, but we don’t care about that.

So what’s the best way to handle what’s left? Hell, what’s the best way to handle the rest of it?

I don’t think there are good answers yet, at all, and I look forward to continuing to think about these issues in a big way.


  1. I think this might tie back to my posts on notification systems, but more along the lines of what we currently use SMS for (say), though I’m not sure what this looks like in a personal (much less interpersonal) context. Lots and lots of whitelisting, that’s for sure. ↩︎

How I'd Like to Be Notified

In my last post on this topic I tried to avoid talking too much about the actual details of an implementation of what my ideal notification system would look like, in part because I think that the problem of staying on top of our digital (cyborg?) lives in a “real time” manner is something that transcends the specific method of implementation. I and many other awesome people have been thinking about this “real time” problem for a couple of years, and mostly we’ve been thinking about things like microblogging (think twitter and identi.ca) and XMPP and related topics. These are all important issues that we need to get worked out, but there are issues regarding the use of “real time data” that I think need to be addressed from a more individual/user perspective. This notification issue is one of them.

My requirements, in summary:

  1. There should be some common method of creating and distributing notifications that’s easy to set up and tie into scripts (i.e. with some sort of command line interface) and that is independent of the application. I don’t want notifications from my email to appear in a different place from notifications from IRC, for instance.
  2. Taking the above one step further, notifications should be machine independent. If an event that I’d like to be notified of happens on my server (in the cloud, as it were) or the desktop at home, I’d like to know about it on my laptop when I’m on the road. Preferably, I’d like the system to know if I’ve seen a notification on my work computer before alerting me to it at home, but I’m willing to slide on this one.
  3. I need the ability to filter and control what notifications I see. There are times when I really want to track a particular word or phrase on twitter, and there are times when the last thing I want to know is whether or not someone has said something or other on twitter. Fine grained controls are key. I’d really like to put some sort of filtering pipe between “messages received” and “notifications sent,” so I can use it the way I use procmail for email.

Thoughts on Implementation:

  • XMPP seems like an obvious tool for implementing the backbone of this; a minimal sending sketch appears after this list. From my perspective as a slightly-more-than-casual observer, getting one machine to know if a notification has been seen on other machines is a bit thorny at best.
  • Notifications need to tie into the window manager and desktop experience in some sort of unobtrusive way, but also in a way that is consistent with how that window manager normally works.
  • While many notifications will probably be created by sending a simple string to a command, some notifications will probably be generated from RSS or JSON feeds, and the best systems would provide some baked in support for pulling events from these kinds of data.
  • You can get irssi to send XMPP notifications, which is both incredibly ironic and kind of awesome as a building block. Also, this emacs-fu for using libnotify from emacs might be another good starting point.
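
    For the local half of that equation, libnotify ships a command line client, so any script can emit a desktop notification; for example:

    notify-send "backup" "rsync to the home server finished"
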
  • Dumping a firehose of notifications into an XMPP window might be a good start, but I think once a notification has been sent (and logged and dismissed?), notifications need to disappear forever. Also--and this is the point of the filtering--raw events take processing in order to be useful. A firehose of anything, when exposed to the user, isn’t particularly useful.
  • [insert your thoughts here]
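
To make the command line requirement concrete, here’s a minimal sketch of a notifier that pushes one message to a JID over XMPP, using the SleekXMPP Python library (the account details are placeholders, and any XMPP library would work as well):

    # notify-xmpp: send a one-shot notification over XMPP (a sketch)
    import sys
    import sleekxmpp

    class Notifier(sleekxmpp.ClientXMPP):
        def __init__(self, jid, password, recipient, body):
            super(Notifier, self).__init__(jid, password)
            self.recipient = recipient
            self.body = body
            self.add_event_handler('session_start', self.start)

        def start(self, event):
            self.send_presence()
            self.send_message(mto=self.recipient, mbody=self.body)
            self.disconnect(wait=True)  # let the send queue drain first

    if __name__ == '__main__':
        # usage: notify-xmpp "message text"
        bot = Notifier('notifier@example.com', 'secret',
                       'tychoish@example.com', ' '.join(sys.argv[1:]))
        if bot.connect():
            bot.process(block=True)

Pointing scripts, cron jobs, and commit hooks at something like this covers the first requirement; the second and third (cross-machine state and filtering) are where the real work is.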

I will, of course let you know how this goes.

Five. More. Things.

The latest in a fine tradition of tychoish posts

Five People I Enjoy on Twitter

Five Improvements to Web Browsers

  • Enforce document structure standards on the server. Documents must be structured and organized within the constraints of a couple of conventions or else the server throws a 500 error.
  • Locate all design and presentation on the client side, and allow designs to be fully independent of the content.
  • Scripting happens on the client side, in sandboxes, as an integral part of the browser application.
  • Therefore, scripts should be provided with the browser or by the viewer, not by the content creator. In other words: die JavaScript, die, and replace it with lightweight greasemonkey-esque+webkit-style browsers. Except you might as well use python/perl/ruby/etc. while you’re at it.
  • Applications can and should interact with servers and infrastructure and data over the network, but “the web” shouldn’t be converted into interactive applications. The goal here is to make the web-as-information-distribution work.

Five Things I’m quasi likely to Acquire in the Next Few Months

  • An Amazon Kindle
  • Some sort of roasting pan
  • A New Laptop Bag
  • A table and chairs (one of which might be good for spinning)
  • A gym membership

Five Things That are Awesome about Dance Weekends

  • You can dance with someone more than once without it being creepy or weird. At an evening dance, it’s only really realistic to dance with someone once because there are 15 or 20 dances max. At an all day dance, we’re talking 60 or 70 dances.
  • Innocent and awesome dance crushes.
  • The “gender” thing is more flexible at big dance weekends. Which is fun. It’s probably true that part of the fun is due to the fact that queering up the gender roles in a contra dance makes contra a little harder, but I also just enjoy it for its awesome factor. M.N. also noted that when you come across older men dancing together in the contra line, you know it’s all gravy. And that’s more likely to happen at a dance weekend.
  • Awesome bands. In short: Giant Robot Dance.
  • Squares that don’t suck.

Five Changes in Habits that I Would Have Never Expected to See In Myself

  • Sacred Harp/Shape Note Singing
  • A Decrease in knitting output and interest
  • Weekend plans and events three to four weekends out of five.
  • The degree to which I’m a neat freak about my apartment. (Not absurd, but I still have moments where I surprise myself.)
  • Contra dancing as my main form of dance, as opposed to Morris and/or International folk.

Doing Wikis Right

Wiki started as this weird idea that seemed to work against all odds. And then it seemed to really work. And now wiki is just a way to make and host a website without taking full responsibility for the generation of all the content. To say wiki is to say “collaboration” and “distributed authorship,” in some vague handwaving way.

But clearly, getting a wiki “right” is more difficult than just throwing up a wiki engine and saying “have at it, people.” Wikis need a lot of stewardship and care that I think people don’t realize off the bat. Even wikis that seem to be organic and loosey-goosey.

I have this wiki project, at the Cyborg Institute Wiki, that I’ve put some time into, though not, you know, a huge amount of time, particularly recently. Edits have been good, when they’ve happened. But all additions have come from people whom I have asked specifically for contributions. I don’t think this is a bad thing, but the experience does run counter to the “throw up a wiki and people will edit it” mindset.

I’ve started (or restarted?) a wiki that I set up for the OuterAlliance. You can find out more about the OA there (as it gets added) or on the OuterAlliance Blog. Basically, O.A. is a group of Science Fiction writers, editors, and critics (and agents? do we have agents?) who are interested in promoting the presentation and visibility of positive queer characters and themes in science fiction (literature).1

In any case, the group needed a wiki, and unlike the C2 Wiki, the people who are likely to contribute to this wiki are probably not hackers in the conventional sense. As I’ve sort of taken this wiki project upon myself, I’ve been trying to think of ways to ensure success in wikis.


Ideas, thoroughly untested:

Invite people to contribute at every opportunity, but not simply by saying “please add your thoughts here.” Rather, write in a way that leaves spaces for other people to interject ideas and thoughts.

Create stubs and pages where people can interject their own thoughts, but “red links” (or preceding question marks in my preferred wiki engine) are just as effective as stubs in many cases. The thing is that wikis require a lot of hands on attention. While stubs don’t require a lot of attention and maintenance, they require some. My favored approach recently is to make new pages when the content in the current page grows too unwieldy and to resist the urge to make new pages except when that happens.

Reduce hierarchy in page organization unless it’s totally needed. You don’t want potential collaborators to have to think very much about “where should I put this thing.” The more hierarchy there is, the greater the chance that they’ll have to think about it, or that they won’t find a place to put their contribution and then won’t contribute. This is undesirable.

Hierarchy is problematic for most organizational systems, but in most wiki systems, it is really easy and attractive to divide things into lots of layers of hierarchy because it makes sense at the time. The truth is, however, that this almost never makes sense a couple of weeks or months down the road. Some hierarchy makes sense, but it’ll take you hundreds of thousands of words to really need 3 layers of hierarchy in most wikis.

Leaders and instigators of wiki projects should also know that creating and sustaining a successful wiki represents a huge amount of effort. There’s a lot of figuring out what people mean to say and making sure that their words actually convey it. There’s a lot of making sure people’s comments really do belong on the page where they put them. And more often than not, leaders put in the effort to write a huge amount of “seed” content as an example to future contributors. It’s not a bad gig, but it’s also not the kind of thing you can just sit back and let happen.


Other thoughts? Onward and Upward!


  1. It’s an awesome group, and a useful and powerful mission, and I think the OA has learned a lot from, and is well connected to, some of the anti-racism activity that’s been lingering in science fiction over the last year to eighteen months as a result of the “RaceFail” hubbub of a year ago. The fact that there’s this kind of activity in and around science fiction is one of the reasons that I love being a part of this genre. ↩︎

Notifications and Computing Contexts

Maybe it’s just me, but I think notifications of events on our computers suck. On OS X there’s Growl, and GNU/Linux desktops have the libnotify stuff, and I’m sure there’s something on Windows, but I don’t think any of this really addresses the problem at hand. Not that the software doesn’t work, because it mostly does what it says it’s going to do. The issue, I think, is that we need, or will very shortly need, much more from a notification system than anything around can handle.

Let’s back up.

I don’t know about you, but there are a lot of events and messages that I get or need to get, including: new mail, some instant messages, mentions of certain words on IRC (perhaps only in certain channels), notifications of when a collaborator has pushed something to a git repository, updates to certain RSS feeds, notifications of the completion of certain long-running commands (file copies, data transfers, etc.), and so forth. I think everyone likely has their own list of “things it would be nice if their computer could tell them about.”

The existing notification systems provide a framework that enables locally running applications to present little messages in a consistent and unified manner. This is great. The issue is that, for most of us, the things we need to be notified of aren’t locally running. At least in my case, instant messaging, IRC, git, and the key RSS feeds that I want to follow aren’t “locally running applications.” And to further complicate matters, no matter how you slice things, I use more than one computer, and in an ideal world it would be nice for one machine to know what notifications I’d seen on another when I sat down. In other words, my personal notification system should retain memory of what it’s shown me and what I’ve acknowledged across a number of machines.

That doesn’t happen. At least not today.

I have a few ideas about the implementation that I will probably cobble together into another post, and I’d love to hear some feedback if any of you have addressed this problem and have solutions.

It strikes me that there are two larger themes at work here:

1. Personal computing events occur locally and remotely, and notification systems need to be able to seamlessly provide both kinds of notifications. While I think a lot of the hype around cloud computing is--frankly--absurd, it is fair to say that our computing is incredibly networked.

2. People don’t have “a computer” any more, but rather several: phones, desktops, “cloud services,” virtual private servers, and so forth. While we use these systems differently, and our own personal “setups” are often unique, we need to move between these setups with ease.

These two shifts, networked computing and multiple computing contexts, affect more than just the manner in which we receive notifications; really, I think, they outline the ways that the way we use computing has changed in the past few years. There are a lot of buzzwords around this shift, in the web application and cloud computing space particularly, and I don’t think that the “hipster”/“buzzword” experience is widely generalizable. It’s my hope that these conclusions are more widely applicable and useful: both for the development of the notification system that we need, and for thinking about application development in the future.

Like I said above, I’d love to hear your thoughts on this subject, and perhaps we can work on collecting thoughts on the Cyborg Institute Wiki. Take care!