Thinking Like a Web Developer

I've been reading a lot about web development in the last few weeks. I'm not exactly sure why. There are some interesting things going on in terms of the technology. Frameworks that provide for some interesting possibilities abound, and while I don't know if the web is the only future for programming, it's certainly a big part of the future of the way we interact with computers.

So what are you working on developing, tycho?

A whole lot of nothing. I know how the technology works. I know--more or less--how to build things for the web, and yet I've not said "you know what I need to build? A web app that does [this awesome thing]."

Maybe it's because I'm unimaginative in this regard, or that I tend to think of web applications as being a nearly universally wrong solution to any given problem.

I think it's quite possible that both of these things are true. It's also likely that when approached with a technology or data problem, I don't instinctively think about how to solve it programmatically, much less with some sort of web-based system. As I think about it, it might be the fact that my mind is intensely qualitative. In my psych major days I always had problems coming up with ideas for non-hokey quantitative studies (insofar as such things exist).

In a lot of ways the questions I ask of the technology aren't (particularly) "how can I manage this data better," but rather "how can I interact with this technology more efficiently." While I don't think data interaction is a solved problem, I feel like I'm pretty far ahead of the game, and the things I do to improve how I work have more to do with tweaking my system to shape the content and the way that I'm working. While there are often some little bits of code involved, it's not the kind of thing that's generalizable in the way that an application or web site might be.

The Imperative Tense

Most of the time, if you put me in a room with programmers and tell us to talk about our work, the conversation will be really lively. But aside from the fact that I use programmers' tools to write, and use a very iterative approach, our ways of working don't actually have much in common.

One thing I notice many of my coworkers doing is saying "I'm going to write a program that's going to do these four things, and it's going to be written in such a way as to make these other things possible," (insert words of awesomeness in this sentence.) And I think "Cool! I can't wait!"

For a long time this way of talking confused me and almost put me on edge. When I have an idea for a new project, I get these images and an interesting concept to toy with, and I have little conversations in my head with the characters, and I see their world through their eyes, and it's sort of an absurd experience that I don't tell people about. I mean, I might say "I got an awesome idea for a new book," but usually not more than that. And the truth is that I get ideas for stories all the time, and I know that I'll never really be able to write most of them.

I'm okay with the way programmers plan projects, and I'm pretty happy with my own methodology. Having said that, I think the difference between the way programmers plan projects and the way I do has a lot to do with the difference in how we think about our work.

Onward and Upward!

Reading Habits

I bought a Kindle. I am weak.

(Note: I drafted this post early last week, and it arrived last Wednesday, and I started using it in earnest over the weekend. Nevertheless, this post is written from the perspective of my past self.)

In any case, there are a number of questions that you may be asking yourself at this juncture.

ZOMG That's a lot of DRM that you've signed up for. How does that make you feel?

I'm not wild about it. I mostly view the DRMed Kindle stuff not as a collection, but as a convenience for reading specific texts on demand and as needed. And on those terms, I can live with it. There are all sorts of things wrong with what I'm about to say, but: DRM is most onerous if you think that the files you download are "your possessions." Because they aren't. When it's just a dinky file that you have the ability to read in a highly convenient way, that's easier to swallow. Having said that, if you're not buying something that you get to keep, the books as they're priced now are too expensive.

I'm about 100 pages into the book I'm currently reading (it rocks, more on this later), and I picked it up the other day to discover that the cats had helpfully chewed the back corner. This isn't the first time this has happened. While I don't really care--I can still read it--part of the reason I don't seem to care is that the quality of books as objects these days doesn't particularly impress me. So I don't feel like I'm losing anything. And if I want a real book-object, such a thing can be had.

I have a suspicion that you have more than a few paper books that you haven't read. What are you going to do with them now?

Read them. I don't think that I'll stop reading paper books, though I think a great deal depends on context. I suspect that I might not take paper books out of the house very much. I don't have a lot of books, but I certainly have a few, and I know that I mostly have them with me for nostalgia, and not because I actually intend to read them any time soon.

Paper books on my shelf represent possibilities, in the same way that the Kindle, as an object, represents possibility. Even considering the limitations of the Kindle, I think these two truisms balance each other out.

How do you think you'll use the Kindle?

I think once the initial buzz of the Kindle wears off, I'll probably settle into a rhythm whereby I'll read periodicals, fiction, and documents that I generate (along the lines of slush) on the Kindle, along with anything I read out of the house, and then read reference material off of paper. I'm mostly worried about how the Kindle might screw with my--often quite good--spatial memory for texts. We'll have to see how this develops.

I'm strongly considering joining a gym in the next few weeks and I hope/expect to read whilst doing the aerobic thing. The Kindle seems ideal for this.

I hear you're a slow reader. Is this really worthwhile?

Perhaps, and I think that my main issue is that I'm really bad at setting aside time to read when I'm awake enough to actually read. This is a separate issue from the Kindle, and one I suspect I'll address in future posts. Having said that, I'm attempting to carve out a bit more time for reading in my day--as reading more is a personal goal--so I'd say that yes: despite my apparent slow pace, a Kindle is worthwhile.


Do you have any Kindle-related questions for me?

Three Predictions

Ok folks, here we go. I've put the word futurism in the title of the blog, and it's near the end of the calendar year, so I think it's fair that I do a little bit of wild prediction about the coming year (and some change). Here goes:

Open technology will increasingly be embedded into "closed technology"

The astute among you will say, "but that is already the case." The Motorola Razr cell phone that my grandmother has runs the Linux kernel (I think). And there's the TiVo; let's not forget the TiVo. Or, for that matter, the fact that Google has--for internal use, of course--a highly modified branch of the Linux kernel that will never see the light of day.

That's old news, and in a lot of ways reflects some of the intended and unintended business models that naturally exist around Open Source.

I'm not so much talking, in this case, about "openness" as a property of code, but rather about openness as a property of technology, referring to long-running efforts like XMPP and OpenID. These technologies exist outside of the continuum of free and proprietary code, but promote the cyborg functioning of networks in a transparent and networked way.

XMPP says: if you want to do real-time communication, here's the infrastructure to do it in, and we've worked through all the interactions, so that if you want to interact with a loose federation (like a "web") of other users and servers, here's how.

OpenID solves the "single sign on" problem by creating an infrastructure that lets developers say, "If you're authenticated to a third-party site, and you tell me that authenticating to that third-party site is good enough to verify your identity, then it's good enough for us." This makes it possible to preserve a consistent identity between sites, it means you only have to pass credentials to one site, and I find the user experience to be better as well.

In any case, we've seen both of these technologies become swallowed up into closed technologies more and more. Google Wave and Google Talk use a lot of XMPP, and most people don't know this unless they're huge geeks (compared to the norm). Similarly, even though it's incredibly easy to run and delegate OpenIDs through third parties, the main way that people sign into OpenID sites is with their Flickr or Google accounts.
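
For what it's worth, the delegation piece really is that simple. With an OpenID 1.x-style provider, pointing your own domain at a third-party provider is just two link tags in the head of a page (the provider URLs below are illustrative, not a recommendation):

    <link rel="openid.server" href="https://www.myopenid.com/server" />
    <link rel="openid.delegate" href="https://example.myopenid.com/" />

Then you can sign in everywhere with your own URL, while the third party handles the actual authentication.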

I'm not saying that either of these things is bad, but I think we're going to see a whole lot more of this.

A major player on the content industry will release a digital subscription plan

I think perhaps the most viable method for "big content" to survive the next year or so will be to make content accessible as part of a subscription model. Pay 10 to 20 dollars a month and have access to some set quantity of stuff. Turn it back in, and they give you more bits. Someone's going to do this: Amazon, Apple, Comcast, etc.

It's definitely a holdover from the paper days when content was more scarce. But it gets us away from this crazy idea that we own the stuff we download with DRM, it makes content accessible, and it probably allows the price of devices to drop (to nominal amounts). While it probably isn't perfect, it's probably sustainable, and it is a step in the right direction.

Virtualization technology will filter down to the desktop.

We have seen tools like VirtualBox and various commercial products become increasingly prevalent in the past couple of years, decreasing the impact of operating-system-bound compatibility issues. This is a good thing, but I think that it's going to go way further, and we'll start to see this technology show up on desktops in really significant ways. I don't think desktop computing needs the same kind of massive parallelism that we need on servers, but I think we'll see a couple of other tertiary applications of this technology.

First, I think hypervisors will abstract hardware interfaces away from operating systems. No matter what kind of hardware you have or what its native method of communication is, the operating system will be able to interact with it in a uniform manner.

Second, there are a number of running-image manipulation functions that I think operating system developers might be able to take advantage of: the ability to pause, restart, and snapshot the execution state of a running virtual machine has a lot of benefits. A rolling snapshot of execution state makes suspending laptops much easier, and makes consistent desktop power less crucial. And so forth.

Finally, system maintenance is much easier. We lose installation processes: rather than getting an executable that explodes over our file system and installs an operating system, we just get a bootable image. We can more easily roll back to known-good states.

Not to mention the fact that it creates a lot of economic benefits. You don't need IT departments maintaining desktops, you just have a guy making desktop images and deploying them. Creating niche operating system images and builds is a valuable service. Hardware vendors and operating system vendors get more control over their services.

There are disadvantages: very slight performance hits, hypervisor technology that isn't quite there yet, increased disk requirements. But soon.

Soon indeed.

Web Frameworks

I'm not a web developer. I write the content for (a couple of) websites, and I'm a fairly competent systems administrator. Every once in a while someone will need a website, or I'll need my site to do something new that it hasn't done before, and I'll hack something together, but for the most part, I try and keep my head out of web development. Indeed, I often think that designing applications to run in the web browser is the wrong solution to most technological problems. Nevertheless, my work (and play) involves a lot of tinkering with web applications, and I do begrudgingly concede their relevance.

In any case, I've been reading through the following recently, and I (unsurprisingly) have a few thoughts:

The Trouble With Frameworks

I really enjoyed how this post located "web frameworks" in the larger context: what they're good for, what they're not good for, and why they're so popular. I often feel like I see a lot of writing about why FrameworkA is better or worse than FrameworkB, which doesn't really answer a useful question. While I wouldn't lay my gripe with web-based applications entirely on the shoulders of frameworks, it's interesting to think of "the framework problem" as being a problem with the framework (and the limitations therein) rather than a problem with the web itself.

This isn't to say that frameworks are inherently bad. Indeed, there is a great deal of work that websites require in order to function: HTML is a pain to write "by hand," and consistent URLs are desirable but undesirable to have to manage by hand. If you need content that's dynamic, particularly content that is database-backed, there is all sorts of groundwork that needs to be done that's basic and repetitive, even for the most basic functionality. Eliminating this "grunt work" is the strength of the framework, and in this they provide a great utility.

However, from an operations (rather than development) perspective, frameworks suck. By producing tools that are broadly useful to a large audience, frameworks are by nature not tuned for high-performance operations, and they don't always enforce the most efficient patterns (particularly with regard to the database). Thankfully this is the kind of issue that can be safely delegated to our future selves, as premature optimization remains a challenge.

Thoughts on Web.py

Though I'm not much of a Python person, I have a great deal of respect for Python tools. I swear if I were going to learn a language of this type it would almost certainly be Python. Having said that, the tool looks really interesting: it's minimal and stays out of the way for the most part. It does the really "dumb" things that you don't want to have to fuss with, but doesn't do a lot of other stuff. And that's a great thing.

I'm not sure how accurate this is, but one of the things that initially intrigued me about web.py is that it sort of feels like it allows for a more "UNIX-y" approach to web applications. Most frameworks and systems for publishing content to the web work really well as long as you don't try to use anything but the application or framework: Drupal, Wordpress, and Rails seem to work best this way. Web.py, by contrast, seems to mostly be a few hooks around common web-programming tasks for Python developers, so that they can build their apps in whatever way they need to; the monolithic content management approach doesn't feel very UNIXy. I think this is something that I need to explore in greater detail.
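
To make that concrete, here's a minimal sketch of a web.py application (the URL mapping and handler name are arbitrary; assume a recent web.py under Python 2):

    import web

    # the URL-to-class mapping is most of the "framework"
    urls = ('/', 'index')

    class index:
        def GET(self):
            return "Hello, world!"

    if __name__ == "__main__":
        app = web.application(urls, globals())
        app.run()  # serves on port 8080 by default

Templates, database access, and sessions are all there if you want them, but nothing about the structure above forces you to use any of it.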

Having said that, I'm not terribly sure if there are problems that I see in the world that need to be addressed in this manner. So while I can sort of figure out how to make it work, I don't find myself thinking "wow, the next time I want to do [this], I'll definitely use web.py."

But then I'm just a dude who writes stuff.

Getting the Most from Arch Linux

Much to my surprise, my essay of a few months ago on Why Arch Linux Rocks is quickly becoming one of my most popular posts. Who would have thunk? In any case, while I've written a few additional bits here and there about using Arch, I thought it would be good to write some more concrete and practical reflections on what I've learned from Arch and hopefully someone out there will find this useful.

The thing about Arch is that it's by nature a very minimal operating system, and it doesn't shy away from the command line. There's this peculiar logic amongst GNU/Linux distribution developers that says: we want people to use our operating system and to use Free Software, and since most people are scared off by command line interfaces and editing config files, let's try and obscure the command lines and the config files as much as possible. That's a good idea.

Right. Not really.

And for some distributions, this really works. Arch says, rather, that while command line interfaces and config files can be confusing they're not "bad," and it's possible to teach people how to interact with the command line and edit their config files. Furthermore, while command lines and config files might be different and unfamiliar to users who are used to checkboxes and menus, they are extraordinarily simple.

Back to the minimal part.

When you install Arch, you get a prompt, the basic GNU tool chain (install base-devel during the install process; it's worth it), the kernel (of course), the GRUB boot loader, and that's about it. Everything else you have to install yourself. While you might think "this is a huge bother" the first time you install Arch, when you get to the end of the process you have a system that does exactly what you want it to and nothing more. Having a computer system so tailored to your needs and workflows is actually sort of a unique and wonderful experience. Rarely, I think, in this day and age do we get to work with a computer in this way.

Nevertheless, having a system that is this minimal means that setup can be a bit... intense. Once things are installed the right way it works great, but the first few times I ran an Arch install, I spent the next two weeks installing little things over and over because I couldn't keep track of what I needed. And then I installed a second system, and it was the same thing all over again. The third time I did it, I'd wised up and managed to have a better time with it. A few weeks ago I created and redeployed a virtual machine (in VirtualBox) that I use as my primary work computer (long story), and it was painless.

What remains of this post is a collection of lessons that I've learned and some suggestions I have for having the best Arch experience possible.

  1. Save your /home directory and copy it to the new machine. I used the following command over the local network from the home server:

    rsync -azv /home/tychoish tychoish@192.168.1.1:/home
    

    Changing the paths, usernames, and IP address to their relevant and valid values. This way all your configuration files and data end up on the new machine, with permissions preserved. This typically takes the longest time of any setup operation.

  2. Run system updates as soon as possible. Arch is a rolling release distribution. The longest you'll typically go between updates is the time between when the installation media was created and when you install your machine. The longer the gap between the current state of the repositories and your last update, the greater the chance of something breaking.
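
     On Arch, the whole update is a single command (run as root, or with sudo):

         pacman -Syu

     This refreshes the package databases and upgrades everything that's out of date.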

  3. Use ABS, the "Arch Build System," to compile any software that isn't in the main Arch repositories. Save the PKGBUILD scripts (if not the packages you've made with them) in your `~/abs/` directory. This makes installing all of the weird and tedious software much easier.
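
     For example, to build and install a package whose PKGBUILD you've saved (the package name here is hypothetical):

         cd ~/abs/some-package
         makepkg -si

     The -s flag pulls in missing build dependencies with pacman, and -i installs the finished package when the build completes.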

  4. Avoid getting a machine with weird or uncommon wireless or video drivers. At this point, I'm choosy enough about my hardware that I won't get a computer (much less install Linux on it) if it's not "Intel everything:" wireless card, video card, chipset, etc. Sure, a fancy NVidia card might be more sexy, and there are a lot of good reasons to use AMD and ATI silicon. But one can be very sure that Intel hardware is going to work with Linux; with other gear, it's much more hit and miss. Or it can be. And my time and serenity are not without value.

  5. Maintain a list of what packages you want to install on your system:

    • X11 Dependencies

      Let's not talk about how long it's taken for me to remember to install the X11 keyboard and mouse drivers:

      xf86-input-keyboard xf86-input-mouse
      
    • Additional package management tools

      The abs tool provides a BSD-ports-like interface for building packages, while the yaourt tool makes it easy to make and build packages from the Arch User Repository (AUR). Typically I use yaourt to download a package and then build it in the abs directory. Make sure you've installed base-devel as well, because you'll want to eventually.
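
      For example, installing a (hypothetical) package from AUR is just:

          yaourt -S some-aur-package

      which downloads the PKGBUILD, builds the package, and installs it in one step.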

  • Music Applications

    I use the music player daemon and a number of additional applications to play music. And I install these packages as a matter of course.

    mpd mpc gmpc alsa-utils jack-audio-connection-kit
    
  • The tychoish Tool Chain

    Your specific list of "tools that you must have in order to function properly" probably varies a bit. Here's mine (some packages are from AUR):

    git emacs emacs-org-mode emacs-w3m-cvs rxvt-unicode
    screen wcalc sudo htop urxvtcd zsh swiftfox-i686
    

    To explain quickly the packages that aren't obvious from their titles: wcalc is a command line calculator tool; swiftfox-i686 is an optimized build of Firefox that runs more smoothly in my experience and is compatible with the FF plugins that I depend upon; urxvtcd is a shell wrapper for urxvt that opens a client of the urxvt daemon (urxvtc) if a daemon is already running, and starts a daemon and opens a window if there isn't; htop is a system process monitor.

  • Email Tool Chain

    I use the following packages to manage my email:

    procmail msmtp fetchmail gnupg
    

    From AUR I also use:

    lbdb mutt-sidebar
    

    lbdb is an address book database tool, and mutt-sidebar is the build I use of the best mail-reading client in the world.

  • Window Management

    Arch is great about supporting and including up-to-date packages for esoteric window managers. The sequence for installing StumpWM is a bit non-intuitive. First, install the following packages from the normal repositories:

    sbcl nitrogen gtk-chtheme
    

    The last two are tools for changing the look and feel of the desktop and of GTK applications (respectively). Now, from AUR, install (in the following sequence):

    clx cl-ppcre stumpwm-git
    

    Put the following code in your ~/.sbclrc file:

    ;; make ASDF aware of systems installed under /usr/share/common-lisp
    (require 'asdf)
    (pushnew #p"/usr/share/common-lisp/systems/" asdf:*central-registry* :test #'equal)
    ;; load cl-ppcre, which stumpwm depends on
    (asdf:operate 'asdf:load-op 'cl-ppcre)
    

    And you should be good. Configure your .xinitrc to your liking, and make sure you have a .stumpwmrc.
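
    For what it's worth, a minimal .xinitrc for this setup might contain nothing more than the following (nitrogen restoring your wallpaper, then StumpWM as the session):

        nitrogen --restore &
        exec stumpwm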

  • Networking Tools

    To manage my own connections and connect over OpenVPN, I use netcfg and OpenVPN. This requires the following packages:

    netcfg netcfg-auto zsh-netcfg netcfg-openvpn openvpn
    

    There's also a suite of network diagnostic tools that I always install, but somehow manage to forget for various parts of a week.

    mtr whois dnsutils
    

    Also--and I'm always befuddled by this for a few moments--ssh isn't installed by default. As a result, I install the following tools:

    openssh sshfs
    

    And add the following line to /etc/hosts.allow to permit inbound SSH connections.

    sshd: ALL
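
    You'll also want the SSH daemon itself running. With the initscripts setup this post assumes, that means adding sshd to the DAEMONS array in /etc/rc.conf; the other entries below are illustrative and will vary by system:

        DAEMONS=(syslog-ng network netfs crond sshd)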
    
  • Spelling Correction

    This always frustrates me, though I understand the logic. Install the following three packages to get spelling support throughout the system:

    aspell ispell aspell-en
    

    If you need spelling correction for a non-English language, replace aspell-en with that language's package, or add that language's package to the end of this list.

Links on the Art of Technology

I have a collection of links that I'd like to share with you. I hope you enjoy them and find them as enlightening as I have. Some of these are dated, but I've been mulling them over for a while and I feel like they're worth sharing. In three parts:

Computer Programming and Hacking

There's a long tradition of computer scientists and eminent developers thinking about software development as an Art. Donald Knuth's major work is called "The Art of Computer Programming," and literate programming is fundamentally an artistic rather than a technical idea. The idea of the "lazy programmer" from the Perl world has some obvious artistic implications (hell, Perl's license is called the "Artistic License"), and the Extreme Programming (XP)/Agile world is very much about addressing programming as a creative rather than a purely technical challenge.

I particularly like the contrast of the first two articles with the third. While I'm not sure it's particularly conclusive given the brief overview and such a small sample, the insights about the ways that programmers approach problems are pretty useful. Makes me want to get the book.

I may not be a programmer in the conventional sense (or even any of the unconventional senses that I'm aware of), but there are two things that I know for sure. First, functional programming makes sense to me in a way that nothing else ever really has; and second, I am fascinated by the different ways that people use and manage projects with Git.

I think functional programming makes sense to me because little blocks that "do something" match the way my brain is shaped in some useful way. Every introduction to object-oriented programming I've ever experienced starts with like 50 pages (or equivalent) of crud about data structures and types. Which I'm sure makes sense if you know what's coming next, but if you're a programming n00b: less than helpful.

Also, regarding the use of git: it's fascinating how many different ways and different work-flows people manage to squeeze out of software like this! What he's doing makes sense, a lot of sense on paper, but most people don't publish long-running branches in such an organized manner. Sure there are "vendor branches" for release maintenance, but branches in git tend to be much more ad hoc from what I've seen. Anyway, good one for the file.

I've been looking over web.py for the past few weeks, along with the bindings for a couple of weird databases (of the NoSQL variety, loosely), as part of a brief attempt to learn how to think like a web developer. (It's hard; more on this later.) I find myself both incredibly enchanted with the prospect of some of these frameworks (particularly the Python ones), and at the same time very unsure of what they represent in terms of the Internet. Frameworks aren't going anywhere, and I think by some measure they are "getting better," but I do think they make it awfully easy to avoid putting in the time to "get it right," which might not matter most of the time, but when it does, oh boy, does it.

Academia, Anthropology, Literature

I wrote a series of articles on the future of publishing and I found myself returning again and again to these two essays as a source of inspiration, and I haven't quite gotten them out of my head.

I'm not sure if I agree with this consensus, but it seems pretty clear that multi-media websites, twitter, and "blogs" (by some definition of the term) are acceptable replacements for journalistic publishing (newspapers, magazines). These essays engage literary publishing, and force us (particularly the second) to think about the economics of bookselling, including branding and brick-and-mortar shops, which I think is incredibly important.

In the piece on Cyborg Anthropology, Amber Case looks at the Singularity from a much less technical perspective. In a lot of ways this reminds me of my post on the Dark Singularity from a few months back. The Singularity is, of course, ultimately a cyborg problem.

Aaron Swartz's piece on the academy... I don't know that he's wrong, exactly, but I think I would avoid being as unilateral as he is. On the one hand, disciplines exist mostly to organize education, not research, and if I were going to make a conjecture: we see more disciplinarity and sub-disciplining in fields with substantive funding outside of the academy. Doctors, lawyers, psychologists, biologists, and chemists have teeny-tiny little sub-fields; by contrast, you see a lot more interdisciplinary activity in the academic study of anthropology, literature, mathematics, and, say, musicology. Interesting, nonetheless.

The Industry of Technology

Two posts here from James Governor of RedMonk that have guided (at least topically) my thinking in a couple of areas recently. This is particularly noticeable if you look over the recent archives. The first addresses Flash and the business of advancing the platform of the web. My trepidation with regard to Flash is mostly the same as my trepidation with regard to all web technology. When you get down to it, my grumbling all goes back to the fact that the web is developing into this thing that is all about media-rich software, and less about hypertext, which is where my heart has always been. My argument keeps going back to "take the applications off of the web as we know it, use a platform (like GTK+ or Qt) that's designed for applications, and create a hypertext experience that really works." But it's nice to have articles like this one to pull my head out of (or into?) the clouds and remind me (and us?) what's really going on in the industry.

This is an old one from the same source, about the new "patronage economy," which in many ways defines what's going on at RedMonk. I particularly enjoy how Governor contrasts the New Patronage with other popular web 2.0-era business models. I do worry, looking at this in retrospect, about whether such patronages are stable when the economy is in the crapper (as it were). I'm not sure if there's an answer to that one yet, but we'll see. I guess my questions, at this juncture, are: first, do patronage-type relationships give "start ups" a funding option that is more stable than venture capital? Second, doesn't the kind of organization and work that's funded by these patronages subvert the kind of work that it depends upon (i.e., big high-margin businesses)?

That's all I have at the moment. If you have links or thoughts to share, I'd love to see them!

Email is Still Broken, but at Least People are Talking

Email is broken. I don't think the assertion even requires an example for everyone to concur. Spam, ongoing wars over decades-old standards and protocols, bacn (automatically generated email that you want), listservs, poorly written clients, and mixed expectations regarding replies make email frustrating to deal with under the best of circumstances.

Email is also inescapable. There's no other technology around at the moment that pushes messages out to users as effectively (everything else is pull), provides administrators with as many tools and options, or has such ubiquitous adoption. I'd love to be able to say "don't send me email, I won't get it," but the truth is that I get and send a bunch of email, and every other option for "next wave" digital communication fails to impress.

Including Google Wave.

Don't get me wrong, Wave is nifty. Though it's hampered by an interface that is foolish and gimmicky, I think Wave provides capabilities that we need on the Internet. Wave makes multi-user chat useful for people who have trouble wrapping their brains around IRC. Wave strikes the right balance between "real-time" and persistence (which is mighty difficult), and most importantly...

Wave gets people thinking about email, and what the ideal replacement might look like. And that's a good thing indeed.

Maybe more importantly, Wave moves many of the trivial uses of email into a more suitable format (XMPP), so that email, or whatever replaces it, has a more easily defined task. Maybe there are a couple of XMPP applications--Jabber, Wave, and (----) [1]--that work in concert to alleviate the pressure on email and make it much more clear what "the email killer" will look like.

To my mind, one of the biggest problems with email is that we use it for too much, including a number of things that it's not ideally suited for. Perhaps the way to fix email is to fix all of these other communication problems, and see what's left.

Here's a list of the things I think we use email for today to which it is unsuited, each followed by a couple of examples:

  • Notifications. There's been a reply to a comment on your blog, someone has commented on your Facebook status update, it's time to renew your domain name, your payment has been processed. Etc.
  • File Transfer. Look at my resume. Here's the paper I owe you. I was looking for your article and wondered if you might have a copy you could share? Here's this funny picture my uncle sent me. Etc.
  • Content Review. Could you take a look at this essay or bug report? I mentioned this in an IM but didn't want to send it in that format.

This leaves us with a relatively short list of things: genuine letters (i.e., first-class mail; generally "asynchronous personal communication") and listservs (which, like Usenet, are probably the best way to have asynchronous group communication online, from the user's perspective). And spam, but we don't care about that.

So what's the best way to handle what's left? Hell, what's the best way to handle the rest of it?

I don't think there are good answers yet, at all, and I look forward to continuing to think about these issues in a big way.

[1] I think this might tie back to my posts on notification systems, but more along the lines of what we currently use SMS for (say); I'm not sure what this looks like as a personal (much less interpersonal) method. Lots and lots of whitelisting, that's for sure.

How I'd Like to Be Notified

In my last post on this topic I tried to avoid talking too much about the actual details of an implementation of my ideal notification system, in part because I think the problem of staying on top of our digital (cyborg?) lives in a "real time" manner transcends the specific method of implementation. I and many other awesome people have been thinking about this "real time" problem for a couple of years, and mostly we've been thinking about things like microblogging (think twitter and identi.ca) and XMPP and related topics. These are all important issues that we need to get worked out, but there are issues regarding the use of "real time data" that I think need to be addressed from a more individual/user perspective. This notification issue is one of them.

My requirements, in summary:

  1. There should be some common method of creating and distributing notifications that's easy to set up and tie into scripts (i.e., with some sort of command line interface), and that is independent of application. I don't want notifications from my email to appear in a different place from notifications from IRC, for instance.
  2. Taking the above one step further, notifications should be machine independent. If an event that I'd like to be notified of happens on my server (in the cloud, as it were) or the desktop at home, I'd like to know about it on my laptop when I'm on the road. Preferably, I'd like the system to know if I've seen a notification on my work computer before alerting me to it at home, but I'm willing to slide on this one.
  3. I need the ability to filter and control what notifications I see. There are times when I really want to track a particular word or phrase on twitter, and there are times when the last thing I want to know is whether or not someone has said something or other on twitter. Fine-grained controls are key. I'd really like to put some sort of filtering pipe between "messages received" and "notifications sent," so I can use it the way I use procmail for email.

Thoughts on Implementation:

  • XMPP seems like an obvious tool for implementing the backbone of this. From my perspective as a slightly-more-than-casual observer, getting one machine to know whether a notification has already been seen on another machine is a bit thorny at best.
  • Notifications need to tie into the window manager and desktop experience in some sort of unobtrusive way, but also in a way that is consistent with how that window manager normally works.
  • While many notifications will probably be created by sending a simple string to a command (there's a sketch of what that might look like after this list), some notifications will probably be generated from RSS or JSON feeds, and the best systems would provide some baked-in support for pulling events from these kinds of data.
  • You can get irssi to send XMPP notifications, which is both incredibly ironic and kind of awesome as a building block. Also, this emacs-fu for using libnotify from Emacs might be another good starting point.
  • Dumping a firehose of notifications into an XMPP window might be a good start, but I think once a notification has been sent (and logged and dismissed?), it needs to disappear forever. Also--and this is the point of the filtering--raw events require processing in order to be useful. A firehose of anything, exposed directly to the user, isn't particularly useful.
  • [insert your thoughts here]
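
As a rough sketch of the command-line piece, here's what a "pipe a string to a command, get an XMPP notification" tool might look like in Python with the xmpppy library. The JIDs and password below are hypothetical placeholders, and a real version would want error handling, TLS, and the filtering layer described above:

    # notify-xmpp.py -- read a message on stdin, push it to a JID
    # a sketch; assumes the xmpppy library and placeholder accounts
    import sys
    import xmpp

    NOTIFIER_JID = "notifier@example.com"  # hypothetical "bot" account
    NOTIFIER_PASSWORD = "secret"
    TARGET_JID = "tychoish@example.com"    # wherever you want the alert

    def send_notification(text):
        jid = xmpp.protocol.JID(NOTIFIER_JID)
        client = xmpp.Client(jid.getDomain(), debug=[])
        client.connect()
        client.auth(jid.getNode(), NOTIFIER_PASSWORD, resource="notify")
        client.send(xmpp.protocol.Message(TARGET_JID, text))

    if __name__ == "__main__":
        send_notification(sys.stdin.read().strip())

The appeal of something like this is that anything that can write to a pipe can notify you: tail a log, grep for the interesting lines, and pipe the result to the script, with procmail-style filtering slotting in between.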

I will, of course, let you know how this goes.