Mutt Sucks Less

I use a mail client called mutt. The quality of this software may largely explain the opinion I expressed in this post on the continued relevance of email.

I think mutt warrants a bit of extra attention for two reasons. First, there are enough people out there who don't use mutt but could and perhaps should, and I'd like to do a little encouraging; and second, like all fundamentally wonderful pieces of software, mutt can teach us something important about what makes technology great and pleasurable to use.

Working with any new kind of software is always a challenge. It is unfortunate that "features" and "functions" are the currency by which we judge software, because that's unfair to both the technology and ourselves: the utility and quality of those features depend on a number of subjective, individual factors. That said, with regards to mutt, my list is as follows:

  • Mutt is agnostic on the editor question. The fact that I could use any text editor I wanted to write email was probably my original reason for switching to mutt in the first place. It's amazing what a sane editing environment can do for the overall experience of writing email.
  • Support for PGP/GPG encryption. Signing and encrypting emails with PGP is probably only a minor advantage, and of limited actual utility, but I think it's important and valuable to have this capability in your email client. After all, the success of PGP depends on a crowd effect: if it's easy, sign all your email and hope that others will join you. Mutt makes this easy, which is a good thing indeed.
  • Mutt operates independently of mail transmission protocols, which are universally flawed. In many ways, by not including support for mail transmission, mutt is more useful and more flexible than it would be if it were designed to handle mail transmission. Having said that, recent versions of mutt have internal support for IMAP/POP/SMTP. Not that I'd use it, or recommend that you use it, and I suspect most mutt users don't either.
  • Mutt operates independently of mail storage format: you can maintain complete control over your mail data, and store email pretty much however you like. While this may be a burden to some, I'm somewhat controlling (a weirdo, even) when it comes to data storage and preservation, and I think email archives are incredibly important.
  • Mutt's "sidebar patch" isn't even a part of the core of the software, but it's absolutely crucial to my experience of the software. Basically it gives you a heads-up-display of your mailboxes and tells you at a glance: if there are new messages and how many messages (new, flagged, read) are in an mailbox. While it eats into some screen real estate, it's generally unused screen space and it's more than worth the expenditure of pixels.
  • Mutt runs in a console and can be compiled on pretty much any contemporary UNIX-like system, and chances are there are packages for most operating systems. So I feel pretty confident that I'll be able to use mutt no matter what kind of system I end up using. Also, console apps generally run well in screen, which makes them accessible (and persistent) across the internet.
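
For reference, a minimal sidebar configuration looks something like the following sketch. Option names have varied across versions of the patch, so check the documentation that ships with your build:

# in ~/.muttrc; option names vary between versions of the sidebar patch
set sidebar_visible = yes
set sidebar_width = 24
# move between mailboxes and open the highlighted one
bind index \CP sidebar-prev
bind index \CN sidebar-next
bind index \CO sidebar-open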

Onward and Upward!

Desks and Stationary Mobility

This is a post about mobile technology in an unconventional sense. I think I'm probably an extreme "mobile" technology user: I ride commuter rail a lot and use my laptop extensively on the train. Then, I work on a laptop all day. In the evening, I often do at least a little additional work, again on the same laptop. There is, after all, always writing (like this post!) to fill any remaining free time.

I'm not a terribly typical mobile user. My main "mobile device" is a little ThinkPad (and sometimes a larger ThinkPad,) running Linux and a lot of Lisp (emacs and otherwise.) It's not ideal for every situation: there are times when I just can't bear to open the laptop again, or when it's unfeasible (and there's always the Kindle for times like those.) Most of the time it works well.

It's hard to omit discussion of the "tablet" and the iPad. For me, the fact that tablets are not general purpose computers is a huge deterrent. This is probably not the case for everyone, though there are lots of shades to this debate. I think the more interesting question is not "do people need general purpose tablets?" but rather "how will more ubiquitous embedded-type systems affect the way people approach 'general purpose' computing environments from here on out?" Honestly, this shift in computing practice has already happened, but I think it will continue to pose important questions for users and developers.

The struggle, for me, revolves less around the question "how do I work remotely?" and more around "how do I also work when I'm at a desk?" The adjustment can be hard: for a while, I was so used to working on the train, and in random chairs, that I had a hard time focusing if the computer wasn't actually on my lap. Bad ergonomics is only the start of this.

The current solution is to set up desks and workstations around the same laptops and systems, so that I'm not perpetually switching between fixed computers and mobile computers. I'm also keen for these desks to have their own appeal: bigger monitors, nice keyboards, and easy-to-attach power cords. I've also attempted to tie together all of the "I'd like to switch between laptop-mode and desk-mode" functions (e.g. network connection, monitor attachment, window layout) into easy-to-trigger operations, so I can get started more quickly. Nice. Seamless. Efficient.
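
For concreteness, here's a hypothetical sketch of what one of those easy-to-trigger operations can look like. The output names and the network profile are assumptions (check "xrandr -q" and your own setup); window layout I leave to the window manager:

#!/bin/sh
# desk-mode: switch from laptop-mode to desk-mode in one command.
# LVDS1/VGA1 and the "wired-desk" profile are made-up names; substitute your own.

xrandr --output VGA1 --auto --output LVDS1 --off   # light up the external monitor
netcfg wired-desk                                  # bring up the desk's network profile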

The lessons: there are many ways to maintain technical (cyborg) coherence despite/during geographical movement, and sometimes that technology isn't particularly cutting edge. Sometimes the best way to break yourself of a habit you don't like is to play a game with yourself where you establish a more attractive option. Finally, a very small change or automation can be enough to take something difficult and make it much easier, or something unpleasant and make it workable.

Anti-Rodentia

I hate computer mice. A lot.

The closest I've gotten to liking a pointing device is an acquiescence to the TrackPoint on the laptops I use: that's the little red dot in the middle of ThinkPad keyboards. My problem with computer mice is the context switch between "typing-mode" and "mousing-mode." Moving between the modes is jarring and inefficient. I've been using StumpWM and other similar window managers for years now, and as my need for the mouse decreases, my irritation with needing to use one increases.

I've been struggling for a few months with a bit of a problem. Several months ago I got a new, bigger ThinkPad, a T510; while it has my beloved TrackPoint, it also has a TouchPad. After years of only using laptops with the red dot, this was very disconcerting. How did I keep from triggering the touchpad with my wrists? Couldn't I just turn the damn thing off?

I did, and everyone who tried to use the computer after that was dismayed, and I didn't care. Except I found out that, apparently, disabling the touchpad also disables all non-TrackPoint pointers. So when I plugged the laptop into the docking station, the external mouse didn't work.

Blast.

The solution that follows, for disabling and enabling the mouse on the fly, isn't as pretty as I'd like, but it works.

UPDATE: It turns out that my original procedure only appeared to work. I've made the following modification to the toggle-mouse script, using a stock xorg.conf file.

File: /usr/bin/toggle-mouse

#!/bin/sh

# find the xinput id of the touchpad
TOUCHPAD=`xinput list | egrep "TouchPad" | sed -r 's/.*id=([0-9]*).*$/\1/'`

# the current value of the "Device Enabled" property (1 or 0)
STATE=`xinput list-props "$TOUCHPAD" | egrep -o "[0-9]$" | head -n1`

if [ "$STATE" -eq 0 ]; then
    xinput set-prop "$TOUCHPAD" "Device Enabled" 1
elif [ "$STATE" -eq 1 ]; then
    xinput set-prop "$TOUCHPAD" "Device Enabled" 0
else
    xmpp-notify "Your mouse is probably screwed up somehow"
fi

Test the output of "xinput list | egrep "TouchPad" | sed -r 's/.*id=([0-9]*).*$/\1/'", and inspect "xinput list" to make sure that the value of $TOUCHPAD is the xorg id of the touchpad (or other device) that you want to disable.

I'd actually recommend not putting this in /usr/bin/; anywhere in your $PATH will do. Then run toggle-mouse at the command line. You may need to run this as root or setuid root, depending on how your system is configured. Tweak the TOUCHPAD variable as needed.

If you have a better solution, I would be terribly interested in hearing about it.

City Infrastructure

I'm always interested in how the lessons that people learn in IT trickle down to other kinds of work and problems. This is one of the reasons that I [1] am so interested in what developers are interested in: if you want to know what's happening in the technology space, it's best to start at the top of the food chain. For this reason, this article from IBM, which addresses the use of IT/data center management tools outside of the data center, was incredibly interesting to me.

When you think about it, it makes sense. IT involves a lot of physical assets, even more virtual assets, and when projects and systems grow big enough, it can be easy to lose track of what you have, much less what state it's in at any given time. Generalized, this is a prevalent issue in many kinds of complex systems.

As an aside, I'm a little curious about when software that provides asset management and monitoring features will scale down to the personal level. There are the beginnings of this kind of thing (e.g. iTunes, and git-annex,) but only the beginnings.

I'm left with the following questions:

  • Obviously, moving from managing and monitoring networked devices to managing and monitoring infrastructure objects like water filtration systems, storm water drainage, the electrical grid, snow removal, etc. presents a serious challenge for the developers of these tools, and this adaptation will likely improve the tools. I'm more interested in how cities improve in this equation, and not simply with regard to operating efficiencies. What do we learn from all this hard data on cities?
  • Will cities actually be able to become more efficient, or will they need to expand to include another layer of management that nullifies the advances? There are also concerns about additional efficiency increasing the "carrying capacity" of cities to unsustainable levels.
  • Can the conclusions from automated city-wide reporting lead to advancements in quality of service, if we're better at determining defective practices and equipment? In this vein, how cities share data among themselves will also be quite interesting.

I'd love to hear from you!

[1] RedMonk also uses a similar argument.

Git Sync

With the new laptop, I once again have more than one computer, and with it a need to synchronize the current state of my work. This is a crucial function, and pretty difficult to do right: I've had multiple systems before, and I condensed everything into one laptop because I didn't want to deal with the headache of sitting down in front of a computer and cursing the fact that the one thing I needed to work on was stuck somewhere I couldn't get to.

I store most of my files and projects in git version control repositories for a number of reasons, though the fact that this enables a pretty natural backup and synchronization system was a significant inspiration for working this way. But capability is more than a stone's throw from a working implementation. The last time I tried to manage using more than one computer on a regular basis, I thought "oh, it's all in git, I'll just push and pull those repositories around and it'll be dandy." It wasn't. The problem is, if you keep different projects in different repositories (as you should, when using git,) remembering to commit and push all repositories before moving between computers is a headache.

In the end, synchronization is a rote task, and it seems like the kind of thing that's worth automating. There are a number of different approaches to this; what I've done is a very basic bash/zsh script [1] that takes care of the whole syncing process. I call it "git sync," and you may use all or some of it as you see fit.

git sync lib

The first piece of the puzzle is a few variables and functions. I decided to store this in multiple files for two reasons: first, I wanted access to the plain functions in the shell; second, I wanted the ability to roll per-machine configurations using the components described within. Consider the source.

The only really complex assumption here is that, given a number of git repositories, there are: some that you want to commit and publish changes to regularly and automatically, some that you want to fetch new updates for regularly but don't want to commit, and a bunch that you want to monitor but probably want to interact with manually. In my case: I want to monitor a large list of repositories, automatically fetch changes from a subset of those repositories, and automatically publish changes to a subset of that set.

Insert the following line into your .zshrc:

source /path/to/git-sync-lib

Then configure the beginning of the git-sync-lib file with references to your git repositories. When complete, you will have access to the following functions in your shell: gss (provides a system-wide git status,) autoci (automatically pulls new content and commits local changes to the appropriate repository,) and syncup (pulls new content from the repositories and publishes any committed changes.)
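
To illustrate, the configuration at the top of git-sync-lib might look something like this. Only force_sync_repo appears in the loop below; the other variable names here are my own hypothetical stand-ins, so adapt them to the names the source actually uses:

# hypothetical configuration block (zsh arrays)
repos=( ~/wiki ~/notes ~/work )     # everything that gss reports on
fetch_repo=( ~/wiki ~/notes )       # fetched automatically by autoci
force_sync_repo=( ~/wiki )          # committed and published by syncup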

syncup and autoci do their work in a pretty straightforward for [...] done loop, which is great, unless you need some repositories to only publish in some situations (i.e. when you're connected to a specific VPN.) You can modify this section to account for this case; the function takes the following basic form:

syncup(){
   CURRENT=`pwd`

   for repo in $force_sync_repo; do
       cd $repo;

       echo -- syncing $repo
       git pull -q
       git push -q
   done

   cd $CURRENT
}

Simply insert some logic into the for loop, like so:

for repo in $force_sync_repo; do
   cd $repo;
   if [ $repo = ~/work ]; then
      if [ `netcfg current | grep -c "vpn"` = "1" ]; then
          echo -- syncing $repo on work vpn
          git pull -q
          git push -q dev internal
      else
         echo -- $repo skipped because lacking vpn connection
      fi
   elif [ $repo = ~/personal ]; then
       if [ `netcfg current | grep -c "homevpn"` = "1" ]; then
          echo -- syncing $repo with homevpn
          git pull -q
          git push -q
       else
          echo -- $repo skipped because lacking homevpn connection
       fi
   else
      echo -- syncing $repo
      git pull -q
      git push -q
   fi
done

Basically, for two repositories we test to make sure that a particular network profile is connected before operating on those repositories. All other operations are as in the first example. I use the output of "netcfg current"; netcfg is the Arch Linux network configuration tool that I use. You will need to use another test if you are not using Arch Linux.
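
For example, on a non-Arch system you might test whether the VPN's tunnel interface is up instead. This is an untested sketch that assumes the VPN appears as tun0:

if [ `ip link show up | grep -c "tun0"` = "1" ]; then
    echo -- syncing $repo on work vpn
    git pull -q
    git push -q dev internal
fi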

git sync

You can use the functions provided by the "library" and skip this part if you don't need to automate your backup and syncing process. But the whole point of this project was to automate exactly this kind of thing, so this--though short--is kind of the cool part. You can download git sync here.

Put this script in your $PATH (e.g. "/usr/bin" or "/usr/local/bin"; I keep a "~/bin" directory in my path for personal scripts like this, and you might enjoy doing the same.) You will then have access to the following commands at any shell prompt:

git-sync backup
git-sync half
git-sync full

Backup calls a function in git-sync to back up some site-specific files to a git repository (e.g. crontabs, etc.) The half sync only downloads new changes, and is meant to run silently on a regular interval: I cron this every five minutes. The full sync runs the backup, commits local changes, downloads new changes, and sends me an xmpp message to log when it finishes successfully: I run this a couple of times an hour. But there's an exception: if the laptop isn't connected to a wifi or ethernet network, the script skips the sync operations. If you're offline, you're not syncing. If you're connected over 3g tethering, you're not syncing.
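
For illustration, the crontab entries for that schedule look something like this (the path to the script is an assumption):

# m h dom mon dow command
*/5 * * * *  $HOME/bin/git-sync half
0,30 * * * * $HOME/bin/git-sync full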

That's it! Feedback is of course welcome, and if anyone wants these files in their own git repository so they can modify and hack them up, I'm more than willing to provide that, just ask.

Onward and Upward!

[1] I wrote this as a bash script, but discovered that something about the way I was handling arrays was apparently a zsh-ism. Not a big fuss for me, because I use zsh on all my machines, but if you don't use zsh or don't have it installed, you'll need to modify the array handling or install zsh (which you might enjoy anyway.)

9 Awesome SSH Tricks

Sorry for the lame title. I was thinking the other day about how awesome SSH is, and how it's probably one of the most crucial pieces of technology that I use every single day. Here's a list of nine things that I think are particularly awesome and perhaps a bit off the beaten path.

Update: (2011-09-19) There are some user-submitted ssh-tricks on the wiki now! Please feel free to add your favorites. Also the hacker news thread might be helpful for some.

SSH Config

I used SSH regularly for years before I learned about the config file that you can create at ~/.ssh/config to tell ssh how you want it to behave.

Consider the following configuration example:

Host example.com *.example.net
    User root

Host dev.example.net
    User shared
    Port 220

Host test.example.com
    User root
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no

Host t
    HostName test.example.org

Host *
    Compression yes
    CompressionLevel 7
    Cipher blowfish
    ServerAliveInterval 600
    ControlMaster auto
    ControlPath /tmp/ssh-%r@%h:%p

I'll cover some of the settings in the "Host *" block, which apply to all outgoing ssh connections, in other items in this post. Basically, you can use the config file to create shortcuts for the ssh command, to control which username is used to connect to a given host, and to set the port number if you need to connect to an ssh daemon running on a non-standard port. See "man ssh_config" for more information.
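
For instance, given the configuration above, these shortcuts work at the prompt:

ssh t                  # connects to test.example.org, with the "Host *" settings applied
ssh dev.example.net    # connects as the "shared" user on port 220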

Control Master/Control Path

This is probably the coolest thing that I know about in SSH. Set the "ControlMaster" and "ControlPath" options as above in the ssh configuration. Any time you connect to a host that matches that configuration, a "master session" is created. Subsequent connections to the same host will reuse the master connection rather than attempt to renegotiate and create a separate connection. The result is greater speed and less overhead.

This can cause problems if you want to do port forwarding, as that must be configured on the original connection; otherwise it won't work.

SSH Keys

While ControlMaster/ControlPath is the coolest thing you can do with SSH, key-based authentication is probably my favorite. Basically, rather than forcing users to authenticate with passwords, you can use a secure cryptographic method to gain (and grant) access to a system: deposit a public key on servers far and wide, while keeping a "private" key secure on your local machine. And it just works.

You can generate multiple keys, to make it more difficult for an intruder to gain access to multiple machines by breaching a specific key or machine. You can specify which keys and key files are used to connect to specific hosts in the ssh config file (see above.) Keys can also be (optionally) encrypted locally with a pass-code for additional security. Once I understood how secure the system is (or can be), I found myself thinking "I wish you could use this for more than just SSH."
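
If you haven't set this up before, the whole dance is only a couple of commands. A minimal sketch (the file name and host are arbitrary stand-ins):

ssh-keygen -t rsa -f ~/.ssh/example_rsa                  # generate the key pair; set a pass-code
ssh-copy-id -i ~/.ssh/example_rsa.pub user@example.com   # install the public key on the server

Then an "IdentityFile ~/.ssh/example_rsa" line in the relevant Host block of your ssh config ties the key to the host.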

SSH Agent

Most people start using SSH keys because they're easier and it means that you don't have to enter a password every time you want to connect to a host. But the truth is that in most cases you don't want unencrypted private keys that have meaningful access to systems, because once someone has a copy of the private key they have full access to the system. That's not good.

But the truth is that typing in passwords is a pain, so there's a solution: the ssh-agent. Basically, you authenticate to the ssh-agent locally, which decrypts the key and does some magic, so that whenever the key is needed for connecting to a host you don't have to enter your password. The ssh-agent manages the local encryption on your key for the current session.
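
In practice the agent workflow looks like this sketch, assuming your session doesn't already start an agent for you:

eval `ssh-agent`       # start an agent and point this shell at it
ssh-add                # enter the key's pass-code once
ssh user@example.com   # no prompt; the agent supplies the key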

SSH Reagent

I'm not sure where I found this amazing little function, but it's great. Typically, ssh-agents are attached to the current session, like the window manager, so that when the window manager dies, the ssh-agent loses the decrypted bits from your ssh key. That's nice, but it also means that processes that exist outside of your window manager's session (e.g. screen sessions) lose the ssh-agent and get trapped without access to one, so you end up having to restart would-be-persistent processes, or run a large number of ssh-agents, which is not ideal.

Enter "ssh-reagent." stick this in your shell configuration (e.g. ~/.bashrc or ~/.zshrc) and run ssh-reagent whenever you have an agent session running and a terminal that can't see it.

ssh-reagent () {
  # try every agent socket on the system until one answers
  for agent in /tmp/ssh-*/agent.*; do
      export SSH_AUTH_SOCK=$agent
      if ssh-add -l > /dev/null 2>&1; then
         echo Found working SSH Agent:
         ssh-add -l
         return
      fi
  done
  echo Cannot find ssh agent - maybe you should reconnect and forward it?
}

It's magic.

SSHFS and SFTP

Typically we think of ssh as a way to run a command or get a prompt on a remote machine. But SSH can do a lot more than that, and OpenSSH, which is probably the most popular implementation of SSH these days, has a lot of features that go beyond just "shell" access. Here are two cool ones:

SSHFS uses FUSE to create a mountable file system from the files located on a remote system, over SSH. It's not always very fast, but it's simple and works great for quick operations on local networks, where the speed issue is much less relevant.
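
Using it is a two-liner; this sketch assumes the sshfs package is installed and uses made-up paths:

mkdir -p ~/mnt/example
sshfs user@example.com:/home/user ~/mnt/example   # mount the remote directory
fusermount -u ~/mnt/example                       # unmount when you're done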

SFTP replaces FTP (which is plagued by security problems,) with a similar tool for transferring files between two systems that's secure (because it works over SSH) and just as easy to use. In fact, most recent OpenSSH daemons provide SFTP access by default.

There's more, like a full VPN solution in recent versions, secure remote file copy, port forwarding, and the list could go on.

SSH Tunnels

SSH includes the ability to connect a port on your local system to a port on a remote system, so that to applications on your local system the local port looks like a normal local port, but when accessed the service running on the remote machine responds. All traffic is really sent over ssh.

I set up an SSH tunnel from my local system to the outgoing mail server on my server. I tell my mail client to send mail to localhost (without mail server authentication!), and it magically goes to my personal mail relay, encrypted over ssh. The applications of this are nearly endless.
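
The mail-relay setup above boils down to one command; the host name and ports here are stand-ins for my actual configuration:

ssh -f -N -L 2525:localhost:25 mail.example.com
# then tell the mail client to send through localhost port 2525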

Keep Alive Packets

The problem: unless you're doing something with an SSH session, it doesn't send any packets. The connections themselves are pretty resilient to this kind of silence, but your local network's NAT may eat a connection that it thinks has died, even though it hasn't. The solution is to set the "ServerAliveInterval [seconds]" option in the SSH configuration, so that your ssh client sends a "dummy packet" on a regular interval and the router thinks the connection is active even when it's particularly quiet. It's good stuff.

/dev/null .known_hosts

A lot of what I do in my day job involves deploying new systems, testing something out, and then destroying that installation and starting over on the same virtual machine. So my "test rigs" have a few IP addresses between them, and I can't readily deploy keys on these hosts. Every time I redeploy, SSH's host-key checking tells me that a different system is responding for the host. In most cases this is the symptom of some sort of security error, and in most cases knowing about it is a good thing, but here it is just very annoying.

These configuration values tell your SSH session to save host keys to /dev/null (i.e. drop them on the floor) and not to ask you to verify an unknown host:

UserKnownHostsFile /dev/null
StrictHostKeyChecking no

This probably only saves me a little annoyance and a minute or two every day, but it's totally worth it. Don't set these values for hosts that you actually care about.


I'm sure there are other awesome things you can do with ssh, and I'd love to hear more. Onward and Upward!

Searching for Known Results

(Note: I was going through some old files earlier this week and found a couple of old posts that never made it into the live site. This is one of them. I've done a little bit of polishing around the edges, but this is as much a post of historical interest as it is a reflection of the contemporary state of my thought.)

This post is a follow up to my not much organization post. As part of my general reorganization, I've been toying with anything for emacs, which is a tool, or set of tools, that provides search-based interaction with some tasks (opening files, finding files, accessing other information, etc.) in a real-time search-based paradigm. Mmmm, buzzwords. Think of it as being like quicksilver or launchy, except for emacs. I've come to a conclusion that I think is generalizable, but made particularly obvious by this particular problem space.

Search, as an interface to a corpus, is only more effective than other organizational methods when you don't know the location of what you're looking for, or don't understand the organizational system that governs the collection where your object is located. When you do know where the needed object is, search may be more cumbersome.

This feels obvious, when put this way, but it is counter to contemporary practice. Take the Google search use case where you find websites that you already know exist: you'd be surprised at how many people find this site by searching for "tychoish" or "tycho garen blog." These are people who already know that the site exists and are probably people who have visited the site already. Google is forgiving in a way that typing an address into an address bar is not.

This works out alright in the end for websites: there's no organizing standard for mapping domain names to websites. This is mostly because we don't, in present practice, use the domain name system in the way it was originally intended: domain names are "brands" rather than descriptions of the domain of systems and services behind them. In the end this is not a huge problem, since Google is around to help sort things out.

Similarly "desktop search" tools are helpful when you have a bunch of files scattered throughout file systems, with lots of hierarchy (directories and sub-directories). When you know where files are located, search less helpful. This is not to say that they're ineffective: you'll find what you're looking for, it'll just take longer.

I think this theory on the diminishing utility of search tools holds up, though I don't exactly know how to do the research to develop the idea in a more concrete direction. Having said that, I think the following questions are important.

  • Are there practical ways to organize our files, without too much over-thinking before a collection grows unmanageable, that make "resorting to search" less necessary?
  • Is building search tools for people who work with a given body of data (and who are therefore familiar with the data, and less likely to need search) different from building search for people who aren't familiar with a given corpus?

Onward and Upward!

Caring about Java

I often find it difficult to feign interest in the discussion of Java in the post-Sun Microsystems era. Don't get me wrong, I get that there's a lot of Java out there, and I get that Java has a number of technological strengths and advantages in contrast to some other programming platforms. Consider my post about worfism and computer programming for some background on my interest in programming languages and their use.

I apologize that this post is more in the vein of "a number of raw thoughts" than an actual organized essay.

In Favor of Java

Java has a lot of things going for it: it's very fast, and it runs code in a VM that lets the code execute in a mostly isolated environment, which increases the reliability and security of the applications that run on the Java platform. I think of these as "hard features," or technological realities that are presently implemented and available for users.

There are also a number of "soft features" that inspire people to use Java: an extensive and reliable standard library, a large expanse of additional library support for most things, a huge developer community, and inclusion in computer science curricula, so people are familiar with it. While each of these aspects is relatively minor, and could theoretically apply to a number of different languages and development platforms, together they represent a major rationale for its continued use.

One of the core selling points of Java has long been the fact that, because Java runs on a virtual machine that abstracts differences between operating systems and architectures, it's possible to write and compile code once and then run that "binary" on a number of different machines. The buzzword/slogan for this is "write once, run anywhere." This doesn't fit easily into the hard/soft feature dichotomy I set up above, but it is nevertheless an important factor.

Against Java

Teasing out the history of programming language development is probably a better project for another post (or career?), but while Java might once have had a greater set of support for many common programming tasks, I'm not sure that its sizable standard library and common tooling continues to overwhelm its peers. At best this is a draw with languages like Perl and Python, and more likely the fact that the JDK is so huge and varied increases the potential for incompatibility, to say nothing of needing to download the whole JDK to run even minimalist Java programs. Other languages have addressed tooling and library support in different ways, and I think the real answer to this problem is to write with an eye toward minimalism and make sure that there are really good build systems.

Most of the arguments in favor of Java revolve around the strengths of the Java Virtual Machine, which is the substrate where Java programs run. It is undeniable that the JVM is an incredibly valuable platform: every report I've seen concludes that the JVM is really fast, and the VM model does provide a number of persuasive features (e.g. sandboxing, increased portability, performance gains.) That's cool, but I'm not sure that any of these "hard" features matter much these days:

Most programming languages use a VM architecture these days. Raw speed, of the sort that Java has, is less useful than powerful concurrent programming abilities, and is offset by the fact that computers themselves are absurdly fast. It's not that Java fails because others have been able to replicate the strengths of the Java platform, but it does fail to inspire excitement.

The worth of Java's "cross platform" capabilities is probably negated by service-based computing (the "cloud,") and by the fact that cross platform applications, GUI or otherwise, are probably an ill-gotten dream anyway.

The more I construct these arguments, the more I keep circling around one idea: Java pushed a lot of programmers and language designers to think about what kinds of features programming languages needed, but the world of computing and programming has changed in a number of significant ways, and we've learned a lot about the art of designing programming languages in the meantime. I wonder if my lack of enthusiasm (and yours as well, if I may be so bold) has more to do with a set of assumptions about the way programming languages should be that haven't aged particularly well. Which isn't to say that Java isn't useful, or that it is no longer important, merely that it's become uninteresting.

Thoughts?