Phone Torched

I mentioned in a recent update post that I'd gotten a new cell phone, which, given who I am and how I interact with technology, means that I've been thinking about things like the shifting role of cell phones in the world, the way we actually use mobile technology, the ways that the technology has failed to live up to our expectations, and of course some thoughts on the current state of the "smart-phone" market. Of course.


I think even two years ago quasi-general purpose mobile computers (e.g. smart phones) were not nearly as ubiquitous as they are today. The rising tide of the iPhone has, I think without a doubt, raised the boat of general smart phone adoption. Which is to say that the technology reached a point where these kinds of devices--computers--are of enough use to most people that widespread adoption makes sense. We've reached a tipping point, and the iPhone was there at the right moment and has become the primary exemplar of this moment.

That's probably neither here nor there.

With more and more people connected in an independent and mobile way to cyberspace, via either simple phones (which more clearly match Gibson's original intentions for the term) or via smart phones, I think we might begin to think about the cultural impact of having so many people so connected. Cellphone numbers become not just convenient, but in many ways complete markers of identity and personhood. Texting in most situations overtakes phone calls as the main way people interact with each other in cyberspace, so even where phone calls may be irrelevant, SMS has become the unified instant messaging platform.

As you start to add things like data to the equation, I think the potential impact is huge. I spent a couple of weeks with my primary personal Internet connection running through my phone, and while it wasn't ideal, the truth is that it didn't fail me too often. SSH on Blackberries isn't ideal, particularly if you need a lot from your console sessions, but it's passable. That jump from "I really can't cut this on my phone" to "almost passable" is probably the biggest jump of all. The series of successive jumps over the next few years will be easier.

Lest you think I'm all sunshine and optimism, I think there are some definite shortcomings in contemporary cell phone technology. In brief:

  • There are things I'd like to be able to do with my phone that I really can't do effectively, notably seamlessly syncing files and notes between my phone and my desktop computer/server. There aren't even any really passable note-taking applications.
  • There is a class of really fundamental computer functionality that could theoretically work on the phone, but doesn't, because the software doesn't exist or is of particularly poor quality. I'm thinking of SSH and note taking, but also of things like non-Gmail Jabber/XMPP functionality.
  • Some functionality which really ought to be more mature than it is (e.g. music playing) is still really awkward on phones, and better suited to dedicated devices (e.g. iPods) or to regular computers.

The central feature in all of these complaints is software related: more an issue of software design, and of our limited ability to really design for this kind of form factor. There are some real limitations: undesirable input methods, small displays, limited bandwidth, unreliable connectivity, and so forth. And while some of these may improve (e.g. connectivity, display size), it is also true that we need to get better at designing applications and useful functionality in this context.

My answer to the problem of designing applications for the mobile context will seem familiar if you know me.

I'd argue that we need applications that are less dependent upon a connection and have a greater ability to cache content locally. I think the Kindle is a great example of this kind of design. The Kindle is very much dependent upon having a data connection, but if the device falls offline for a few moments, in most cases no functionality is lost. Sure, you can do really awesome things if you assume that everyone has a really fat pipe going to their phone, but that's not realistic, and the less you depend on a connection the better the user experience is.
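To make the cache-first idea concrete, here's a minimal sketch of the pattern in Python. All the names (`CachedFetcher`, the `fetch` callback) are hypothetical, and a real mobile app would persist the cache to storage rather than hold it in memory; the point is just the fallback logic: try the network when the copy is stale, and serve the stale copy when the network fails.

```python
import time

class CachedFetcher:
    """Cache-first reader: serve local copies, and refresh them
    opportunistically when the network happens to be available."""

    def __init__(self, fetch, max_age=300):
        self._fetch = fetch      # callable that may raise OSError when offline
        self._cache = {}         # key -> (timestamp, content)
        self._max_age = max_age  # seconds before we *try* to refresh

    def get(self, key):
        entry = self._cache.get(key)
        fresh = entry is not None and (time.time() - entry[0]) < self._max_age
        if not fresh:
            try:
                content = self._fetch(key)
                self._cache[key] = (time.time(), content)
                return content
            except OSError:
                pass             # offline: fall through to the stale copy
        if entry is not None:
            return entry[1]      # a stale copy beats no copy at all
        raise LookupError(f"{key!r} was never fetched and we're offline")
```

The key property is that a dropped connection degrades the experience (stale content) rather than breaking it (no content).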

Secondly, give users as much control as possible over the display, rendering, and interaction model that their software/data uses. This, if implemented very consistently (difficult, admittedly), means that users can have global control over their experience and won't be confused by different interaction models between applications.

Although the future is already here, I think it's also fair to say that it'll be really quite interesting to see what happens next. I'd like a chance to think a bit about the place of open source on mobile devices, and also the interaction between the kind of software that we see on mobile devices and what's happening in the so-called "cloud computing" world. In the meantime...

Outward and Upward!

Kindle and Paradise Regained

You all might have heard that Amazon (finally) released a Kindle application for the Blackberry. When I heard this I thought it would be a good thing, as I have (and quite enjoy) both my Blackberry and my Kindle. Here's the rundown:

  • The Kindle app for the Blackberry is probably the most well designed Blackberry application I've seen thus far. Having said that, the bar isn't terribly high.

    In a lot of ways, the way (before the Kindle app) to make a "successful" Blackberry application was to figure out how to make its data "fit" into an email or messaging context and then blend that data into the messaging/event feed in a useful sort of way.

    The Kindle app doesn't do that, and I think it learns a great deal from advancements made in iPhone app development. The resolution on the Blackberry Bold is amazing (the same number of pixels as the iPhone, at much greater density), and the buttons/interface are really intuitive and well designed. The app itself gets as many thumbs up as I can manage.

  • I've been having phone angst recently. I don't use it very much, I need to have better filtering of my email and reorganize how I do my voice mail, and while this is easy enough to say here it's a much more substantial project than I've got time for now.

  • The Kindle app isn't a replacement for the Kindle, but it's a great complement: it makes it much more possible to lighten the load in my bag, and it makes it easier for me to entertain myself with my phone. This might not seem like a big deal, but I think it is.

    There are also situations where the Kindle isn't usable (in bed with the lights off, and various other low-light situations) where the Kindle app is. So that's a good thing indeed.

  • I had hoped that the Kindle would make it easier to read in the in-between moments throughout the day when I might read but didn't. That isn't exactly true, as it turns out. Reading on the Kindle still requires a fair piece of directed attention, and it's not the kind of thing you can idly whip out while you're waiting in the grocery checkout line.

    I'm not sure at this point, of course, but I do think that having access to the Kindle on the phone will improve this.

  • I'm sort of annoyed by the lack of subscriptions. While you can have multiple devices attached to your Kindle account, when you subscribe to a periodical, that content is only accessible to you on one of your devices. I don't really like this, and it represents a huge loss of value for the Kindle store.

While I got the Blackberry shortly after the first iPhone 3G came out, the "app explosion" hadn't really happened yet. I must confess some "app jealousy." The Blackberry is awesome, and really it does messaging better than anything else around (I'm convinced), and I love the hardware keyboard. But when I think "I'd like to do something with my phone," the chance of finding a Blackberry app to do that is... unlikely. I don't know if I want a lot of apps on my phone, in the end, but I know the hardware is capable, and it's nice to take advantage of that from time to time. In any case...

Onward and Upward!

Three Predictions

Ok folks, here we go. I've put the word futurism in the title of the blog and it's near the end of the calendar year, so I think it's fair that I do a little bit of wild prediction about the coming year (and some change). Here goes:

Open technology will increasingly be embedded into "closed technology"

The astute among you will say, "but that is already the case": The Motorola RAZR cell phone that my grandmother has runs the Linux kernel (I think). And there's the TiVo; let's not forget the TiVo. Or, for that matter, the fact that Google has--for internal use, of course--a highly modified branch of the Linux kernel that will never see the light of day.

That's old news, and in a lot of ways reflects some of the intended and unintended business models that naturally exist around open source.

I'm not so much talking, in this case, about "openness" as a property of code, but rather openness as a property of technology, referring to long-running efforts like XMPP and OpenID. These technologies exist outside of the continuum of free and proprietary code, but promote the cyborg functioning of networks in a transparent and interoperable way.

XMPP says: if you want to do real-time communication, here's the infrastructure in which to do it, and we've worked through all the interactions so that if you want to interact with a loose federation (like a "web") of other users and servers, here's how.
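Concretely, an XMPP message is just a routed XML stanza, and the federation lives in the addresses: the part of the JID after the @ names a server, and the sender's server connects to the recipient's server directly, much like email. A minimal sketch of building a stanza (the JIDs are made up, and a real client library like slixmpp would handle the stream framing around it):

```python
import xml.etree.ElementTree as ET

def make_message(sender, recipient, body_text):
    """Build a bare-bones XMPP <message/> stanza. The domain in each
    JID (after the @) is what lets independent servers federate: the
    sender's server looks at the recipient's domain and dials it up."""
    msg = ET.Element("message", {"from": sender, "to": recipient, "type": "chat"})
    body = ET.SubElement(msg, "body")
    body.text = body_text
    return ET.tostring(msg, encoding="unicode")
```

Because the stanza format and the server-to-server handshake are standardized, alice@example.org and bob@example.net can chat even though their accounts live on completely unrelated servers.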

OpenID solves the "single sign on" problem by creating an infrastructure for developers to be able to say "If you're authenticated to a third party site, and you tell me that authenticating to that third party site is good enough to verify your identity, then it's good enough for us." Which makes it possible to preserve consistent identity between sites, it means you only have to pass credentials to one site, and I find the user experience to be better as well.
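The trust relationship at the heart of that flow can be sketched in a few lines. To be clear, this is a toy illustration of the delegated-trust idea only, not the real OpenID wire protocol, which adds discovery, nonces, return-to URLs, and an association handshake; all the names here are hypothetical.

```python
import hmac
import hashlib

# Toy shared secret standing in for what OpenID establishes during
# the provider/relying-party "association" step.
SHARED_SECRET = b"established-during-association"

def provider_sign(identity_url):
    """The identity provider asserts 'this user is identity_url'."""
    sig = hmac.new(SHARED_SECRET, identity_url.encode(), hashlib.sha256).hexdigest()
    return {"identity": identity_url, "sig": sig}

def relying_party_verify(assertion):
    """The relying party checks the provider's signature instead of
    ever holding the user's password itself."""
    expected = hmac.new(SHARED_SECRET, assertion["identity"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["sig"])
```

The payoff is exactly what the paragraph above describes: the relying party never sees credentials, only a verifiable claim from a party it has chosen to trust.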

In any case, we've seen both of these technologies become swallowed up into closed technologies more and more. Google Wave and Google Talk use a lot of XMPP, and most people don't know this unless they're huge geeks (compared to the norm). Similarly, even though it's incredibly easy to run and delegate OpenIDs through third parties, the main way people sign into OpenID sites is with their Flickr or Google accounts.

I'm not saying that either of these things is bad, but I think we're going to see a whole lot more of this.

A major player on the content industry will release a digital subscription plan

I think perhaps the most viable method for "big content" to survive in the next year or so will be to make content accessible as part of a subscription model. Pay 10 to 20 dollars a month and have access to some set quantity of stuff. Turn it back in, and they give you more bits. Someone's going to do this: Amazon, Apple, Comcast, etc.

It's definitely a holdover from the paper days when content was more scarce. But it gets us away from this crazy idea that we own the stuff we download with DRM, it makes content accessible, and it probably allows the price of devices to drop (to nominal amounts). While it probably isn't perfect, it's probably sustainable, and it is a step in the right direction.

Virtualization technology will filter down to the desktop

We have seen tools like VirtualBox and various commercial products become increasingly prevalent in the past couple of years, decreasing the impact of operating-system-bound compatibility issues. This is a good thing, but I think it's going to go way further, and we'll start to see this technology show up on desktops in really significant ways. I don't think desktop computing needs the same kind of massive parallelism that we need on servers, but I think we'll see a couple of other tertiary applications of this technology.

First, I think hypervisors will abstract hardware interfaces away from operating systems. No matter what kind of hardware you have or what its native method of communication is, the operating system will be able to interact with it in a uniform manner.

Second, there are a number of running-image manipulation functions that I think operating system developers might be able to take advantage of. First, the ability to pause, restart, and snapshot the execution state of a running virtual machine has a lot of benefit: a rolling snapshot of execution state makes suspending laptops much easier, and it makes consistent desktop power less crucial. And so forth.

Finally, system maintenance is much easier. We lose installation processes: rather than getting an executable that explodes over our file system and installs an operating system, we just get a bootable image. We can more easily roll back to known good states.
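The snapshot-and-roll-back idea is simple enough to show in miniature. This is a toy stand-in for what a hypervisor does with a full machine image (the class and state shape are invented for illustration): capture a deep copy of the state, mutate freely, and restore the known-good copy on demand.

```python
import copy

class SnapshottingSystem:
    """Miniature stand-in for a hypervisor's snapshot feature: capture
    the whole machine state, mutate freely, roll back when things break."""

    def __init__(self, state):
        self.state = state
        self._snapshots = {}

    def snapshot(self, name):
        # Deep copy, so later mutations can't corrupt the saved image.
        self._snapshots[name] = copy.deepcopy(self.state)

    def rollback(self, name):
        self.state = copy.deepcopy(self._snapshots[name])
```

A hypervisor does the same thing with disk and memory images instead of Python objects, which is why "roll back to the state before I installed that broken driver" becomes a one-step operation.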

Not to mention the fact that it creates a lot of economic benefits. You don't need IT departments maintaining desktops, you just have a guy making desktop images and deploying them. Creating niche operating system images and builds is a valuable service. Hardware vendors and operating system vendors get more control over their services.

There are disadvantages: very slight performance hits, hypervisor technology that isn't quite there yet, increased disk requirements. But soon.

Soon indeed.