Android Tablets and the Workstations of the Future

I’ve only had the tablet for a few weeks, but I’m pretty sure the tablet incarnation of Android is 80% of what most users need in a workstation. I’m not most users, but I figure: hook up a big screen and a real keyboard, create some key bindings to replace most of the gestures, and write a few pieces of software to handle document production, presentations, and spreadsheets in a slightly more robust manner, and you’re basically there. I wouldn’t give up my laptop for a tablet today, and I think the platforms still have a ways to go, but that gap isn’t insurmountable.

Prediction: in the next decade, we’ll see embedded tablet-like devices begin to replace desktop computers for some classes of use and users. General web surfing, reading, quick email, and watching videos on YouTube seem like the obvious niche for now. I started to explore this in “Is Android the Future of Linux,” but it’s not absurd to suggest that Android- or iOS-like devices might begin to address more “general purpose desktop computing.”

I want to be clear: we’re not there yet. These systems aren’t versatile and fully featured enough to hold up under full-time, extended use. This is mostly an application/software problem. As applications evolve, and as more functionality moves to remote systems anyway (this is the “cloud” we’ve heard so much about), tablet operating systems will seem much more capable for general purpose work. Better mobile productivity software will help as well. Eventually, I think Android and similar platforms will have a shot at the desktop market for most usage because:

  • IT departments will get a lot more control over intra-organization information flow, which could save a lot of money on administration, support, and data protection.

  • Behind-the-firewall Dropbox-like services, combined with some sort of centralized workstation configuration management (which makes sense for a flash-based device), mean backups can happen automatically; if devices are lost, damaged, or need to be re-imaged, it only takes a few minutes to get someone back to work after a technology failure.

  • Limited multi-tasking ability will probably increase productivity.

  • Disconnecting keyboards from the screen will probably lead to better ergonomic possibilities.

  • Eventually, it will be easier to integrate Android-like devices with various workflow management/content management systems.

    The technology needs to mature and workers and IT departments need to become more comfortable with tablets, without question. Also, there are some fundamental developments in the technology that need to transpire before “desktop tablets” happen, including:

  • More power user-type interface features.

  • Split screen operation. There are enough “common tasks” that require looking at two different pieces of information at the same time that I think tablets will eventually have to give up “full screen everywhere” operation. Conventional windowing is unnecessary, and I don’t think anyone would go for that, but displaying two distinct pieces of information at once is essential.

  • Better “Office” software for spreadsheets, presentations and document preparation. A necessary evil.

  • Behind-the-firewall (preferably open source) solutions to replace services like Dropbox/Box.net and whatever other services emerge as essential parts of the “tablet/smartphone” stack.

  • VPN clients and shared file system clients that are dead simple to use. I think these are features for operating system vendors to develop.

Thoughts? Onward and upward!

Operating Systems and the Driver Issue

I made a quip the other day about the UNIX Epoch problem (Unix timestamps measure seconds since January 1, 1970, and are traditionally stored as signed 32-bit numbers; sometime in January 2038 that counter runs out of room, and there’s no really good way to fix it everywhere.) Someone responded “whatever, we won’t be using UNIX in thirty years!”

Famous last words.
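For the curious, here’s a minimal sketch of what actually runs out in 2038. It assumes the classic signed 32-bit time_t and is purely a hypothetical illustration, not anything from the original exchange:

    /* A minimal sketch of the 2038 problem, assuming the classic signed
     * 32-bit time_t; a hypothetical illustration, not code from this post. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        /* The largest second a signed 32-bit counter can represent. */
        time_t last = (time_t)INT32_MAX;                 /* 2147483647 */
        printf("last 32-bit second:   %s", asctime(gmtime(&last)));
        /* -> Tue Jan 19 03:14:07 2038 (UTC) */

        /* One second later would be 2147483648, which doesn't fit in 32
         * bits; systems with a 32-bit time_t wrap to the most negative
         * value instead. */
        time_t wrapped = (time_t)INT32_MIN;              /* -2147483648 */
        printf("after the wraparound: %s", asctime(gmtime(&wrapped)));
        /* -> Fri Dec 13 20:45:52 1901 (UTC), on systems that accept
         * negative timestamps; the long-term fix is a 64-bit time_t,
         * which is hard to retrofit onto already-deployed systems. */
        return 0;
    }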

People were saying this about UNIX itself years ago. Indeed, before Linux had even begun to be a “thing,” Bell Labs had moved on to “Plan 9,” which was to be the successor to UNIX. It wasn’t. UNIX came back. Hell, in the late eighties and early nineties we even thought that the “monolithic kernel” as a model of operating system design was dead, and here we are. Funny that.

While it’s probably the case that we won’t be using the same technology in thirty years that we are today (i.e. UNIX and GNU/Linux), it’s probably also true that UNIX as we’ve come to know it is not going to disappear, given its stubborn history in this space. More interesting, I think, is to contemplate the ways that UNIX and Linux will resonate in the future. This post is an exploration of one of these possibilities.


I suppose my title has forced me to tip my hand slightly, but let’s ignore that for a moment, and instead present the leading problem with personal computing technology today: hardware drivers.

“Operating system geeks,” of which we all know one or two, love to discuss the various merits of Windows/OS X/Linux: “such and such works better than everything else,” “such and such is more stable than this,” “such and such feels bloated compared to that,” and so on and so forth. The truth is that if we take a step back, we can see that the core problem for all of these operating systems is pretty simple: it’s the drivers, stupid.

Let’s take desktop Linux as an example. I’d argue that there are two large barriers to its widespread adoption. First, it’s not immediately familiar to people who are used to using Windows. This is pretty easily addressed with some training, and I think Microsoft’s willingness to change their interface in the last few years (i.e. the Office “Ribbon,” and so forth) is a great testimony to the adaptability of the user base. The second, and slightly more thorny, issue is hardware drivers: the parts of any operating system that allow the software to talk to hardware like video, sound, and networking (including, of course, wireless) adapters. The kernel has gotten much better in this regard in the past few years (probably by adding support for devices without requiring their drivers to be open source), but the leading cause of an install “just not working” is almost always something related to the drivers.

“Linux people” avoid this problem by buying hardware that they know is well supported. In my world that means “Intel everything, particularly if you want wireless to work, and Nvidia graphics if you need something peppy, which I never really do,” but I know people who take other approaches.

In a weird way, this “geek’s approach to Linux” is pretty much the same way that Apple responds to the driver problem in OS X. By constraining their operating system to run only on a very limited selection of hardware, they’re able to make sure that the drivers work. Try to add a third party wireless card to OS X. It’s not pretty.

Windows is probably the largest victim of the driver problem: Microsoft has to support every piece of consumer hardware, and their hands are more or less tied. The famous Blue Screen of Death? Driver errors. System bloat (really, for all operating systems) tends to be about device drivers. Random lockups? Drivers. Could Microsoft build better solutions for these driver problems, or push equipment manufacturers to use hardware with “good drivers”? Probably; but as much as it pains me, I don’t really think it would make a whole lot of business sense for them to do that at the moment.


More on this tomorrow…

Praxis and Transformational Economics

Here’s another one for the “economics” collection of posts that I’ve been working on for a while. Way back when, I started this series by thinking about Kim Stanley Robinson’s Mars Trilogy and about the model of economic development presented in the final two books. In short, economic activity is organized around ~150-person co-operatives that people “buy into,” and then work for as long as the co-op exists or until they sell their spot so that they can work on a different project/co-op.

In the series, these co-operatives arose as part of a response to the multi/trans/meta-national corporations that were the books’ antagonists: corporations that had grown so big that they resembled nations as much as they did companies in the contemporary sense. The co-ops came around in part as a response to the metanats, but then the corporations themselves restructured in response to an ecological/sociological catastrophe, so that they eventually started to look more like the co-operatives. The “progressive” meta-national corporation in the stories was called “Praxis,” and Praxis was the organization that led the transformation from metanational capitalism to what followed. As part of this series, I’d very much like to think about Praxis and what kinds of lessons we can bring back from this thought experiment, beyond the simplistic “co-operatives good, corporations bad” notion that I’ve been touting for months. Thus:

  • The corruption and disconnect from authentic economic exchange that the metanats display in the Mars books far outclasses anything that’s happening today. On the one hand, given the nature of science fictional criticism, this isn’t such a great barrier to importing ideas from the books; on the other, we must also imagine that Praxis is able to “out-compete” traditional meta-nationals because of the scale of the problem. That is, the Praxis critique and solution may be valid today, but things may have to get much worse before a Praxis-like solution becomes economically viable.

  • Praxis succeeds in the story because it can out-compete the meta-nationals at their own game, not because it’s “right.” I appreciate fiction (and reality) where the winning economic solution wins on economic rather than moral terms. While I’m hardly a Market proponent, it’s hard to divorce economics from exchanges, and the following logic fails to convince me: “we should change current cultural practice to do something less efficient that may create less value, because it complies better with some specific and culturally constrained ethic.”

    One part of my own thinking on this issue has revolved around looking for mechanisms that produce change, and I think Praxis is particularly interesting from a mechanistic perspective.

  • Praxis presents a case of revolutionary-scale change with evolutionary mechanisms, which is something that I think is hard to argue for or encourage, as the change itself is really a result of everything else that’s going on in the historical moment. Nevertheless, everyone in the story world is very clear that Praxis, post-transformation, is fundamentally not the same kind of organization that it was before. In a lot of ways it becomes its own “corporate successor state,” and I think that leaves us with a pretty interesting question to close with…

How do we set up and/or encourage successor institutions to the flawed economic organizations/corporations we have today without recapitulating their flaws?

the future of universities

One element that has been largely missing from my ongoing rambling analysis of economies, corporations, co-ops, and institutions is higher education and universities. Of course, universities are institutions, and function in many ways like large corporations, but, nostalgia notwithstanding, I don’t think it’s really possible to exempt universities or dismiss them from this conversation.

Oh, and there was this rather interesting--but remarkably mundane--article that I clipped recently that addressed where universities are “going” in the next decade or two. I say mundane because I think the “look, there’s new technology that’s changing the rules” game is crappy futurism, and it really fails to get at the core of what kinds of developments we may expect to see in the coming years.

Nevertheless… Shall we begin? I think so:

  • The expansion of universities in the last 60 years or so has been fueled by the GI Bill and the expansion of the student-loan industry. With the “population bubble” changing, and the credit market changing, universities will have to change. How they change is, of course, up in the air.
  • There aren’t many alternatives to “liberal arts/general education” post-secondary education for people who don’t want, need, or have the preparation for that kind of education at age 18. While I’m a big proponent (and product) of a liberal arts education, there are many paths to becoming a well-rounded and well-educated adult, and they don’t all lead through traditional four-year college educations (or equivalents), particularly at age 18.
  • Technology is already changing higher education and scholarship, in all likelihood faster than it is changing other aspects of our culture (publishing, media production, civic engagement, etc.). Like all of these developments, however, the changes in higher education are probably not as revolutionary as the article suggests.
  • Degree-granting institutions will probably always be a “useful” part of our society, but I think “The College” will change significantly, and the forthcoming changes probably have less to do with education and the classroom than with the evolving role of the faculty.
  • As part of the decline of tenure-systems, I expect that eventually we’ll see a greater separation (but not total disconnect) between the institutions which employ and sponsor scholarship, and the institutions that educate students.
  • It strikes me that most of the systems that universities use to convey education online (Blackboard, Moodle, etc.) are hopelessly flawed. Whether by virtue of being difficult and “gawky” to use, or because they’re proprietary systems, or because they’re not designed for the task at hand, all of the systems that I’m aware of are as much roadblocks to the adoption of new technology in education as anything else.
  • Although quality information (effectively presented, even) is increasingly available online for free, the things that make this information valuable in the university setting--interactivity, feedback on progress, individual attention, and the validation and certification of mastery--are exactly the things that universities (particularly “research”-grade institutions) perform least successfully.
  • We’ve been seeing research and popular press pieces on the phenomenon of “prolonged adolescence,” where young people tend to have a period of several years post-graduation during which they have to figure out “what next”: sometimes there’s graduate school, sometimes there are odd jobs. I’ve become convinced that, in an effort to fill the gap between “vocational education” and “liberal arts/gen ed.,” we’ve gotten to the point where we ask people who are 18 (and, for the most part, don’t have a clue what they want to do with their lives) to make decisions about their careers that are pretty absurd. Other kinds of educational options should exist that might help resolve this issue.

Interestingly, these thoughts didn’t have very much to do with technology. I guess I mostly feel that the changes in technology are secondary to the larger economic forces likely to affect universities in the coming years. Unless the singularity comes first.

Your thoughts, as always, are more than welcome.

the dark singularity

I read a pretty cool interview with Vernor Vinge in H+ magazine, where he talked about the coming technological singularity, and I found it really productive. I’ve read and participated in a lot of criticism of “Singularity Theory,” where people argue that the singularity is just a mystification of the process of normal technological development, that all this attention to the technology distracts from “real” issues, and/or that the singularity is too abstract, too distant, and will only be recognizable in retrospect.

From reading Vinge’s comments, I’ve come to several realizations:

  • Vinge’s concept of the singularity is pretty narrow, and relates to the effect of creating human-grade information technology. Right now, there are a lot of things that humans can do that machines can’t. The singularity, then, is the point where that changes.
  • I liked how--and I find this to be the case with most “science theory”--the scientists often have very narrow theories while the popular press forces a much broader interpretation. I think we get too caught up in thinking about the singularity as this cool amazing thing that is the nerd version of “the second coming,” and forget that the singularity would really mark the end of society and culture as we know it now. It’s a rather frightening proposition.
  • Vinge’s comparison of the singularity to the development of the printing press is productive. He argues that the printing press was conceivable before Gutenberg (they had books, even if the effects were, admittedly, unimaginable), in a way that the singularity isn’t conceivable to us given the current state of our lives and technology. In a lot of ways, we tend to focus on the technological developments required for the singularity without attending to the social and cultural facts. The singularity is really about the outsourcing of cognition (writing, computers, etc.) rather than cramming more computing power onto our microchips.

As I begin to understand this a bit better--it’s pretty difficult to grok--I’ve begun to think about the singularity and post-singular experience as a much darker possibility than I had heretofore. There are a lot of problems with “the human era,” and I think technology, particularly as humans interact with technology (e.g. the cyborg), is pretty amazing. So why wouldn’t the singularity be made of awesome?

Because it wouldn’t be--to borrow an idea from William Gibson--evenly distributed. The post-human era might begin with the advent of singularity-grade intelligences, but there will be a lot of humans left hanging around in the post-human age. Talk about class politics!

Secondly, the singularity represents the end of our society in a very real sort of sense. Maybe literature, art, journalism, manufacturing, farming, computer terminals and their operating systems (lending a whole new meaning to the idea of a “dumb terminal”), and the Internet will continue to be relevant in a post-human age. But probably not exactly. While the means by which these activities and cultural pursuits might be obsoleted (tweaking metabolisms, organic memory transfer, inboard computer interfaces) are interesting, the death of a culture is often a difficult and trying process, particularly for the people (academics, educators, writers, artists, etc.) who are invested in it. “Unintelligible” is sort of hard to grasp.

And, I think, frightening as a result. Perhaps that’s the largest lesson that I got from Vinge’s responses: the singularity is on many levels something to be feared; when you think about the singularity, the response on some visceral level should be “I’d really like to avoid that,” rather than “Wouldn’t it be cool if this happened?”

And somehow that’s pretty refreshing. At least for me.

Futuristic Science Fiction

If you ask science fiction writers about the future, about what they think is going to be the next big cultural or technological breakthrough, they all say something like, “science fiction is about the present, dontchaknow; the future just makes it easier to talk about the present without getting in trouble.”

While this is true, it always sounds (to me) like an attempt to force the “mainstream” to take science fiction more seriously. It’s harder to be dismissive of people who define their work in terms to which you’re sympathetic.

I’m of the opinion that when disciplines and genres get really defensive and insistent in making arguments for their own relevancy, it usually reflects some significant doubt.

Science fiction reflects the present and comments on the present; this is quite true (and key to the genre), but it’s also about the future. And that’s ok. Thinking about the future, about possibilities, more than the opportunity for critique, is (part of) what makes this genre so powerful and culturally useful. To deny this is to draw attention away from imaginations of the future, and to sacrifice part of what probably makes the genre so important.