On Wireless Data

It's easy to look around at all of the "smart phones," iPads, and wireless modems and think that the future is here, or even that we're living on the cusp of a new technological moment. While wireless data is amazing, particularly with respect to where it was a few years ago--and enhanced by a better understanding of how to make use of it--it is also true that we're not there yet.

And maybe, given a few years, we'll get there. But it'll be a while. The problem is that too much of the way we use the Internet these days assumes high-quality connections to the network. Wireless connections are low quality regardless of speed: latency is high and dropped packets are common. Some measures can be taken to speed up the transmission of data once connections are established, and this can give the impression of better quality, but the effect is mostly illusory.

Indeed, in a lot of ways the largest recent advancements in wireless technology have been in how applications and platforms are designed for the wireless context, rather than in the wireless transmission technology itself. Much of the development in the wireless space in the last two or three years has revolved around making a little bit of data go a long way, using the (remarkably powerful) devices for more of the application's work, and figuring out how to cache some data for "offline use" when it's difficult to use the radio. These are problems that can be addressed and largely solved in software, although limitations and inconsistencies in approach continue to affect user experience.

As a result, we have a couple of conditions. First, we can transmit a lot of data over the air without much trouble, but data integrity and latency (speed) are things we may have to give up on. Second, application development paradigms that can take advantage of this will succeed. Furthermore, I think it's fairly safe to say that successful mobile technology will develop in this direction rather than against these trends. Actual real-time mobile technology is dead in the water, although I think some simulated real-time communication works quite well in these contexts.

Practically, this means applications that tap an API for data that is mostly processed locally. Queue-compatible message passing systems that don't require persistent connections. Software and protocols that don't assume you're always "online" and that can store transmissions gracefully until you come out of the subway or get off of a train. Of course, this also means that applications and systems that are efficient in their use of data will be more successful.

The notion that fewer transmissions consisting of bigger "globs" of data will yield better performance than a large number of very small transmissions still seems terribly foreign. It shouldn't be; this stuff has been around for a while, but nevertheless here we are.
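
To sketch what that looks like in practice--and this is a minimal illustration under assumed names, not any particular platform's API--an application might append outgoing messages to a local queue and only spin up the radio when it's actually online and has a worthwhile glob to send:

    import json
    import time
    from collections import deque

    class StoreAndForwardQueue:
        """Queue outgoing messages locally; flush them in batches when online."""

        def __init__(self, send_batch, is_online, batch_size=20, max_wait=60):
            self.pending = deque()
            self.send_batch = send_batch  # hypothetical callable that transmits one glob of data
            self.is_online = is_online    # hypothetical callable that reports connectivity
            self.batch_size = batch_size
            self.max_wait = max_wait

        def enqueue(self, message):
            # Never block the interface on the network: just record the message locally.
            self.pending.append({"body": message, "queued_at": time.time()})
            self.flush()

        def flush(self):
            # Use the radio only when connected, and only for a worthwhile batch
            # (or when the oldest message has waited long enough).
            if not self.pending or not self.is_online():
                return
            waited = time.time() - self.pending[0]["queued_at"]
            if len(self.pending) < self.batch_size and waited < self.max_wait:
                return
            batch = [self.pending.popleft() for _ in range(len(self.pending))]
            try:
                self.send_batch(json.dumps(batch))
            except OSError:
                # The connection dropped mid-send: put everything back, in order,
                # and try again the next time flush() is called.
                self.pending.extendleft(reversed(batch))

Nothing here depends on a particular protocol; the point is simply that the application keeps working when the radio doesn't, and that when the radio is used it moves one reasonably sized transmission instead of many tiny ones.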

Isn't the future grand?

New Technology

I was originally going to write this post as a "reasons I don't need a new computer" piece, explaining my current setup (one laptop, a virtual server, and a lot of baling wire) and arguing that despite some problems (a lack of local redundancy and a small screen) a new computer wasn't exactly warranted. I wanted one, though, particularly after seeing the new MacBook Air, and I've long thought about getting a 15 inch laptop, as I still lament my last 15 inch machine. But since I didn't really need a new machine and there wasn't a convincing reason to upgrade, I was going to write about good reasons to avoid upgrading just 'cause.

Clearly I failed.

Particularly, since I'm writing this post from a new laptop.

A few weeks ago I saw a very good deal on a current-model 15" Lenovo ThinkPad (T510) with all of the specifications that I wanted: the higher resolution screen, integrated Intel graphics and wireless, a bunch of RAM (4GB), and a 7200rpm drive. It even has a Core i7 processor (quad core), which was a pleasant bonus, and so I went for it.

I'm quite happy with it. Besides being a great deal and in many ways an ideal machine, I decided that being dependent on one (and only one!) system for all work and non-work computing was probably a bad idea. Additionally, I've wanted to reorganize my laptop's hard drive partitions in a way that requires at least a short period of downtime, a process I didn't want to attempt without some sort of backup.

It took me a few days to get everything sorted out on the new machine, as it usually does, and there are some cool new things I have yet to iron out, mostly around figuring out some virtualization technology to do awesome things with this system. But for the day-to-day stuff, it's perfect and works just as I like.

This is the first time in several years that I've regularly used two systems for day-to-day work, and it's the kind of thing I've tended to avoid as much as possible: it's just a hassle to keep everything synchronized when switching between systems. I've got a pretty clever setup sketched out that I hope to be able to share with you all shortly.

In the end, this might not have been an absolutely essential purchase, but I think it was wise (in terms of redundancy), it makes some interesting things possible (virtualization, more processor-intensive tasks), and for the kinds of things I do, the extra screen space is much appreciated.

I'm sure I'll write here from time to time about these things, but for the moment: Onward and Upward!

Against Open Stacks

I have misgivings about OpenStack. OpenStack is an open source "cloud" infrastructure/virtualization platform that allows providers to create on-demand computing instances, as if "in the cloud," but running on their own systems. This kind of thing is generally referred to as a "private cloud," but as with all things in the "cloud space," it's a relatively nebulous concept.

To disclose: I am employed by a company that does work in this space, and it isn't the company responsible for OpenStack. I hope this provides a special perspective, but I am aware that my judgment is very likely clouded. As it were.

Let us start from the beginning and talk generally about what's on the table here. Recently, the technology that allows us to virtualize multiple instances on a single piece of hardware has gotten a lot more robust, easy to use, and performant. At the same time, for the most part the (open source) "industrial-grade" virtualization technology isn't particularly easy to use or configure. It can be done, of course, but it's non-trivial. These configurations and the automation that glues it all together--and the quality therein--are how cloud providers differentiate themselves.

On some level, "the cloud" as a phenomenon is about the complete conversion of hardware into a commodity. Not only is hardware cheap, but it's so cheap that we can do most of what hardware does in software. The open sourcing of this software as "OpenStack" pushes this one step further and says that the software is a commodity as well.

It was bound to happen at some point; it's just a curious move, and probably one that's indicative of something else in the works.

The OpenStack phenomenon is intensely interesting for a couple of reasons. First, it has a lot of aspects of contemporary commercial uses of open source: the project has a single contributor, and initial development grew out of the work of one company that developed the software for internal use and then said "hrm, I guess we can open source it." Second, if I understand correctly, there isn't much in OpenStack that isn't already open source software (aside from a bunch of glue and scripts), which is abnormal.

I'm not sure where this leads us. I've been mulling over what it all means for a while, and have largely ended up here: it's an interesting move, if an incredibly weird one, and it's hard to really understand what's going on.

Ideology and Systems Administration

I do some work as a systems administrator, both personally and for friends, and I work with a lot of admins, but I don't really think of myself as a sysadmin--though you may feel free to argue the point. Nevertheless, I spend a lot of time trying to figure out the way systems administrators think and work. This makes sense, as my professional work is written for entry-level systems administrators and I work with a bunch of admins, but I think it's probably bigger than that. This post is part of an ongoing thread on dialectical futurism about systems administration and its implications.

The best systems administrators go unnoticed and unremarked. When a system is working smoothly, it just works, and no one has reason to think about who is maintaining it. Thus, to be a better systems administrator you have to become confident in your abilities (leading to a somewhat grounded stereotype of arrogance) and you have to be resistant to change.

For example, take this slide deck of a systems administration problem. It presents a thorny sysadmin puzzle: the chmod utility (which is used, among other things, to mark files as executable) has itself been marked unexecutable. The presentation goes through a number of different methods of fixing this, but (spoiler alert) the final answer amounts to "the easy fix is to reboot the machine and fix it then (or something); the machine's running, so there isn't really a problem." While this is a funny example, I think it's also largely a true example of the way systems administrators approach and resolve problems.
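
Just to illustrate the flavor of the clever fixes such a puzzle invites (this is a minimal sketch of my own, not something taken from the slides), you can restore the execute bit without ever running the chmod binary by making the underlying system call from another program:

    import os
    import stat

    # Hypothetical scenario from the puzzle: /bin/chmod has lost its execute bit.
    # os.chmod() goes straight to the chmod(2) system call, so the broken utility
    # itself is never needed. (Run as root, naturally.)
    path = "/bin/chmod"
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

The sysadmin answer, of course, is that none of this is necessary as long as the machine keeps running.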

I've seen this kind of "well, it may not be perfect, but it works" logic, as well as the "is it really worth building something new and different that might be better?" reasoning, at work, and I think it's probably apparent in all sorts of free software and other discussion forums where sysadmins discuss things.

Thus, I wonder: Does this ideology extend beyond the administration of systems and into other spheres of life and thinking? About technology? About politics and economics? I'm not sure, though I'm of course inclined to say yes, and I think it's something that requires some deliberation, and further thinking.

I look forward to hearing your thoughts, and figuring out the best way to answer this question.

Onward and Upward!

Bitlbee, The Wrong Solution that Works

About a week ago, as of this writing, I switched all of my instant messaging to a little program called Bitlbee. Basically, it's a program that runs locally as an IRC server, connects to various instant messaging and "presence" protocols, and exposes them to the end-user client as if they were IRC. Weird.

This is, emphatically, the wrong solution to the problem of finding a sane way to consume real-time information (e.g. instant messaging, Twitter, XMPP, etc.). Previously, I'd been using an XMPP-only client and running jabber-to-IM transports on the server, which I think is more of a right solution. Why, then, did I switch?

  • I wanted to use irssi, which I think is one of the most cleverly designed and useful pieces of software out there.

  • Transports that allow XMPP to interact with other services are an ideal solution, and I think the inclusion of transports in the design of the XMPP protocol is a major selling point for the technology. At the same time, even the most stable transports aren't terribly stable, and while there could be transport widgets for all sorts of things, there are only a few general-purpose transports.

    Practically speaking, the jabber-to-AIM transport I had been using had a habit of dying without cause once or twice a week, and it used a lot of system resources for something that could (should?) have been much simpler.

  • The truth is that while XMPP is a nifty technology, and I really enjoy using it, I'm starting to think it's not realistic to expect XMPP to replace IRC, as the two accomplish different things for their users. So while I always saw Bitlbee as "giving in to IRC," it's really just an interface. And frankly, IRC clients do IM better than IM clients do IRC.

  • Bitlbee works really well as a client for Facebook chat (which is a weird XMPP flavor) and is a functional Twitter client. Combined with the delight of using irssi, I'm able to really interact on these networks without having to spend too much brain power sifting through crud.

So here I am. Switched. The buddy list in Bitlbee leaves something to be desired (but then, I have a particularly large buddy list), and I've yet to get used to the syntax for creating and administering group chats inside of Bitlbee, but other than that? It's pretty rocking.

Onward and Upward!

Phone Torched

I mentioned in a recent update post that I had gotten a new cell phone, which, given who I am and how I interact with technology, means that I've been thinking about things like the shifting role of cell phones in the world, the way we actually use mobile technology, the ways the technology has failed to live up to our expectations, and, of course, the current state of the "smart phone" market. Of course.


I think even two years ago, quasi-general-purpose mobile computers (e.g. smart phones) were not nearly as ubiquitous as they are today. The rising tide of the iPhone has, without a doubt, lifted the boat of general smart phone adoption. Which is to say that the technology reached a point where these kinds of devices--computers--are of enough use to most people that widespread adoption makes sense. We've reached a tipping point, and the iPhone was there at the right moment and has become the primary exemplar of this moment.

That's probably neither here nor there.

With more and more people connected to cyberspace in an independent and mobile way, via either simple phones (which more clearly match Gibson's original intentions for the term) or smart phones, I think we might begin to think about the cultural impact of having so many people so connected. Cell phone numbers become not just convenient, but in many ways complete markers of identity and personhood. Texting overtakes phone calls in most situations as the main way people interact with each other in cyberspace, so even where phone calls may be irrelevant, SMS has become the unified instant messaging platform.

As you start to add things like data to the equation, I think the potential impact is huge. I spent a couple of weeks with my primary personal Internet connection running through my phone, and while it wasn't ideal, the truth is that it didn't fail me too much. SSH on BlackBerries isn't ideal, particularly if you need a lot from your console sessions, but it's passable. The jump from "I really can't cut this on my phone" to "almost passable" is probably the biggest jump of all. The series of successive jumps over the next few years will be easier.

Lest you think I'm all sunshine and optimism, I think there are some definite shortcomings in contemporary cell phone technology. In brief:

  • There are things I'd like to be able to do with my phone that I really can't do effectively, notably seamlessly syncing files and notes between my phone and my desktop computer/server. There aren't even really passable note-taking applications.
  • There is a class of really fundamental computer functionality that could theoretically work on the phone but doesn't, because the software doesn't exist or is of particularly poor quality. I'm thinking of SSH and note taking, but also of things like non-Gmail Jabber/XMPP functionality.
  • Some functionality which really ought to be more mature than it is (e.g. music playing) is still really awkward on phones, and better suited to dedicated devices (e.g. iPods) or to regular computers.

The central feature of all of these complaints is software: they are issues of software design, and of the ability to really design for this kind of form factor. There are some hard limitations: undesirable input methods, small displays, limited bandwidth, unreliable connectivity, and so forth. And while some of these may improve (e.g. connectivity, display size), it is also true that we need to get better at designing applications and useful functionality in this context.

My answer to the problem of designing applications for the mobile context will seem familiar if you know me.

I'd argue that we need applications that are less dependent upon a connection and better able to cache content locally. I think the Kindle is a great example of this kind of design. The Kindle is very much dependent upon having a data connection, but if the device falls offline for a few moments, in most cases no functionality is lost. Sure, you can do really awesome things if you assume that everyone has a really fat pipe going to their phone, but that's not realistic, and the less you depend on the connection, the better the user experience is.
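
For what it's worth, the read side of this kind of design is simple enough to sketch. This is a minimal illustration only; the cache location, staleness window, and fetch function are all assumptions of mine, not any particular platform's API:

    import json
    import os
    import time

    CACHE_PATH = os.path.expanduser("~/.cache/reader/articles.json")  # hypothetical location
    MAX_AGE = 15 * 60  # refresh opportunistically after fifteen minutes

    def load_articles(fetch_from_network):
        """Serve cached content first; only touch the network opportunistically."""
        cached = None
        if os.path.exists(CACHE_PATH):
            with open(CACHE_PATH) as f:
                cached = json.load(f)
        # If the cache is fresh enough, don't use the radio at all.
        if cached and time.time() - cached["fetched_at"] < MAX_AGE:
            return cached["articles"]
        try:
            articles = fetch_from_network()
        except OSError:
            # Offline: degrade gracefully to whatever we have, stale or not.
            return cached["articles"] if cached else []
        os.makedirs(os.path.dirname(CACHE_PATH), exist_ok=True)
        with open(CACHE_PATH, "w") as f:
            json.dump({"fetched_at": time.time(), "articles": articles}, f)
        return articles

The design choice is the same one the Kindle makes: the connection improves the experience when it's there, but its absence never takes functionality away.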

Secondly, give users as much control as possible over the display, rendering, and interaction model that their software/data uses. This, if implemented consistently (difficult, admittedly), means that users can have global control over their experience and won't be confused by different interaction models between applications.

Although the future is already here, I think it's also fair to say that it'll be really quite interesting to see what happens next. I'd like a chance to think a bit about the place of open source on mobile devices, and also about the interaction between the kind of software that we see on mobile devices and what's happening in the so-called "cloud computing" world. In the meantime...

Outward and Upward!

Wikis are not Documentation

It seems I'm writing a minor series on the current status (and possible future direction?) of technical writing and documentation efforts--both to establish a foundation for my own professional relevancy, and for its own sake, because I think documentation has the potential to shape the way people are able to use technology. I started out with Technical Writing Appreciation, and this post will address a few sore points regarding the use of wikis as tools for constructing documentation.

At the broadest level, I think there's a persistent myth regarding the nature of wikis and the creation of content in them, quite apart from their potential use in documentation projects. Wikis are easy to install and create. It is easy to say "I'm making a wiki, please contribute!" It is incredibly difficult to take a project idea and wiki software and turn them into a useful and vibrant community and resource. Perhaps these challenges arise from the fact that wikis require intense stewardship and attention, and this job usually falls to a very dedicated leader or a small core of lead editors. Also, since authorship on wikis is diffuse and not often credited, getting this kind of leadership, and therefore successfully starting communities around wiki projects, can be very difficult.

All wikis are like this. At the same time, I think the specific needs of technical documentation make these issues even more pronounced. This isn't to say that wiki software can't power documentation teams, but the "wiki process," as we might think of it, is particularly unsuited to documentation.

One nearly universal truth of technical writing is that the crafting of texts is the smallest portion of the effort of making documentation. Gathering information, background, and experience with a particular tool or technology is incredibly time consuming. Narrowing all of this information down into something that is useful to someone is a considerable task. The wiki process is really great for the evolutionary creation of a text, but it's not particularly conducive to the kind of process that documentation must go through.

Wikis basically say "here's a simple editing interface without any unnecessary structure: go and edit; we don't care about the structure or organization, you can take care of that as a personal/social problem." Fundamentally, documentation requires the opposite approach: once a project is underway and some decisions have been made, organization isn't the kind of thing you want to have to wrestle with manually, and structure is very necessary. Wikis might be useful content generation and publication tools, but they are probably not suited to supporting the workflow of a documentation project.

What then?

I think the idea of a structured wiki, as presented by TWiki, has potential, but I don't have a lot of experience with it. My day-job project uses an internally developed tool and a lot of internal procedures to enforce certain conventions. I suspect there are publication, collaboration, and project management tools designed to solve this problem, but I'm not particularly familiar with anything specific. In any case, it's not a wiki.

Do you have thoughts? Have I missed something? I look forward to hearing from you in comments!

Technical Writing Appreciation

I'm a technical writer. This is a realization that has taken me some time to appreciate and understand fully.

Technical writing is one of those things that creators of technology (a term I will use liberally) all agree is required, but it's also something that's very difficult to do properly. I think this difficulty springs from the following concerns: What constitutes describing a technology or process in too much detail? Not enough detail? Can all users of a technology make use of the same set of documentation? If users are too diverse, what is the best way to make sure their needs are addressed: do we write parallel documentation for all classes of users, or do we try to bring less advanced users up to speed so that the core documentation is useful to everyone?

The answers to these questions vary, of course, with the needs of the product being documented and its use cases, but I think resolving these concerns presents a considerable challenge to any technical documentation project, and the way documentation developers resolve them can have a profound effect not only on the documentation itself but on its value and usefulness. As I've been thinking about the utility and value of technical writing (a professional hazard), I've come up with a brief taxonomy of approaches to technical writing:

  • First, there's the document-everything approach. Starting with a full list of features (or even an application's source), the goal here is to make sure that no corner is left unturned. We might think of this as the "manual" approach, as the goal is to produce a comprehensive manual. These are great reference materials, particularly when indexed effectively, but the truth is that they're really difficult for users to engage with, even though they may have all the answers to a user's questions (e.g. "RTFM"). I suspect that the people who write this kind of documentation either work closely with developers or are themselves developers.
  • Second, there's what I think of as the systems or solutions document, which gives up comprehensiveness, and perhaps even isolation to a single tool or application, and documents outcomes and processes. These aren't as detailed, and so might not answer underlying questions, but when done effectively they provide an ideal entry point into using a new technology. In contrast to the "manual," these documents are either of slightly more general interest or read like "white papers." This class of documentation thus not only explains how to accomplish specific goals but also illuminates technical possibilities and opportunities that may not be clear from a function-based documentation approach. I strongly suspect that the producers of this kind of documentation are very rarely the people who develop the application itself.
  • In contrast to the above, I think documentation written for education and training purposes may look like either a "manual" or a "white paper," but it has a fundamentally different organization and set of requirements. Documentation that supports training is often (I suspect) developed in concert with the training program itself; it needs to impart a deeper understanding of how a system works (like the content of a manual), but it doesn't need to be comprehensive, and it needs to mirror the general narrative and goals of the training program.
  • Finally, process documentation is most like solution documentation, but rather than capturing unrealized technological possibilities or describing potentially hypothetical goals, these documents capture largely institutional knowledge to more effectively manage succession (both by future iterations of ourselves and by our replacements). These documents have perhaps the most limited audience, but they are incredibly valuable both archivally (e.g. "How did we used to do $*?") and for maintaining consistency, particularly across teams as well as for specific tasks.

I think the fundamental lesson regarding documentation here isn't that every piece of technology needs lots and lots of documentation, but rather that, depending on the goals of a particular technology development program or set of tools, different kinds of documentation may be appropriate and more useful in different situations.

As a secondary conclusion, or a direction for more research: I'd be interested in figuring out whether there are systems that allow technical writers (and development teams) to collect multiple kinds of information and produce multiple kinds of documentation for different organizations--being able to automatically generate different wholes out of documentation "objects," if we may be so bold.
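
To make the idea a little more concrete, here is a minimal sketch of what that might look like. Everything in it--the topics, the tags, the deliverables--is entirely hypothetical, not drawn from any existing tool:

    # A hypothetical single-source setup: small reusable "topic" objects are
    # tagged by audience, and each deliverable is just a named selection of
    # topics rendered into one output.

    TOPICS = {
        "install": {"tags": {"manual", "training"}, "text": "How to install the tool..."},
        "internals": {"tags": {"manual"}, "text": "How the pieces fit together..."},
        "first-steps": {"tags": {"training", "white-paper"}, "text": "A guided first session..."},
        "backup-procedure": {"tags": {"process"}, "text": "What we do every Friday..."},
    }

    DELIVERABLES = {
        "administrators-manual": {"manual"},
        "training-packet": {"training"},
        "operations-runbook": {"process"},
    }

    def build(deliverable):
        """Assemble one document from every topic tagged for it."""
        wanted = DELIVERABLES[deliverable]
        sections = [t["text"] for t in TOPICS.values() if t["tags"] & wanted]
        return "\n\n".join(sections)

    if __name__ == "__main__":
        for name in DELIVERABLES:
            print("=== %s ===" % name)
            print(build(name))

The interesting part isn't the code, of course, but the editorial discipline of writing topics that can stand on their own in more than one whole.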

I must look into this. Onward and Upward!