Three Predictions

Ok folks, here we go. I’ve put the word futurism in the title of the blog and it’s near the end of the calendar year, so I think it’s fair that I do a little bit of wild prediction about the coming year (and some change). Here goes:

Open technology will increasingly be embedded into “closed technology”

The astute among you will say, “but that is already the case”: the Motorola Razr cell phone that my grandmother has runs the Linux Kernel (I think). And there’s the TiVo; let’s not forget the TiVo. Or, for that matter, the fact that Google has--for internal use, of course--a highly modified branch of the Linux Kernel that will never see the light of day.

That’s old news, and in a lot of ways it reflects some of the intended and unintended business models that naturally exist around Open Source.

I’m not so much talking, in this case, about “openness” as a property of code, but rather about openness as a property of technology, referring to long-running efforts like XMPP and OpenID. These technologies exist outside of the continuum of free and proprietary code, but they promote the cyborg functioning of networks in a transparent and networked way.

XMPP says: if you want to do real-time communication, here’s the infrastructure in which to do it, and we’ve worked through all the interactions so that if you want to interact with a loose federation (like a “web”) of other users and servers, here’s how.
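To make that concrete: the basic unit of XMPP is an XML “stanza” addressed with email-style, federated identifiers, and your server relays it to the recipient’s server. Here’s a minimal sketch of what a message stanza looks like, built with nothing but Python’s standard library; the addresses are made up, and a real client would send this over an authenticated XML stream rather than printing it.

    # A sketch of an XMPP message stanza. The JIDs (user@host addresses)
    # are hypothetical; a real client sends this over an authenticated,
    # long-lived XML stream to its own server, which relays it onward.
    import xml.etree.ElementTree as ET

    message = ET.Element("message", {
        "from": "alice@example.org/laptop",  # hypothetical sender
        "to": "bob@jabber.example.net",      # hypothetical recipient on another server
        "type": "chat",
    })
    ET.SubElement(message, "body").text = "Federation: my server hands this to yours."

    print(ET.tostring(message, encoding="unicode"))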

OpenID solves the “single sign on” problem by creating an infrastructure that lets developers say, “if you’re authenticated to a third-party site, and you tell me that authenticating to that third-party site is good enough to verify your identity, then it’s good enough for us.” This makes it possible to preserve a consistent identity between sites, it means you only have to pass credentials to one site, and I find the user experience to be better as well.

In any case, we’ve seen both of these technologies get swallowed up into closed technologies more and more. Google Wave and Google Talk use a lot of XMPP, and most people don’t know this unless they’re huge geeks (compared to the norm). Similarly, even though it’s incredibly easy to run and delegate OpenIDs through third parties, the main way that people sign into OpenID sites is with their Flickr or Google accounts.
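(For the curious, “delegation” here is mostly a couple of link tags in the head of your homepage that point at whatever provider you trust; a relying party fetches your URL and reads them out. Here’s a rough sketch of that discovery step using only the Python standard library; the URLs are hypothetical, and error handling is left out.)

    # A rough sketch of OpenID's HTML-based discovery step: fetch an
    # identity URL and look for the provider/delegate <link> tags.
    # The identity URL below is hypothetical.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class OpenIDLinkParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = {}

        def handle_starttag(self, tag, attrs):
            if tag != "link":
                return
            attrs = dict(attrs)
            rel, href = attrs.get("rel", ""), attrs.get("href")
            # OpenID 1.x uses openid.server/openid.delegate;
            # OpenID 2.0 uses openid2.provider/openid2.local_id.
            if href and rel.startswith(("openid.", "openid2.")):
                self.links[rel] = href

    identity_url = "https://example.com/~alice/"  # hypothetical identity page
    parser = OpenIDLinkParser()
    parser.feed(urlopen(identity_url).read().decode("utf-8", "replace"))
    print(parser.links)  # e.g. {'openid2.provider': 'https://provider.example/auth'}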

I’m not saying that either of these things is bad, but I think we’re going to see a whole lot more of this.

A major player in the content industry will release a digital subscription plan

I think perhaps the most viable method for “big content” to survive in the next year or so will be to make content accessible as part of a subscription model. Pay 10 to 20 dollars a month and have access to some set quantity of stuff. Turn it back in, and they give you more bits. Someone’s going to do this: Amazon, Apple, Comcast, etc.

It’s definitely a holdover from the paper days when content was more scarce. But it gets us away from this crazy idea that we own the stuff we download with DRM, it makes content accessible, and it probably allows the price of devices to shoot down (to nominal amounts). While it probably isn’t perfect, it’s probably sustainable, and it is a step in the right direction.

Virtualization technology will filter down to the desktop

We have seen tools like VirtualBox and various commercial products become increasingly prevalent in the past couple of years as a way to decrease the impact of operating-system-bound compatibility issues. This is a good thing, but I think it’s going to go way further, and we’ll start to see this technology show up on desktops in really significant ways. I don’t think desktop computing is in need of the same kind of massive parallelism that we need on servers, but I think we’ll see a couple of other tertiary applications of this technology.

First, I think hypervisors will abstract hardware interfaces away from operating systems. No matter what kind of hardware you have or what its native method of communication is, the operating system will be able to interact with it in a uniform manner.

Second, there are a number of functions for manipulating running images that I think operating system developers might be able to take advantage of. First, the ability to pause, restart, and snapshot the execution state of a running virtual machine has a lot of benefit: a rolling snapshot of execution state makes suspending laptops much easier, it makes consistent desktop power less crucial, and so forth.
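As a rough sketch of what’s already possible on servers (and what a desktop hypervisor would make routine), here’s how pausing and saving a guest’s execution state looks through the libvirt Python bindings. This assumes libvirt is installed and that a guest named “desktop-guest” exists; the name and the state-file path are made up.

    # Pause, resume, and save the execution state of a running guest via
    # the libvirt Python bindings. The guest name and file path are
    # hypothetical.
    import libvirt

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    dom = conn.lookupByName("desktop-guest")

    dom.suspend()                           # pause execution in place
    dom.resume()                            # ...and pick it right back up

    # Or write the whole execution state to disk, laptop-suspend style,
    # and restore it later (even after the host reboots).
    dom.save("/var/tmp/desktop-guest.state")
    conn.restore("/var/tmp/desktop-guest.state")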

Finally, system maintenance is much easier. We lose installation processes: rather than getting an executable that explodes over our file system and installs an operating system, we just get a bootable image. We can more easily roll back to known good states.

Not to mention the fact that it creates a lot of economic benefits. You don’t need IT departments maintaining desktops, you just have a guy making desktop images and deploying them. Creating niche operating system images and builds is a valuable service. Hardware vendors and operating system vendors get more control over their services.

There are disadvantages: very slight performance hits, hypervisor technology that isn’t quite there yet, increased disk requirements. But soon.

Soon indeed.

Desktop Virtualization and Operating Systems

So what’s the answer to all this operating system and hardware driver angst?

I’m going to make the argument that the answer, insofar as there is one, is probably virtualization.

But wait, tycho, this virtualization stuff is all about servers. Right?

Heretofore, virtualization technology--the stuff that lets us take a single very powerful piece of hardware and run multiple instances of an operating system that, in most ways, “think of themselves” as being actual physical computers--has been used in the server world, as a way of “consolidating” and better utilizing the potential of given hardware. This is largely because hardware has become so powerful that it’s hard to write software that really leverages it effectively, and there are some other benefits that make managing physical servers “virtually” a generally good thing. There aren’t a lot of people who would be skeptical of this assertion, I think.

But on desktops? On servers, where users access the computer over a network connection, it makes sense to put a number of “logical machines” on a physical machine. On a desktop machine this doesn’t make a lot of sense; after all, we generally interact with the physicality of the machine, so having multiple, concurrently running operating systems on your desk (or in your lap!) doesn’t seem to provide a great benefit. I’d suggest the following two possibilities:

  • Hypervisors (i.e. the technology that talks to the hardware and to the operating system instances running on the hardware) abstract away the driver problem. The hypervisor’s real job is to talk to the actual hardware and provide a hardware-like interface to the “guest operating systems.” It turns out this technology is 80-90% of where it needs to be for desktop usage. This makes the driver problem a little easier to solve.
  • Application-specific operating systems. One of the problems with desktop usability in recent years is that we’ve been building interfaces that need to do everything, as people use computers for everything. This makes operating systems and stacks difficult to design and support, and there are all sorts of unforeseen interactions between all of the different things that we do, which doesn’t help things. Desktop virtualization might allow us to develop very slim operating systems that are exceedingly reliable and portable, but also very limited in what they can accomplish. Which is ok, because we could have any number of them on a given computer.

I only need one instance of an operating system on my computer, so why do you want me to have more?

See above for a couple of “ways desktop hypervisors may promote the growth of technology.” Beyond that, there are a number of other features that desktop virtualization would convey to users, but it mostly boils down to “easier management and backup.”

If the “machine” is running in a container on top of a hypervisor, it’s relatively easy to move it to a different machine (the worst thing that could happen is that the virtual machine would have to be rebooted, and even then, not always). It’s easy to snapshot known working states. It’s easy to redeploy a base image of an operating system in moments. These are all things that are, when we live “on the metal,” quite difficult at the moment.
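To give a flavor of the “snapshot known working states” point, here’s a rough sketch, again with the libvirt Python bindings and a hypothetical guest name: record a named snapshot while everything works, and roll back to it when an experiment goes sideways.

    # Snapshot a known-good state and revert to it later, via the libvirt
    # Python bindings. The guest name is hypothetical.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("desktop-guest")

    # Record the current, working state under a memorable name.
    dom.snapshotCreateXML(
        "<domainsnapshot><name>known-good</name></domainsnapshot>", 0)

    # Later, when an update or experiment breaks things, roll back.
    snap = dom.snapshotLookupByName("known-good", 0)
    dom.revertToSnapshot(snap, 0)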

For the record, I don’t think anyone is ever really going to have more than five (or so) instances running on their machine, but it seems like there’s a lot of room for some useful applications around five machines.

And let’s face it, TCP/IP is the mode of inter-process communication these days, so I don’t think application architectures would likely change all that much.

Won’t desktop hypervisors have the same sorts of problems that “conventional operating systems” have today? You’re just moving the problem around.

If you’re talking about the drivers problem discussed earlier, then in a manner of speaking, yes. Hypervisors would need to be able to support all kinds of hardware that (in many cases) they don’t already support. The argument for “giving this” to hypervisor developers is that they’re largely already working very closely with the “metal” (a great deal of hardware today has some support for virtualization baked in), and hypervisors are, in total, much simpler projects.

It’s true that I’m mostly suggesting that we move things around a bit, and that isn’t something that’s guaranteed to fix a specific problem, but I think there’s some benefit in rearranging our efforts in this space. As it were.

Don’t some of the leading hypervisors, like KVM and others, use parts or all of the Linux Kernel, so wouldn’t this just recreate all of the problems of contemporary Linux anew?

I’ll confess that I’m a huge fan of the Xen hypervisor, which takes a much more “thin” approach to the hypervisor problem, because I’m worried about this very problem. And I think Xen is more parsimonious. KVM might be able to offer some slight edge in some contexts in the next few years, like the ability to more intelligently operate inside of the guest operating system, but that’s a ways down the road and subject to the same problems that Linux has today.


So there you have it. Thoughts?

Operating Systems and the Driver Issue

I made a quip the other day about the UNIX epoch problem (Unix timestamps are measured in seconds since January 1, 1970, and are commonly stored as signed 32-bit integers; sometime in 2038 that counter overflows, and there’s no really good way to fix it everywhere). Someone responded, “whatever, we won’t be using UNIX in thirty years!”

Famous last words.
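(For the record, the arithmetic behind the quip is easy to check: a signed 32-bit counter of seconds since January 1, 1970 runs out early in 2038. A couple of lines of Python show the exact moment.)

    # Where a signed 32-bit time_t runs out: 2**31 - 1 seconds after the
    # Unix epoch (January 1, 1970, UTC).
    from datetime import datetime, timezone

    overflow = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
    print(overflow)  # 2038-01-19 03:14:07+00:00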

People were saying this about UNIX itself years ago. Indeed, before Linux had even begun to be a “thing,” Bell Labs had moved on to “Plan 9,” which was to be the successor to UNIX. It wasn’t. Unix came back. Hell, in the late eighties and early nineties we even thought that the “monolithic kernel” as a model of operating system design was dead, and here we are. Funny that.

While it’s probably the case that we’re not going to be using the same technology in thirty years that we are today (i.e. UNIX and GNU/Linux), it’s probably also true that UNIX as we’ve come to know it is not going to disappear, given UNIX’s stubborn history in this space. More interesting, I think, is to contemplate the ways that UNIX and Linux will resonate in the future. This post is an exploration of one of those possibilities.


I suppose my title has forced me to tip my hand slightly, but let’s ignore that for a moment, and instead present the leading problem with personal computing technology today: hardware drivers.

“Operating system geeks,” of which we all know one or two, love to discuss the various merits of Windows/OS X/Linux: “such and such works better than everything else,” “such and such is more stable than this,” “such and such feels bloated compared to that,” and so on and so forth. The truth is that if we take a step back, we can see that the core problem for all of these operating systems is pretty simple: it’s the drivers, stupid.

Let’s take desktop Linux as an example. I’d argue that there are two large barriers to its widespread adoption. First, it’s not immediately familiar to people who are used to using Windows. This is pretty easily addressed with some training, and I think Microsoft’s willingness to change their interface in the last few years (i.e. the Office “Ribbon,” and so forth) is a great testimony to the adaptability of the user base. The second, and slightly more thorny, issue is hardware drivers: the part of any operating system that allows the software to talk to hardware like video, sound, and networking (including, of course, wireless) adapters. The kernel has gotten much better in this regard in the past few years (probably by adding support for devices without requiring their drivers to be open source), but the leading cause of an “install just not working” is almost always something related to the drivers.

“Linux people” avoid this problem by buying hardware that they know is well supported. In my world that means “Intel everything, particularly if you want wireless to work, and Nvidia graphics if you need something peppy, which I never really do,” but I know people who take other approaches.

In a weird way this “geek’s approach to Linux” is pretty much the same way that Apple responds to the driver problem in OS X. By constraining their operating system to run only on a very limited selection of hardware, they’re able to make sure that the drivers work. Try adding a third-party wireless card to OS X. It’s not pretty.

Windows is probably the largest victim of the driver problem: they have to support every piece of consumer hardware, and their hands are more or less tied. The famous Blue Screen of Death? Driver errors. System bloat (really for all operating systems) tends to be about device drivers. Random lockups? Drivers. Could Microsoft build better solutions for these driver problems, or push equipment manufacturers to use hardware that had “good drivers”? Probably; but as much as it pains me, I don’t really think it would make a whole lot of business sense for them to do that at the moment.


More on this tomorrow…

the day wikipedia obsoleted itself

Remember how, in 2006 and 2007, there was a lot of debate over wikipedia’s accuracy and process, and people thought about creating alternate encyclopedias that relied on “expert contributors”? And then, somehow, that just died off and we never hear about those kinds of projects and concerns anymore? The biggest news regarding wikipedia recently has been a somewhat subtle change in their licensing terms, which is really sort of minor and not particularly interesting even for people who are into licensing stuff.

Here’s a theory:

Wikipedia reached a point in the last couple of years where it became clear that it was as accurate as any encyclopedia had ever been. Sure, there are places where it’s “wrong,” and sure, as wikipedians have long argued, wikipedia is ideally suited to fix textual problems in a quick and blindingly efficient manner, but the Encyclopedia Britannica has always had factual inaccuracies, and has always reflected a particular… editorial perspective, and measured against that competition, wikipedia has always come out a bit better.

Practically, where wikipedia was once an example of “the great things that technology can enable,” the moment when it leapfrogged other encyclopedias was the moment that it became functionally irrelevant.

I’m not saying that wikipedia is bad and that you shouldn’t read it, but rather that even if Wikipedia is the best encyclopedia in the world, it is still an encyclopedia, and the project of encyclopedias is flawed, and in many ways runs counter to the great potential for collaborative work on the Internet.

My gripe with encyclopedias is largely epistemological:

  • I think the project of collecting all knowledge in a single place obscures the fact that the biggest problem in the area of “knowing” in the contemporary world isn’t simply finding information, or even finding trusted information, but rather what to do with knowledge when you do find it. Teaching people how to search for information is easy. Teaching people the critical thinking skills necessary for figuring out if a source is trustworthy takes some time, but it’s not terribly complicated (and encyclopedias do a pretty poor job of this in the global sense, even if conveying trust in the specific sense is their major goal). At the same time, teaching people to take information and do something awesome with it is incredibly difficult.

  • Knowledge is multiple and comes from multiple perspectives, and is contextually dependent on history, on cultural contexts, on sources, and on ideological concerns, so the project of collecting all knowledge in a value-neutral way from an objective perspective does a disservice to the knowledge project. This is the weak spot in all encyclopedias, regardless of their editorial process or medium. Encyclopedias are, by definition, imperialist projects.

  • The Internet is inherently decentralized. That’s how it’s designed, and although this runs counter to conventional thought in information management, information on the Internet works best when we don’t try to artificially centralize it; arguably, that’s what wikipedia does: it collects and centralizes information in one easy-to-access and easy-to-search place. So while wikipedia isn’t bad, there are a lot of things that one could do with wikis, with the Internet, that could foster distributed information projects and work with the strengths of the Internet rather than against them. Wikis are great for collaborative editing, and there are a lot of possibilities in the form, but so much depends on what you do with it.

So I guess the obvious questions here are:

  • What’s next?
  • What does the post-wikipedia world look like?
  • How do we provide usable indexes for information that let people find content of value in a decentralized format, and preferably in a federated way that doesn’t rely on Google Search?

Onward and Upward!

the mainframe of the future

It seems really popular these days to say, about the future of computing, that “in a few years, you’ll have a supercomputer in your pocket."1 And it’s true: the computing power in contemporary handheld/embedded systems is truly astounding. The iPhone is a great example of this: it runs a variant of a “desktop operating system,” it has applications written in Objective-C, and it’s a real computer (sans keyboard, and with a small screen). But the truth is that Android and BlackBerry devices are just as technically complex. And let’s not forget how portable and powerful laptops are these days. Even netbooks, which are “underpowered,” are incredibly powerful in the grand scheme of things.

And now we have the cloud, where raw computing power is accessible and cheap: I have access to an always-on quad-core system for something like 86 cents a day. That’s crazy cheap, and the truth is that while I get a lot for 86 cents a day, I never run up against the processor limitations. I haven’t even gotten close. Unless you’re compiling software or doing graphics work (gaming), the chances of running into the limits of your processor for more than a few seconds here and there are remarkably slim. The notable exception to this rule is that the speed of USB devices is almost always processor-bound.

All this attention on processing power leads to predictions about “supercomputers in your pockets,” and the slow death of desktop computing as we know it. While this is interesting and sexy to talk about, I think it misses some crucial details.

The thing about the “supercomputer in your pocket” is that mobile gear is almost always highly specialized, task-specific hardware. Sure, the iPhone can do a lot of things, and it’s a good example of a “convergence” device as it combines a number of features (web browsing/email/http client/phone/media viewer), but as soon as you stray from these basic tasks, it stops.

There are general purpose computers in very small packages, like the Nokia Internet tablets and the Fujitsu ultra-mobile PCs, but they’ve not caught on in a big way. I think this is generally because the form factor isn’t general purpose, and they’ve not yet reached the commodity prices that we’ve come to expect for our general purpose computing gear.

So while I think the “how we’ll use pocket-sized supercomputers” question still needs to be worked out, I think the assertion that computing power will continue to rise while size continues to shrink will hold, at least for a few more years. There are physical limits to Moore’s Law, but I think we have a few more years (10?) before that becomes an issue.

The question that I’ve been asking myself for the past few days isn’t “what are we going to do with new supercomputers?” but rather, “what’s that box on your desktop going to be doing?”

I don’t think we’re going to stop having non-portable computers. Indeed, laptops and desktops have functionally converged in the last few years: the decision between getting a laptop and a desktop is mostly about economics and “how you work.” While I do think that a large part of people’s “personal computing” is going to happen on laptops, I don’t think desktops are going to just cease to exist in a few years, replaced by pocket-sized supercomputers.

It’s as if we’ve forgotten about mainframe computing while we were focused on supercomputers.

The traditional divide between mainframes and supercomputers is simple: while both are immensely powerful, supercomputers tend to be suited to computationally complex problems, while mainframes are designed to address comparatively simple problems on massive data sets. Think “supercomputers are processors” and “mainframes are input/output.”

My contention is that as the kind of computing that day-to-day users of technology do starts to level off in terms of computational complexity (or at least is overtaken by Moore’s Law), the mainframe metaphor becomes a more useful perspective to extend into our personal computing.

This is sort of the side effect of thinking about your personal computing in terms of “infrastructure."2 While we don’t need super-powerful computers to run our Notepad applications, finding better ways to isolate and run our tasks in parallel seems to make a lot of sense. From the perspective of system stability, from the perspective of resource utilization, and from the perspective of security, parallelizing functionality offers end users a lot of benefits.

In point of fact, we’ve already started to see this in a number of contexts. First, multi-core/multi-processor systems are the contemporary standard for processors. Basically, we can make processors run insanely fast (4 and 5 gigahertz clock speeds, and beyond), but no one is ever going to use that much, and you get bottlenecks as processes line up to be computed. So now, rather than making insanely fast processors (even for servers and desktops), we make a bunch of damn fast processors (2 or 2.5GHz is still pretty fast) that are all accessible in one system.
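As a trivial illustration of that shift, here’s a sketch of leaning on many cores rather than one fast one: farm a CPU-bound function out to a pool of worker processes, one per core. The workload itself is just a stand-in.

    # Spread a CPU-bound task across all available cores with a process
    # pool; busy_work is a stand-in for any CPU-bound workload.
    from multiprocessing import Pool, cpu_count

    def busy_work(n):
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with Pool(cpu_count()) as pool:
            results = pool.map(busy_work, [2_000_000] * cpu_count())
        print(len(results), "chunks computed in parallel")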

This is mainframe technology, not supercomputing technology.

And then there’s virtualization, which is where we run multiple operating systems on a given piece of hardware. Rather than letting one operating system address all of the hardware as one big pool, we divide the hardware up and run isolated operating system “buckets.” So rather than administering one system that does everything with shared resources, and having the headache of making sure that the processes don’t interfere with each other, we create a bunch of virtualized machines that are less powerful than the main system but only have a few dedicated features, and (for the most part) don’t affect each other.

This is mainframe technology.

Virtualization is huge on servers (and mainframes, of course), and we’re starting to see some limited use-cases take hold on the desktop (e.g. Parallels Desktop, VMware desktop/Fusion), but I think there’s a lot of potential and future in desktop virtualization. Imagine desktop hypervisors that allow you to isolate the functions of multiple users, or that allow you to isolate stable operations (e.g. fileserving, media capture, backups) from specific users' operating system instances and from more volatile processes (e.g. desktop applications). Furthermore, such a desktop hypervisor would allow users to rely on stable operating systems when appropriate and use less stable (but more feature-rich) operating systems on a per-task basis. There are also nifty backup and portability related benefits to running inside of virtualized containers.

And that is, my friends, really flippin' cool.

The technology isn’t there yet. I’m thinking about putting a hypervisor and a few guest operating systems on my current desktop sometime later this year. It’s a start, and I’ll probably write a bit more about this soon, but in any case I’m enjoying this little change in metaphor and the kinds of potential that it brings for very cool cyborg applications. I hope you find it similarly useful.

Above all, I can’t wait to see what happens.


  1. Admittedly this is a bit of a straw-man premise, but it’s a nifty perception to fight against. ↩︎

  2. I wrote a series of posts a few weeks ago on the subject in three parts: one, two, and three ↩︎

On Reading and Writing

I may be a huge geek and a hacker type, but I’m a writer and reader first, and although when I’m blathering on about my setup it might seem like all I do is tweak my systems, the writing and reading are really more “my thing.”

I wrote that sentence a few weeks ago, and I’ve written a great many more sentences since then, but I’ve felt that that sentence needs some more exploration, particularly because while it seems so obvious and integrated into what I do from behind the keyboard, I think it bears some explanation for those of you playing along at home.

What “I do” in the world is write. That’s pretty clear to me, and it has only gotten clearer in the last few years/months. There are a couple of important facts about what “being a writer” means to me in terms of “how I work” on a day-to-day basis. They are:

  • There’s a certain level of writing output that’s possible in a day, that I sometimes achieve, but it’s not sustainable. I can (and do) do the binge thing--and that has its place--but I can’t get up, pound out two thousand or more words every day on a few projects, and go to bed happy. Doesn’t work like that.

  • Getting to write begets more writing, and it’s largely transitive. If I write a few hundred words of emails to blog readers, collaborators, and listservs in the morning, what happens in the afternoon is often more cogent than if I spend the morning checking twitter.

  • Writing is always a conversation, between the writer and other writers, between the writer and the reader, between the writer and future writers. I find it very difficult to write, even the most mundane things, without reading the extant discourse on the subject.

  • Writing is an experimental process. I’ve said at work a number of times, “you can’t edit something that isn’t there,” and in a very real sense, it is hard to really know “what you want” until you see the words on the page. Written language is like that, I suppose. That’s what the blog is about, I guess.

  • Ideas aren’t real until they’re written down. I’m not much of a Platonist, I guess, and I think writing things down really helps clarify things: it helps point out the logical flaws in an argument, and it makes it possible for other people to comment and expand on the work that you’ve done. That’s a huge part of why I blog. It’s very much not publication in the sense that I’ve created something new, I’ve finished, and I’m ready for others to consider it. Rather, I blog what I’m thinking about; I use the blog to think about things.

    Though it’s not clear to me (or to you) at this point, I’m very much in the middle of a larger project at the nexus of open source software communities, political economies, and units of authentic social organization. The work on free software that I’ve been blogging, the stuff about economics, the stuff about co-ops. I’m not sure how that’s all going to come together, but I’m working on it. Now, four months into it, it’s beginning to be clear to me that this is all one project, but it certainly never started that way.

The technology that I write about is something that I obviously think has merit on its own terms--hence the Cyborg Institute Project--but it’s also very true that I use technology in order to enable me to write more effectively, to limit distractions, to connect with readers and colleagues more effectively, and to read things more efficiently. Technology, hacking, is mostly a means to an end.

And I think that’s a really useful lesson.

personal desktop 2

For a few days last week, in between the time that I wrote the Personal Desktop post and when I posted it yesterday, I had a little personal computing saga:

1. One of my monitors developed a little defect. I’m not sure how to describe it, but the color depth suffered a bit and there was this flicker, and I really noticed it. It’s not major, and I probably wouldn’t have noticed it, except that I look at a very nice screen all day at work, and with a working display right next to the defective one at home, I saw every little flicker.

2. I decided to pull the second monitor, and just go back to one monitor. While I like the “bunches of screens” approach, and think it has merit, particularly in tiling environments, I also think that I work pretty well on one screen, and with so many virtual desktops, it’s no great loss. Not being distracted by the flicker is better by far.

3. I pulled the second monitor and bam! the computer wouldn’t come back from the reboot. Shit. No error beeps, nothing past the BIOS splash screen. No USB support. Everything plugged in. Shit.

4. I let things sit for a few days. I was slammed with stuff in other areas of my life, and I just couldn’t cope with this. It doesn’t help that I really like to avoid messing with hardware if I can at all help it. Fellow geeks are big on building custom hardware, but the truth is that my needs are pretty minimal and I’d rather leave it up to the pros.

5. On Friday, I sat down with it, pulled the video card that I’d put in when I got the machine (an old Nvidia 7200 series), unplugged the hard drives, and futzed with the RAM, and after re-seating the RAM it worked. I’m not complaining, and I figure it was just some sort of fluke as I jostled the case.

6. So now I’m back with one monitor; no other problems from the last post have been fixed, but I can live with that.


As I was fretting over the implications of having a computer die on me like this, and thinking about my future computing trajectory, I realized that my current setup was deployed (as it were) under a number of different assumptions about the way I use computers. I got the desktop with the extra monitors when I was starting a series of remote jobs and needed more resources than I could really expect from my previous setup (a single MacBook). I also, in light of this, downgraded my laptop to something smaller and more portable that was good for short-term tasks and added mobility to my setup, but that really didn’t work as my only computer for more than a day or two.

Now things look different. I’m not doing the same kind of remote work that I got the desktop for, and I have a killer machine at work that I’m only using a portion of (in a VM, no less). I have a VPS server “in the cloud” that hosts a lot of the “always on” infrastructural tasks that I needed from my desktop when I first got it.

I’m not sure what the solution is. Maybe make the desktop at home more “server-y” (media files, downloading stuff, plus writing), and at some point exchange the laptop for two machines: a 15" notebook that would be my primary machine--particularly useful for long weekend trips, un/conferences, and so forth--and some sort of small netbook-class device for day-to-day portability.

It’s a thought. Anyway, on to more important thoughts.

Cheers!

personal desktop

I wrote a series of posts about setting up my new work computer as a way to avoid blathering on and on about how the movers lost the cushions for my couch, and other assorted minutiae that seem to dominate my attention. What these posts didn’t talk about was what I was doing for “tychoish” and related computing.

About a week and some change before I moved, I packed up my desktop computer and started using my laptop full time. It’s small, portable, and sufficient, if not particularly speedy.

I can do everything with the laptop (a ThinkPad x41t, which is a 2005-vintage 12" tablet) that I can do on any other computer I use, and I often prefer it, because the small screen means it’s really easy to focus intently on writing one thing at a time. Inversely, this means it doesn’t work very well for research-intensive work, where I need to switch between contexts regularly. It’s a fair trade-off, and I did OK for weeks.

But then, having been in town for two and a half weeks, I decided it was time to break down and get my personal desktop set up and working. And it’s amazing. The thing works just as well as it always has (which is pretty good), and it’s nice to have a computer at home that I can do serious writing on, and the extra screen space is just perfect. I’ve been able to be much more productive and comfortable with my own projects since this began.

There are some things I need to address with this computer that have been queuing up. In the spirit of posting my todo lists for the world to see…

  • I need to get a new keyboard. My “fancy” Happy Hacking Lite 2 keyboard is at work: I’m comfortable with it, I do a lot of writing at work, and I set up that keyboard first (and the current default Mac keyboard sucks).

    I’m thinking of either getting another Happy Hacking keyboard or, more likely at this point, getting a Das Keyboard Ultimate, because how could I turn down blank keys and variable-pressure mechanical switch keys? And writing is what I do, so it’s totally worth it.

  • I need to install Arch on this computer. I feel like cruft is beginning to accumulate here, I’ve never quite been happy with the Ubuntu experience, and there are some things that I can’t get to work right (namely mounting of USB mass storage devices). My concern is that getting dual monitors set up on this box was a royal pain. But that might have been Ubuntu-related. I’m not sure.

  • My current thought is that I’ll buy a new (small) hard drive (e.g. 80 gigs) to run a clean operating system install on (Arch), and then use the current drive as storage for the stuff that’s already there (music, video). But I might just get a larger additional drive and do it in reverse. I dunno. The current situation isn’t that bad, and I think I’ll archify the laptop first.

Annnyway…