the mainframe of the future

It seems really popular these days to say, about the future of computing, that "in a few years, you'll have a supercomputer in your pocket." [1] And it's true: the computing power in contemporary handheld/embedded systems is truly astounding. The iPhone is a great example of this: it runs a variant of a "desktop operating system," it has applications written in Objective-C, and it's a real computer (albeit one without a keyboard and with a small screen). But the truth is that Android devices and BlackBerries are just as technically complex. And let's not forget how portable and powerful laptops are these days. Even netbooks, which are supposedly "underpowered," are incredibly powerful in the grand scheme of things.

And now we have the cloud, where raw computing power is accessible and cheap: I have access to an always-on quad-core system for something like 86 cents a day. That's crazy cheap, and the truth is that while I get a lot for those 86 cents, I never run up against the processor's limitations. I've never even gotten close. Unless you're compiling software or doing graphics work (gaming), the chances of running into the limits of your processor for more than a few seconds here and there are remarkably slim. The notable exception to this rule is that the speed of USB devices is almost always processor-bound.

All this attention on processing power leads to predictions about "supercomputers in your pockets" and the slow death of desktop computing as we know it. While this is interesting and sexy to talk about, I think it misses some crucial details.

The thing about the "supercomputer in your pocket" is that mobile gear is almost always highly specialized, task-specific hardware. Sure, the iPhone can do a lot of things, and it's a good example of a "convergence" device in that it combines a number of features (web browsing/email/HTTP client/phone/media viewer), but as soon as you stray from these basic tasks, it falls short.

There are general-purpose computers in very small packages, like the Nokia Internet Tablets and the Fujitsu ultra-mobile PCs, but they've not caught on in a big way. I think this is generally because the form factor isn't general purpose, and because they've not yet reached the commodity prices that we've come to expect from our general-purpose computing gear.

So while I think the question of how we'll use pocket-sized supercomputers still needs to be worked out, I do think the assertion that computing power will continue to rise while size continues to shrink holds, at least for a few more years. There are physical limits to Moore's Law, but I think we have a few more years (ten?) before that becomes an issue.

The question that I've been asking myself for the past few days isn't "what are we going to do with new supercomputers?" but rather, "what's that box on your desktop going to be doing?"

I don't think we're going to stop having non-portable computers; indeed, laptops and desktops have functionally converged in the last few years, and the decision between getting a laptop and a desktop is mostly about economics and "how you work." While I do think that a large part of people's "personal computing" is going to happen on laptops, I don't think desktops are going to just cease to exist in a few years, replaced by pocket-sized supercomputers.

It's as if we've forgotten about mainframe computing while we were focused on supercomputers.

The traditional divide between mainframes and supercomputers is simple: while both are immensely powerful, supercomputers tend to be suited to computationally complex problems, while mainframes are designed to address comparatively simple problems on massive data sets. Think "supercomputers are processors" and "mainframes are input/output."

My contention is that as the kinds of computing day-to-day users of technology do start to level off in terms of computational complexity (or are at least overtaken by Moore's Law), the mainframe metaphor becomes the more useful perspective to extend into our personal computing.

This is sort of a side effect of thinking about your personal computing in terms of "infrastructure." [2] While we don't need super-powerful computers to run our notepad applications, finding better ways to isolate our tasks and run them in parallel seems to make a lot of sense. From the perspectives of system stability, resource utilization, and security, parallelizing functionality offers end users a lot of benefits.

In point of fact, we've already started to see this in a number of contexts. First, multi-core/multi-processor systems are the contemporary standard for processors. Basically, we can make processors run insanely fast (4 and 5 gigahertz clock speeds, and beyond), but no one is ever going to use that much, and you get bottlenecks as processes line up to be computed. So now, rather than make insanely fast processors (even for servers and desktops), we make a bunch of damn fast processors (2 or 2.5GHz is still pretty fast) that are all accessible in one system.

This is mainframe technology, not supercomputing technology.

And then there's virtualization, which is where we run multiple operating systems on a given piece of hardware. Rather than letting one operating system address all of the hardware as one big pool, we divide the hardware up and run isolated operating system "buckets." So rather than administering one system that does everything with shared resources, and having the headache of making sure that the processes don't interfere with each other, we create a bunch of virtualized machines that are less powerful than the host system, have only a few dedicated functions, and (for the most part) don't affect each other.

This is mainframe technology.
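To make the "buckets" idea concrete, here's roughly what carving out one of those isolated machines looks like with QEMU (one of several free tools for this); the disk size, memory allotment, and file names are arbitrary examples, not a recommended setup:

    # create a virtual disk image for the guest operating system
    qemu-img create -f qcow2 guest.img 8G

    # boot the guest from an installer ISO with 512MB of RAM and the new disk
    qemu-system-x86_64 -m 512 -hda guest.img -cdrom install.iso -boot d

Each guest gets its own kernel, its own filesystem, and its own slice of the hardware; if it crashes or gets compromised, the mess stays inside its bucket.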

Virtualization is huge on servers (and mainframes, of course), and we're starting to see some limited use cases take hold on the desktop (e.g. Parallels Desktop, VMware Workstation/Fusion), but I think there's a lot of potential and future in desktop virtualization. Imagine desktop hypervisors that allow you to isolate the functions of multiple users, or to isolate stable operations (e.g. file serving, media capture, backups) from specific users' operating system instances and from more volatile processes (e.g. desktop applications). Furthermore, such a desktop hypervisor would allow users to rely on stable operating systems when appropriate and use less stable (but more feature-rich) operating systems on a per-task basis. There are also nifty backup- and portability-related benefits to running inside of virtualized containers.

And that is, my friends, really flippin' cool.

The technology isn't quite there yet. I'm thinking about putting a hypervisor and a few guest operating systems on my current desktop sometime later this year. It's a start, and I'll probably write a bit more about this soon, but in any case I'm enjoying this little change in metaphor and the kinds of potential it opens up for very cool cyborg applications. I hope you find it similarly useful.

Above all, I can't wait to see what happens.

[1] Admittedly, this is a bit of a straw-man premise, but it's a nifty perception to fight against.
[2] I wrote a series of posts a few weeks ago on the subject, in three parts: one, two, and three.

Free Software Misunderstood

This post is in response to two things that I've observed recently:

1. A Misinformed Critique of the Debian Project

2. The largely unfair dismissal of free software/open source/hackers on the grounds of purported zealotry.


Debian, Critiqued

The above-linked article presents a number of critiques leveled at the Debian project. While these complaints about user experience are valid, I was left with a serious, as we say on the Internet, "WTF" moment. Read the article, if you're so inclined, before you get to my response.

Also, I'd like to challenge the editors of that website to exercise a little more discretion in what they publish in the future.

My response:

1. Stable releases of Debian are, for the most part, not intended to be run as desktop operating systems. The software in Debian Lenny is, at this moment, nearly two years old. That's fine (and even desirable) for a server, but most users want things that are a little more up to date than that. This is why we have distributions like Ubuntu, which manages to walk a much better line between stable and current (and benefits from the efforts of Debian).

2. It's possible to install Debian packages that aren't contained in the repository, or that are only provided in older versions of the operating system. Download the package with wget and then use dpkg -i [package-file].deb (there's a sketch of this after the list). There may be GUI tools that support this. While we might like to have Linux systems for "newcomers" to the platform that don't require using the command line, Debian stable isn't one of those operating systems.

3. Installing fonts on most systems is usually as simple as putting the files in /usr/share/fonts or $HOME/.fonts and running fc-cache -f. The complainer focuses a great deal on the absence of a familiar font management program (which appears to be a command-line tool that exists in Ubuntu 9.04, a "newer" system than Lenny).
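To put commands behind points two and three, here's a rough sketch; the package URL, file names, and font file below are placeholders rather than real Debian artifacts:

    # point 2 (as root): download a .deb directly and install it with dpkg
    wget http://example.org/pool/main/f/foo/foo_1.2-1_i386.deb
    dpkg -i foo_1.2-1_i386.deb
    # if dpkg complains about missing dependencies, let apt resolve them
    apt-get -f install

    # point 3: install a font for the current user and rebuild the font cache
    mkdir -p ~/.fonts
    cp SomeFont.ttf ~/.fonts/
    fc-cache -f ~/.fonts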

I still don't see how "contempt" is the right word here. The fact that a massive project, the product of a loose organization of hundreds of people, failed to address a few specific needs of a user running the system in a non-standard/non-recommended pattern hardly qualifies as "contempt for users."

As it stands, it sort of feels like the author is attempting to stir up controversy by attacking a historical weak spot, and stretching the bounds of reasonable criticism in the process. I think the editors of any publication should be above this sort of thing. Thumbs down.

Dismissal of Free Software on the Grounds of Zealotry

We see this a lot, and I'm kind of sick of it.

I've seen a lot of people--who actually agree with almost every tenet of the most "ideologically pure" free software advocates--dismiss version 3 of the GPL, or RMS, or the FSF for being "too radical," or obsessive, or "communist," which is both intensely interesting and intensely troubling. It's often in the form of "I wouldn't say it the way RMS does," or some such.

For starters, I think it's interesting to note the prevalence with which "communist" is used as a dismissal of the Free Software movement, particularly because, while there is a very vague "anti-corporate" and "anti-trust" vein in the free software world, in point of fact the biggest "big picture political" ideology around is a very ad hoc libertarianism. The "communist" jab is probably aimed more at the sort of heavy-handed ideological positioning of the "copyleft" movement. Furthermore, I think it's probably clear that free software as we know it today wouldn't be possible without commercial interests' input, energy, and resources.

And yet free software/open source gets red-baited. Interesting. And disappointing.

On Reading and Writing

I may be a huge geek and a hacker type, but I'm a writer and reader first, and although, while I'm blathering on about my setup, it might seem like all I do is tweak my systems, the writing and reading are really more "my thing."

I wrote that sentence a few weeks ago, and I've written a great many more sentences since then, but I've felt that that sentence needs some more exploration, particularly because, while it seems so obvious and integrated into what I do from behind the keyboard, I think it bears some explanation for those of you playing along at home.

What "I do" in the world, is write. And that's pretty clear to me, and has only gotten more clear in the last few years/months. There are a couple of important facts about what "being a writer" means to me on a "how I work" on a day to day basis. They are:

  • There's a certain level of writing output that's possible in a day, and that I sometimes achieve, but it's not sustainable. I can (and do) do the binge thing--and that has its place--but I can't get up, pound out two thousand or more words every day on a few projects, and go to bed happy. Doesn't work like that.

  • Getting to write begets more writing, and it's largely transitive. If I write a few hundred words of emails to blog readers, collaborators, and listservs in the morning, what happens in the afternoon is often more cogent than if I spend the morning checking Twitter.

  • Writing is always a conversation, between the writer and other writers, between the writer and the reader, between the writer and future writers. I find it very difficult to write, even the most mundane things, without reading the extant discourse on the subject.

  • Writing is an experimental process. I've said at work a number of times, "you can't edit something that isn't there," and in a very real sense, it is hard to really know "what you want" until you see the words on the page. Written language is like that, I suppose. That's what the blog is about, I guess.

  • Ideas aren't real until they're written down. I'm not much of a Platonist, I guess, and I think writing things down really helps clarify things: it helps point out the logical flaws in an argument, and it makes it possible for other people to comment on and expand the work that you've done. That's a huge part of why I blog. It's very much not publication in the sense that I've created something new, finished it, and am ready for others to consider it. Rather, I blog what I'm thinking about; I use the blog to think about things.

    Though it's probably not clear to me (or to you) at this point, I'm very much in the middle of a larger project at the nexus of open source software communities, political economies, and units of authentic social organization. The work on free software that I've been blogging, the stuff about economics, the stuff about co-ops: I'm not sure how that's all going to come together, but I'm working on it. Now, four months into it, it's beginning to be clear to me that this is all one project, but it certainly never started that way.

The technology that I write about is something that I obviously think has merit on its own terms--hence the Cyborg Institute Project--but it's also very true that I use technology to enable me to write more effectively, to limit distractions, to connect with readers and colleagues more effectively, and to read things more efficiently. Technology, hacking, is mostly a means to an end.

And I think that's a really useful lesson.

personal desktop 2

For a few days last week, in between the time that I wrote the Personal Desktop post and when I posted it yesterday, I had a little personal computing saga:

1. One of my monitors developed a little defect. I'm not sure how to describe it, but the color depth suffered a bit and there was this flicker that I really noticed. It's not major, and I probably wouldn't have noticed it, except that I look at a very nice screen all day at work, and with a working display right next to the bad one, I saw every little flicker.

2. I decided to pull the second monitor, and just go back to one monitor. While I like the "bunches of screens" approach, and think it has merit, particularly in tiling environments, I also think that I work pretty well on one screen, and with so many virtual desktops, it's no great loss. Not being distracted by the flicker is better by far.

3. I pulled the second monitor and bam! the computer wouldn't come back from the reboot. Shit. No error beeps, nothing past the BIOS splash screen. No USB support. Everything plugged in. Shit.

4. I let things sit for a few days. I was slammed with stuff in other areas of my life, and I just couldn't cope with this. It doesn't help that I really like to avoid messing with hardware if I can at all help it. Fellow geeks are big on building custom hardware, but the truth is that my needs are pretty minimal and I'd rather leave it up to the pros.

5. On Friday, I sat down with it, pulled the video card that I'd put in it when I got the machine (an old Nvidia 7200 series), unplugged the hard drives, and futzed with the RAM, and after re-seating the RAM it worked. I'm not complaining, and I figure it was just some sort of fluke as I jostled the case.

6. So now I'm back with one monitor. None of the other problems from the last post have been fixed, but I can live with that.


As I was fretting over the implications of having a computer die on me like this, and thinking about my future computing trajectory, I realized that my current setup was deployed (as it were) under a number of different assumptions about the way I use computers. I got the desktop with the extra monitors when I was starting a series of remote jobs and needed more resources than I could really expect from my previous setup (a single MacBook). In light of this, I also downgraded my laptop to something smaller and more portable that was good for short-term tasks and added mobility to my setup, but that really didn't work as my only computer for more than a day or two.

Now things look different. I'm not doing the same kind of remote work that I got the desktop for, and I have a killer machine at work that I'm only using a portion of (in a VM, no less). I have a VPS "in the cloud" that hosts a lot of the "always-on" infrastructural tasks that I needed from my desktop when I first got it.

I'm not sure what the solution is. Maybe: make the desktop at home more "server-y" (media files, downloading stuff, plus writing), and at some point exchange the laptop for a 15" notebook that would be my primary machine--particularly useful for long weekend trips, un/conferences, and so forth--plus some sort of small netbook-class device for day-to-day portability.

It's a thought. Anyway, on to more important thoughts.

Cheers!

personal desktop

I wrote a series of posts about setting up my new work computer as a way to avoid blathering on and on about how the movers lost the cushions for my couch, and other assorted minutiae that seem to dominate my attention. What these posts didn't talk about was what I was doing for "tychoish" and related computing.

About a week and some change before I moved, I packed up my desktop computer and started using my laptop full time. It's small, portable, and sufficient, if not particularly speedy.

I can do everything with the laptop (a ThinkPad X41t, a 2005-vintage 12" tablet) that I can do on any other computer I use, and I often prefer it, because the small screen means that it's really easy to focus intently on writing one thing at a time. Inversely, this means that it doesn't work very well for research-intensive work, where I need to switch between contexts regularly. It's a fair trade-off, and I did OK for weeks.

But then, having been in town for two and a half weeks, I decided it was time to break down and get my personal desktop set up and working. And it's amazing. The thing works just as well as it always has (which is pretty good), and it's nice to have a computer at home that I can do serious writing on, and the extra screen space is just perfect. I've been able to be much more productive and comfortable with my own projects since this began.

There are some things that I need to address with this computer that have been queuing up. In the spirit of posting my to-do lists for the world to see...

  • I need to get a new keyboard. My "fancy" Happy Hacking Lite 2 keyboard is at work, since I'm comfortable with it, I do a lot of writing at work, and I set up that keyboard first (and the current default Mac keyboard sucks).

    I'm thinking of either getting another Happy Hacking keyboard or, more likely at this point, getting a Das Keyboard Ultimate, because how could I turn down blank keys and variable-pressure mechanical key switches? And writing is what I do, so it's totally worth it.

  • I need to install Arch on this computer. I feel like cruft is beginning to accumulate here, I've never quite been happy with the Ubuntu experience, and there are some things that I can't get to work right (namely, mounting of USB mass-storage devices). My concern is that getting dual monitors set up on this box was a royal pain. But that might have been Ubuntu-related. I'm not sure.

  • My current thought is that I'll buy a new (small) hard drive (e.g. 80 gigs) to run a clean operating system install on (Arch), and then use the current drive as storage for the stuff that's already there (music, video). But I might just get a larger additional drive and do it in reverse. I dunno. The current situation isn't that bad, and I think that I'll archify the laptop first.

Annnyway...

are web standards broken

When I started doing this website thing on the eve of the millennium, the burgeoning buzzword of the time was "web standards." All of us in the know were working on learning and then writing to web standards like HTML 4.0 and eventually XHTML 1.0, along with CSS 1 and 2. And we were all hankering for browsers that implemented these standards in a consistent way.

Really all we wanted was for our web pages to look the same no matter who was viewing the page.

This pretty much never happened. Web browsers are pretty good these days, or they at least--in many ways--don't suck as much as they used to, but they're all a bit quirky and they all render things a bit differently from each other. And on top of that, they've got poor architectures, so as programs they're really bloated and prone to crashing and the like. I've written before about being "against" websites, webapps, and the like, and I think my disdain for the "web" grows out of the plain and simple fact that:

the web browser is broken, beyond repair.


So where does this put the cause of web standards in web design? Thoughts and questions:

Do we write to standards which aren't going to get adopted usefully? Is adhering to standards a productive use of time?

Do we write to clients (specific browser implementations) that are broken, but at least ensure that content looks "right"?

When the previous two goals aren't compatible, which wins?

Will HTML 5 and CSS 4 fix these problems, or are they another moving target that browsers won't adopt for another 10 years, and even then only haphazardly?

Are there other methods of networked content delivery that bypass the browser and might succeed while the browser space (and the content delivered therein) continues to flounder? I'm thinking of object/document databases with structured bidirectional links and limited hierarchy (within the system; objects might have internal hierarchy).

Is the goal/standard of pixel-perfect layout rendering something the browser is incapable of providing? Might it be the case that CSS is simply too capable, addressing problems which are outside the ideal scope of defining a consistent style for a page? Let me run with this idea for a moment:

Maybe the problem with XHTML and CSS isn't that they're implemented poorly, but rather that we're trying to use CSS classes, IDs, and div tags in an attempt to make pixel-perfect renderings of pages, which is really beyond CSS's "mission." What would web standards and the state of the browser look like if you dropped CSS IDs (e.g. #id-name { }) and made single-instance classes (e.g. .class-name { }) verboten? Aside from crashing and burning and completely killing off browser-based applications?

I look forward to hearing your thoughts on the subject.

on public key encryption and security

As part of the moving process I got a bank account, and I was reminded, again, of how comically flawed the security systems of most online banks are, which led me to even greater anger about security in general. The following rant is what happened.

I should say at the outset that I'm not really a security expert; I just dabble in this stuff. Having said that...

"Security" online and in a digital context covers two pretty distinct aspects:

  • Identity. In real life we can show our driver's license or passport, we can say "I'm [insert name here]," and in many situations another person is probably not too far away who can say, "I know them, they're [insert name here]." Online? Well, identity is less easily and reliably verified. Identity matters both for individuals (and organizations) themselves and for the things that people (and organizations) produce or own: emails, documents, web pages, software, and so forth.
  • Encryption. Basically, we encrypt data so that we can be relatively certain that no one gains access to it by listening in on our network connection or by getting hold of the physical media. From encryption we get privacy, and as long as the encryption scheme works as it should and the encryption covers communications end to end, it's pretty safe to assume some measure of privacy.

It turns out, from a technical perspective, that encryption is reasonably easy to achieve. It's true that all cryptographic schemes are ultimately breakable; however, if we can generally assume best practices (expiring old keys, keeping your private keys safe, etc.), then I feel fairly safe in asserting that encryption isn't the weak part of the security equation.
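To make that concrete, here's what public-key encryption looks like in practice with GnuPG; the file name and address are placeholders:

    # encrypt a file so only the holder of the recipient's private key can read it
    gpg --encrypt --recipient alice@example.com notes.txt    # writes notes.txt.gpg

    # the recipient decrypts it with their private key
    gpg --output notes.txt --decrypt notes.txt.gpg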

This leaves identity on the table. Which is sort of a messy affair.

Just because someone says, "Hello, my name is Alice," it doesn't mean that they are Alice. Just because they have Alice's password doesn't necessarily mean that they are Alice (but that's a safer bet). The best and most reliable way to verify someone's identity, it turns out, is to have a "web of trust."

Which basically means: you assert that you are who you say you are, and then "vouch" for other people who you "know" are who they say they are. Once you've vouched for someone, you then "trust" the people they've vouched for, and so forth. Good web-of-trust systems allow you to revoke trust, and provide some mechanism for propagating trusted networks of identities among users.

The above-described system is a very peer-to-peer/ad hoc one (bottom-up, if you will); there are also more centralized (top-down) systems that can verify identity in a digital context. These systems depend on commonly trusted third parties tasked with researching and verifying the identities of individuals and organizations. So-called "certificate authorities" make it possible to "trust" identities without needing a personal web-of-trust network that extends to cover every person and organization you'd come in contact with.
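In GnuPG terms, the web of trust gets built by signing keys you've verified in person (or over some other trusted channel) and publishing those signatures. A rough sketch; the key ID and keyserver here are examples only:

    # fetch a correspondent's public key, and check its fingerprint out of band
    gpg --keyserver pool.sks-keyservers.net --recv-keys 0xDEADBEEF
    gpg --fingerprint 0xDEADBEEF

    # vouch for the key with your own signature, then publish that signature
    gpg --sign-key 0xDEADBEEF
    gpg --keyserver pool.sks-keyservers.net --send-keys 0xDEADBEEF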


Let's bring this back to the case study of the bank.

They encrypt their traffic, end to end, with SSL (i.e. TLS), and they pay for a certificate from a certificate authority with a good reputation. The weak part of this equation? You and me, apparently.
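(If you want to watch that half of the equation happen, you can ask a server to show the certificate chain it presents during the TLS handshake; the hostname is a placeholder:)

    # display the certificates a server presents when you connect over SSL/TLS
    openssl s_client -connect bank.example.com:443 -showcerts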

To verify our identity, we have this arcane and convoluted scheme whereby we have to enter hard-to-remember passwords in stages (my last bank had us enter passwords on three pages in succession) so that the bank can be sure we're who we say we are. And the sad part is that while the technology for doing encryption and identity verification in secure and reliable ways is pretty advanced (in the big picture), we still have to send passwords. Here are my thoughts on passwords:

  • The best passwords are the hardest to remember: they don't contain words, and they mix numbers, letters, and punctuation. But such passwords are difficult to remember, and I think many people avoid picking truly secure passwords because of that difficulty.

  • Passwords aren't bad, and they're--I suspect--most useful as a casual deterrent and a reminder to users of the potential gravity of the situation; but they're not exactly a reliable fingerprinting mechanism.

  • Some sort of cryptographic handshake would be many magnitudes more secure, and much less painful for users (see the SSH sketch after this list).

    I have this theory that security for banks (and other similar institutions) is more about giving the appearance of being secure (asking for more complex passwords, making you jump through more hoops, etc.) and less about doing things that would be more secure in the long run. But maybe that's just me.
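For a sense of what that handshake could look like, SSH's public-key authentication is the everyday example: you prove that you hold a private key without ever sending a password over the wire. A minimal sketch, with a placeholder hostname:

    # generate a keypair; the private half never leaves your machine
    ssh-keygen -t rsa

    # install the public half on the server you want to log into
    ssh-copy-id user@server.example.com

    # from now on, logins authenticate with a key exchange instead of a typed password
    ssh user@server.example.com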

Anyway, back to more general-interest topics in the near future.

why arch linux rocks

So, long story short, I've been working a lot with ArchLinux in the last few days, getting it set up and starting to use this peculiar little distribution. While I will surely be blogging more about Arch in the coming days, I think a brief list of first impressions is in order.

  1. I share values with the Arch Developers.

    This is, I think, a factor in "choosing a Linux (or BSD) distribution" that is really hard to understand or explain, in part because the values that distinguish distributions are sometimes hard to suss out, particularly if you're on the outside looking in. This explains the phenomenon of "distro hopping."

    My sense of the "Arch" philosophy/approach is largerly what this post is about, but in summary: arch is lightweight and minimal, Arch expects users to be sophisticate and intelligent (Arch would rather tell you how something works, so you can do it "right," than try and save you from yourself and do it in a way that might be wrong.) Arch is a community project, and isn't reliant on commercial interests, and arch is firmly dedicated to free software ideas.

    How does this compare to other distributions you've heard of? Arch is community-oriented/originated like Slackware and Debian; Arch is lightweight like debian-netinst and Gentoo; Arch is minimal like Damn Small Linux (though not quite that minimal) and the other tiny Linuxes; Arch is based on binary packages like Debian and Fedora/RedHat/CentOS; Arch uses Linux, but takes inspiration from the BSDs in terms of system architecture; Arch uses a rolling release cycle like Debian's testing branch and Gentoo.

  2. It doesn't dumb anything down, and doesn't expect users to be either experts *or* total beginners.

    I think the term they use is "intermediate" or "advanced beginner," but in any case I think the approach is good: provide configuration in its most basic and straightforward form and, rather than trying to make the system easier to configure, document it, trusting that a straightforward configuration will be easier to manage in the long run than a more convoluted, but "easy," setup.

    Basically, Arch assumes that complexity and difficulty go hand in hand, and that simplicity and ease of use are similarly connected.

  3. Arch values and promotes minimalism.

    This comes from a few different aspects of Arch, but in general the avoidance of complexity in the configuration and the "blank slate" aspect of the installation process combine to create a system that is minimal and almost entirely agnostic with regard to what you might want to do with it.

    Whereas many Linux-based systems are designed for specific tasks (e.g. Mythbuntu, Medibuntu, Linux Mint, CrunchBang Linux, etc.) and include software by default that supports that goal, Arch, in contrast, installs no (or very little) software by default and can function well for a wide range of potential uses, from the fully featured desktop to the minimalistic headless server install.

  4. The Arch Wiki Rocks.

    I've been thinking about wikis and what makes a wiki "work" rather than "not work," and I'm beginning to think that the ArchLinux Wiki is another example of a wiki that works.

    I used to think that wikis powered by the MediaWiki engine were always bad: they look too much like Wikipedia (and are reasonably hard to customize), and as a result people tend to treat them like Wikipedia, which carries all sorts of baggage from the tradition of 19th-century encyclopedic projects and colonialism, and fails to capture some of the brilliance and effectiveness of wikis outside the world of Wikipedia (and the MediaWiki engine by association).

    So despite this, the ArchLinux wiki is actually really good and provides helpful instructions for nearly everything to do with Arch. It looks good, and the more I read it, the more I see of the cool discursive/opinion-based modality that I enjoy most about wikis.

  5. Archies are really geeky and great, and their interests and tendencies are reflected in the packages provided by the system:

Allow me to justify this with a few anecdotes:

  • Arch includes a "snapshot package" from emacs-23 in the main repository (you have to add another debian repository to get this in debian).
  • There is a great cross over between Awesome--my window manager of nchoice--and Arch, so there are good up to date packages of Awesome.
  • Uzbl, (eg. useable) a super minimalistic, web-kit based browser is developed on/for Arch.
  • As I was getting my first virtual machine setup, I did a bit of distro hopping to see what would work best. I decided to use virtualbox (because it's nearly free software, and reasonably full featured) and I had a hell of a time getting other OSs to work right inside of the virtual box, but it appears that other Archies have had the same thought, and there were pretty good explanations on the wiki and it just worked.

How cool is that? I don't think Arch is for everyone, but if any of what I've talked about today sounds interesting or appealing, give it a shot. Also, my experiences with running it under VirtualBox have been generally favorable, so if that's more your speed, try it there.
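For a taste of what the binary packages and rolling releases mentioned above feel like day to day, the basic pacman workflow looks like this (the package name is just an example):

    # install a package and its dependencies from the binary repositories (as root)
    pacman -S emacs

    # sync the package databases and upgrade the whole system, rolling-release style
    pacman -Syu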

Onward and Upward!