On Federation

In my post on Open ID I said that I’d continue that train of thought with a post about federation, so here we are. This post starts, however, a bit before that one ends, somewhere a little different. Stick with me though:

The greatest thing about Unix-like operating systems (at least conceptually, to me) is the pipe. The idea isn’t new, of course, but the pipe is the tool by which the output of one small, widget-like Unix program can be “piped” into another. The pipe works on anything in (basically) plain text, and it takes what would otherwise be a really fragmented computing environment and turns it into something where the data, the text, the product of your computing, is the central focus of your computing activities. As it should be.
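For instance, here’s a minimal sketch of the idea (the file name is hypothetical): a handful of tiny programs, none of which knows anything about the others, chained into a word-frequency counter.

```sh
# Count the ten most common words in a (hypothetical) notes.txt by
# piping each small program's plain-text output into the next one:
tr -cs '[:alpha:]' '\n' < notes.txt \
  | tr '[:upper:]' '[:lower:]' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -10
```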

Fast forward 30 years, and we have the internet, where data doesn’t flow through pipes (unless you’re Ted Stevens) but mostly stays in whatever silo it was entered into. This isn’t strictly true; there are ways to import and export data stored in a database somewhere in the cloud. But on the whole, once you commit to storing your data in one place/way, the relative price1 of moving from one system to another is quite high.

The concept of federation solves the problem of data interchange for the internet in the same way that the pipe solved a very similar problem for UNIX. Or at least it tries to. Unsurprisingly, the problem for UNIX developers was a conceptual and engineering problem, while for the developers and participants of the internet the problem is one of community norms; but the need for interoperability and openly accessible data is the same.

In UNIX the solution to this problem grew out of an understanding that software worked best when it only did one thing, and that it was easier to develop/use/maintain a lot of different pieces of distinct software well than it was to write single pieces of software that did a lot of different things well.

This is unequivocally true. And I think it’s also true of the Internet. It’s easier to maintain and develop smaller websites, with less traffic, less data, a smaller staff, and a smaller community, than it is to maintain and cope with huge websites with massive amounts of traffic. The problem is that websites don’t really have pipes, and where they do, the pipes have to be hacked together (in the trial-and-error sense of the word, rather than the intrusion sense) by specialists. To be fair, RSS and some other XML formats are becoming de facto standards which allow some limited piping, and OpenID is a good first step towards interoperability, but there is a great deal of work left to be done.
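To make the RSS point concrete, here’s a crude sketch of what that limited piping looks like in practice (the feed URL is hypothetical): the web’s closest thing to a pipe, built out of standard tools.

```sh
# Pull a (hypothetical) RSS feed and extract its <title> elements --
# a lossy, brittle web "pipe," but a pipe nonetheless:
curl -s http://example.com/feed.rss | grep -o '<title>[^<]*</title>'
```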

It seems to me that data doesn’t flow on the internet because the success of a website is measured on a strictly quantitative basis. The more users, the more visits, the more hits you have, the more successful you theoretically are; and if this is the case, then website producers and web-software developers have a vested interest in keeping users on a site, even if that means potentially holding users' data hostage. But what if websites didn’t need to be huge? What if, rather than marketing themselves in terms of number of features and size, websites said “give us your time and money, and we’ll give you full access to your data, the ability to connect your data to the data on other similar sites, and the chance to participate in our very specific community”? It would be a different world, indeed.

The thing is that all of these more-focused websites would probably be a lot smaller than most of the big websites today. I’m fine with that, but it means rethinking the economics and business model of the web. The question isn’t “can we figure out a way to get push-based, interoperable technology running on a large scale?” but rather “is there a way for the vast majority of websites to be run by (and support the salaries of) very small teams of 5-10 (+/-) people?” Not just “until it gets bigger, or gets bought by Google/Yahoo,” but forever?

I look forward to playing with numbers and theorizing systems with people in the comments, but most of all I’m interested in what you all think.


  1. Not necessarily in terms of monetary cost, but in terms of time, energy, and programming know-how. ↩︎

Open ID

Weeks ago I was talking with a coworker about internet communities, web development, and other related topics, and our various experiences with “community websites.” One of my largest complaints/points in this conversation was that “community sites” always feel like walled communities, and that while I’m often vaguely interested in any number of particular community sites at this point, I’m not particularly interested in joining yet another website; “keeping up” and participating in these pull-based communities is difficult.

Now before you call me jaded, I’ll cop to it, and I’ll clarify that I’m a really intense consumer of internet content, and I’m also really controlling about the format that I get my data in, so I don’t think my experiences are particularly typical. Resume argument…

The obvious solution to this problem that I mentioned is OpenID, a system by which one website accepts the authentication credentials of another website.

So here’s how it works. I sign in to an OpenID provider (I mostly use LiveJournal for this purpose, but any will work), I take my LJ address and go to a site which accepts OpenID logins (like identi.ca), and the site which accepts OpenID asks LJ (etc.) “is this really tycho?” at which point LJ makes sure I’m logged in and asks me “do you really want me to do this?” I say yes, and then I’m logged in. No passwords to be compromised, no passwords to forget, no fuss. It just works.

There are a couple of other nice features. First, you can mask your login with a different URL. My OpenID URL is this website, but the provider/verifier of my identity is LiveJournal, and this works because of a tag that’s in the HTML of tychoish.com. In addition to being pretty, this means that if I decide at some point that I want a different LJ account or a totally different OpenID provider, I can change the URL in question in the HTML of tychoish.com, and everything still works.
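For the curious, the delegation mechanism is nothing more than a couple of `<link>` tags in the page’s `<head>`, which you can see with standard tools; a sketch (the LiveJournal endpoint below is from memory, so treat the href values as illustrative):

```sh
# Look for the OpenID delegation tags in the <head> of tychoish.com;
# the href values in the sample output are illustrative:
curl -s http://tychoish.com/ | grep -i openid
#  <link rel="openid.server" href="http://www.livejournal.com/openid/server.bml" />
#  <link rel="openid.delegate" href="http://tychoish.livejournal.com/" />
```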

Secondly, you can run your own OpenID server. Unlike other systems which unify identity management online, OpenID doesn’t depend on one company providing authority or security, which is nice, because there’s no single target to hack, as there would be if a company like Google or Microsoft ran the one unified server.

OpenID is of course open to the same kinds of problems around identity squatting and theft that having lots of logins can have, but it doesn’t create any new problems or security risks, and there are ways in which having fewer passwords and fewer accounts could actually be more secure.

But online communities? How does that fit in? Well, simple. OpenID makes signing up for communities a lot easier. It’s the first step in opening up our participation in multiple online communities to a more federated environment, and I think it could make it possible for a lot of smaller niche websites to coexist in a larger internet ecology.

I’m going to post more on the subject of the ecosystems of internet communities and federation later this week, but let’s return to my conversation with the coworker, where I said something like: “you know, if only people would actually use OpenID?”

And he said, “Yeah, good luck with that one.”

Onward and Upward!

Percentage Error

(some liberties taken with this transcript)

caroline: the outright lying makes politics fun again.

tycho: No, I think I’m still bitter.

caroline: Haha, Don’t get me wrong, I’m so angry I think my brain is compressing.

tycho: sigh yeah.

caroline: But at this point I accept that half of America is backwards and nuts.

tycho: Oh, it’s more like 75%. Minimum.

caroline: 99%

tycho: 90%

caroline: 140%

tycho: ERROR

Dumb Terminals

I make a point of staying on top of trends in technology. It’s sort of my “thing,” and it’s more fun than, say, the hair colors and marital statuses of the rich and famous.

So like most geeks, I’ve been hearing more and more about “cloud computing,” which is supposedly an evolution of Web 2.0 technologies (Web 2.1? 2.5?) and this whole “internet thing,” where software and data are things that run on a server somewhere else, and your computer (via your browser) is a window on the “cloud.” Let me back up:

Let’s start with the traditional model of personal computing. People have computers, they run software, and they store data. If they have a network connection, the network is primarily a tool for pulling in new data to be stored and processed using the software and hardware that’s sitting on the user’s desk.

Whereas on the desktop you might use a program like Word or OpenOffice, “in the cloud” a program like Google Documents is probably the app of choice. And web/cloud apps have been replacing desktop email clients for years.

And this is important and noteworthy because it’s a fundamental change in the way that we use computers, and largely we are all accustomed to this mode of operation. The interesting thing is that the underlying technologies that support cloud computing (MySQL, PHP, Python, Ruby on Rails, and even AJAX) are really nothing particularly new. I suspect that the largest contributing factor to the emergence of cloud computing is the fact that network connectivity has improved dramatically in the last year or two.

Having a connection to the internet isn’t something that you do for a few moments or even hours a day anymore, but is practically a requisite part of computer usage: the “internet” is always on. And networks are pretty darn fast for most things.

The geeky and historically astute among you, given the title, can probably see where this is going…

The personal computing modality (run applications and store data locally) came about when computing power and storage finally got to be small enough (it could fit on your desk!) and powerful enough (whole kilobytes!) that it became reasonable for non-specialists to own and operate computers.

Before this, computers were pretty large, too powerful (by the standards of the day) for one person to need all to themselves, and very expensive and finicky to run, so they ran in secure/controlled locations operated by specialists. Users had “dumb terminals,” which included some sort of connectivity interface (RJ-11 or coax, likely), a monitor, a keyboard, and a chip board that was just enough to tie it all together and send the signal back to the real computer where all the processors and data lived.1

And then computers got smaller and faster than the network connections could keep up with. Hence desktop computing. I’m just saying that things cycle through a bit, and everything that’s old is new again.

Thinking about cloud computing as an old modality rather than a new one makes it a much more exciting problem, because a lot of the nitty-gritty problems and interface questions were solved in the 70s. For instance, X11, the windowing system that most *NIX systems use, is designed to run this way; in fact it treats the case where the windows appear on a screen attached to the very computer running the applications as something of an interesting coincidence. Which is pretty logical, and makes a lot of very cool things possible, but is admittedly kind of backward from the contemporary perspective.
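If you have a couple of Unix machines handy, the old modality is still one command away; a minimal sketch (host and user names hypothetical):

```sh
# Run xterm on a remote host but draw its window on the local screen;
# ssh -X forwards the X11 connection, so the application's process and
# data stay on the remote machine -- exactly the dumb-terminal split:
ssh -X user@remotehost.example.com xterm
```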

Anyway, cool stuff. Have a good weekend, and if you have any thoughts on this subject, I’d love to hear them.


  1. In fairness, these connections were, I believe, almost always over intranets rather than over some sort of public internet, though as I think about it, there were probably some leftovers of this in the BBS days, with regard to terminals and whatnot. ↩︎

@sadiavt, microblogging

(my friend sadia asked me a question on twitter that I couldn’t answer in 140 characters, so I escalated it to email, and she said that I should post it to the blog, and who am I to refuse a request like that?)

I really like the open-source/federated microblogging site “identi.ca,” which runs on the laconica platform. It’s good stuff, but the user base isn’t quite there yet (either on the site or in the federated network).

Basically, the killer feature of microblogging, for me, is integration with a jabber/XMPP client, and pretty fine-tuned control over who gets in your “stream/feed.” Everything else is nice, but fluffy (search, threaded comments, etc.). Jabber is great because it’s so interoperable, and because jabber apps like Adium are killer robust and integrate well into the system, whereas Adobe AIR twitter apps (and even Twitterrific) don’t so much. In some respects, it also boils down to the difference between pull (which is the typical solution, and not ideal) and push (which twitter can’t cope with anymore).

I have the attention/time to spare for this: if I can have a lot of control over what I see, and it’s pushed to me live rather than via large regular pulls, it’s easier to deal with. The end result is that while all the people I’m interested in reading/talking to are on the twitter, I have little tolerance for the site/service itself, particularly when I know that every other site does it a little better, and most can supply jabber feeds. This is a scaling problem, but Ev has cash, and the solution might be disruptive, but it’s not conceptually difficult.

The thing is, I think twitter is afraid that if they do anything drastic, or if there’s any more downtime in a major way, everyone will jump ship. And they’re probably right. Which would be good for us, but not for them.

You also asked if microblogging was an addiction or a curiosity, and I think I try all the new services out of curiosity; I don’t think it’s a particular addiction, aside from the general internet one. It’s sort of ironic, but I’d like to spend my internet time communicating with people rather than reading the big portals. Hence the email lists, my “always on IM” M.O., ravelry, the fact that I don’t really read the A-List blogs much, etc.

I was a big IRC user back in the day, and in a lot of ways I see twitter (et al.) as an evolution of the IRC impulse. I don’t think “going back” to IRC is the way to go (because frankly Jabber/XMPP is really a “better IRC” anyway), so if it is an addiction, it’s not a particularly new one.

Site Specific Browsers For the Win (eg, SSBFTW)

I’ve written about how much I hate, hate web-based applications on this site; so much that I don’t even want to begin to hunt through the archives to find a representative sample of entries on the topic. But let me summarize.

Browsers, on the whole, well, suck. They hog system resources, they crash a lot, and they have the most ass-backwards feature model I can think of: “my browser lets you install plugins so that you can make it do all the things that I didn’t code into it.”1 Also, did I mention that they crash a lot?2

On a more conceptual level: as a class of applications they are inconsistent in their implementation of any number or combination of three major standards (and minor ones I’m sure, but I’m thinking about HTML, CSS, and JavaScript). They’re slow. For most things they require a live internet connection (which is one hell of a dependency for a program, if you ask me), and oh yeah, there’s something like an anti-HIG, so nothing’s consistent and there’s a huge learning curve where there needn’t be.

So with that critique under our belts, it should be said that there are some things which do work best in browsers or browser-like interfaces: basically, programs that rely on the many-interlinked-pages mode of the web, or programs that need to visualize data as it changes in real time. Wikis are a great example of this, and they don’t really work inside of desktop apps anyway. I mean, I’m not opposed to the internet, or the web, but I want my applications and my work to happen in different kinds of software/environments as a general rule. And the truth of the matter is that there are times when web-based applications are worth using.

Enter site-specific browsers (SSBs) like Fluid.app3. Here’s the problem: you have a few web apps that you use a lot, you want your apps to be sandboxed4 but can’t/won’t use Google Chrome, and you don’t really need or want all of the browser-centric interface crap. SSBs basically raise a website to the level of an application, just like all your other applications. And it’s sandboxed. Short of finding alternatives to web-based applications, this is totally the best option around. Fluid has a lot of nifty features, like control over what kinds of URLs it’ll open or send to your browser, what it does when you “close” a window, special/custom key commands, and so forth.

What this means in practice: all of the websites that I used to habitually keep open in my browser? They have their own “apps” now, and I sometimes (shh!) close my web browser (which helps the browser run better, which is crazy when you think about it). It also means that I can use tabs more efficiently, and reference documents don’t get lost. It’s a great thing. Try it out; it’s all free, in some sense.

Here’s the tycho-style second hack, particularly for laptop users: install a web server and run as much of the web-based software as you can locally. Need access to a personal wiki? Run it locally, and then you always have access to it, even when the wireless flakes out. Clearly if you want to have a “live journal app”/SSB this won’t work, but in some cases it strikes me as both possible and highly preferable.
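As a minimal sketch of the idea (Python 2’s built-in server only handles static files; a real wiki engine needs its full stack, e.g. a local Apache/PHP/database setup, installed the same way):

```sh
# Serve the current directory at http://localhost:8000/ with the web
# server that ships with Python 2 -- static files only, but it makes
# the point: the "web app" keeps working when the wireless flakes out.
python -m SimpleHTTPServer 8000
```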

That’s it, though I can’t decide if SSBs are stopgaps until web 4.0 or 5.0, where the revolution is about great syncing and sturdy clients that run on your local machines and on your virtualized cloud computers.

Oh, and DNIs while we’re at it. That’d be awesome; well, the open-source second-generation DNIs. No one’s putting proprietary 1.0 or beta-grade hardware in my head, thankyouverymuch.


  1. Admittedly I’m not opposed to the plugin model, and there are a lot of Firefox plugins that I lust after, but the truth is that Firefox--because of plugins--runs interminably slower than it really should, which brings us back to the notion of: if the browser could do it from the beginning, without the plugins… ↩︎

  2. So much so that most good browsers now have a “when I crash, I’ll save your state as best I can, so you only have to wait a long time and almost be back where you were before I panicked” feature. Remember how many years it took them to think of that? Imagine if other programming environments or operating systems did that. Google Chrome fixes this by sandboxing each web-page instance (good going), but really now. Geeze. Also, a word here about Chrome: I can’t wait to be able to use it when they release it for OS X (and Linux). The sandboxing is cool, the speediness, the good UI (did Alcor have something to do with that?), the fact that it’s likely to be about as open source as Mozilla/WebKit in the end? A win. But anyway, if browsers are what amounts to a runtime, or a programming environment, then they are in no way stable enough. If they were just remote file viewers, it’d be fine, but they’re not. Not anymore. ↩︎

  3. I like this one; it’s free as in beer, but not speech, and is mostly a wrapper around WebKit/Safari, which is… free-ish. Again, not with the caring. If you’re not a Mac user, check out Mozilla Prism, which is a Firefox offshoot/plugin that does a very similar thing. ↩︎

  4. Wow, this is going to be the post with all the footnotes. I also realized that I’ve used this term a lot without defining it properly. Ideally applications don’t crash, but if/when they do, you don’t want them to crash your entire system. And this is largely true of separate programs, but if an application is host to another group of applications/processes (like multiple windows/tabs, for instance), you don’t want what you do in window 2, tab 14 to affect (i.e., end) what’s happening in window 3, tab 6, or any other tab/window. Except that in the browser world, this happens all the time. ↩︎

On Pseudonymity

I’ve written before about why I use a pseudonym and about the importance of naming, but there’s one aspect of this whole “being tycho” thing that I’ve never really articulated, mostly because I think it’s hard to explain.

And then the entry sat in an open window on my desktop for the better part of a week.

Let this be a lesson to you. Don’t write the introduction to a paragraph that ends with “hard to explain” and then let the post fester for 5 days.

Luckily, to save my ass, someone on my knit list (JoVE) posted something to the queer knit list I moderate about how she was always surprised when people didn’t realize that she was a she, even though she intentionally uses a gender-neutral handle/nickname/name.

And then I wrote this post, in response to the email. I knew it was sitting around waiting to come out. First, to contextualize: all of the previous list moderators had gone by “list mom” or “list dad,” and being even younger then than I am now, I was sort of weirded out by the concept.

Also, I’ve done some, but not a lot, of editing to what’s below, just to clear things up a bit.

When I started taking care of the list, I passed on the “list dad” stuff for previously covered reasons (age mostly, I think), and I think you (JoVE) were the one who reminded folks that Sam was a gender-ambiguous name (and I think this was at a time when there were more Samanthas on the list anyway), which is something I’m vaguely aware of when I introduce myself online, but I forget too.

(For the record, while I’m not intensely invested in masculinity, I’m pretty comfortable with my maleness.) I also think that I probably give off more of a “Jewish” vibe than a “Gay” one, which means that the goyim either think I’m ethnic or gay, and the jews don’t seem to notice (unless they’re queer). But then I’m convinced that, at least in America, the gay male stereotype is lifted pretty indiscriminately from the jewish stereotype, so fair is fair.

In the last year and some change I’ve been using a pseudonym that is less jewish and more male (and fewer letters! woot!) than my given name, and it’s sort of an interesting drag to pull off (and I still usually dash out my g-ds, which I think is endearing and a bit ironic/weird, and it might negate the drag a bit, but, whatever.)

I’ve always seen my use of a pseudonym--at least in part--as paying homage to a tradition of women (and jews) using pseudonyms to gain entry into the publishing world. But in fairness, there’s probably a level of “guy pretending to be a woman pretending to be a man” that gets lost in the translation. Thankfully, the other part--keeping my given name out of google for privacy concerns--works just fine.

And there you have it. Thanks for reading, more non-introspective (extrospective?) posts soon.

Awesome Window Manager

Aside from doing semi-perverse things with my email retrieval system, one of my most recent technical/digital obsessions has been an X11-based window manager called awesome. It’s a tiling window manager, and it’s designed to decrease reliance on the mouse for most computer interaction and system navigation purposes.

Unless you’re in the choir, your first question is probably “what’s a tiling window manager?” Basically, the idea is that awesome takes your entire screen and divides all of it into windows, which are a lot like the windows that OS X, Windows, GNOME, and KDE users are used to. Awesome also has what it calls “tags,” which are akin to virtual desktops (and which I think of as slates); these make it possible to have a great number of windows open and accessible, which maximizes screen efficiency and multi-tasking while minimizing distractions and squinting.

The second question you might have, given the prevalence of the mouse-pointer paradigm in computing lo these 30 years, is: why would you want a system that’s not dependent on the mouse? Long-time readers of the ‘blag might remember some blogging I did earlier this year about the second coming of the command line interface. The basic idea is that the more you can avoid switching between the mouse and the keyboard, the more efficient you can be: keystrokes take fractions of seconds, while mouse maneuvers take whole seconds, and this adds up. The more complex idea is that text-based environments tend to be more scriptable than GUIs, and are coded more efficiently, with less mess in between you and your data/task. After all, coding visual effects into your text/word-processing application is probably a sign that someone is doing their job horribly wrong.
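As a trivial illustration of the scriptability point (file names hypothetical): something that would be a pile of per-file clicking in a GUI is one repeatable line in a text environment.

```sh
# Word counts for every draft, sorted shortest to longest -- one
# repeatable line, where a GUI would need opening each file by hand:
wc -w ~/drafts/*.txt | sort -n
```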

One of my largest complaints about GNOME is that it’s terribly inefficient with regard to how it uses screen space. Maybe this is a symptom of using a laptop and not having a lot of space to go around, but most applications don’t need a menu bar at the top of every window, and a status bar at the bottom of every window, and a nice 5-pixel border. I want to use my computer to read and write words, not to look at window padding (I suppose I should gripe about GNOME at some point; that’s an entry unto itself). Awesome fixes this problem.

I’m not jumping into awesome full time, but I am starting to use it more and learn about its subtleties, and hopefully I’ll be able to contribute to the documentation of the project (it needs something, at any rate). For a long time I’ve flirted with Linux, but I haven’t ever really felt that it offered something that I couldn’t get with OS X, and this changes that pretty significantly.
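(If you want to experiment without committing to anything, the traditional route is a line in your ~/.xinitrc; a minimal sketch, assuming awesome is already installed:)

```sh
# A minimal ~/.xinitrc: running `startx` from a console login will
# then start X with awesome as the window manager, leaving your
# regular GNOME/KDE session configuration untouched.
exec awesome
```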

One of the things that I need to do first is explore Linux equivalents to my remaining OS X-only apps. The most crucial is the news reader; I’m a big fan of NetNewsWire, and I’ve never used a news reader which can top it. As it turns out, between vim and Cream, I’m pretty set in the text editor department (though I need to port over the most important of my scripts and snippets to vim), and although Adium is built on the same library that Pidgin uses (libpurple), using Pidgin is painful by comparison, particularly in awesome.

But I have time. I’m doing this because it’s interesting, and weirdly enough, it’s kind of fun.

That’s my story and I’m sticking to it, I’ll be posting more on the subject as I learn more.