Planned Obsolescence and Gadgets

I’ve had a tab open for most of the last week for a 128 gig solid state drive that would fit my laptop, from a maker I tend to consider reputable, at a list price I was comfortable with. I haven’t ordered it for a number of reasons (I’m not ready to swap drives, I’m using the wrong kind of file system, and given that my current drive works fine it’s a bit more than I want to spend at the moment.) But the fact that these things are now “affordable,” and cost half what I thought they did, has me thinking about my current stable of gear. This is, as those of you who think about technology the way I do will recognize, not surprising. Here’s what’s on my mind in terms of new gadgets:

Netbooks, the small laptops, are awfully sexy, and I kind of want one. This isn’t logical: my real laptop is small (12 inches), has a full-sized keyboard and decent resolution, and is totally functional: I use it for everything from the writing and production of this blog, to most of my day job, to all of my entertainment computing. I take my laptop everywhere, so a netbook doesn’t make a lot of sense. Also, the relative (and ongoing!) cost of maintaining a full “tycho-stack” on more than one machine is considerable and not something I want to get into. This doesn’t mean that I don’t find all the netbooks extremely cute and desirable, but I don’t really need one. I suppose the interest in the SSD is part of an effort to make the existing laptop into more of a netbook.

Also on the stack: I’m approaching the point in my cell phone contract when it’s time to upgrade again. I’ve been debating the Nokia N900 for some time. Though I’ve not played around with the N900, I think all of my “I wish my phone could do X” problems could be solved by using a phone that actually runs a Debian-based system. Except I’ve learned in the last week that the screen on the N900 is resistive, like all of the old Palm Pilots (say,) and not capacitive like the iPhone/iPod Touch/Android phones. Big bummer. Not only is the phone unsubsidized and expensive, but it uses outmoded technology for the most important component. Fail.

At the same time it looks like Android phones are finally starting to make it to AT&T. I’m not a good candidate for the iPhone (not being a reliable Mac user), and while I’m not sure about the enduring possibilities of the Android platform, it seems like the best option at the moment.

I like my BlackBerry, really, but the replacement for the BlackBerry I have at the moment? The same phone. Really. Well, I think it has a different model number and a slightly different trackball, but otherwise, it’s the same. I mean… throw me something here? Upgrades are supposed to be upgrades. I wonder if BlackBerry smartphones are today’s Nokia S60.

In any case, dithering about cell phones, netbooks, and SSDs aside, nothing else in my technology stable really needs changing. I geeked out a few weeks ago and reorganized some of my mail config (but not much of it!), and I think if I had free time, I might redeploy the server to use nginx and prosody, which are a bit more lightweight than my current stack; but that’s all minor and will totally wait. It’s a good place to be.

Content Management Beyond Wordpress

As a follow-up to my surprisingly popular post on the limitations of Wordpress, and also, I suppose, my post on the current status of tumblelogs, I wanted to ruminate on where Wordpress is as a piece of software (and a platform,) and what the whole content management space looks like today.

In a lot of ways Wordpress won. Wordpress does what it does very well, and the thing it does, powering blogs, is in point of fact what most people need. The Wordpress plug-in and theme ecosystems are vibrant and powerful and add a great deal of value to the system. Concerns about performance are largely solved by WP Super Cache, and even though I’m squidgy about MySQL and PHP as a platform, for the job at hand it works.

The limitations that I spoke to a year ago are--I think--largely still relevant: exceedingly few (and fewer) innovative websites and blogs will be started that use Wordpress. This is, I think, mostly because the prescribed form (“blog” as seen by Wordpress) has become cemented in our minds, and it becomes harder and harder to break from that form. In a lot of ways the largest limitation of Wordpress is not the software itself but the habits we have developed as users of Wordpress.

Indeed this is the general problem with most content management systems: all sites that use a given platform tend to look very much like all the other sites that use that platform. Systems built on web development frameworks (Rails/Django) are a bit better in this regard, but the problem remains. But this post is about the future, not about the histories or even the stale present.

Some predictions and forward-looking trends are thus in order:

  • Content management systems will increasingly manage work-flow rather than content presentation. Every site needs to be built, and in a lot of ways building a site and creating content are fixed costs no matter what system you use; but the work-flow that you use to maintain content is highly variable and can be managed in much more effective ways. Stay tuned for this.
  • The “built-in” feature set, or default configuration of a content management system will become less important than the possibilities of the platform. While I don’t think frameworks and CMS’s will merge in the next few years, they’ll get closer.
  • Smart static generation is still the future. Most things don’t need fully dynamic content, and the intense caching that we have to do to offset the dynamic overhead of contemporary systems isn’t the real solution to this problem. (See the sketch after this list.)
  • Content management systems are, at the moment, at the center of a web site’s stack and deployment, and are thus huge, all-encompassing programs. I think content management systems will increasingly be designed as much smaller applications that only manage content and content work-flows rather than entire websites. We might call this the API-ization, or the Unix-ification, of web development.
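
To make the static generation point concrete, here’s a toy sketch of the idea: rebuild a page only when its source has changed, and let the web server serve plain files the rest of the time. The paths and the use of pandoc are my own assumptions for illustration, not a description of any particular system:

# toy sketch: regenerate a page only when its markdown source is
# newer than the rendered output; the web server just serves files.
# assumes pandoc (or any markdown-to-html converter) is available.
for src in posts/*.md; do
    out="public/$(basename "${src%.md}").html"
    if [ ! -e "$out" ] || [ "$src" -nt "$out" ]; then
        pandoc "$src" -o "$out"
    fi
done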

Anything else? Am I totally off my rocker? Onward and Upward!

Installing Mcabber .10-rc3 on Debian Lenny

mcabber is a console-based XMPP (Jabber) client. It runs happily within a screen session, it’s lightweight, and it does all of the basic things that you want from an IM client without being annoying and distracting. For the first time since I started using this software a year or two ago, there’s a major release that has some pretty exciting features. So I wanted to install it. Except there aren’t packages for it for Debian Lenny, and I have a standing policy that everything needs to be installed using package management tools so that things don’t break down the line.

These instructions are written for Debian 5.0 (Lenny) systems. Your mileage may vary on other systems, including Debian-derived distributions like Ubuntu. Begin by installing some dependencies:

apt-get install libncurses5-dev libncursesw5 libncursesw5-dev pkg-config libglib2.0-dev libloudmouth1-dev

The following optional dependencies provide additional features, and may already be installed on your system:

apt-get install libenchant-dev libaspell-dev libgpgme-dev libotr-dev

When the dependencies are installed, issue the following commands to download the latest release into the /opt/ directory, unarchive the tarball, and run the configure script from inside the unpacked source directory. Installing mcabber into the /opt/mcabber/ folder makes it easy to remove later if something stops working.

cd /opt/
wget http://mcabber.com/files/mcabber-0.10.0-rc3.tar.gz
tar -zxvf mcabber-0.10.0-rc3.tar.gz
cd mcabber-0.10.0-rc3/
./configure --prefix=/opt/mcabber

When that process finishes, run the following:

make
make install
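
If the build succeeded, the mcabber binary and its support files now live under /opt/mcabber/. A quick sanity check:

ls /opt/mcabber/bin/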

Now copy the /opt/mcabber-0.10.0-rc3/mcabberrc.example file into your home directory. If you don’t already have mcabber configured, you can use the following command:

cp /opt/mcabber-0.10.0-rc3/mcabberrc.example ~/.mcabberrc

If you do have an existing mcabber setup, then use the following command to copy the example configuration file to a non-conflicting location in your home directory:

cp /opt/mcabber-0.10.0-rc3/mcabberrc.example ~/mcabber-config
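
Whichever file you end up with, it needs at least your account information. Something like the following rough sketch--the account values here are placeholders, and the option names follow my reading of the example file in this release, so treat mcabberrc.example itself as the authoritative reference:

# placeholder account details; substitute your own
set jid = user@example.net
set password = yourpassword
# keep local logs of conversations
set logging = 1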

Edit ~/.mcabberrc or ~/mcabber-config as described in the comments in the file. Then, if your config file is located at ~/.mcabberrc, start mcabber with the following command:

/opt/mcabber/bin/mcabber

If you have your mcabber config located at ~/mcabber-config, start mcabber with the following command:

/opt/mcabber/bin/mcabber -f ~/mcabber-config
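
Since /opt/mcabber/bin isn’t on the default PATH, you might add an alias along these lines to your shell configuration (the alias, and the use of the alternate config file here, are just suggestions):

alias mcabber='/opt/mcabber/bin/mcabber -f ~/mcabber-config'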

And you’re ready to go. Important things to note:

  1. If something gets, as we say in the biz, “fuxed,” simply run “rm -rf /opt/mcabber/” and reinstall.
  2. Check mcabber for new releases and release candidates. These instructions should work well once there’s a final release, at least for Debian Lenny. The release files are located here.
  3. Make sure to stay up to date with new releases to avoid bugs and potential security issues. If you come across bugs, report them to the developers; there is also a MUC for the mcabber community here: xmpp:mcabber@conf.lilotux.net.
  4. If you have an additional dependency that I missed in this installation, do be in touch and I’ll get it added here.
  5. Debian Lenny ships with version 0.9.7 of mcabber. If you don’t want to play with the new features and the magic in 0.10, and just want a regular client, install the stable mcabber with the “apt-get install mcabber” command and ignore the rest of this post.

Overheard

One of my favorite memes on twitter is the “OH:” meme, where folks post little snippets of things they’ve heard out in the world that are (usually) hilarious. This post is, I think, a collection of the best little quotes I’ve heard, heard about, or seen recently.

“Chicken is easily divisible”

“If you’re hand is one space off on your keyboard and you start typing server, you start typing awesome. servers are awesome.”

“I will be eagerly awaiting the New York Times style piece on the growing trend of the ‘ZOMG WE’RE NOT EVEN DATING GUYS’ rings.”

“The emacs makes the text, I am but a humble servant.”

“We should get facebook married so everyone would know its the fakes.”

“If [company] were a musical, there’d be a song here. Thankfully it’s not.”

“Caffeine is like liquid naps.”

“For epic lulz you should switch your keyboard [with blank keys] to Dvorak.”

“wat.”

Enterprise Linux Community

Ok. I can’t be the only one. [1]

I look at open source projects like OpenSolaris, Alfresco, Resin, Magento, OpenSuSE, Fedora, and MySQL, among others, and I wonder: what is this “community” around these projects that people are always talking about? Sure, I can download the source code under licenses that I’m comfortable with, and sure, they talk about a community, but what does that mean?

What, as a company, does it mean to say that the software you develop (and likely own all the rights to,) is “open source,” and “supported by a community?”

If I were sensible, I’d probably stop writing this post here. From the perspective of the users of and participants in open source software, this is the core question: both because it dictates what we can expect from free software and open source, and more importantly because it has been historically ill-defined.

There are two additional, but related, questions that lurk around this question, at least in my mind:

1. Why are new open source projects only seen as legitimate if the developers are able to build a business around the project?

2. What does it mean to be a contributor to open source in this world, and what do contributors in “the community” get from contributing to commercial projects?

There are of course exceptions to this rule: the Debian Project, the Linux kernel itself, the GNU packages, and most open source programming languages, among others. I’d love to know if I’ve missed a class of software in this list--and there’s one exception that I’ll touch on in a moment--but the commonality here is that these projects are so low level that it seems too hard to build businesses around them directly.

When “less technical” free software projects began to take off, I think a lot of people said “I don’t know if this open source thing will work when the users of the software aren’t hackers,” because after all, what does open source code do for non-hackers? While it’s true that there are fringe benefits that go beyond the simple “free as in beer” quality of open source for non-hacker users, these benefits are not always obvious. In a lot of ways the commercialization around open source software helps add a buffer between upstreams and end users. This is why I included Debian in the list above: Debian is very much a usable operating system, but in practice it’s often an upstream for other distributions--Ubuntu, Maemo, and so forth.

The exception that I mentioned is, to my mind, projects like Drupal and web development frameworks like Ruby on Rails and Django. These communities aren’t sponsored or driven by venture-capital-funded companies, though the leader of the Drupal community has taken VC money for a Drupal-related start-up. I think the difference here is that the economic activity around these projects is consulting-based: people use Drupal/Django/Rails to build websites (which aren’t, largely, open source) for clients. In a lot of ways these are much closer to the “traditional free software business model,” as envisioned in the eighties and nineties, than what seems to prevail at the moment.

So to summarize the questions:

  • What, as a company, does it mean to say that the software you develop (and likely own all the rights to,) is “open source,” and “supported by a community?”
  • What does it mean to participate in and contribute to a community around a commercial product that you don’t have any real stake in?
  • How does the free software community, which is largely technical and hacker-centered, transcend that orientation to deal with and serve end users?
  • How do we legitimize projects that aren’t funded with venture capital money?

Onward and Upward!


  [1] I think and hope this is the post I meant to write when I started writing this post on the work of open source.

Analyzing the Work of Open Source

This post covers the role and purpose (and utility!) of analysts and spectators in the software development world, particularly in the open source subset of that. My inspirations for this post come from:


In the video, Coté says (basically) that open source projects need to be able to justify the “business case” for their project: to explain what innovation the project seeks to provide the world. This is undoubtedly a good thing, and I think we should probably all be able to explore, clearly explain, and even justify the projects we care about and work on in terms of their external worth.

Project leaders and developers should be able to explain and justify the greater utility of their software clearly. Without question. At the same time, problems arise when all we focus on is the worth: people become oblivious to how things work, and become unable to participate in informed decisions about the technology that they use. Users without an understanding of how a piece of technology functions are less able to take full advantage of it.

As an aside: one of the things that took me forever to get used to about working with developers is the terms in which they describe their future projects. They speak in the future tense with much more ease than I would ever consider: “the product will have this feature,” “it will be architected in such a way.” From the outside this kind of talk seems unrealistic and grandiose, but I’ve learned that programmers tend to see their projects evolving in real time, and so this kind of language is really more representative of their current state of mind than of their intentions or a lack of communication skills.

Returning for a moment to the importance of being able to communicate the business case of the projects and technology that we create: as we force the developers of technology to focus on the business cases for what they build, we also make it so that the only people capable of understanding how software works, or how software is created, are the people who develop software. And while I’m all in favor of specialization, I do think that the returns diminish quickly.

And beyond the fact that this leads to technology that simply isn’t as good or as useful in the long run, it also strongly limits the ability of observers and outsiders (“analysts”) to provide a service for the developers of the technology beyond simply communicating their business case to the outside world. It restricts all understanding of technology to journalism, rather than the sort of “rich and chewy” (anthropological?) understanding that might be possible if we worked to understand the technology itself.

I clearly need to work a bit more to develop this idea, but I think it connects with a couple of arguments that I’ve put forth in these pages before: one regarding Whorfism in programming, and another on constructing rich arguments.

I look forward to your input as I develop this project. Onward and Upward!

The Successful Failure of OpenID

Just about the time I was ready to call OpenID a total failure, something clicked, and if you asked how I thought OpenID was doing, I’d have to say that it’s largely a success. But it certainly took long enough to get here.

Let’s back up and give some context.

OpenID is a system for distributing and delegating authentication for web services to third party sites. For the end user, rather than signing into a website with your username and password, you sign in with your profile URL on some secondary site that you actually log into. The site you’re trying to log into asks the secondary site “is this legit?”, the secondary site prompts you (usually just the first time, though each OpenID provider may function differently here), and then you’re good to go.

Additionally, and this is the part that I really like about OpenID, you can delegate the OpenID for a given page to a secondary host. So on tychoish.com you’ll find the following tags in the header of the document:

<link rel="openid.server" href="http://www.livejournal.com/openid/server.bml" />
<link rel="openid.delegate" href="http://tychoish.livejournal.com/" />

So I tell a third party site “I want to sign in with http://tychoish.com/ as my OpenID,” it goes and sees that I’ve delegated tychoish.com’s OpenID to LiveJournal (incidentally the initiators of OpenID, if memory serves,) and LiveJournal handles the authentication and validation for me. If at some point I decide that LiveJournal isn’t doing what I need, I can change these tags to point to a new provider, and all the third party sites will talk to the new provider as if nothing happened. And because I control tychoish.com, I maintain a provider-independent identity while still making use of these third party servers. Win.
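
For example, a move to some other (entirely hypothetical) provider would just be a matter of editing those two tags:

<link rel="openid.server" href="http://openid.example.com/server" />
<link rel="openid.delegate" href="http://openid.example.com/tychoish/" />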

The thing is that OpenID never really caught on. Or, I should say, it took a very long time to be taken seriously, even though managing a single set of authentication credentials and a common identity across a number of sites has a lot of benefits for users. There are a number of reasons for this, in my understanding:

1. Third party vendors wanted to keep big user databases with email addresses. OpenID means, depending on the implementation, that you can bypass the traditional sign-up process. This isn’t a technological requirement, but it can be confusing in some instances, and by giving up the “traditional” value associated with sponsoring account creation, OpenID seemed like a threat to traditional web businesses. There were ways around this, but as is often the case, a dated business model trumped an inspiring one.

2. There was, and is, some FUD around security. People thought that if they weren’t responsible for the authentication process, they wouldn’t be able to ensure that only the right people could get into a given account, particularly since the only identifying information associated with an account was a publicly accessible URL. Nevertheless it works, and I think people used these details to make the system feel less secure than it is.

3. There are some legitimate technological concerns that need to be sorted out, particularly around account creation; this is the main source of the confusion cited above. If someone signs up for an account with an OpenID, do they choose a username, or do we just use the OpenID URL? Is there an email address or password associated with the account? What if they get locked out and need to get into the account but there’s no email on file? What if they need to change their OpenID provider or location at some point? These are legitimate concerns, but they’re solvable problems.

4. Some users have had a hard time groking it. Because OpenID breaks with the conventional usage model, it’s a bit hard to grok, even though it makes signing into sites simpler.

What’s fascinating about this is that eventually it did succeed. More even than joy at the fact that I finally get to use OpenID, I think OpenID presents an interesting lesson in the eventual success of emergent technological phenomena. Google accounts, Flickr accounts, and AIM accounts all provide OpenIDs. And although “Facebook Connect” doesn’t use OpenID technology, it’s conceptually the same. Sites like Stack Overflow offer OpenID-only authentication, and the approach is becoming more popular.

OpenID succeeded not because of a campaign to teach everyone that federated identity via OpenID was the future and the way we should interact with web services, but because the developers of web applications learned that this was the easier and more effective way to do things. And I suspect that in as many as 80% or 90% of cases, when people use OpenID they don’t have a clue that that’s the technology they’re using. And that’s probably an okay thing.

The question that lingers in my mind as I end this post is: does this parallel any other promising technology that we’re interested in right now? Might some other “Open*” technology take away a strategic lesson from the tactical success of OpenID? I’d love to see that.

Onward and Upward!

Common Lisp, Using ASDF Install With SBCL

So I, like any self-respecting geek trying to learn Common Lisp, started to read the CLiki, which is a wiki that supports Common Lisp projects. Nifty, right? Right. It’s full of stuff, and between it and Common-Lisp.net, you can be pretty sure that if it exists in the Common Lisp world it’ll appear on one of those two sites. And yet for every cool lisp thing, rather than usable instructions for installing the software, the page would just say “use asdf-install and have fun.” Which is good if you know what asdf is, what it’s supposed to do, and how to use it.

But, there’s a decent chance that you’re like me, and were completely clueless.

Turns out asdf-install is the Common Lisp equivalent of the CPAN shell, Ruby gems, or the Debian project’s apt-get, with some lisp-centric variations. This post provides an overview and a “quick start guide” in case you want to get started. The directions I provide are in line with the way I like to keep my file system organized (e.g. under ~/) and center around the Arch Linux and SBCL setup that I use. However, this should hold true (more or less) for any distribution of Linux with SBCL, and possibly for other lisps. Feel free to add your own modifications in comments or on the lisp page on wikish.


Begin by getting to a CL REPL. If you have emacs and SLIME installed, get to a REPL using “M-x slime”; otherwise just type sbcl at a system prompt. Installing slime, emacs, and sbcl is beyond the scope of this post, but in general use the packages designed for your platform and you should be good: MacPorts for OS X users, and the package managers for most prevalent Linux-based operating systems, should have what you need.

At the REPL do the following:

(require 'asdf)
(require 'asdf-install)

(asdf-install:install '[package-name])

Remember to replace [package-name] with the dependency or package that you want to install. asdf-install will ask you if you want to install the package system-wide or in a user-specific directory. I tend to install things in the user-specific directories because it gives me a bit more control over things. The user-specific directory is located in ~/.sbcl, if you want to poke around the files yourself. Done. That’s pretty straightforward. Let’s get to the awesome parts.
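
First, though, a concrete example: to install cl-ppcre, the regular expression library that turns up again later in this post, you’d type:

(asdf-install:install 'cl-ppcre)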

Make a ~/lisp directory. I keep mine stored in a git repository. I’ve also kept my .sbcl directory inside ~/lisp and then created a symbolic link so that the computer is none the wiser. Issue the following commands to accomplish this:

cd ~/
mkdir -p ~/lisp/systems
mv .sbcl ~/lisp/
ln -s ~/lisp/.sbcl

Adjust the paths as necessary. You will also want to create a ~/.sbclrc file with some code so that asdf initializes itself when SBCL runs. Do the following:

cd ~/
touch ~/lisp/.sbclrc
ln -s ~/lisp/.sbclrc

In your .sbclrc file you’ll probably want something like the following:

(require 'asdf)

;; register the system-wide and per-user systems directories with asdf;
;; pushnew (rather than a plain push) avoids adding duplicate entries
;; if this file gets loaded more than once
(pushnew #p"/usr/share/common-lisp/systems/" asdf:*central-registry* :test #'equal)
(pushnew #p"/home/[username]/lisp/systems/" asdf:*central-registry* :test #'equal)

This tells SBCL and asdf where all of the required lisp code is located. Alter paths as needed. We’ve not talked very much about the ~/lisp/ directory yet. Basically it’s a directory that serves as a playground for all things lisp-related. Each “project” or package should have its own directory, which will contain lisp code and an .asd file. To make a package accessible to asdf on your system, create a symbolic link to its .asd file in your ~/lisp/systems folder. Done.

So let’s set up a basic “hello world” package that we’ll call “reject,” just for grins. File one, ~/lisp/reject/reject.asd:

(defsystem "reject"
  :description "a reject program
  :version "0.1"
  :author "tycho garen"
  :licence "ISC License"
  :depends-on ("cl-ppcre")
  :components ((:file "reject")
               (:file "package")))

The dependency on cl-ppcre isn’t required, but that’s how it would work if you needed a regex engine for a reject hello world application. Note that the “reject” component depends on “package,” so the package definition is loaded first. File two, ~/lisp/reject/package.lisp, which also exports our one function so that it’s callable from outside the package:

(defpackage :reject
  (:use :common-lisp)
  (:export :hello-world))

File three, ~/lisp/reject/reject.lisp:

(in-package :reject)

(defun hello-world ()
  (print "Hello World, tycho"))

(hello-world)

Once those files are saved, issue the following commands to create the needed symbolic link:

cd ~/lisp/systems/
ln -s ~/lisp/reject/reject.asd

Now, from the REPL issue the following expression to load the package:

(asdf:operate 'asdf:load-op 'reject)

And then the following expression, using the exported symbol, to test that it works:

(reject:hello-world)

And you’re set to go. As for how you’d write or package up something that might actually have value? That’s a problem I’m still wrapping my head around. But that can all happen later.

If I’ve overlooked something, or you think my understanding of something here isn’t clear, do be in touch. I hope this helps!