Jekyll and Automation

As this blog ambles forward, albeit haltingly, I find that the process of generating the site has become a much more complicated proposition. I suppose that's the price of success, or at least the price of verbosity.

Here's the problem: I really cannot abide dynamically generated publication systems: there are more things that can go wrong, they can be somewhat inflexible, they don't always scale very well, and it seems like horrible overkill for what I do. At the same time, I have a huge quantity of static content in this site, and it needs to be generated and managed in some way. It's an evolving problem, and perhaps one that isn't of great specific interest to the blog, but I've learned some things in the process, and I think it's worthwhile to do a little bit of rehashing and extrapolating.

The fundamental problem is that rebuilding tychoish.com takes a long time, mostly because of the time it takes to convert the Markdown text to HTML: a couple of minutes for the full build. There are a couple of solutions. The first would be to pass the build script some information about when files were modified and then have it rebuild only those files. This is effective but ends up being complicated: version control systems don't tend to version mtime, and importantly there are pages in the site--like archives--which can become unstuck without some sort of metadata cache between builds. The second solution is to provide very limited automatically generated archives, regenerate only the last 100 or so posts, and supplement the limited archive with more manual archives. That's what I've chosen to do.
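
For the curious, the first approach can be sketched in a few lines of shell. Everything here is hypothetical--the posts/ directory, the marker file, and the build_page stand-in (which would really be a Markdown-to-HTML converter) are placeholders for whatever a real build uses:

```shell
#!/bin/sh
# Sketch of mtime-based incremental rebuilding against a marker file.
set -e
mkdir -p posts out
marker=.last-build

build_page() {
    # placeholder converter: a real build would run markdown/pandoc here
    cp "$1" "out/$(basename "$1" .md).html"
}

if [ -f "$marker" ]; then
    # incremental run: only files newer than the marker get rebuilt
    find posts -name '*.md' -newer "$marker" | while read -r f; do
        build_page "$f"
    done
else
    # first run: build everything
    find posts -name '*.md' | while read -r f; do
        build_page "$f"
    done
fi
touch "$marker"
```

Note that this is exactly where the archive problem bites: an archive page depends on every post, so a per-file mtime check like this will happily leave it stale.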

The problem is that even the last 100 or so entries take a dozen seconds or more to regenerate. This might not seem like a lot to you, but the truth is that at an interactive terminal, 10-20 seconds feels interminable. I'd spent a lot of time recently trying to fix the underlying problem--the time it took to regenerate the HTML--when I realized that the problem wasn't really that the rebuilds took forever, it was that I had to wait for them to finish. The solution: background the task and send a message to my IM client when the rebuild completes.
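
Mechanically, this is just a shell job pushed into the background with a notification hook at the end. Here's a minimal sketch; notify() and slow_build() are hypothetical stand-ins for the real delivery command and the real build (I deliver the message over XMPP with a tool like sendxmpp, but anything that can send a one-line note would work):

```shell
#!/bin/sh
# Sketch of "background the build, get pinged when it's done."
# notify() and slow_build() are hypothetical stand-ins.

rm -f build-notices    # scratch file standing in for the IM channel

notify() {
    echo "$1" >> build-notices
    # a real version might pipe the message to sendxmpp instead
}

slow_build() {
    sleep 1    # pretend this is the multi-minute site rebuild
}

# Kick the build off in the background: the prompt comes back
# immediately, and the message arrives when the build finishes.
( slow_build && notify "rebuild done" || notify "rebuild FAILED" ) &
```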

The lesson: don't optimize anything that you don't have to optimize, and if it annoys you, find a better way to ignore it.

At the same time I've purchased a new domain, and I would kind of like to be able to publish something more or less instantly, without hacking on it like crazy. But I'm an edge case. I wish there were a static site generator, like my beloved Jekyll, that provided great flexibility and generated static content in a smart and efficient manner. Most of these site compilers, however, are crude tools with very little logic for smart rebuilding--and really, given the profiles of most sites they're used to build, this makes total sense.


I realize that this post comes off as pretty whiny, but even so, I'm firmly of the opinion that this way of producing content for the web is the sanest method that exists. I've been talking with a friend for a little while about developing a way to build websites, and we've more or less arrived at a similar model. Even my day-job project uses a system that runs on the same premise.

Since I started writing this post, I've even taken this one step further. In the beginning I had to watch the process build. Then I basically kicked off the build process and sent it to the background and had it send me a message when it was done. Now, I have rebuilds scheduled in cron, so that the site does an automatic rebuild (the long process) a few times a day, and quick rebuilds a few times an hour.
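
A cron arrangement along these lines does the trick. This is an illustrative fragment, not my actual crontab, and rebuild.sh with its --full/--quick flags is a hypothetical wrapper around the two build modes:

```
# full (slow) rebuild a few times a day
15 */8 * * *    $HOME/scripts/rebuild.sh --full
# quick rebuild of the last hundred posts a few times an hour
*/20 * * * *    $HOME/scripts/rebuild.sh --quick
```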

Is this less efficient in the long run? Without a doubt. But processor cycles are cheap, and the builds are only long in the subjective sense. In the end I'd rather not even think that builds are going on, and let the software do all of the thinking and worrying.

Installing mcabber 0.10-rc3 on Debian Lenny

mcabber is a console-based XMPP (Jabber) client. It runs happily within a screen session, it's lightweight, and it does all of the basic things that you want from an IM client without being annoying and distracting. For the first time since I started using this software a year or two ago, there's a major release with some pretty exciting features. So I wanted to install it. Except there aren't packages for it for Debian Lenny, and I have a standing policy that everything needs to be installed using package management tools so that things don't break down the line.

These instructions are written for Debian 5.0 (Lenny) systems. Your mileage may vary on other systems, including Ubuntu. Begin by installing some dependencies:

apt-get install libncurses5-dev libncursesw5 libncursesw5-dev pkg-config libglib2.0-dev libloudmouth1-dev

The following optional dependencies provide additional features, and may already be installed on your system:

apt-get install libenchant-dev libaspell-dev libgpgme-dev libotr-dev

When the dependencies are installed, issue the following commands to download the latest release into the /opt/ directory, unarchive the tarball, and run the configure script. Installing mcabber into the /opt/mcabber/ folder makes it easy to remove later if something stops working:

cd /opt/
wget http://mcabber.com/files/mcabber-0.10.0-rc3.tar.gz
tar -zxvf mcabber-0.10.0-rc3.tar.gz
cd mcabber-0.10.0-rc3
./configure --prefix=/opt/mcabber

When that process finishes, run the following:

make
make install

Now copy the /opt/mcabber-0.10.0-rc3/mcabberrc.example file into your home directory. If you don't already have mcabber configured, you can use the following command:

cp /opt/mcabber-0.10.0-rc3/mcabberrc.example ~/.mcabberrc

If you do have an existing mcabber setup, use the following command to copy the example configuration file to a non-conflicting location in your home directory:

cp /opt/mcabber-0.10.0-rc3/mcabberrc.example ~/mcabber-config

Edit ~/.mcabberrc or ~/mcabber-config as described in the comments in the config file. Then, if your config file is located at ~/.mcabberrc, start mcabber with the following command:

/opt/mcabber/bin/mcabber

If you have your mcabber config located at ~/mcabber-config, start mcabber with the following command:

/opt/mcabber/bin/mcabber -f ~/mcabber-config

And you're ready to go. Important things to note:

  1. If something gets, as we say in the biz, "fuxed," simply "rm -rf /opt/mcabber/" and reinstall.
  2. Check mcabber for new releases and release candidates. These instructions should work well once there's a final release, at least for Debian Lenny. The release files are located at http://mcabber.com/files/.
  3. Make sure to stay up to date with new releases to avoid bugs and potential security issues. If you come across bugs, report them to the developers; there is also a MUC for the mcabber community at xmpp:mcabber@conf.lilotux.net.
  4. If you have an additional dependency that I missed in this installation, do be in touch and I'll get it added here.
  5. Debian Lenny ships with version 0.9.7 of mcabber. If you don't want to play with the new features and magic in 0.10 and just want a regular client, install the stable mcabber with the "apt-get install mcabber" command and ignore the rest of this post.

Software as app Store

This post brings together two major ideas--first, "app stores," and second, "SaaS," or "software as a service," which seems to be the prevailing business model for contemporary technology companies that aren't stuck in the 80s--with reflection on free software, open source, and the technology industry as a whole. Because that's sort of my thing.

On the one hand, the emergence of these tightly controlled software distribution methods represents a fairly serious threat to free software, as does SaaS, particularly insofar as SaaS exploits a GPL loophole. On the other hand, these models potentially represent something fundamentally awesome for the technology and software world, because they embody a commonly accepted paradigm where users of software recognize the value of software, and the creators of software can get compensated for their work. It's not without its flaws, but I think it opens interesting possibilities.

Free and Freedom

Obviously app stores present a quandary for those of us involved in the free software world. On the one hand, app stores are not free, which is a trivial complaint: cost is not what "free software" is truly centered around. The true failing is that creators of software cannot choose to participate in an app store system and distribute source code: the interaction and relationship between developers and users is very scripted and detached. These issues all grow out of the reality that app stores--by design--are controlled by a single institution or organization.

Which isn't itself a bad thing--there are contexts where centralized organization means things get done more effectively, but centralized authority is not without risk. So while this question isn't resolved, it's also the kind of question that requires ongoing attention and reflection.

Paying for Software

At the same time, I think it's very true that the "app store model" and indeed the more successful "Web 2.0" business models (e.g. new businesses on the web, post-2003/2004) have posited that:

Software is a thing of value that users should expect to pay for.

And that's not, at least to my mind, a bad thing for the software world. Free or otherwise. Or not always a bad thing, particularly for end-user software. For larger pieces of software (in the "Enterprise") money is largely exchanged for support contracts and for services related to the software: custom features, IT infrastructure, etc. For end user software, support contracts and custom features don't tend to make a lot of sense in context: so perhaps moving back to the exchange of money for software isn't a bad thing.

The connection between "value" (which software almost certainly creates) and currency in the context of software is fraught. Software isn't scarce, and by nature never will be. At the same time it does have value, and I think it's worth considering how to arrange economies that involve exchanging money for software. There are a lot of factors that can affect the way that app stores might work, and given the possibility for causing interesting things to happen, I think we shouldn't dismiss them out of hand.

venture capital and software

I read this article by Joel Spolsky about the first dot-com bust, and it helped crystallize a series of thoughts about the role of venture capital in the development of technology and software, particularly Internet technologies. Give it a shot. Also, I think Cory Doctorow's "Other People's Money" is a helpful contributor to this train of thought.

The question I find myself asking is: to what extent is the current development of technology--particularly networked technology--shaped by the demands of the venture capital market? And of course, what kind of alternative business models exist for new technologies?

I guess I should back up and list the problems I have with the VC model. And by VC model I mean private investment firms that invest large sums of money in "start up" companies. Those issues are:

  • Breaking even, even in--say--five years, is exceptionally difficult from a numbers perspective, let alone turning a profit of any note. This is largely because VC funding provides huge sums of money (it is, after all, really hard to give away 20 billion a year in chunks of 60-120k a year, tops), so seed sums are larger than they need to be, and this has a cascade effect on the way the business and technology develop, particularly in unsustainable ways.
  • VC-funded start-ups favor proprietary software/technologies, because the payoff is bigger up front. It's hard to make the argument that you need seed money for a larger, more slowly moving product... Small and quick seem to work better.
  • The VC cycle of boom and bust (which is sort of part and parcel of plain-old capitalism) means that technology development booms and busts: a lot of projects tank when the market crashes, and the projects that get funded during the booms are (probably mostly) not selected for their technological merit.
  • VC firms tend to be very responsive to fads and similar trends in the market (e.g. the dot-com bubble, Web 2.0, Linux in the mid-nineties, biotech stuff, etc.), which means that VC firms generate a great deal of artificial competition in these markets. This disperses efforts needlessly, without (as near as I can tell) improving the quality of the software developed (e.g. in the microblogging space, the "first one out of the gate," Twitter, "won" without apparent regard for quality or feature set).

Venture capital funding provides outfits and enterprising individuals with the resources for "capital outlay" and initial research-and-development costs, and in doing so fills an economic niche that is otherwise non-existent, and this is a good thing indeed. At the same time I can't help but wonder if the goals and interests of venture capitalists aren't--in some ways--directly at odds with the technology that they aim to develop.

I also continue to question the ongoing role of this kind of "funding structure" (for lack of a better term). I think it's pretty clear that one effect of continuing technological development is that the required "capital outlay" of any given start-up is falling like a rock, as advanced technology becomes available at commodity prices (e.g. VPSs, Lulu.com) and as open source software tightens development cycles (e.g. Ruby on Rails, jQuery). Both of these trends, in combination with the long-standing problems with VC funding, mean that it's high time we ask some fairly serious questions about the development of this technology. I'll end with the question at the forefront of my thinking on the subject:

Where does (and can) innovation and development happen outside of the context of venture-capital funded start ups in the technology world?

Pragmatic Library Science

Before I got started down my current career path--that would be the information management/work flow/web strategy/technology and cultural analyst path--I worked in a library.

I suppose I should clarify somewhat as the image you have in your mind is almost certainly not accurate, both of what my library was like and of the kind of work I did.

I worked in a research library at the big local (private) university, and not in the part of the library where students went to get their books, but in the "overflow area" where the special collections, the book preservation unit, and the catalogers all worked. What's more, the unit I worked with had an archival collection of film/media resources from a few documentary filmmakers/companies, so we didn't really have books either.

Nevertheless it was probably one of the most instructive experiences I've had. There are things about the way archives work, particularly archives with difficult collections, that no one teaches you in those "how to use the library" and "welcome to the Library of Congress/Dewey Decimal classification systems" lessons you get in grade school and college. The highlights?

  • Physical and intellectual organization - While archives keep track of and organize all sorts of information about their collections, the organization of this material "on the shelf" doesn't always reflect this.

    Space is a huge issue in archives, and as long as you have a record of "where" things are, there's a lot of incentive to store things in the way that takes up the least physical space: photographs separately from oversized maps, separately from file boxes, separately from video cassettes, separately from CDs (and so forth).

  • "Series" and intellectual cataloging - This took me a long time to get my head around, but Archivists have a really great way of taking a step back and looking at the largest possible whole, and then creating an ad-hoc organization and categorization of this whole, so as to describe in maximum detail, and make finding particular things easier. Letters from a specific time period. Pictures from another era.

  • An acceptance that perfection can't be had - Perhaps this is a symptom of working with a collection that had only been an archive for a few years, or one established with a single large gift rather than as a depository for a working collection. In any case, our goal--it seemed--was to take what we had and make it better: more accessible, more clearly described, easier to process later, rather than to make the whole thing absolutely perfect. It's a good way to think about organizational projects.

In fact, a lot of what I did was take files that the film producers had on their computers and make them useful. I copied disks off of old media, I took copies of files and (in many cases manually) converted them to usable file formats, and I created indexes of digital holdings. Stuff like that. No books were harmed or affected in these projects, and yet I think I was able to make a productive contribution to the project as a whole.

The interesting thing, I think, is that when I'm looking through my own files, and helping other people figure out how to manage all the information--data, really--they have, I find that it all boils down to the same sorts of problems that I worked with in the library: how to balance "work spaces" with storage spaces; how to separate intellectual and physical organizations; how to create usable catalogs and indices of a collection; how to lay everything out so that you can put your hands on anything in your collection in a few moments without "hunting around"; and ultimately how to do all this without spending very much energy on upkeep.

Does it make me a dork that I find this all incredibly interesting and exciting?

new awesome

I've been (slowly) upgrading to the latest version of the Awesome Window Manager. Since awesome is a pretty new program, and there was a Debian code freeze during development of a huge chunk of the awesome3-series code, it's been hard to install on Ubuntu: lots of dithering about, and then compiling by hand. For the uninitiated, installing new software on a Debian-based system (like Ubuntu; many GNU/Linux systems work this way) is usually as simple as typing a single command. This hasn't really been the case for awesome.

In any case, with the latest release candidates for awesome 3.3 in sid (Debian unstable), I added a sid repository to my Ubuntu system, updated, installed awesome, and removed the sid repository. I breathed a huge sigh of relief, and then got to setting things up again. I have the following responses to the new awesome:

  • I really like the fact that if you change something in your config file and it doesn't parse, awesome loads the default config (at /etc/xdg/awesome/rc.lua) so that you don't have to kill X11 manually and fix your config file from a virtual terminal.
  • If you're considering awesome, and all this talk of unstable repositories scares you, the truth is that awesome is--at this point--not exactly adding new features to the core code base. There are some new features and reorganizations of the code, but the software is generally getting more and more stable. Also, the config file has been a moving target, though it's becoming less of one; given that it's now pretty stable and usable, it makes sense to "buy in" with the most current version of the configuration so you'll have less tweaking to do in general.
  • The new (default) config file is so much better than the old ones. I basically reimplemented my old config into the new default config and have been really happy with that. It's short(er) and just yummy.
  • I did have some sort of perverse problems with xmodmap, which I can't really explain, but they're solved.
  • If you use a display manager (like gdm) to manage your X sessions, I know you can just choose awesome from the default sessions list, but I'd still recommend launching awesome from an ~/.xinitrc or ~/.Xsession file so that you can load network managers and xmodmap before awesome starts. That seems to work best for me.
  • I'd never used naughty, a growl-like notification system, before; now that it's included by default I am using it, and I quite adore it.
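
The .xinitrc approach from the list above amounts to a three-line session file. This is a sketch; the xmodmap file and nm-applet are examples of the sort of thing you might load before the window manager, not requirements:

```
# ~/.xinitrc (or ~/.Xsession, if your display manager honors it)
xmodmap ~/.Xmodmap    # apply keyboard remappings before the WM starts
nm-applet &           # network manager applet, or whatever else you need
exec awesome          # exec so awesome takes over as the session process
```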

More later.

Browser Survey

Long story, short punch line: I was developing a website the other day, and I realized that I had to do some compatibility testing with other browsers. While I have a WebKit-based browser lying around for these purposes, I had to turn to BrowserShots to see what the site looked like in certain Windows-only browsers. This led me on something of a little mystery hunt.

I did some checking on my stats and I found that a majority of the visitors to this site use Firefox/Mozilla (Gecko) browsers, and there's a sizable minority that uses WebKit browsers (Safari/Chrome/etc.). That takes care of about 75% of you. The remaining portion uses Internet Explorer (IE).

So be it, really. I mean, I'd try Chrome or Firefox if you can, but the truth is that by now IE 8 (and even 7) render pages more or less the way they should, and I don't have a big gripe about that (these account for 3/4s of all IE usage). There is, however, the remaining quarter of IE users (so 6% of you) on IE 6, which can't seem to render any pages correctly, from what I can tell.

Since I already know what browsers you use, the survey questions are:

  • Why do you use the browser you use, particularly if it's IE or IE 6?
  • Do you prefer a browser that's fast but light on features (all WebKit browsers henceforth deployed), or a slower but feature-filled browser (Firefox?)
  • Are you trying to use your browser less than you currently do (taking work offline), or more (putting more things into the cloud)?
  • What do you think the "next big thing in browsers" is?

Links and Old Habits

So I've noticed that in these shorter blog posts (codas), my impulse is to just do my normal essay thing, only shorter--which is more of an old habit than something productive or intentional. Weird. To help break out of this bad habit, I'm going to post some links that I've collected recently.

I saw a couple of articles on user experience issues that piqued my interest; perhaps they'll pique yours as well: Agile Product Design on Agile Development and UX Practice, and On Technology, User Experience and the need for Creative Technologists.

The Cheetah template engine for Python. This isn't an announcement, but I've been toying around with the idea of reimplementing Jekyll in Python (to learn, and because I like Python more than Ruby). Cheetah seems to be the coolest/best-looking of the Python template engines. I need to read more about it, of course.

I didn't get to go to DrupalCon (alas), but there were a few sessions that piqued my interest, or at least that I'd like to look into, mostly because the presenters are people I know/watch/respect: Sacha Chua's Rocking Your Development Environment, Liza Kindred's Business and Open Source, and James Walker's Why I Hate Drupal.

Sacha's because I'm always interested in how developers work, and we have emacs in common. Liza's because Open Source business models are really (perversely) fascinating, even if I think the Drupal world is much less innovative (commercially) than you'd initially think. Finally, given how grumpy I'm prone to being, how could walkah's talk not be on my list?

Anyone have something good for me?