Link Storm

I have a bunch of links that have been hanging around too long and that I’d like to share with you, so here they are. Enjoy!

  • Distrust Simplicity is a great short-form blog that I’m really enjoying.
  • The Singularity Summit is a cyborg/futurist event that a reader wrote me a note about a rather long time ago. Worth a look if you’re interested in the singularity and the issues and ideas surrounding it.
  • Hypo, a literate programming (with org-babel) based asset management system for game development (see the hypo example).
  • Common Lisp Support in Org-Babel. I feel like there’s probably a joke here.
  • Mango.io, a markdown-based CMS using Python tools. I’m not sure if it’s the kind of thing that I’m likely to ever want to use myself, but it’s an idea.
  • A New York Times Article about Podcasting and Leo Laporte’s TWiT Network. While the content of this article is interesting in its own right, there are a couple of “bigger picture” things happening. First, the old media (i.e. “the Times”) covering the new media (i.e. “TWiT”) is always interesting. Second, and less obviously, it’s interesting how the biggest successes in the “New Media” are by veterans of the “Old Media” (like Laporte, who had a career in radio and television before doing TWiT).
  • Social Text Journal: The Dramatic Face of Wikileaks is a meta-meta-meta look at WikiLeaks and the “new media” moment that it represents.

Make All Novella

(Note: I was going through some old files earlier this week and found a couple of old posts that never made it into the live site. This is one of them. I’ve done a little bit of polishing around the edges, but this is as much a post for historical interest as it is a reflection of the contemporary state of my thought.)

When I decided to publish my novella Knowing Mars, I decided that I wanted to use my existing publication system and to automate the generation of all the necessary versions, so that I could keep my original files in sync without duplicating effort. I figured that the few hours it would take to write a script would both save a lot of time later and make it more likely that I would maintain the text.

So I have this script that:

  • Copies the source files into the publication directory.
  • Generates full HTML files for every chapter, plain text files for every chapter, a full HTML edition of the complete text, a plain text edition of the complete text, and a minimally styled HTML edition of the complete text.
  • Keeps these editions synchronized.
  • Keeps the original source files synchronized.
  • Ideally, provides a tool that will prove useful in the future.
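
As a rough illustration of the shape of such a script, here is a minimal Python sketch. The directory layout, the .rst chapter files, and the use of pandoc for format conversion are assumptions made for the example; my actual code (linked below) differs in the details:

    #!/usr/bin/env python
    """A minimal sketch of a "build the novella" script (illustration only).

    Assumes the chapters live as reStructuredText files in src/, that pandoc
    is installed for format conversion, and that output lands in publish/.
    """

    import shutil
    import subprocess
    from pathlib import Path

    SOURCE = Path("src")        # original chapter files (assumed layout)
    PUBLISH = Path("publish")   # the publication directory

    def convert(source: Path, target: Path, fmt: str) -> None:
        """Use pandoc to convert one file to the requested output format."""
        subprocess.run(["pandoc", str(source), "-t", fmt, "-o", str(target)], check=True)

    def build() -> None:
        PUBLISH.mkdir(exist_ok=True)
        chapters = sorted(SOURCE.glob("*.rst"))

        # Copy the source files into the publication directory.
        for chapter in chapters:
            shutil.copy(chapter, PUBLISH / chapter.name)

        # Full HTML and plain text files for every chapter.
        for chapter in chapters:
            convert(chapter, PUBLISH / (chapter.stem + ".html"), "html")
            convert(chapter, PUBLISH / (chapter.stem + ".txt"), "plain")

        # Editions of the complete text: stitch the chapters together, then convert.
        complete = PUBLISH / "novella.rst"
        complete.write_text("\n\n".join(c.read_text() for c in chapters))
        convert(complete, PUBLISH / "novella.html", "html")
        convert(complete, PUBLISH / "novella.txt", "plain")

    if __name__ == "__main__":
        build()

Re-running something like this from a Makefile target regenerates every edition from the same source files, which is what keeps them all synchronized.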

I’ve included the code of what I came up with in this wiki, at `</code/build-novel>`_, and you can find the full source of tychogaren.com and Critical Futures here. For the full source of the Knowing Mars text, consider the gitweb. I would be very grateful for any feedback or input.

Why The World is Ready for Dexy

At one time or another, I suspect that most programmers and technical writers have attempted to “fix” technical writing in one way or another. It’s a big problem space:

  • Everything, or at least many things, needs to be documented, because undocumented features and behaviors cause problems: one really ought not need to review the source code and understand the engineering in order to fix (potentially) trivial problems every time they occur.
  • The people who write code are often not well suited to the task of writing documentation, because writing code and writing documentation are in fact different skills. Also, I think the division of labor makes some sense here.
  • Documentation, like code, requires maintenance, review, and ongoing quality control as the technology and practice change. That’s a lot of work, and particularly for large projects it can be a rather intensive task.
  • Lots of different kinds of documentation are needed, and depending on the specific needs of the user, a basic “unit of documentation” may need to be presented in a number of different ways. There are a number of ways to implement these various versions and iterations, but they all come with various levels of complexity and maintenance requirements.

The obvious thing to do, if you’re a programmer, is to write some system that “solves technical writing.” This can take the form of a tool for programmers that encourages them to write the documentation as they write the code, or it can take the form of a tool that enforces a great deal of structure for technical writing, to make it “easier” for writers and programmers to produce good documentation. Basically, “code your way out” of needing technical writers.

You can probably guess how I feel about this kind of approach.

There is definitely a space for tooling that can make the work of technical writing easier, as well as space for tools that make the presentation of documentation clearer and more valuable for users. But tools won’t be able to make developers write, at least not without a serious productivity hit, nor will tools decrease the need for useful documentation.

It’s a difficult problem domain. While there is a lot of room for building programs that make it easier to write better documentation, the temptation to write too much software is great. Often the problems in the technical writing process (high barriers to entry, complicated build/publication systems, and difficult-to-master organizational methods) look easy to address in programs. But most of these issues can be traced back to overly complex build tools and to human-centered problems, which are harder to address in code.

And since documentation takes the form of simple text, which seems easy to deal with, developers frustrated by documentation requirements, or technical writing teams, are prone to trying to write something to fix the apparent problem.

Which brings us to the present, where, if you want to write and publish documentation, your choices are:

  • Use a wiki. A wiki isn’t a documentation tool, but the software generally does a good job of publishing content, and wiki engines mostly don’t have arcane structures of their own that might get in the technical writer’s way. Downside: it’s the wrong tool for the job, and it forces writers and editors to maintain style themselves across an entire corpus, which is difficult and eventually counterproductive.
  • Use some other existing content management system. Typically these aren’t meant for documentation: they have difficult-to-use interfaces because they’re meant to power websites and blogs, and they almost always impose some sort of structure (like a blog) which isn’t ideal for conveying documentation.
  • Use an XML-based documentation tool-set. This is probably the best option around at the moment, as these tools were built for the purpose of creating documentation. The main problems are that they’re not particularly well suited for generating content for the web (which I think is essential these days) and that, as near as I can tell, they make humans edit XML by hand, which I think is always a bad idea.
  • Build your own system from the ground up. Remember, text is easy to munge and most of the other options are undesirable. Downside: homegrown projects take a lot of time, they’re always a bit more complex than anyone (except the technical writers?) expects, and it’s easy to almost finish, which is bad because half-baked documentation systems are most of what got us into this problem in the first place.

So it’s a thorny problem, and one that lots of people have tried (and are trying!) to solve. I’ve been watching a tool called dexy for the last few weeks (months?) and I’ve been very interested in its development and the impact that it, and similar tools, might have on my day-to-day work. This post seems to be the first in a series of thoughts about the tools that support technical writing and documentation.

Wiki Blogging

I’ll probably do a fair piece of this “metablogging” thing here, and I’m sorry.

Also, I totally intended for rhizome to be a much shorter form blogging project, and while the posts are shorter here than at critical futures, they’re not exactly short, and I’m not exactly as prolific as I might like to be.

I think I might just be somewhat long-winded.

I’ve also found that I’ve mostly not succeeded at using the wiki functions, which is probably as much a result of my minimal posting as it is a function of my inclination.

Though I do realize that I’m pretty set in my existing blogging habit, and it takes a lot for me to break out of this form. Still, I think it’ll be good to try. You can help by editing pages and continuing conversations that I start. I think I need to tweak some of the templates to include “discuss this further” links (both here and on critical futures) to remind people that anyone can edit and contribute.

Also, you should all check out On Wireless Data, if you haven’t already. This is a post that I wrote a while back (and posted last week) about the way that the technological constraints of wireless data networks constrain and affect the kinds of applications that are developed for these environments. This in turn affects the ways that people use technology, which is interesting (and important) to think about.

And that’s all for now. I’ll see you around later!

Upgrade SBCL and SLIME

This is a little bit of documentation/technical writing around an issue that I had for a while. SBCL is a Common Lisp implementation that I use, and would recommend as a good starting point for people interested in tinkering with Common Lisp. SLIME is an emacs-based development toolkit that lets you interact with a Lisp session in real time.

SLIME works as you’re writing code and makes it possible to connect to (potentially any) running lisp process and execute code and access documentation, among other functions. The connection between Emacs/Slime and the running application is provided by a connector called “Swank.” Lisp is pretty cool, Common Lisp is really nifty, but SLIME is what makes working with Lisp fun/easy and really powerful.

Ok. Here’s what happens. You upgrade SBCL (which happens every now and then for me with Arch Linux,) and you probably have to recompile a number of things to work with the new package. That’s a bit annoying, but it’s not a huge burden. Then you try and load Swank, and it bombs. You reinstall Slime but no dice, and you still can’t connect to your application in slime.

This is where I lingered for about 3 months. No working Slime meant going back to interacting with my lisp applications in the conventional manner, which kind of sucked.

Here’s the fix, and it’s crazy simple. In emacs, run “M-x slime”. Restart the application or reload the swank loader, and then try to connect to the application with Slime. Bingo.

Turns out that Slime (and therefore swank) builds a few .fasl files that are specific to the SBCL version and that hang around after the upgrade. The only way these files get rebuilt is by loading slime, which you can only do by way of swank, which won’t work until you reload slime. It’s a chicken-and-egg issue; running “M-x slime” breaks the cycle by rebuilding them.

Problem solved. Sorry it wasn’t more interesting: most problems aren’t, terribly.

On Wireless Data

It’s easy to look around at all of the “smart phones,” iPads, and wireless modems, and think that the future is here, or even that we’re living on the cusp of a new technological moment. While wireless data is amazing, particularly with respect to where it was a few years ago--enhanced by a better understanding of how to make use of wireless data--it is also true that we’re not there yet.

And maybe, given a few years, we’ll get there. But it’ll be a while. The problem is that too much of the way we use the Internet these days assumes high quality connections to the network. Wireless connections are low quality regardless of speed, in that latency is high and dropped packets are common. While some measures can be taken to speed up the transmission of data once connections are established, which can give the impression of better quality, the effect is mostly illusory.

Indeed in a lot of ways the largest recent advancements in wireless technology have been with how applications and platforms are designed in the wireless context rather than anything to do with the wireless transmission technology. Much of the development in the wireless space in the last two or three years has revolved around making a little bit of data go a long way, in using the (remarkably powerful) devices for more of the application’s work, and in figuring out how to cache some data for “offline use,” when it’s difficult to use the radio. These are problems that can be addressed and largely solved in software, although there are limitations and inconsistencies in approach that continue to affect user experience.

We, as a result, have a couple of conditions. First, that we can transmit a lot of data over the air without much trouble, but data integrity and latency (speed) are things we may have to give up on. Second, that application development paradigms that can take advantage of this will succeed. Furthermore, I think it’s fairly safe to say that in the future, successful mobile technology will develop in this direction rather than against these trends. Actual real-time mobile technology is dead in the water, although I think some simulated real-time communication works quite well in these contexts.

Practically, this means applications that tap an API for data that is mostly processed locally. Queue-compatible message passing systems that don’t require persistent connections. Software and protocols that don’t assume you’re always “on-line,” and that are able to store transmissions gracefully until you come out of the subway or get off of a train. Of course, this also means that applications and systems designed to be efficient in their use of data will be more successful.

The notion that fewer transmissions consisting of bigger “globs” of data will yield better performance than a large number of very small intermediate transmissions is terribly foreign. It shouldn’t be; this stuff has been around for a while, but nevertheless here we are.
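
To make that concrete, here is a hypothetical Python sketch of an “outbox” that accepts messages immediately, holds them locally without assuming a persistent connection, and only talks to the network when told a connection is available, sending everything it has queued as one batch. The class and the send_batch callback are made up for illustration:

    import json
    import queue
    from typing import Callable

    class Outbox:
        """Hypothetical offline-tolerant message queue (illustration only)."""

        def __init__(self, send_batch: Callable[[bytes], None]) -> None:
            self._pending: "queue.Queue[dict]" = queue.Queue()
            self._send_batch = send_batch  # e.g. a single HTTP POST of the whole batch

        def enqueue(self, message: dict) -> None:
            """Accept a message without touching the network at all."""
            self._pending.put(message)

        def flush(self) -> None:
            """Drain the queue into one large payload and send it once."""
            batch = []
            while not self._pending.empty():
                batch.append(self._pending.get())
            if batch:
                self._send_batch(json.dumps(batch).encode("utf-8"))

    # Usage: enqueue whenever the application produces data; call flush()
    # only when a connectivity check says the radio is actually usable.
    outbox = Outbox(send_batch=lambda payload: print(len(payload), "bytes sent"))
    outbox.enqueue({"type": "note", "body": "written on the subway"})
    outbox.enqueue({"type": "note", "body": "sent later, in one request"})
    outbox.flush()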

Isn’t the future grand?

Some New Years

Typically, I’m writing my new year’s post on January 3rd, knowing full well that I won’t get a chance to publish it until the 4th. This probably explains how my 2010 closed and how my 2011 is shaping up.

I’ve been writing emails today and over the weekend to family and friends, making plans for various events and weekends over the next six months. There are only so many weekends, and there is so much to do. My preliminary outline for May/June is--I think this is par for the course when one is a Morris dancer--exhausting, and I have five or six months to prepare. The rest of spring is similarly exciting.

The last year was, on the whole, a good year: I have started new relationships and enriched existing ones in ways that I am quite pleased with, I had a move that’s been good for me in a number of ways, I’ve been able to travel regionally a great deal, and I’ve learned a lot in my travels. It wasn’t all positive: I definitely didn’t finish the writing projects I wanted to, I didn’t knit or read as much as I would have liked to, and I have a number of personal projects that I’m throwing a lot of energy into that I hope to resolve in the next few months. But all of the low points can be directly correlated to the high points: success requires sacrifice, and on balance it’s been a successful year.

I hope that in the new year, we all are able to have great successes, without needing to make untenable sacrifices. I think the core of all new year’s resolutions is a wish to make our lives a little bit better than they were previously, hopefully in small manageable ways. So I’m going to keep working on making my world (and self) a better place to be me.

I hope, if you too are embarking on any kind of project like this, that you will succeed, and I look forward to sharing parts of my journey with you in this blog.

Cheers!

(real content will resume shortly, I promise!)

Obsessive Knitting

So I think I’m back to being a knitter. I started a sweater last May: something fine gauge, very very plain, using my “default, this sweater is awesome” pattern in my head. It has had its ups and downs, but it’s a good project: and like all good projects, I’ve learned something.

First, the sweater is much larger than I wanted it to be: thankfully, it’s going to fit my roommate perfectly, and I’ve been meaning to knit him something for a while. At the same time, it’s such a fine sweater that the extra size means it took extra long to knit.

The second thing, and perhaps the more important one: when I started the sweater I hadn’t really been knitting very much, and I thought that what I really needed was something plain and simple and meditative. Apparently, except when I need distraction, plain knitting is not what I need, and I ended up being far too bored to actually want to work on it.

Thankfully, this week, I’ve mostly needed distraction, so I’ve been able to make rather impressive progress on the sleeves, and I expect to finish the last third of the second sleeve by this evening. That means I can start on new knitting projects, and I have a new sweater planned out and ready to go.

Very exciting, I know.

I got rid of a lot of yarn stash this fall--stuff that I had collected (on the cheap) without a project in mind--with the hope that a smaller stash would let me focus on knitting the projects that I really wanted to knit: sweaters, primarily sweaters in finer gauges with nice two-color patterns.

And so I will.

For this holiday week, in addition to the aforementioned plain sweater, I have a sweater’s worth of the most amazing fingering-weight yarn in two colors, and a graph for a new sweater.

It will be glorious.