Multi-Audience Documentation

I’ve written before about different types of documentation and the different purposes and goals that each type serves. Rather than rehash what documentation is, I want to use this post to think about ways of managing and organizing the documentation process to produce better documentation more easily, with the end goal of increasing both the maintainability and usability of documentation resources.

Different groups of users--say administrators, end-users, and developers--interact with technology in overlapping but distinct ways. For some technologies, the differences between these classes of users are not significant, and one set of documentation is probably good enough for everyone, plus or minus a very small set. In most other cases, multiple resources are required to address the unique needs of different user groups. Figuring out effective ways to answer the different kinds of questions that various groups of users ask, in a way that makes sense to those users, is often the primary challenge in writing documentation.

Having said that, writing different sets of documentation for different users is a lot of work, but given time it’s not insurmountable. The problem comes six months later (say,) or a couple of releases down the road, when it’s time to update the documentation and there are three manuals to update instead of one. This is pretty much horrible. Not only is it more work, but the chance of errors skyrockets, and it’s just a mess.

The solution, to my mind, is to figure out ways to only ever have to write one set of documentation. While it might make theoretical sense to split the material into multiple groups, do everything you can to avoid splitting the documentation. Typically, a well indexed text can serve multiple audiences if it’s easy enough for users to skip ahead and read only the material they need.

The second class of solutions revolves around taking a more atomic approach to writing documentation. In my own work this manifests in two ways:

  • Setting yourself up for success: understanding how software is going to be updated, or how use is likely to change over time, allows you to construct documents that are organized in a way that makes them easy to update. For example: separate processes from reference material, and split long processes into logical chunks that you can interlink to remove redundancies.

    Unfortunately, in many cases you have to learn a lot about a project and its different use patterns before you have the background needed to predict what the best structure for the documentation ought to be.

  • Separate structure from content: This is a publishing system requirement at its core, but using this kind of functionality must be part of the writer’s approach. Writers need to build documentation so that the organization (order, hierarchy, etc.) is not implicit in the text, but can be rearranged and reformed as needed. This means writing documentation “atoms” in a structurally generic way. Typically this also leads to better content. As a matter of implementation, a documentation resource would need a layer of “meta files” that provide the organization, added at build time.

In effect this approach follows what most systems are doing anyway, but in practice we need another processing layer. Sphinx comes pretty close in many ways, but most document formats and build systems don’t really support this kind of project organization (or they require enough XML tinkering to render them infeasible). Once everything’s in place and all of the atoms exist, producing documents for a different audience is just a matter of compiling a new organization layer and defining an additional output target.
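
To make the idea concrete, here’s a minimal sketch of what that organization layer might look like, assuming a plain-text manifest that lists atom files in order. The file names and manifest format are hypothetical, and a real toolchain (Sphinx or otherwise) would also handle cross-references, output formats, and error checking:

```python
#!/usr/bin/env python3
# Hypothetical "organization layer": a manifest lists atom files, one
# per line, in the order they should appear for a given audience.
# Producing an admin guide versus a developer guide is then just a
# matter of pointing the build at a different manifest.
import sys


def build(manifest_path, output_path):
    with open(manifest_path) as manifest, open(output_path, "w") as out:
        for line in manifest:
            atom = line.strip()
            if not atom or atom.startswith("#"):
                continue  # skip blank lines and comments in the manifest
            with open(atom) as src:
                out.write(src.read())
                out.write("\n")


if __name__ == "__main__":
    # e.g.: python build.py admin-guide.manifest admin-guide.txt
    build(sys.argv[1], sys.argv[2])
```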

This approach also creates organizational problems of its own. If content is segregated into hundreds of files for a standard-book-length manual (rather than dozens, say,) keeping the files organized is a challenge. If nothing else, build tools will need to do a lot more error checking, and hopefully documentation writers will develop standard file organizations and practices to keep things under control.

Thoughts? Onward and Upward!

Knitting in Three Dimensions

It’s relatively straightforward to think about knitting in terms of creating two dimensional shapes. Most of us start by knitting something “easy”1 like a scarf. From there it’s easy enough to teach knitters to create a never-ending variety of polygons. This, however, misses what I think of as the really cool part of knitting. I think the way to understand how knitting works, to be able to knit things that more closely resemble what you want, and to have the most fun knitting is to always think about knitting as three dimensional.

This isn’t an elaborate argument in favor of circular knitting: that argument has been fairly well made, and I’ll recount my favorite points on request. Circular knitting is a great technique, but knitting in three dimensions is an entire practice.

Knitting Gestalts / Knitting Shapes

I’ve written about this before, but one of the best parts of sweater knitting is thinking about how the sweater--the whole object--comes together into a garment. Rather than knitting a collection of flat pieces that can be sewn into a garment (tailoring), knitting lets you build and shape garments with various seamless and nearly-seamless methods.

I sometimes describe this kind of knitting as “architectural,” but the key (for me) is thinking about the entire object as a whole. There’s something nearly magical that happens when you can take a few rows curled up on a circular needle and see in your mind how they fit into the object that you’re knitting. The process of using knitting stitches, increases, and decreases to get from the former to the latter is relatively trivial if you can hold the entire object (a “knitting gestalt”) in three dimensions in your mind.

Knitting Mechanics

If “knitting gestalts” provide a top-down perspective on knitting, I think there’s a “bottom up” three dimensional perspective that is important when thinking about how stitches fit together. While a big part of knitting has to do with shapes and forms, the texture, drape, and “hand” of the fabric have a lot to do with the final evaluation of the object. To understand drape and texture, it’s important to consider the properties of individual knitting stitches as well as the effects of yarn weight/texture, needle size, and the personality of the knitter. Attention to the second set of factors (yarn type, needle size, knitting style) is pretty common; attention to the first (the knitting stitch itself) is less so.

I have a favorite example of this kind of thinking. I’m not sure where I learned it, but it’s stuck with me:

Knitted fabric typically curls. This happens because the “purl side” of the knitted stitch has a greater surface area than the “knit side,” which causes unaltered stocking stitch to roll up. At the same time, the “knit” side of the stitch is a little bit wider than the “purl” side, so the edges curl in. The way to counteract this is to mix knit and purl stitches on the same row, balancing the surface areas and thus counteracting the effects. Think about ribbing and seed stitch… Think about knit and purl patterns and how they change the tendency of the fabric to roll. Think about the path of the yarn through a knitting stitch.

See? Isn’t it cool?

In Conclusion

Whatever kind of knitting you want to do is fine with me: I don’t care to tell anyone that the way they knit is wrong. At the same time, I don’t think there’s any sense in being afraid of your knitting: knitting is great fun and I think once you know the basics most knitters can knit just about everything. So my goal in this post, and in all of my knitting posts, is to share my own process and encourage you (all) to branch out in your own work.

Have fun!

Onward and Upward!


  1. Scarf knitting seems easy, and mechanically it is: knit the same number of stitches row after row after row. But there are issues. To the uninitiated, garter stitch doesn’t look like “knitting,” and with a high rows-per-inch ratio these scarves take forever to knit. Such projects are always discouraging. ↩︎

Intellectual Audience

My friend Jo wrote a post a while ago that addressed the subject of building an audience for your scholarly work. You can read the post on her blog, here.

One of the things that I think Jo is really great at is thinking practically about academic careers and trajectories in light of the current academic job market. While people working in traditional academic spaces, on traditional academic tracks, have a different set of challenges than folks like me, her points still resonate.

How do you build networks and audiences? Two things:

  1. You talk to people.

Audiences are built on relationships. While we might like to think that writers and scholars are able to attract audiences purely on the basis of their work, in practice additional work is required.

  2. You make sure you have something to show for yourself.

Everyone’s got ideas and projects that they’d like to work on. People love to talk about their ideas. Success, I think, comes when you have something to show for yourself and your projects, something that gives people some level of confidence that you can make good on your ideas.

In short: write more, publish more. While quality matters some, being more than someone who talks well at parties is really important.

I think this approach is useful for people doing any kind of creative or intellectual work that engages an audience, but I’m interested in your thoughts.

Minimalism Versus Simplicity

A couple of people, cwebber and Rodrigo, have (comparatively recently) switched to using StumpWM as their primary window manager. Perhaps there are more outside the circle of people I watch, but it’s happened enough to get me thinking about what constitutes software minimalism.

StumpWM is a minimal program in terms of design and function; in terms of RAM usage or binary size, however, it’s not particularly lightweight. Because of the way Common Lisp works, the “binary” and RAM footprint is in the range of 30-40 megs. Not big by contemporary standards, but the really lightweight window managers can get by with far less RAM.

In some senses this is entirely theoretical: even a few years ago, it wasn’t uncommon for desktop systems to have only a gig of RAM, so the differences would hardly have been noticeable. Now? Much less so. Until 2006 or so, RAM was the limited resource that most affected performance on desktop systems; since then, even laptops have more than enough for all uses. Although Firefox challenges this daily.

Regardless, while there may be some link between binary size and minimalism, I think it’s probably harmful to reduce minimalism and simplicity to what amounts to an implementation detail. Let’s think about minimalism more complexly. For example:

Write a simple (enough) script in Python/Perl and in C. It should scan a file system and change the permissions of files so that they match the permissions of the enclosing folder, but not change the permissions of a folder if it’s different from its parent. Think of it as “chmod -R” except from the bottom up. This is a conceptually simple task and it wouldn’t be too hard to implement, but I’m not aware of any tool that does this, and it’s not exactly trivial (to implement, or in terms of its resource requirements).
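
To make the comparison concrete, here’s roughly what the Python version might look like; this is a sketch of the idea under my reading of the task, not a polished tool:

```python
#!/usr/bin/env python3
# Sketch of the "bottom-up chmod -R" idea: every file inherits the
# permission bits of its enclosing directory; directory permissions are
# never touched, even when they differ from their parent's.
import os
import stat
import sys


def normalize(root):
    for dirpath, dirnames, filenames in os.walk(root):
        dir_mode = stat.S_IMODE(os.stat(dirpath).st_mode)
        for name in filenames:
            path = os.path.join(dirpath, name)
            if stat.S_IMODE(os.stat(path).st_mode) != dir_mode:
                os.chmod(path, dir_mode)
        # dirnames are deliberately left alone, whatever their modes.


if __name__ == "__main__":
    normalize(sys.argv[1])
```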

While the C program will be much more “lightweight” and use less RAM while running, chances are that the Python/Perl version will be easier to understand and use much more straightforward logic. The Python/Perl version will probably take longer to run, and there will be some overhead for the Python/Perl runtime. Is the C version more minimal because it uses less RAM? Is the Perl/Python program more minimal because its interface and design are more streamlined, simpler, and easier to use?

I’m not sure what the answer is, but let’s add the following factor to our analysis: does the “internal” design and architecture of software affect its minimalism or maximalism?

I think the answer is clearly yes, qualified by “it depends” and “probably not as much as you’d think initially.” As a corollary, as computing power increases, minimalist implementations matter less in general, but more in cases of extremely large scale, which are always already edge cases.

Returning for a moment to the question of the window manager: in this case I think it’s pretty clear that StumpWM is among the most minimal window managers around, even though its RAM footprint is pretty big. But I’d love to hear your thoughts on this specifically, or on technological minimalism generally.

Back to Basics Tasklist and Organization

I’m a huge fan of emacs' org-mode on so many levels: as an IDE for knowledge workers, as a task management system, as a note taking system, and as the ideal basic mode for so many tasks. However, I’ve been bucking against org for a number of tasks recently. The end result is that I’m becoming less org-dependent. This post is a reflection on how I’ve changed the way I work, and how my thinking has changed regarding org-mode.

Fair warning: this is a really geeky post with a somewhat specialized context. If you’re lost or bored, check back later in the week.

The Perils of Org

The problem I keep running into with org is that I really don’t prefer to work in org-mode.1 Org is great and very flexible, but I don’t like that it makes all text-based work dependent on emacs. My brain is already wired for Markdown and reStructuredText from years of blogging and work projects, respectively.

And then there’s the organization problem. There are two ways you can organize content in org-mode. The first is to just dump everything in one org-mode file and use the hierarchical outlining to impose organization. The second is to put every project inside its own file and use outlining incidentally, as the project needs it. Content aggregation happens in the agenda.

The problem with the “large files” approach is that you end up with a small handful of files with thousands of lines, and imposing useful organization is difficult (too many levels and things get buried; not enough and inevitably your headings aren’t descriptive enough and you get confused). Furthermore, I end up living in clone-indirect-buffer-other-window’d and org-narrow-to-subtree’d buffers, which is operationally the same as having multiple files; it just takes longer to set up.

The problem with the other approach, having lots of different files, is that I have a hard time remembering what is in each file, or in logically splitting big projects into multiple files. The agenda does help with this, but the truth is that the kinds of org-headings for organization and tasks are not always the same kinds of headings that make sense for the project itself. I often need more tasks than organizational divides in a project. I tried this approach a couple of times, and ended up with useless mush in my files.

In the end, I could never make the “lots of files” approach really work, and the big-files problem led me to a general avoidance of everything. Not good. The key to success here is good aggregation tools.

Hodgepodge

In response, I’ve made a couple of tweaks to how I’m doing… pretty much everything. That is:

  • I’ve moved most of my open projects into a locally running and compiling ikiwiki instance. Both laptops have this setup, and there’s a central remote to keep both (all?) machines in sync.
  • I’m using ikiwiki tasklist to replicate the functions of org-agenda. It crawls the entire wiki looking for lines that begin with certain keywords and generates a “todo” page based on those notes (see the sketch after this list). Really simple, incredibly useful, and it covers most of my aggregation needs.
  • I still have some stuff in org-mode: notes for the nearly-finished novel, lots of random old (legacy) data, 12 various open tasks, and org-capture. I’m thinking of pointing various org-capture templates at files in the wiki but haven’t gotten there yet.
  • I’ve basically taken the “lots of little files” approach to my writing and work. I’ve not overloaded the system yet. Each major project gets a page at the root level of the wiki for overview and planning, and then sub-pages for all related project files (if/as needed).
  • It turns out that the markdown-mode for emacs has gotten a few improvements since the last time I downloaded the file, including better support for wiki-links that are mostly compatible with ikiwiki. Also from the same developer is deft, which implements a pretty nifty incremental search for text files in a given directory. Between these tools, ikiwiki, and the ikiwiki-tasklist, there’s support for the most important things.
  • In terms of publishing, beyond ikiwiki for tychoish.com and the personal organization instance, I have a couple of other smaller wikis (also ikiwiki powered), and I’ve been playing with Sphinx as a publishing tool for more structured documents and resources (i.e. documentation, novels, and collections), particularly those that need multiple formats and presentations.
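
For the curious, the aggregation step amounts to something like the following sketch. The keywords, file extension, and paths are my assumptions rather than ikiwiki-tasklist’s actual configuration:

```python
#!/usr/bin/env python3
# Walk the wiki source, collect lines that begin with a task keyword,
# and write them all to a single "todo" page. The keywords, extension,
# and paths here are assumptions, not ikiwiki-tasklist's real defaults.
import os

KEYWORDS = ("TODO", "XXX", "FIXME")              # hypothetical keyword list
WIKI_ROOT = os.path.expanduser("~/wiki")
TODO_PAGE = os.path.join(WIKI_ROOT, "todo.mdwn")


def collect_tasks(root):
    tasks = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".mdwn"):
                continue
            path = os.path.join(dirpath, name)
            if os.path.abspath(path) == os.path.abspath(TODO_PAGE):
                continue                         # don't re-harvest the output page
            with open(path) as page:
                for line in page:
                    if line.lstrip().startswith(KEYWORDS):
                        tasks.append((os.path.relpath(path, root), line.strip()))
    return tasks


if __name__ == "__main__":
    with open(TODO_PAGE, "w") as out:
        for page, task in collect_tasks(WIKI_ROOT):
            out.write("* %s (%s)\n" % (task, page))
```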

There will be more shifts in the future, I’m sure, but I think this is a good start. Thoughts?


  1. This has pretty much always been the case. I think of it as a personal quirk. ↩︎

Cyberpunk Sunset

I’m not sure where I picked up the link to this post on the current state of cyberpunk, but I find myself returning to it frequently and becoming incredibly frustrated with the presentation.

In essence the author argues that while the originators of the cyberpunk genre (i.e. Gibson and Sterling, the “White Men”) have pronounced cyberpunk “over,” the genre is in fact quite vibrant and a prime location for non-mainstream (“other”) voices and perspectives. Also, somehow, the author argues that by denying that cyberpunk continues to be relevant and active we’re impinging on the diversity that’s actively occurring in the space.

My thoughts are pretty simple:

  • This is old news. People have been pronouncing cyberpunk dead since 1992 or thereabouts. And they’ve largely been right. Cyberpunk died because the technological horizon of the 1980s (e.g. BBSs) developed in a particular way. In some ways the cyberpunks got it right (there is a digital reality, there are digital natives, and there are unique digital social conventions). In many ways no one got it right: more people are using the internet per capita than anyone thought possible in 1984, and no one predicted that the internet would be as commercial as it is.

    In light of this, the kinds of things that people active in technology and in cyberpunk are thinking about and addressing have changed a lot. In many ways, Cory Doctorow is a pretty fitting heir to the cyberpunk lineage, but I think it’s also true that the cyberpunk tradition has shifted its focus onto other issues and ideas.

    That interest in the present and the near future has always been a significant defining characteristic of cyberpunk, at least as relevant as the DIY and outsider aspect. In this respect, cyberpunk’s critique was accepted and quite transformative for the genre.

    At the same time, the “hackers” and “cyberpunks” grew out of academia (e.g. Free Software) and not the punk movement.

  • The cyberpunks, even when (white) men were the front men for the (sub)genre, have always been outsiders. In the 80s they were the “Young Turks” of the science fiction world. Samuel Delany’s Nova is often cited as a key cyberpunk precursor, and there are some pretty important precursors in Stars in My Pocket, Dhalgren, and The Einstein Intersection.

  • I want to be sure not to forget about Melissa Scott while we’re at it. Trouble and Her Friends is a great example of using cyberpunk to explore subcultures and the experiences of people (queers, PoC, etc.) on the margins. While Trouble is almost on the late end for “original” cyberpunk, I think it counts. The blogger seems to think that queers and PoC and others have only recently taken up cyberpunk, which seems particularly shortsighted, and not particularly true.

  • One of the most troubling aspects of the argument is the assumption that if “cyberpunk” is over then no one can write cyberpunk anymore, and that to declare as much would be to silence all of the would-be *punks.

    This is absurd.

    Not only is this not true, but it’s also not how literature works. I’m also pretty sure that this is not consistent with the origins of cyberpunk, or the way the genre memes play out.

    What I think happened is that cyberpunk stopped being on the cutting edge, and we realized that a critique of the present required a different science fictional method (I think the resurgence of “New Space Opera” in the 90s is part of this, as well as a hard-SF turn in the form of Beggars in Spain and a turn toward alternate histories). As a result, what’s happening in cyberpunk has become something closer to fantasy.

    The division between (and implications of) “fantasy” and/or “super soft science fiction” and the science fiction mainstream is at play here, but is probably outside the scope of this post.

So I’m not sure where that leaves us. Am I missing something? Let’s hear it in the comments!

Arranging Patterns for Sweater Design

There are two major problems in sweater making (design): first, figuring out how to make the shape you want out of knitting, and second, placing some sort of ornamental feature (pattern) in the knitting without disrupting the shape.

Shaping isn’t easy, but it’s solvable. Once you figure out how to make the shapes you want, it’s just a matter of implementing a known process. Shaping becomes trivial.

The second problem, the design, is the really clever part.

Fitting patterns and embellishments onto a shape just isn’t solvable in that way, and never becomes trivial. There are tricks, and practice makes it easier, but the possibilities are truly endless and there’s no reason to make any two sweaters exactly the same.1

The patterns and embellishments can be pretty broadly defined: cables, colorwork, other texture patterns, and so forth.

In my own knitting, I have taken to focusing almost entirely on the second problem. I have a basic sweater form that works really well for me, and each sweater explores a different combination of patterns.

My approach is as follows:

  • Divide the sweater into quadrants and plan a single quadrant of the sweater. Repeat this pattern over the entire sweater. This automatically centers the pattern on the sweater.

  • Be flexible with the number of stitches, but not too flexible. Also remember to account for “middle” and “end” stitches, which may not be repeated in every quarter.

  • Think of a bird’s-eye view of the design. This means thinking about sweater design as a collection of pattern columns.

  • Use patterns at either side of the sweater both for a nice effect and to “bound” the patterns. This can be helpful in controlling the number of stitches.

  • Unless you plan to knit your sweater horizontally, plan your sweater vertically even if you have horizontal patterns. It’s crucial to center patterns that run horizontally across the sweater, and thinking vertically is the easiest way to do this visually for all kinds of patterns.

    If you’re using only one pattern that’s fewer than 10-15 stitches, you may be able to just make sure that your pattern divides evenly into the total number of stitches in the sweater, but that’s a much less common situation.

Beyond that, it’s all trial, error, and practice. Onward and Upward!


  1. There may be some exceptions, but generally. ↩︎

Constraints for Mobile Software

This post is mostly just an overview of Epistle by Matteo Villa, which is--to my mind--the best Android note taking application ever. By the time you read this I will have an Android tablet (it’s still in transit as I write), but that’s a topic that deserves its own post.

Epistle is a simple notes application with two features that sealed the deal:

1. It knows Markdown, and by default provides a compiled rich text view of a note before dropping into a simple editing interface. While syntax highlighting would be nice, we’ll take what we can get.

2. It’s a nice, simple application. There’s nothing clever or fancy going on. This simplicity means that the interface is clean and it just edits text.

For those on the other side, there’s Paragraft, which seems similar. In my heart of hearts I’m probably still holding out for the tablet equivalent1 of emacs. In the meantime, I think a text editing application that provides a number of paradigmatic text editing features and advances for the touch screen would be an incredibly welcome development.

In the end there’s much work to be done, and the tools are good enough to get started.


  1. I want to be clear to say equivalent and not replacement, because while I’d like to be able to use emacs and have that kind of slipstream writing experience on an embedded device, what I really want is something that is flexible, can be customized, and lets me do all the work that I need to do, without hopping between programs, without breaking focus, and that makes inputting and manipulating text a joy. And an application that we can trust (i.e. open source, by a reputable developer,) in a format we can trust (i.e. plain text.) Doesn’t need to be emacs and doesn’t need lisp, but I wouldn’t complain about the lisp. ↩︎