Sphinx Caveats

This is a rough sketch of some things that I've learned about the Sphinx documentation generation system. I should probably spend some time to collect what I've learned in a more coherent and durable format, but this will have to do for now:

  • If you describe a type in parameter documentation, Sphinx automatically links to the Python documentation for that type, provided you're using the Python domain and have intersphinx configured. That's pretty cool.
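
    As a minimal sketch (the function and parameter names here are invented for illustration), a typed parameter field looks something like this:

        .. py:function:: load_config(path)

           Read a configuration file and return its contents.

           :param path: Location of the configuration file.
           :type path: str

    With intersphinx pointed at the Python standard library's inventory, the "str" in the ":type:" field renders as a link to the Python documentation for str.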

  • Sphinx lets you define a scope for a file in some cases. If you're documenting command-line options to a program (i.e. with the "program" directive and subsidiary "option" directives,) or documenting Python objects and callables within the context of a module, the program and module directives have a scoping effect.

    This is cool, but it breaks the reStructuredText idiom, which otherwise only lets you decorate and provide semantic context for specific nested blocks within the file. As in Python code, there's no way to end a block except with whitespace, [1] which produces some very confusing markup effects.

    The "default-domain" directive is similarly... odd.

  • Sphinx cannot take advantage of multiple cores to render a single project, except when building multiple outputs (i.e. PDF/LaTeX, HTML with and/or without directories,) with the weird caveat that only one builder can touch the doctree directory at a time. (So you either need to give each builder its own doctree directory, or let one build complete and then build the rest in parallel.)

    For small documentation sets, under a few dozen pages/kb, this isn't a huge problem; for larger documentation sets it can be quite frustrating. [2]

    This limitation means that while it's possible to write Sphinx extensions to do additional processing during the build, in most cases it makes more sense to build custom content and extensions that modify or generate reStructuredText, or that munge the output in some way. The include directive in reStructuredText and milking the hell out of make are good places to start.
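
    As a rough sketch (paths and builder choices here are arbitrary), giving each builder its own doctree directory with the -d flag lets the builds run side by side, for example as separate make targets under make -j:

        sphinx-build -b html  -d build/doctrees-html  source/ build/html
        sphinx-build -b latex -d build/doctrees-latex source/ build/latex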

  • Be careful when instantiating objects in Sphinx's conf.py file: Sphinx stores a pickle (serialization) of the configuration and compares that stored pickle with the current file to make sure the configuration hasn't changed, and a changed configuration file necessitates a full rebuild of the project. Creating objects in this file can therefore trigger full (and often unneeded) rebuilds.
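
    A hedged illustration of the difference (the helper class below is hypothetical):

        # conf.py
        #
        # Sphinx stores a pickle of the configuration and compares it with
        # the current file on each run; anything that doesn't compare as
        # unchanged forces a full rebuild.

        # Risky: a freshly constructed object may never look "unchanged,"
        # so every build becomes a full rebuild.
        # helper = ExpensiveHelper()

        # Safer: stick to plain, stable values.
        project = "Example Project"
        extensions = ["sphinx.ext.intersphinx"]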

  • Delightfully, Sphinx produces gettext message catalogs that you can use to power translated output, using the gettext Sphinx builder. Specify a different doctree directory for this output to prevent issues with overlapping builds. This is really cool.
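
    A sketch of the invocation (paths are arbitrary), with its own doctree directory:

        sphinx-build -b gettext -d build/doctrees-gettext source/ build/gettext

    The message catalogs under build/gettext can then feed a normal gettext translation workflow.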

Sphinx is great. Even though I'm always looking at different documentation systems and practices, I can't find anything that's better. My hope is that the more we talk about these issues, the closer we'll get to solutions, and the better those solutions will be.

Onward and Upward!

[1] In Python this isn't a real problem, but reStructuredText describes a basically XML-like document, and some structures, like headings, are not easy to embed in rst blocks.
[2] In reality, documentation sets would need to be many hundreds of thousands of words for this to actually matter in a significant way. I've seen documentation take 2-3 minutes for a clean regeneration using Sphinx on very powerful hardware (SSDs, contemporary server-grade processors, etc.), and while this shouldn't be a deal breaker for anyone, documentation that's slow to regenerate is harder to maintain and keep up to date (e.g. it's difficult to test the effect of changes on output, and non-trivial to update the documents regularly with small fixes.)

Information Debts

Like technical debt, information debt is a huge problem for all kinds of organizations, and one that all technical writers need to be aware of and able to combat directly. Let's back up a little...

Information debt is what happens when there aren't proper systems, tools, and processes in place to maintain and create high quality information resources. A number of unfortunate and expensive things result:

  • People spend time recreating documents, pages, and research that already exists. This is incredibly inefficient and leads to:
  • Inaccurate information propagates throughout the organization and to the public.
  • Information and style "drift" when information and facts exist in many places.
  • Organizations spend more money on infrastructure and tools as a band-aid when data is poorly organized.
  • People lose confidence in information resources and stop relying on them, preferring to ask other people for information. This increases communication overhead and noise, and takes longer for everyone than using a good resource would.

To help resolve information debt:

  • Dedicate resources to paying back information debts. It takes time to build really good resources, to collect and consolidate information, and to keep them up to date. But given the costs of the debt, it's often worth it.
  • Documents must be "living," usefully versioned, and there must be a process for updating them. Furthermore, while it doesn't make sense to actually limit editing privileges, it's important that responsibility for editing and maintaining documents isn't diffused and thus neglected.
  • Information resources must have an "owner" within an organization or group who is responsible for keeping them up to date and making sure that people know they exist. You can have the best repository of facts, but if no one uses it and the documents are not up to date, it's worthless.
  • Minimize the number of information resources. While it doesn't always make sense to keep all information in the same resource or system, the more "silos" where a piece of information or document might live, the less likely a reader/user is to find it.

... and more.

I'm working on adding a lot of writing on information debt in the technical writing section of the wiki. I'll blog more about this, while I continue to work through some of these ideas, but I'm quite interested in hearing your thoughts on this post and on the information-debt pages as well.

Onward and Upward!

Taxonomic Failure

I tell people that I'm a professional writer, but this is a bit misleading, because what I really do is figure out how to organize information so that it's useful and usable. Anyone, with sufficient training and practice, can figure out how to convey simple facts in plain language, but figuring out how to organize simple facts in plain language into a coherent text is the more important part of my job and work.

This post and the "Information Debt" wiki page begin to address some of these problems: information resource maintenance, organization, and institutional practices with regard to information and knowledge resources.

Organization is hard. Really hard. One of the challenges for digital resources is that they lack all of the conventions of /technical-writing/books, which would seem to be freeing: you get more space, and you get the opportunity to do really flexible categorization and organization.

Great right?

Right.

Really flexible and powerful taxonomic systems, like tagging systems, have a number of problems when applied to large information resources:

  • the relationship between the "scope" of the tag and the specificity of the tag matters a lot. Too much. Problems arise when:
      • tags are really specific, pages include a number of pieces of information, and tags can only map to pages.
      • tags are general and the resources all address similar or related topics.
  • the size of the tag "buckets" matters as well. If there are too many items with a tag, users won't find the tag useful for answering their questions.
  • if your users or applications have added additional functionality using tags, tags begin to break down as a useful taxonomic system. For example, if your system attaches actions to specific tags (i.e. sending email alerts when content receives a specific tag,) or if you use a regular notation to simulate a hierarchy, then editors begin adding content to tags not for taxonomic reasons, but for workflow reasons or to trigger the system.

    The resulting organization isn't useful from a textual perspective.

  • If you have to have multiple tagging systems or namespaces.

    Using namespaces is powerful, and helps prevent collisions. At the same time, if your taxonomic system has collisions, this points to a larger problem.

  • If the taxonomy ever has more than one term for a conceptual facet, then the tagging system is broken.

These problems tend to get worse as:

  • the resource ages.
  • the number of contributors grows.

There's a core paradox in tagging systems: to tag content effectively, you need a fixed list of potential tags before you begin tagging content, and you need to be very familiar with the corpus of tagged content *before* beginning to tag content.

And there's not much you can do to avoid it. To further complicate the problem, it's essentially impossible to "redo" a taxonomic system for sufficiently large resources given the time requirements for reclassification and the fact that classification systems and processes are difficult to automate.

The prevalence of tagging systems and the promises of easy, quick taxonomic organization are hard to avoid and counteract. As part of the fight against information debt it's important to draw attention to the failure of broken taxonomy systems. We need, as technical writers, information custodians, and "knowledge workers," to develop approaches to organization that are easy to implement and less likely to lead to huge amounts of information debt.

Coding Pedagogy

There are two parts to this post: first, the relationship or non-relationship between the ability to write code and technical literacy; and second, the pedagogical methods for teaching people how to program/code.

In some ways, I've been writing about this and related topics for quite a while: see /posts/objective-whatsis for an earlier iteration in this train of thought.

Programming and Technical Literacy

Programmers and other technical folks talk a lot about teaching young people to code as the central part of any technical person's education and basic computer literacy. Often this grows out of nostalgia for their own experience learning to program, but there are other factors at play. [1]

In some cases, they even start or point to projects like Codecademy, which are, in truth, really cool ideas. But I think that effectively equating the ability to write code with technical literacy is fraught:

  • There are many different kinds of technical literacy, and writing code is really only a small part of it. Sure, code gives us a reasonable way to talk about things like design and architecture, but actually writing code is such a small part of developing technology.

  • Writing code isn't that important, really. In a lot of ways, code is just an implementation detail. Important as a way of describing some concepts pretty quickly, important because it's impossible to iterate on ideas without something concrete to point to, but the implementation isn't nearly as important as the behavior or the interface.

  • For the last ~40 years, code has been the way that people design behavior and specify interfaces for software. While there are a lot of reasons why this predominantly takes the form of code, there's no particular reason that we can't express logic and describe interfaces using other modalities.

    There are many people who are very technically literate and productive who don't write code, and I think that defining literacy as being able to write code is somewhat short-sighted. Also, there is another group of people who are actually programmers but don't think of what they do as "programming": people who do crazy things with spreadsheets, most librarians, among others. These non-coding programmers may shy away from programming, or are mostly interested in the output of the programs they write and less interested in the programming itself.

This is a huge problem. I hope that this /posts/computer-literacy-project that I've been planning will start to address some of these issues, but there's even more work to do.

How to Teach People to Code

(This section of the post derives from and summarizes the "How to Teach People to Program" wiki page.)

Most of the ways that programming books and courses teach programming are frustrating and somewhat dire, for a few reasons:

  • Most examples in programming books are dumb.
  • Basic computer science/engineering knowledge is fundamental to the way that accomplished programmers think about programming, but isn't always required to teach people how to program.
  • Syntax isn't that important, but you can't ignore it either.
  • Slow reveals are really frustrating.
  • The kinds of code that you write when learning to program bear little resemblance to the actual work that programmers do.

The solutions to these problems are complex and there are many possible solutions. As a starting point:

  • Separate the way you present core concepts (i.e. data structures, typing, functions, classes, etc.) from actual code examples and from actual explanations of the syntax.

    Interlink/cross-reference everything: if you give people the tools to answer their own questions, they'll learn what they actually need to know, and you can then do a better job of explaining the syntax, basic concepts, and practical examples.

  • Provide longer examples that aren't contrived.

    Examples don't need to start from first principles, and don't need to be entirely self-contained. Programming work rarely starts from first principles (relatively speaking,) and is rarely actually self-contained. It's foolish, then, to use these sorts of pedagogical tools.

Thoughts?

[1] In addition, there's a related fear that many people who don't have experience with the technology of the 1980s and 1990s won't have the required technological skills to innovate in another 10 or 20 years.

Documentation Rhetoric

Other than shortening sentences, inserting lists, and using document structure, there are a couple of "easy edits" that I make to most documents that others send to me for review:

  1. Remove all first person, both singular and plural.
  2. Remove all passive sentences, typically by making the sentences more imperative.

In practice these changes are often related.

Expunge the First Person

Removing the first person is important less because it's "more formal" to avoid the first person and more because it's always unclear in documentation: Who are "we," and who is "I"? Should I read "I" as myself or as the author of the documentation? What if my experience and environment aren't like "ours"? While readers can resolve these confusion points pretty quickly, doing so gives them another set of information that they must track. And given that technical subjects can be difficult enough without confusing language, there's no reason to make things more confusing.

People tend to think that this makes their documentation "friendlier," "personable," or "intimate." People used to interacting directly with users (i.e. people doing user support) are particularly susceptible to first person problems. In support cases, that little bit of personal touch is incredibly valuable and goes a long way toward making people feel comfortable.

In documentation, though, those people are wrong. Don't do it. Speak simply. Write about the product and the processes you're documenting, not yourself. Convey facts, not opinions. Provide context, not perspective. If you're writing the official documentation for a product, your perspective is obvious to readers; if you're not writing the official documentation, that's also apparent and probably not your job to disclaim.

Use Good Verbs

Passive sentences and weak verbs are a huge problem. Huge. People with science and engineering backgrounds seem to prefer passive sentences because they think that passive sentences convey objectivity, and that this objectivity is desirable.

Passive sentences do convey a sense of objectivity, and there are some cases where there's no way to avoid describing a property of a thing except passively. That doesn't make the passive voice generally acceptable. Related to the reason above, passive voice tends to introduce a level of "syntactic indirection," which means that complicated sentences become unnecessarily difficult to comprehend.

In documentation, unlike some other forms, it's possible (and desirable!) to use imperative verbs, which provides some relief. One of the main projects of documentation is to inculcate "best practices" (i.e. values and conventions,) in users. Imperative verbs are great for this purpose.

In short: Do it!

Lies About Documentation...

... that developers tell.

  1. All the documentation you'd need is in the test cases.
  2. My comments are really clear and detailed.
  3. I'm really interested in and committed to having really good documentation.
  4. This code is easy to read because it's so procedural.
  5. This doesn't really need documentation.
  6. I've developed a really powerful way to extract documentation from this code.
  7. The documentation is up to date.
  8. We've tested this and nothing's changed.
  9. This behavior hasn't changed, and wouldn't affect users anyway.
  10. The error message is clear.
  11. This entire document needs to be rewritten to account for this change.
  12. You can document this structure with a pretty clear table.

      Often this is true; more often these kinds of comments assume that it's possible to convey 3-5 dimensional matrices clearly on paper or computer screens.

  13. I can do that.
  14. I will do that.
  15. No one should need to understand.

Allowable Complexity

I'm not sure I'd fully realized it before, but the key problems in systems administration--at least the kind that I interact with the most--are really manifestations of a tension between complexity and reliability.

Complex systems are often more capable and flexible, or so goes the theory. At the same time, complexity often leads to operational failure, as a larger number of moving parts leads to more potential points of failure. I think it's an age-old engineering problem, and I doubt that there are good practical answers.

I've been working on this writing project where I've been exploring a number of fundamental systems administration problem domains, so this kind of thing is on my mind. It seems that addressing the hard questions often comes back to "what are the actual requirements, and are you willing to pay the premiums to make the complex systems reliable?"

Trade-offs around complexity also happen in software development proper: in the last few months I've heard more than a few developers weigh the complexity of using dynamic languages like Python for very large scale projects. While the questions and implications manifest differently for code, it seems like this is part of the same problem.

Rather than prattle on about various approaches, I'm just going to close out this post with a few open questions/thoughts:

  • What's the process for determining requirements that accounts for actual required complexity?

  • How do things that had previously been complex, become less complex?

    Perhaps someone just has to write the code in C or C++ and let it mature for a few years before administrators accept it as stable?

  • Is there a corresponding complexity threshold in software development and within software itself? (Likely yes,) and is it related to something intrinsic to particular design patterns, or to tooling (i.e. programming language implementations, compilers, and so forth)?

    Might better developer tooling allow us to write programs of larger scope in dynamic languages (perhaps)?

Reader-submitted questions:

  • Your questions here.

Answers, or attempts thereat, in the comments.

Documentation Emergence

Somewhere along the way, I stumbled across a link to a thread about the Pyramid project's documentation planning process. It's neat to see a community coming to what I think is the best possible technical outcome. In the course of this conversation, Iain Duncan said something that I think is worth exploring in a bit more depth. The following is directly from the list, edited only slightly:

I wonder whether some very high level tutorials on getting into Pyramid that look at the different ways you can use it would be useful? I sympathize with Chris and the other documenters because just thinking about this problem is hard: How do you introduce someone to Pyramid easily without putting blinders on them for Pyramid's flexibility? I almost feel like there need to be 2 new kinds of docs:

  • easy to follow beginner docs for whatever the most common full stack scaffold is turning out to be (no idea what this is!)
  • some mile high docs on how you can lay out pyramid apps differently and why you want to be able to do that. For example, I feel like hardly anyone coming to Pyramid from the new docs groks why the zca under the hood is so powerful and how you can tap into it.

Different sets of users have different needs from documentation. I think my "Multi-Audience Documentation" post also addresses this issue.

I don't think there are good answers or good processes that always work for documentation projects. Targeted users and audiences change a lot depending on the kind of technology at play, and the needs of users (and thus of the documentation) vary with the technical complexity and nature of every project/product. I think, as the above example demonstrates, there's additional complexity for software whose primary users are technically adept (i.e. systems administrators) or even software developers themselves.

The impulse to have "beginner documentation," and "functional documentation," is a very common solution for many products and reflects two main user needs:

  • to understand how to use something. In other words, "getting started" documentation and tutorials.
  • to understand how something works. In other words, the "real" documentation.

I think it's feasible to do both kinds of documentation within a single resource, but the struggle then revolves around making sure that the right kinds of users find the content they need. That's a problem of documentation usability and structure. But it's not without challenges; let's think about those in the comments.

I also find myself thinking a bit about the differences between web-based documentation resources and conventional manuals in PDF or dead-tree editions. I'm not sure how to resolve these challenges, or even what the right answers are, but I think the questions are very much open.