Covered In LaTeX

Although I haven't used LaTeX much in the past few years, it was one of the primary tools that hastened my shift to using GNU/Linux full time. Why? I'd grown sick of fighting with document preparation and publishing systems (e.g. Microsoft Word/Open Office), and had started using LaTeX on my Mac to produce all of my papers and documents that needed to be output to paper formats. Why switch? Because after a certain point of having every tool you use be Free software (because it's better!), it becomes easier and more cost effective to just make the leap, buy commodity hardware, and use a system that's designed to support this kind of software (managing a large selection of free software packages on OS X can become cumbersome).

So why LaTeX? What's the big deal? Why do I care now? Well...

LaTeX is a very usable front-end/set of macros for the TeX typesetting engine. Basically, you write text files in a particular way, and then run LaTeX (or pdflatex) and it generates the best looking PDF of your document in the world. You get full control over things that matter (layout, look and feel) and you don't have to worry about things that ought to be standard (titles, headlines, citations with BibTeX, page numbering, hyphenation). The best part, however, is that once you figure out how to generate a document correctly once, you never have to figure it out again. Once you realize that most of the things you need to output to paper are in the same format, you can use the same template and generate consistently formatted documents automatically. There's a "compile" step in the document production process, which means changes aren't immediately visible, but I don't think this is a major obstacle.
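To make the "write text files in a particular way" part concrete, here's a minimal sketch of a LaTeX source file (the filename, title, and text are placeholders, not anything from a real template):

```latex
% A minimal LaTeX source file. Running "pdflatex example.tex"
% produces example.pdf.
\documentclass{article}

\title{An Example Document}
\author{A. Writer}

\begin{document}
\maketitle

This is the first paragraph. Layout, page numbering, and hyphenation
are handled for you; \emph{emphasis} is a single command.

\section{A Section}
Headings and section numbers are generated automatically.

\end{document}
```

Everything above the `\begin{document}` line is the kind of boilerplate you figure out once and then reuse as a template.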

Word processing and document preparation are critical tasks for most common computer users. At least, I'd assume so, though I don't have good numbers on the subject. In any case, I think it might be an interesting project to see how teaching people to use LaTeX might improve both the quality of their work and the way that they're able to work. It's advanced, and a bit confusing at first, but I'd suspect that once you get over the initial hump LaTeX presents a simpler and more consistent interface: you only get what you tell it to give you, and you only see the functionality that you know about. This might make the discovery of new features more difficult, but it doesn't limit functionality.

I'm not sure that this post is the right space to begin a lesson or series on getting started with LaTeX, but I think as a possible teaser (if there's interest) that the proper stack for getting started with LaTeX would consist of:

  • A TeX Live distribution. You need the basic toolkit, including pdflatex, TeX, Metafont, LaTeX, and BibTeX.
  • A Text Editor with LaTeX support: emacs, TextMate, etc. Plain text can be difficult and cumbersome to edit unless you have the right tools for the job, which include a real text editor.
  • Some sort of macro or snippet expansion system. TeX is great. But it's also somewhat verbose, and having an easy way to insert text into your editing environment, both for templates but also for general operations (emphasis, notes, etc.) is incredibly useful, and reduces pain greatly.
  • A template management system. This probably needn't be a formal software system, but just something to organize and store the basic templates that you will use.
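As a small teaser of what working with this stack looks like, here is the standard compile cycle for a document with BibTeX citations, wrapped in a shell function (a TeX Live installation is assumed, and "paper" is a placeholder filename):

```shell
# Build a LaTeX document that uses BibTeX citations. Multiple passes
# are needed because each pass writes information the next pass reads.
build() {
  pdflatex paper.tex   # first pass: writes citation keys to paper.aux
  bibtex paper         # resolves citations against your .bib database
  pdflatex paper.tex   # second pass: pulls in the bibliography
  pdflatex paper.tex   # third pass: fixes remaining cross-references
}
```

Tools like latexmk automate this cycle, but it's worth seeing what happens under the hood at least once.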

And the rest is just learning curve and practice. Onward and Upward!

Console Use for the Uninitiated

I have a computer, an old laptop, that I mostly use as the foundation of my stereo system. It's just a basic system with a few servers (a web server and the music player daemon), and it doesn't have a running window manager. This configuration usually doesn't bug me: I connect remotely and the computer sits under the couch, but since my recent move I've not had a network connection at home and I've defaulted to playing music and managing the system from the console.

This works just fine for me. The virtual terminals aren't noticeably different from the terminal I get over ssh (as you would expect/hope), except now I have to walk across the room. The people who listen to music with me haven't yet been other terminal geeks, and so I've taken on the role of stereo whisperer. That is, until a friend looked over my shoulder and wanted to change the track. Using the console is sometimes (often) a slippery slope.

I realized immediately that this situation was much more conducive to learning to use the console than the kinds of introductions to using the console that I've typically written. The commands we used were very limited: the mpc program that acts as a simple command-line client to the music player daemon (mpd) and grep. We also used the pipe operator.
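The pattern we used is simple: one command's output becomes the next command's input. With a running mpd, something like the commented line below queues every track whose path matches a search term (mpc and mpd are assumed to be installed); the live line demonstrates the same pipe-and-grep idea with portable commands you can try anywhere:

```shell
# With mpd running, this would queue all tracks matching "coltrane":
#   mpc listall | grep -i coltrane | mpc add
# The same pipe pattern, using only portable commands:
printf 'Coltrane - Naima\nMonk - Epistrophy\n' | grep -i coltrane
# prints: Coltrane - Naima
```

Two commands and one operator are enough to do real work, which is exactly why this made a good first console lesson.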

There are thousands of commands on most Linux/UNIX systems, and remembering all of them can be a bit challenging. The console is a limiting environment: basically, you can do one thing at a time with it, and you don't have a lot of leeway with common errors. At the same time, there are a great number of programs and commands that a beginner has no way of knowing about or knowing when to use. Legitimately, the console is both too limiting and too expansive to be quickly accessible to the uninitiated. Starting with a very limited selection of commands is one way to break through this barrier.

The terminal environment is also very "goal-oriented." Enter a command to generate some sort of output or result and then repeat. At the end your system will have done something that you needed it to, and/or you'll learn something that you didn't already know. When you're just trying to learn, all of the examples seem fake, contrived, and bothersome because many people already have an easy way of accomplishing that task using GUI tools. Learning how the terminal works, thus, needs a real example, not just a potentially realistic example.

The great thing, I think, is that once you have a need to learn command line interaction, it makes a lot of sense even to people who aren't die-hard geeks: Commands all have a shared structure that is fairly predictable and inconsistencies are apparent. Perhaps most importantly the command line's interaction model is simple: input a command and get a response. Advanced users may be able to bend the interaction model a bit, but it is undeniably parsimonious.

It seems, in conclusion, that the command-line is easy to learn for the new user for the same reason it is beloved by the advanced. Ongoing questions include:

If this kind of realization were to catch on, how might it affect interaction design in the long run? Might "simple to design" and "easy to use" move closer together?

Is there a way to build training and documentation to support users who are new to this kind of interaction style?

Collaborative Technology

I agreed to work on an article for a friend about the collaborative technology "stuff" that I've been thinking about for a long time. I don't have an archive that covers this subject, but perhaps I should, because I think I've written about the technology that allows people to make things with other people a number of times, though I have yet to pull together these ideas into some sort of coherent post or essay.

This has been my major post-graduation intellectual challenge. I have interests, even some collected research, and no real way to turn these half-conceptualized projects into a "real paper." So I've proposed working with a friend to collect and develop a report that's more concrete and more comprehensive than the kind of work that I've been attempting to accomplish on the blog. Blogging is great, don't get me wrong, but I think it leads to certain kinds of thinking and writing (at least as I do it), and sometimes other kinds of writing and thinking are required.

Regarding this project, I want to think about how technology like "git" (a distributed version control system) and even tools like wikis shape the way that groups of people can collaborate with each other. I think there's an impulse to say "look at the possibilities that these tools create! This brave new world is entirely novel, and not only changes the way I am able to complete my work, but how I look at problems, and makes it so much easier for me to get things done." At the same time, the technology can only promote a way of working: it doesn't necessarily enforce a way of working, nor does any particular kind of technology really remove the burdens and challenges of "getting things done." More often, perhaps, new kinds of technology, like distributed version control, are responsible for raising the level of abstraction and allowing us (humans) to attend to higher order concerns.

Then, moving up from the technology, I think looking at how people use technology of this class allows us to learn a great deal about how work is accomplished. We can get an idea of when work is being done, and an idea of how quality control efforts are implemented. Not only does this allow us to demystify the process of creation, but having a clearer idea of how things are made could allow us to become better makers.

The todo list, then, is something like:

  • Condense the above into something that resembles a thesis/argument.
  • Become a little more familiar with the git-dm ("data mining") tool that the Linux Foundation put together for their "state of kernel development" reports.
  • Develop some specific questions to address. I think part of my problem above and heretofore has been that I'm saying "there's something interesting here, if we looked," rather than, "I think w kinds of projects operate in x ways, while y kinds of projects operate in z ways."
  • Literature review. I've done some of this, but I've felt like I need to do even more basic methodological and basic theory reading. And even though an unread Patterns of Culture is on my bookshelf, I don't need to read that to begin reading articles.

That's a start. Feedback is always useful. I'll keep you posted as I progress.

Saved Searches and Notmuch Organization

I've been toying around with the Notmuch Email Client which is a nifty piece of software that provides a very minimalist and powerful email system that's inspired by the organizational model of Gmail.

Mind you, I don't think I've quite gotten it.

Notmuch says, basically, build searches (i.e. "views") to filter your email so you can process your email in the manner that makes the most sense to you, without needing to worry about organizing and sorting email. It has the structure for "tagging," which makes it easy to mark status for managing your process (e.g. read/unread, reply-needed), and the ability to save searches. And that's about it.

Functionally, tags and saved searches do the work that mailboxes do in terms of the intellectual organization of email. Similarly, the ability to save searches makes it possible to do a good measure of "preprocessing." In the same way that Gmail changes the email paradigm by saying "don't think about organizing your email, just do what you need to do," notmuch says "do less with your email, don't organize it, and trust that the machine will be able to help you find what you need when the time comes."
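A saved search is really nothing more than a query you keep around. As a sketch, here are "mailboxes" defined as tiny shell functions wrapping the notmuch command-line client (this assumes notmuch is installed and has indexed your mail; the tag names and address are examples, not defaults):

```shell
# "Mailboxes" as saved searches: each function re-runs a query over
# the whole mail store instead of looking in a folder.
inbox()        { notmuch search tag:inbox; }
reply_needed() { notmuch search tag:reply-needed; }

# Tagging marks status without moving or copying the message, e.g.:
#   notmuch tag +reply-needed -- from:editor@example.com
```

The message never moves; only the tags change, and the "views" update themselves the next time you run the search.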


I've been saying variations of the following for years, but I think on some level it hasn't stuck for me. Given contemporary technology, it doesn't make sense to organize any kind of information that could conceivably be found with search tools. Notmuch proves that this works, and although I've not been able to transfer my personal email over, I'm comfortable asserting that notmuch is a functional approach to email. To be fair, I don't feel like my current email processing and filtering scheme is that broken, so I'm a bad example.

The questions that this raises, which I don't have particularly good answers for, are as follows:

  • Are there good tools for the "don't organize when you can search" crowd, for non-email data? And I'm not just talking about search engines themselves (there are a couple: xapian, namazu), or ungainly desktop GUIs (which aren't without utility), but proper command-line tools, emacs interfaces, and web-based interfaces?
  • Are conventional search tools the most expressive way of specifying what we want to find when filtering or looking for data? Are there effective improvements that can be made?
  • I think there's intellectual value created by organizing and cataloging information "manually," and "punting to search" seems like it removes the opportunity to develop good and productive information architectures (if we may be so bold). Is there a solution that provides the ease of search without giving up the benefits that librarianship brings to information organization?

Gear Mongering

At the end of the day (or even the beginning), I'm just another geek, and despite all of the incredibly (I'd like to think, at least) reasoned ways I think about the way I use technology, I occasionally get an old fashioned hankering for something new. We've all had them. Perhaps my saving graces are that I do occasionally need new things (computers wear out, cellphones are replaced, needs change), and the fact that I'm both incredibly frugal and task oriented (rational?) about the way I use technology.

But I'm still a geek. And gear is cool. Thoughts, in three parts.

Part One, Phones and the HTC EVO

I've been using a Blackberry for 18 months, and I've come to two conclusions: Blackberries are great, and they have figured out how to integrate messaging into a single usable interface. I was skeptical at first, but it's very simple, and even though I've never quite gotten my email to function in an ideal way, I think the Blackberry works really well for email. It's everything else that I might want to do with my phone that I can't, and I'd probably like to: I have an SSH client and it's nearly usable. Nearly. I have IM clients that are nearly functional. Nearly.

When I got the Blackberry, email was the most important communication I was doing. I worked for a very email-centric company, and I wanted to be able to stay in the email loop even when I was off doing something else. These days, IRC and XMPP are a far more central feature of my digital existence, and I tend to think that it's not an Internet connection if I can't open SSH. I'm also switching to a much longer public-transit-focused commute in the next few weeks, and being able to do research for writing projects will be nice. I'm not sure what the best solution is exactly, though the HTC EVO is a pretty swell phone.

As the kids these days say, "Do want."

Part two, Infrastructural Computing and Home Servers

I've fully adopted an infrastructural approach to technology, at least with regards to my own personal computing. That was a mouthful. Basically, although I work locally on the machine that's in front of me (writing, email, note taking, collaboration), much of the "computing" that I interact with isn't actually connected to the machine I use directly. In some ways, I suppose this is what they meant when they said "cloud computing," but the truth is that my implementation is somewhat more... archaic: I use a lot of SSH, cron, and a little baling wire to distribute computing tasks far and wide, and the process of moving everything in my digital world from a laptop that I carried around with me everywhere (college) to a more sane state of affairs has been a long time coming.

Right.

The long story short is that aside from a machine (my old laptop) that's at capacity powering my "stereo," I don't have any computers at home aside from my laptop, and I tend to take it everywhere with me, which makes it less than ideal for some sorts of tasks. Furthermore, without an extra machine sitting around, file storage and some kinds of backups are somewhat more complicated than I'd like. So, I'm thinking about getting some sort of robust server-type machine to stick in a corner of my apartment.

Not exactly sure what the best option is there. I'm burdened by: frugality, sophisticated tastes, and the notion that having quality hardware really does matter.

More thinking required.

Part three, More Laptops

So I might have a laptop-related illness. Back in the day, laptops always seemed like a frivolity: underpowered, never as portable as you wanted, awkward to use, and incredibly expensive. Now, laptops are cheap, and even the Atom-based "netbooks" are functional for nearly every task. I tend to buy used Thinkpad laptops, and as I think about it, I've probably spent as much on the three Thinkpads, all of which are still in service, as I did on any one Mac laptop.

The thing about my current laptop is that when you think about it, it'd make a decent home server: the processor has virtualization extensions, the drive is fast (7200 rpm), and it can handle 4 gigs of RAM (and maybe more). What more could I want? And if I distributed things correctly, the "server" laptop could be pressed into service as a backup/redundant laptop, in case something unforeseen happened.

Or I could dither about it for another few months, and come to some other, better, fourth solution.

Onward and Upward!

In Favor of Simple Software

I've spent a little bit of time addressing some organizational and workflow angst in the past few weeks, and one thing I'd been focusing on had been to update and fine tune my emacs (text editor) and irssi (irc/chat) configuration. Part of my goal had been to use irssi-xmpp to move all of my chat/synchronous communication into one program; unfortunately I've not been able to get irssi-xmpp to build and function in a fully stable way. This is probably because I'm hard on software and not because of anything specific to the software itself.

In any case, this led me to the following conclusion about these programs, as they are probably the two most central and most heavily used applications in my arsenal, and without a doubt are the applications that I enjoy using the most. I scribbled the following note a few days ago in preparation for this entry:

In many ways the greatest advance or feature that these programs provide isn't some killer feature; it's a simple but more powerful abstraction that allows users to interact with their problem domain. Emacs is basically a text-editing application framework that provides users with some basic fundamentals for interacting with textual information, and a design that allows users to create text editing modalities or paradigms which bridge the divide between full-blown applications and custom configurations. By the same token, Irssi is really a rather simple program that's easy to script, and contains a number of metaphors that are useful for synchronous communication (chat).

And we might be able to expand this even further: these are two applications that are not only supremely functional, but are so usable because they are software projects that really only make sense in context of free software.

I want to be very careful here: I don't want to make the argument that free software isn't or can't be commercial, because that's obviously not the case. At the same time, free software like these applications needn't justify itself in terms of "commercial features," or a particular target market, in order to remain viable. It's not that these programs don't have features; it's that they have every feature, or the potential for every feature, and are thus hard to comprehend and hard to sell, even if it only takes a short period of use for users to find them incredibly compelling.

The underlying core extensibility that both of these "programs" have is probably also something that is only likely to happen in the context of open source or free software. This isn't to suggest that proprietary software doesn't recognize the power or utility of extensible software, but giving users so much control over a given application doesn't make sense from a quality control perspective. Giving users the power to modify their experience of software in an open-ended fashion also gives them the power to break things horribly, and that just doesn't make sense from a commercial perspective.

There's probably also some hubris at play: free software applications, primarily these two, are written by hackers, with a target audience of other hackers. Who needs a flexible text editing application framework (e.g. emacs) but other programmers? And the primary users of IRC for the past 8-10 years have largely been hackers, developers, and other "geek" types; irssi is very much written for these kinds of users. To a great extent, I think it's safe to suggest that when hackers write software for themselves, this is what it looks like.

The questions that linger are: why isn't other software like this? (Or is it, and I'm missing it in my snobbishness?) And where is the happy medium between writing software for non-hackers and using great software (like these) to "make more hackers"?

Onward and Upward!

Organize Your Thoughts More Betterly

I've been working with a reader and friend on a project to build a tool for managing information for humanities scholars and others who deal with textual data, and I've been thinking about the problem of information management a bit more seriously. Unlike numerical or more easily categorized data, how to manage a bunch of textual information--either of your own production or a library of your own collection--is far from a solved problem.

The technical limitation--from a pragmatic perspective--is that you need to have an understanding not only of the specific tasks in front of you, but a grasp of the entire collection of information you work with in order to effectively organize, manage, and use the texts as an aggregate.

"But wait," you say. "Google solved this problem a long time ago, you don't need a deterministic information management tool, you need to brute force the problem with enough raw data, some clever algorithms, and search tools," you explain. And on some level you'd be right. The problem is of course, you can't create knowledge with Google.

Google doesn't give us the ability to discover information that's new, or powerful. Google works best when we know exactly what we're looking for; the top results in Google are most likely to be the resources that the most people already know and are familiar with. Google is good, useful, and a wonderful tool that more people should probably use, but Google cannot lead you into novel territory.

Which brings us back to local information management tools. When you can collect, organize, and manipulate data in your own library, you can draw novel conclusions. When the information is well organized and you can survey a collection in useful and meaningful ways, you can see holes and collect more, and you can search tactically within subsets of articles. I've been talking for more than a year about the utility of curation in the creation of value on-line, and fundamentally I think the same holds true for personal information collections.

Which brings us back to the ways we organize information. And my firm conclusion that we don't have a really good way of organizing information. Everything that I'm aware of either relies on search, and therefore only allows us to find what we already know we're looking for, or requires us to understand our final conclusions during the preliminary phase of our investigations.

The solution to this problem is thus twofold: First, we need tools that allow us to work with and organize the data for our projects, full stop. Wikis and never-ending text files don't really address all of the different ways we need to work with and organize information. Second, we need tools that are tailored to the way researchers who deal in text work with information, from collection and processing to quoting and citation, rather than focusing on the end stage of this process. These tools should allow our conceptual framework for organizing information to evolve as the project evolves.

I'm not sure what that looks like for sure, but I'd like to find out. If you're interested, do help us think about this!

(Also, see this post `regarding the current state of the Cyborg Institute <http://www.cyborginstitute.com/2010/06/a-report-from-the-institute/>`_.)

In Favor of Unpopular Technologies

This post ties together a train of thought that I started in "The Worst Technologies Always Win" and "Who Wants to be a PHP Developer" with the ideas in the "Ease and the Stack" post. Basically, I've been thinking about why the unpopular technologies, or even unpopular modes of using technologies are so appealing and seem to (disproportionately) capture my attention and imagination.

I guess it would first be useful to outline a number of core values that seem to guide my taste in technologies:

  • Understandable

I'm not really a programmer, so in a lot of ways it's not feasible to expect that I'd be able to expand or enhance the tools I use. At the same time, I feel like even for complex tasks, I prefer tools whose workings I have a chance of understanding. I'm not sure if this creates value in the practical sense; however, I tend to think that I'm able to make better use of technologies whose fundamental underpinnings I understand.

  • Openness and Standards

In a way that flows from "understandable," I find open and standardized technologies to be more useful. Not in the sense that open source technology is inherently more useful because source code is available (though sometimes that's true), but more in the sense that software developed in the open tends to have a lot of the features and values that I find important. And of course, knowing that my data and work are stored in a format that isn't locked into a specific vendor allows me to relax a bit about the technology.

  • Simple

Simpler technologies are easier to understand and easier--for someone with my skill set--to customize and adopt. This is a good thing. Fundamentally most of what I do with a computer is pretty simple, so there's not a lot of reason to use overly complicated tools.

  • Task Oriented

I'm a writer. I spend a lot of time on the computer, but nearly everything I do with the computer is related to writing. Taking notes, organizing tasks, reading articles, manipulating texts for publication, communicating with people about various things that I'm working on. The software I use supports this, and the most useful software in my experience focuses on helping me accomplish these tasks. This is opposed to programs that are feature or function oriented. I don't need software that could do a bunch of things that I might need to do, I need tools that do exactly what I need. If they do other additional things, that's nearly irrelevant.

The problem with this is that although these seem like fine ideals and values for software development, they are fundamentally unprofitable. Who makes money selling simple, easy-to-understand software with limited, niche-targeted feature sets? No one. The problem is that this kind of software and technology makes a lot of sense, and so we keep seeing technologies with these values that seem like they could beat the odds and become dominant, and then they don't. Either they drop task orientation for a wider feature set, or something with more money behind it comes along, or the engineers get bored and build something more complex, and the unpopular technologies shrivel up.

What to do about it?

  • Learn more about the technologies you use. Even, and especially, if you're not a programmer.
  • Develop simple tools and share them with your friends.
  • Work toward task oriented computing, and away from feature orientation.