A Life-Changing Laptop Riser

*tl;dr:* I got one of those nifty laptop risers that puts your laptop up closer to eye level, and it has improved pretty much all of my interactions with computers a thousandfold and made it possible for me to use two screens effectively. This post explores that.


One of my coworkers had a laptop stand she wasn’t using and I asked to borrow it for an afternoon, and my neck stopped hurting. I never thought my neck hurt before, but apparently it does.

Or did.

But there’s more: for years now I’ve kept an extra monitor around (and had one at work) but the truth is that I have never really felt like I’ve been able to get the most out of an external monitor.

Somehow, putting my laptop 4 inches in the air was the little change that made everything better. The laptop is generally on the left of the external monitor, and I have task lists, notes buffers, the chat window, and my status logging window on the laptop, and then three windows on the external (emacs buffer, terminal, emacs buffer) on the right. My primary focus centers between the monitors, but probably edges slightly toward the external, most of the time.

Also, I discovered that I--apparently--have a slight processing/attention defect whereby I find it painful and difficult to focus on things that are happening on the right side of the screen for any amount of time. Which is weird because my right eye has always been noticeably stronger. I’ll ponder this more later.

My virtual desktops for email and web browsing are a bit less rigid, but follow the same basic idea. Somehow it seems to work. I've done a little bit of work recently to get the layouts right and to minimize the window-management overhead of most context switching (scripting various transitions, saving layouts, etc.) All in all, things are going great.


It strikes me that I've not posted here even a little about my setup in a while. The truth is that it's not terribly surprising and I've not changed very much recently. I'm back to one laptop, and as anxious as having one laptop makes me sometimes (I fear the lack of redundancy,) not having to keep it synced makes life easier. I've put some time into polishing the little bits of configuration/code that make my computing world go around, but mostly it's pretty good.

It’s nice, and I’d write more about it, but I want to get back to getting things done around here. Exporting and exploring some of this stuff in greater depth is definitely on my list, so hang in there, and if there’s something you particularly want to see, be in touch.

2011 Retrospective

For the most part, I'm quite happy with everything that I was able to accomplish last year. I've moved cities (for the second year in a row) and last year I changed jobs twice: in both cases, I think the current one will stick for a while. And I'm working on other projects with some impressive speed. Last year wasn't great for finishing things, but I guess there's room for improvement this year.

After a fair amount of professional angst I’m finally doing pretty much exactly what I want to be doing: I’m writing a substantial/total revision of a software manual for a company developing an open source database system. I’ll leave you to figure out the details, but it’s great.

A couple of years ago, I said to myself that I wanted to be a "real technical writer," which is to say, work with engineering teams, write documentation and tutorials for a single product or group of products, and operate on a regular release schedule. I've done a great deal of writing for technology companies: from project proposals and journalism, to tutorials and content for distributors, to white papers, marketing, and sales materials. Delightfully, I've managed to get there, and in retrospect it's both somewhat amazing and incredibly delightful.

A while back, I had dinner with a friend who’s been doing the same thing I do for a long time (we know each other through folk dance and singing,) and by comparing our experiences it was great to learn that my experience is quite typical, both in terms of the work I’m doing and the procedural engineering practice frustrations (e.g. “What do you mean you changed the interface without telling me?!?!")


At work we have this thing where we send in an account of what we did during the day so that other people know what we're working on, and so that we can keep our team on the same page. After all, when you're all looking at computer screens all day, and in a few different time zones, it's easy to lose track of what people are working on.

At the bottom of these emails, we're prompted to answer "what are your blockers and impediments?" Often I say something clever like "Compiler issue with Spacetime interface or Library," or something to that effect. It feels like a good description of the last year.

Onward and Upward!

Aeron Woes

Confession: I have an Aeron chair at my desk at home.

I got it in April when I moved to New York City. The only piece of furniture I had that I couldn't move in my (now former) car was my desk chair. I found a good deal on an Aeron chair and rationalized to myself that the cost of the chair was actually about the cost of movers. Savings, right?

It also helped that I was leaving a job where I had an Aeron chair in my office, and I knew that in the short term I would be working from home. While my old desk chair was (and is) quite nice, it's not quite the same. Sit in an Aeron chair for a couple of years, and it's hard to go back. I've sat in other chairs since then, and it's never quite the same.

Having said that, after a cleaning incident today, I would like to collect a few gripes about the Aeron chair for your consideration.

  • The assembly right beneath the chair collects dust and dirt in a proportion that doesn’t seem quite possible. It’s clearly an artifact of the mesh, and likely a commentary on the air circulation of my apartment.

    Regardless, dusting nightmare.

  • The arms scuff and scratch on desks, if the bottom of the desk isn't completely smooth. This isn't an actual problem: the chair still works fine and is as comfortable as ever, but it's annoying.

I've never seriously looked at the underside of a desk before. With every other chair I've either ordered a variant sans arms, or I've taken the arms off as soon as possible.

The Aeron arms are low enough that they’ve never bothered me, so I thought “might as well.” But it’s still annoying.

That’s all.

Documentation Emergence

I stumbled across a link somewhere along the way to a thread about the Pyramid project's documentation planning process. It's neat to see a community coming to what I think is the best possible technical outcome. In the course of this conversation, Iain Duncan said something that I think is worth exploring in a bit more depth. The following is directly from the list, edited only slightly:

I wonder whether some very high level tutorials on getting into Pyramid that look at the different ways you can use it would be useful? I sympathize with Chris and the other documenters because just thinking about this problem is hard: How do you introduce someone to Pyramid easily without putting blinders on them for Pyramid's flexibility? I almost feel like there need to be 2 new kinds of docs:

  • easy to follow beginner docs for whatever the most common full stack scaffold is turning out to be (no idea what this is!)
  • some mile high docs on how you can lay out pyramid apps differently and why you want to be able to do that. For example, I feel like hardly anyone coming to Pyramid from the new docs groks why the zca under the hood is so powerful and how you can tap into it.

Different sets of users have different needs from documentation. I think my "Multi-Audience Documentation" post also addresses this issue.

I don't think there are good answers or processes that always work for documentation projects. The targeted users and audience change a lot depending on the kind of technology at play, and the needs of users (and thus of the documentation) vary with the technical complexity and nature of every project/product. I think, as the above example demonstrates, there's additional complexity for software whose primary users are technically adept (i.e. systems administrators) or even software developers themselves.

The impulse to have "beginner documentation" and "functional documentation" is a very common solution for many products and reflects two main user needs:

  • to understand how to use something. In other words, "getting started" documentation and tutorials.
  • to understand how something works. In other words, the "real" documentation.

I think it's feasible to do both kinds of documentation within a single resource, but the struggle then revolves around making sure that the right kind of users find the content they need. That's a problem of documentation usability and structure. It's not without challenges; let's think about those in the comments.

I also find myself thinking a bit about the differences between web-based documentation resources and conventional manuals in PDF or dead-tree editions. I’m not sure how to resolve these challenges, or even what the right answers are, but I think the questions are very much open.

9 Awesome Git Tricks

I'm sure that most "hacker bloggers" have probably done their own "N Git Tricks" post at this point. But git is one of those programs that has so much functionality, and everyone uses it differently, that there is a never-ending supply of fresh posts on this topic. My use of git changes enough that I could probably write this post annually and come up with a different 9 things. That said, here's the best list right now.


See Staged Differences

The git diff command shows you the difference between your working directory and whatever you've staged (or, if you haven't staged anything, the last commit). That's really useful and you might not use it as much as you should. The --cached option shows you just the differences that you've staged.

This provides a way to preview your own patch, to make sure everything is in order. Crazy useful. See below for the example:

git diff --cached
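For instance, a hypothetical session (the file name is invented for illustration):

git add parser.c      # stage a fix
git diff --cached     # preview exactly what the next commit will contain
git commit -m "fix parser edge case"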

Eliminate Merge Commits

In most cases, if two or more people publish commits to a shared repository, and everyone commits locally more frequently than they publish changes, then when they pull, git has to make "meta commits" that make it possible to view a branching (i.e. "tree-like") commit history in a linear form. This is good for making sure that the tool works, but it's kind of messy, and you get histories with these artificial events in them that you really ought to remove (but no one does.) The "--rebase" option to "git pull" does this automatically, and subtly rewrites your own history in such a way as to remove the need for merge commits. It's way clever and it works. Use the following command:

git pull --rebase

There are caveats:

  • You can't have uncommitted changes in your working copy when you run this command or else it will refuse to run. Make sure everything's committed, or use "git stash" (see the sketch after this list.)
  • Sometimes the output isn’t as clear as you’d want it to be, particularly when things don’t go right. If you don’t feel comfortable rescuing yourself in a hairy git rebase, you might want to avoid this one.
  • If the merge isn't clean, I believe there has to be a merge commit anyway.
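Here's a sketch of the stash dance from the first caveat, for when you have uncommitted local changes:

git stash                # shelve uncommitted changes
git pull --rebase        # replay your local commits on top of upstream
git stash pop            # restore the shelved changes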

Amend the Last Commit

This is a recent one for me.

If you commit something but realize that you forgot to save one file, use the "--amend" switch (as below) and you get to add whatever changes you have staged to the previous commit.

git commit --amend

Note: if you amend a commit that you’ve published, you might have to do a forced update (i.e. git push -f) which can mess with the state of your collaborators and your remote repository.
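A minimal sketch (the forgotten file is hypothetical):

git add forgotten-file.c   # stage the file you missed
git commit --amend         # fold it into the previous commit and revise the message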

Stage all of Current State

I've been using a version of this function for years now as part of my download mail scheme. For some reason in my head, it's called "readd." In any case, the effect of this is simple:

  • If a file is deleted from the working copy of the repository, remove it (git rm) from the next commit.
  • Add all changes in the working copy to the next commit.
git-stage-all(){
   # remove deleted files from the index, so deletions are part of the next commit
   if [ "$(git ls-files -d | wc -l)" -gt "0" ]; then
      git rm --quiet $(git ls-files -d)
   fi
   # then stage every other change in the working copy
   git add .
}

So the truth of the matter is that you probably don't want to be this blasé about commits, but it's a great time saver if you use the rm/mv/cp commands on a git repo and want to commit those changes, or have a lot of small files that you want to process in one way and then snapshot the tree with git.
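Usage, assuming the function is loaded into your shell (file names hypothetical):

mv notes.txt notes-2012.txt   # rearrange files with ordinary shell tools
git-stage-all                 # stage the deletion and the addition together
git commit -m "rename notes file"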

Editor Integration

The chances are that your text editor has some kind of git integration that makes it possible to interact with git without needing to drop into a shell.

If you use something other than emacs I leave this as an exercise for the reader. If you use emacs, get “magit,” possibly from your distribution’s repository, or from the upstream.

As an aside, you probably want to add the following to your .emacs somewhere.

(setq magit-save-some-buffers nil)
(add-hook 'before-save-hook 'delete-trailing-whitespace)

Custom Git Command Aliases

In your user account's "~/.gitconfig" file or in a per-repository ".git/config" file, it's possible to define aliases that add bits of functionality to your git command. This is useful for defining shortcuts, combinations, and for triggering arbitrary scripts. Consider the following:

[alias]
all-push  = "!git push origin master; git push secondary master"
secondary = "!git push secondary master"

Then from the command line, you can use:

git secondary
git all-push

Git Stash

"git stash" takes all of the uncommitted changes in your working copy and stores them away somewhere. This is useful if you want to break apart a number of changes into several commits, or have changes that you don't want to get rid of (i.e. "git reset") but also don't want to commit. "git stash" puts the changes onto the stash and "git stash pop" applies the changes to the current working copy. By default, it operates as a FILO (i.e. "First In, Last Out") stack.

To be honest, I'm not a git stash power user. For me it's just a stack that I put patches on and pull them off later. Apparently it's possible to pop things off the stash in any order you like, and I'm sure I'm missing other subtleties.
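For the record, that out-of-order access looks something like this (the entry index is illustrative):

git stash list               # show every entry on the stash
git stash apply stash@{1}    # apply an arbitrary entry without removing it
git stash drop stash@{1}     # discard it once you're satisfied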

Everyone has room for growth.

Ignore Files

You can add files and directories to a .gitignore file in the top level of your repository, and git will automatically ignore them. One "ignore pattern" per line, and it's possible to use shell-style globbing.

This is great for avoiding accidental commits of temporary files, but I also sometimes add entire sub-directories if I need to nest git repositories within git repositories. Technically, you ought to use git's submodule support for this, but this is easier (see the example after the list.) Here's the list of temporary files that I use:

.DS_Store
*.swp
*~
\#*#
.#*
\#*
*fasl
*aux
*log
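And for the nested-repository case mentioned above, a directory pattern (the path here is hypothetical) works the same way:

vendor/embedded-repo/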

Host Your Own Remotes

I've accidentally said "git" when I meant "github" (or vice versa) once or twice. With github providing public git-hosting services and a great complement of additional tooling, it's easy to forget how easy it is to host your own git repositories.

The problem is that, aside from making your use of git dependent on one vendor, this ignores the "distributed" parts of git and all of the independence and flexibility that comes with that. If you're familiar with how Linux/GNU/Unix works, git hosting is entirely paradigmatic.

Issue the following commands to create a repository:

mkdir -p /srv/git/repo.git
cd /srv/git/repo.git
git init --bare

Edit the .git/config file in your existing repository to include a remote block that resembles the following:

[remote "origin"]
fetch = +refs/heads/*:refs/remotes/origin/*
url = [username]@[hostname]:/srv/git/repo.git

If you already have a remote named origin, change the occurrences of the word origin in the above snippet to the name of your new remote. (In multi-remote situations, I prefer descriptive identifiers like "public" or machines' hostnames.)

Then issue "git push origin master" on the local machine, and you're good. You can use a command in the following form to clone this repository at any time:

git clone [username]@[hostname]:/srv/git/repo.git

Does anyone have git tricks that they’d like to share with the group?

6 Awesome Arch Linux Tricks

A couple of years ago I wrote "Why Arch Linux Rocks" and "Getting the most from Arch Linux." I've made a number of attempts to get more involved in the Arch project and community, but mostly I've been too busy working and using Arch to contribute. Then a few weeks ago, when I needed to do something minor with my system--I forget what--I found myself thinking "this Arch thing is pretty swell, really."

This post is a collection of the clever little things that make Arch great.


abs

I'm using abs as shorthand for all of the things about the package build system that I enjoy.

Arch packages are easy to build for users: you download a few files, read a bash script in the PKGBUILD file, and run the makepkg command. Done. Arch packages are also easy to specify for developers: just define a "build()" function and some variables in the PKGBUILD file.
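To give a sense of how little there is to it, here's a minimal, hypothetical PKGBUILD sketch (the package name, URL, and checksum are invented; makepkg -g generates real checksums):

pkgname=hello-tool
pkgver=1.0
pkgrel=1
pkgdesc="A hypothetical example package"
arch=('i686' 'x86_64')
url="https://example.com/hello-tool"
license=('GPL')
source=("https://example.com/$pkgname-$pkgver.tar.gz")
md5sums=('d41d8cd98f00b204e9800998ecf8427e')  # placeholder; regenerate with makepkg -g

build() {
  cd "$srcdir/$pkgname-$pkgver"
  ./configure --prefix=/usr
  make
}

package() {
  cd "$srcdir/$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}

Run makepkg -si in that directory, and the package builds and installs.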

Arch may not have as many packages as Debian, but I think it’s clear that you don’t need comprehensive package coverage when making packages is trivially easy.

If you use Arch and you don't frequent the AUR, or if you ever find yourself doing "./configure; make; make install", then you're wasting your time or jeopardizing the stability of your server.

yaourt

The default package management tool for Arch Linux, pacman, is a completely sufficient utility. This puts pacman ahead of a number of other similar tools, but to be honest I’m not terribly wild about it. Having said that, I think that yaourt is a great thing. It provides a wrapper around all of pacman’s functionality and adds support for AUR/ABS packages in a completely idiomatic manner. The reduction in cost of installing this software is quite welcome.

It's not "official" or supported, because it's theoretically possible to really screw up your system with yaourt, but if you're cautious, you should be good.

yaourt -G

The main yaourt functions that I use regularly are "-Ss", which provides a search of the AUR, and "-G". The -G option just downloads the tarball with the package specification (e.g. the PKGBUILD and associated files) from the AUR and untars the archive into the current directory.

With that accomplished, it's trivial to build and install the package, and you get to keep a record of the build files for future reference and possible tweaking. Basically, this is the way to take the tedium out of getting packages from the AUR, while giving you more control and oversight of package installation.
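The whole cycle looks something like this (the package name is just an example):

yaourt -Ss some-tool       # search the AUR
yaourt -G some-tool        # fetch and untar the package specification
cd some-tool
makepkg -si                # build, then install the result with pacman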

rc.conf

If you've installed Arch, then you're already familiar with the rc.conf file. In case you didn't catch how it works, rc.conf is a bash script that defines certain global configuration values, which in turn control certain aspects of the boot process and process initialization.

I like that it’s centralized, that you can do all kinds of wild network configuration in the script, and I like that everything is in one place.

netcfg

In point of fact, one of the primary reasons I switched to Arch Linux full time was the network configuration tool, netcfg. Like the rc.conf setup, netcfg works by having network configuration files (profiles) that define a number of variables, which are sourced by netcfg when initiating a network connection.

It's all in bash, of course, and it works incredibly well. I like having network management that's easy to configure and set up in a way that doesn't require a management daemon.
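Profiles live in /etc/network.d/; a wireless one looks something like this (all values invented):

CONNECTION='wireless'
DESCRIPTION='A hypothetical home wireless connection'
INTERFACE='wlan0'
SECURITY='wpa'
ESSID='example-essid'
KEY='example-passphrase'
IP='dhcp'

Then "netcfg profile-name" brings the connection up.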

Init System

Previous points have touched on this, but the "BSD-style" init system is perfect. It works quickly, and boot-ups are stunningly fast: even without an SSD I got to a prompt in less than a minute, and probably not much more than 30 seconds. With an SSD, it's even better. The points that you should know:

  • Daemon control scripts (i.e. init scripts) are located in /etc/rc.d. There's a pretty useful "library" of shell functions in /etc/rc.d/functions and a good template file in /etc/rc.d/skel for use when building your own control scripts. The convention is to have clear and useful output and easy-to-understand scripts, and with the provided material this is pretty easy.

  • In /etc/rc.conf there's a DAEMONS variable that holds an array. Place the names of daemons, corresponding to file names in /etc/rc.d, in this array to start them at boot time. Daemons are started synchronously by default (i.e. the order of items in this array matters, and each control script must exit before the next one runs.) However, if a daemon's name is prefixed with an @ sign, the process is started in the background and the init process moves to the next item in the array without waiting. See the example after this list.

    Start-up dependency issues are yours to address, but with ordering and background start-up this is trivial to manage. Background start-ups lead to fast boot times.
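To make that concrete, a DAEMONS array might look like this in rc.conf (the daemon selection is illustrative):

DAEMONS=(syslog-ng network netfs crond @sshd @ntpd)

Here sshd and ntpd start in the background, while the rest start synchronously, in order.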

Task Updates

Life has been incredibly busy and full lately and that’s been a great thing. I’ve also been focusing my time on big projects recently rather than posting updates here and updating the wiki. And then I have this day job which basically counts as a big project. While I like the opportunity to focus deeply on some subjects, I also miss the blog.

tycho is conflicted about something. Shocking.

In any case, I want to do something useful with this space more regularly. So here I am; expect to see more of me around these parts.

I've been working on a total refresh of my Cyborg Institute project. I want it to be an umbrella for cool projects, nifty examples, great documentation, and smart people¹ working on cool projects. If that's ever going to happen, I need to get something together myself. The first release will contain:

  • A book-like object that provides an introduction to the basic principles of systems administration for developers, "web people," and other people who find themselves in charge of systems without any real introduction to systems administration. (Status: 70% finished, with a couple more sections to draft and some editing left.)
  • A Makefile-based tasklist aggregator, inspired by org-mode but largely tool-agnostic. (Status: 95% finished, with documentation editing and some final testing remaining.)
  • A logging system for writers. I use it daily, and I think it’s a vast improvement over some previous attempts at script writing, and I did a pretty good job of documenting it, but it’s virtually impossible to manage/maintain. Having said that, I always wanted to rewrite it in Python (as a learning exercise,) so that might be a cool next step (Status: Finished save editing and an eventual rewrite.)
  • Emacs and StumpWM config files, packaged as "starter kits" for new users. I have good build processes for both of these. I don't think that I need to document them fully, but I need to write some READMEs. Since there's a lot of redistribution of others' code, I need to figure out the most compatible/appropriate license. (Status: Finished except for the work of a free afternoon.)

Probably, all of these Cyborg Institute projects will get released at about the same time. The blockers will be finishing/editing the book and editing everything else. I might make the release a thing; we'll see.

Other than that, I:

  • Updated /technical-writing/compilation.
  • Finished the first draft of this novel. Editing will commence in June. I’ve also started planning a fiction project, for a draft to begin in the fall?
  • Wrote a few paragraphs on the ISD page, but I'm starting to think that, as my time becomes more limited, the critical-futures wiki project, as such, will probably be the first thing to fall on the floor, unless someone else is really interested in making it a thing.

Onward and Upward!


  1. My intention for the Cyborg Institute has always been (and shall remain) for it to be a sort of virtual think tank for cool projects put up by myself and others. You all, dearest readers, count in this group. ↩︎

Update Pending

It's been a while since I've written one of these "clip posts," but there's no time like the present to get started. I hope everyone out there in internet-land is having a good end of the year. I'll try and get a retrospective/new year's post out in the next few days, and avoid belaboring the point here.

As I said last Friday, it's my intent to focus here on shorter/quicker thoughts, and to focus my free writing/project time on longer projects (fiction, non-fiction, perhaps some programming.) So far so good.

Recent Posts Around Here

Other Cool Things on the Internet