City Infrastructure

I’m always interested in how the lessons that people learn in IT trickle down to other kinds of work and problems. This is one of the reasons that I1 am so interested in what developers are interested in: if you want to know what’s happening in the technology space, it’s best to start at the top of the food chain. For this reason, this article from IBM, which addresses the use of IT/data center management tools outside of the data center, was incredibly interesting to me.

When you think about it, it makes sense. IT involves a lot of physical assets, even more virtual assets, and when projects and systems grow big enough, it can be easy to lose track of what you have, much less what state it’s in at any given time. Generalized, this is a prevalent issue in many kinds of complex systems.

As an aside, I’m a little curious about when software that provides asset management and monitoring features will scale down to the personal level. That will be interesting to watch. There are the beginnings of this kind of thing (e.g. iTunes and git-annex), but only the beginnings.

I’m left with the following questions:

  • Obviously moving from managing and monitoring networked devices to managing and monitoring infrastructure objects like water filtration systems, stormwater drainage, the electrical grid, snow removal, etc. presents a serious challenge for the developers of these tools, and this adaptation will likely improve the tools. I’m more interested in how cities improve in this equation, and not simply with regard to operating efficiencies. What do we learn from all this hard data on cities?
  • Will cities actually be able to become more efficient, or will they need to expand to include another layer of management of that management, nullifying the advances? There are also concerns that additional efficiency could push the “carrying capacity” of cities to unsustainable levels.
  • Can the conclusions from automated city-wide reporting lead to advancements in quality of service, if we’re better at identifying defective practices and equipment? In this vein, how cities share data with each other will also be quite interesting.

I’d love to hear from you!


  1. RedMonk also uses a similar argument. ↩︎

Do You Read Philosophy?

“What do you do?”

I’m a technical writer.

“Do you write other stuff?”

A bit, sometimes.

“Poetry?”

No poetry. I laugh.

“Fiction?”

Yeah, some.

“Do you read philosophy too?”

A bit.

“Oh, good! Materialist or Idealist?”

Materialist.

“Who do you read?”

I’m a bit of an unreformed Deleuzian.

“Deleuze wasn’t a materialist.”

Yeah, but he wanted to be. Really bad.

“There’s that. I’ve been reading Hegel recently.”

Oh, really.

Professional Content Generation

I’m a writer. I spend most of my day sitting in front of a computer, with an open text editing program, and I write things that hopefully--after a bit of editorial work--will be useful, enlightening, and/or entertaining as appropriate. I’ve been doing this since I was a teenager, and frankly it never seemed like a particularly notable skill. The fact that I came of age with the Internet, as a member of its native participant-driven textual culture, had a profound effect, without question. This is a difficult lineage to manage and integrate.

Obviously I’m conflicted: on the one hand, I think that the Internet has been great for allowing people like me to figure out how to write. I am forever thankful for the opportunities and conversations that the Internet has provided for me as a writer. At the same time, the Internet, and particularly the emergence of “Social Media” as a phenomenon, complicates what I do and how my work is valued.

Let’s be totally clear. I’m not exactly saying “Dear Internet, Leave content generation to the professionals,” but rather something closer to “Dear Internet, Let’s not distribute the responsibility of content generation too thinly, and have it come back to bite us in the ass.” Let me elaborate on these fears and concerns a bit:

I’m afraid that as it becomes easier and easier to generate content, more people will start creating things, there will be more and more text, and that will lead to all sorts of market-related problems, in a vicious cycle. If we get too used to crowdsourcing content, it’s not clear to me that the idea of “paying writers for their efforts” will endure. Furthermore, I worry that as the amount of content grows, it will be harder for new work to get exposure, and the general audience will become so fragmented that it will be increasingly difficult to generate income from such niche groups.

Some of these fears are probably realistic: figuring out how we will need to work in order to do our jobs in an uncertain future is always difficult. Some are not: writing has never been a particularly profitable or economically viable project, and capturing an audience is arguably easier in the networked era.

The answer to these questions is, universally: we’ll have to wait and see, and in the meantime, experiment with different and possibly better ways of working. My apologies for this rip-off, but it’s better to live and work as if we’re living in the early days of an exciting new era, rather than the dying days of a faltering regime.

Perhaps the more interesting implication of this doesn’t stem from asking “how will today’s (and yesterday’s) writers survive in the forthcoming age,” but rather “how do these changes affect writing itself.” If I don’t have an answer to the economic question, I definitely don’t have an answer to the literary question. I’m hoping some of you do.


As an interesting peek behind the curtain, this post was mostly inspired as a reaction to this piece of popular criticism that drove me batty. It’s not a bad piece, and I think my objections are largely style- and form-related rather than political. Perhaps I’m responding to the tropes of fan writing, and in retrospect my critique of this piece isn’t particularly relevant here. But that article might provide good fodder for discussion. I look forward to your thoughts in comments or on a wiki page.

Onward and Upward!

News, Fit to Sing

It’s sometimes easy to forget all of the little things that I do during the week, and how they add up to something of note. In the moment--any moment--most things seem much smaller and much less important than they do with a little bit of perspective, and when viewed out of context with coordinating achievements. So here are the highlights from last week:

I gave up on the “having two sites to maintain thing,” and have merged Critical Futures into the tychoish.com blog/wiki. I really like this, and it gave me some time to get elbows deep into the wiki system, which means things work better, the display is a bit cleaner, my life is easier, and I’m very happy with it. Also, as part of this process I’ve been revising the index page, and I think it is in a state that I’m really pleased with.

Some dancing friends on Facebook said “wouldn’t it be nice if there was a wiki for contra dancers.” Now there is. Contra Dance Wiki is there for you all if you want it. I’ve done some preparatory work on the index page, and I’ll continue to add things as I can.

The work I’ve been doing in the last few weeks with ikiwiki, both for tychoish and now for the CDW, means that I’m really close to being able to share all of the assorted templates and configuration files I use to make this work. It’s all in the git repository for tychoish, but I’m going to pull together a more generic version so people can get started more easily. It might also make sense to write a deployment script of sorts. We’ll see.

I meant to post this on Friday, but posted my review of Maple Morris instead, because I didn’t want that to linger. In any case, I had a brilliant weekend singing (and dancing) in Western Massachusetts. It has, however, left me a bit under the weather, so I’m spending a day recovering and doing some writing. Good stuff there. I’ll write about WMSHC soon.

Maple Morris Review

A few weekends ago, I went to a very weird get-together. The week before, I had gone to upstate New York for a festival that drew 5,000 attendees. Then I went to Washington, DC to dance with 24 or 25 other people from across the country and Canada. I think of it as the “my generation” Morris dance gathering.

This May marks my 10th anniversary of being a Morris dancer. I’ve spent most of that time easily twenty years younger than the next-youngest Morris dancer on my team. Morris isn’t aging quite that fast, but there are a lot of quirky things that happen given the small sample sizes.

I’ve been involved in the folk world for years: lots of folk dance and traditional music. I’m so accustomed to this that I’m not really sure what people who aren’t involved do with their time. When I think about other communities, I always reach back to experiences and phenomena that I’ve seen in the folk world.

While I grew up in the folk dance world, I’m coming to terms with a couple of things: First, folk communities are different in different parts of the country/world, and the community in Boston (or New York, or Philadelphia) is very different from what I grew up with. Second, I’m realizing that while I’m “a young person” who grew up with music and dance, I’m no longer “a folk dance kid,” (and that’s a nifty thing to experience.)

Given this, I’ve had the following Morris related thoughts, that seem worth recording:

  • It’s really nice to be part of a single age cohort in this activity, mostly because I’ve not had significant opportunity to dance with people in my general age group.
  • I quite enjoy being more than just a familiar face in a contra dance line, or someone that you see across the square when singing sacred harp.
  • These weekends always challenge me to be a better dancer, and make me realize that I need to focus and work on certain aspects of my dancing.
  • While it’s not ideal to have gatherings capped at really small numbers, having a small group means a stronger connection between everyone, and it means that a few people can do the organizing work without much institutional/organizational overhead. That’s really cool.
  • Once again, my motto is “you don’t have to do everything,” which is particularly difficult around Morris. But I think by avoiding overdoing it I’m able to avoid injury and have greater success at the things I do try. Can’t argue with that. I’m young and I hope to dance for many years to come, and there’ll be time enough for Sherborne then.

Dance Flurry, Review

I went to the Dance Flurry a couple of weeks ago (!) and I wanted to write a few notes here about the experience, and a little bit of reflection. I hope you’ll spare me the indulgence.

A year ago, I was pretty new to the East Coast: I didn’t really know people, and while I’d been dancing for a while and wasn’t a bad dancer by any means, I wasn’t quite comfortable in my own skin at big dance events.

This year, many things were different. I’d been to a number of other important regional events: I knew more people, I knew the bands and the callers, I knew the venue, and I knew what to expect.

It was great. I got to dance with friends that I hadn’t seen in months. I got to dance to great bands. The callers were top notch, and there was never a shortage of great ways to spend my time.

My motto for the weekend was “you don’t have to do everything.” Which meant not staying until 1am, just because there was a dance going on; or not showing up at 9am because that’s when things started. Prevailing sanity is an amazing thing.

It meant that I didn’t hurt myself; I didn’t come back from the vacation more tired than I was when I left; and I still had a great time.

It’s amazing.

Git Sync

With the new laptop, I once again have more than one computer, and with it a need to synchronize the current state of my work. This is a crucial function, and pretty difficult to do right: I’ve had multiple systems before and condensed everything into one laptop because I didn’t want to deal with the headache of sitting down in front of a computer and cursing the fact that the one thing that I needed to work on was stuck somewhere that I couldn’t get to.

I store most of my files and projects in git version control repositories for a number of reasons, though the fact that this enables a pretty natural backup and synchronization system was a significant inspiration for working this way. But capability is more than a stone’s throw from a working implementation. The last time I tried to manage using more than one computer on a regular basis, I thought “oh, it’s all in git, I’ll just push and pull those repositories around and it’ll be dandy.” It wasn’t. The problem is that if you keep different projects in different repositories (as you should, when using git,) remembering to commit and push every repository before moving between computers is a headache.

In the end, synchronization is a rote task, and it seems like the kind of thing that’s worth automating. There are a number of different approaches to this; what I’ve done is a very basic bash/zsh script1 that takes care of the whole syncing process. I call it “git sync”; you may use all or some of this as you see fit.

git sync lib

The first piece of the puzzle is a few variables and functions. I decided to store this in multiple files for two reasons: First, I wanted access to the plain functions in the shell. Second, I wanted the ability to roll per-machine configurations using the components described within. Consider the source.

The only really complex assumption here is that, given a number of git repositories, there are: some that you want to commit and publish changes to regularly and automatically, some that you want to fetch new updates for regularly but don’t want to commit, and a bunch that you want to monitor but probably want to interact with manually. In my case, I want to monitor a large list of repositories, automatically fetch changes from a subset of those repositories, and automatically publish changes to a subset of the previous set.

Insert the following line into your .zshrc:

source /path/to/git-sync-lib

Then configure the beginning of the git-sync-lib file with references to your git repositories. When complete, you will have access to the following functions in your shell: gss (provides a system-wide git status,) autoci (automatically pulls new content and commits local changes to the appropriate repository,) and syncup (pulls new content from the repositories and publishes any committed changes.)
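
For a sense of what that configuration amounts to, it boils down to a few arrays of repository paths, along the lines of the sketch below. Only force_sync_repo appears in the loop later in this post; the other variable names, and all of the paths, are illustrative, so match them to whatever your copy of git-sync-lib actually uses:

# illustrative repository lists; only force_sync_repo is referenced below
repo_list=(~/wiki ~/org ~/code/scripts ~/work)   # repositories that gss reports on
fetch_repo=(~/wiki ~/org ~/work)                 # repositories that autoci fetches automatically
force_sync_repo=(~/wiki ~/org)                   # repositories that syncup pulls and pushes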

syncup and autoci do their work in a pretty straightforward for [...] done loop, which is great, unless you need some repositories to only publish in certain situations (e.g. when you’re connected to a specific VPN.) You can modify this section to account for that case; it takes the following basic form:

syncup(){
   # remember the working directory so we can return to it when done
   CURRENT=`pwd`

   # pull and push every repository slated for automatic publication
   for repo in $force_sync_repo; do
       cd $repo

       echo -- syncing $repo
       git pull -q
       git push -q
   done

   cd $CURRENT
}

Simply insert some logic into the `for` loop, like so:

for repo in $force_sync_repo; do
   cd $repo;
   if [ $repo = ~/work ]; then
      if [ `netcfg current | grep -c "vpn"` = "1" ]; then
          echo -- syncing $repo on work vpn
          git pull -q
          git push -q dev internal
      else
         echo -- $repo skipped because lacking vpn connection
      fi
   elif [ $repo = ~/personal ]; then
       if [ `netcfg current | grep -c "homevpn"` = "1" ]; then
          echo -- syncing $repo with homevpn
          git pull -q
          git push -q
       else
          echo -- $repo skipped because lacking homevpn connection
       fi
   else
      echo -- syncing $repo
      git pull -q
      git push -q
   fi
done

Basically, for two repositories we test to make sure that a particular network profile is connected before operating on them. All other operations are as in the first example. I use the output of “netcfg current”; netcfg is the Arch Linux network configuration tool that I use. You will need a different test if you are not using Arch Linux.
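
If you’re not on Arch Linux, any test that can detect your VPN will do. As a sketch, you could check whether the VPN’s network interface exists; the interface name tun0 is an assumption, so substitute whatever your VPN actually creates:

if ip link show tun0 > /dev/null 2>&1; then
   # the VPN interface is present, so it is safe to sync this repository
   echo -- syncing $repo on work vpn
   git pull -q
   git push -q dev internal
else
   echo -- $repo skipped because lacking vpn connection
fi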

git sync

You can use the functions provided by the “library” and skip this part if you don’t need to automate your backup and syncing process. The whole point of this project was specifically to automate this kind of thing, so this--though short--is kind of the cool part. You can download git sync here.

Put this script in your $PATH (e.g. “/usr/bin” or “/usr/local/bin”; I keep a “~/bin” directory for personal scripts like this in my path, and you might enjoy doing the same.) You will then have access to the following commands at any shell prompt:

git-sync backup
git-sync half
git-sync full

Backup calls a function in git-sync to back up some site-specific files (e.g. crontabs) to a git repository. The half sync only downloads new changes, and is meant to run silently on a regular interval: I cron this every five minutes. The full sync runs the backup, commits local changes, downloads new changes, and sends me an xmpp message to log when it finishes successfully: I run this a couple of times an hour. But there’s an exception: if the laptop isn’t connected to a wifi or ethernet network, it skips the sync operations. If you’re offline, you’re not syncing. If you’re connected over 3G tethering, you’re not syncing.
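
For reference, the crontab entries for that schedule look more or less like the following; the ~/bin path is an assumption, so point them at wherever you actually put the script:

# half sync every five minutes, full sync twice an hour
*/5 * * * * ~/bin/git-sync half
15,45 * * * * ~/bin/git-sync full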

That’s it! Feedback is of course welcome, and if anyone wants these files in their own git repository so they can modify and hack them up, I’m more than willing to provide that; just ask.

Onward and Upward!


  1. I wrote this as a bash script, but discovered that something about the way I was handling arrays was apparently a zsh-ism. Not a big deal for me, because I use zsh on all my machines, but if you don’t use zsh or don’t have it installed, you’ll need to modify the array handling or install zsh (which you might enjoy anyway.) ↩︎

9 Awesome SSH Tricks

Sorry for the lame title. I was thinking the other day about how awesome SSH is, and how it’s probably one of the most crucial pieces of technology that I use every single day. Here’s a list of nine things that I think are particularly awesome and perhaps a bit off the beaten path.

Update: (2011-09-19) There are some user-submitted ssh-tricks on the wiki now! Please feel free to add your favorites. Also, the Hacker News thread might be helpful for some.

SSH Config

I used SSH regularly for years before I learned about the config file that you can create at ~/.ssh/config to tell ssh how you want it to behave.

Consider the following configuration example:

Host example.com *.example.net
    User root

Host dev.example.net
    User shared
    Port 220

Host test.example.com
    User root
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no

Host t
    HostName test.example.org

Host *
    Compression yes
    CompressionLevel 7
    Cipher blowfish
    ServerAliveInterval 600
    ControlMaster auto
    ControlPath /tmp/ssh-%r@%h:%p

I’ll cover some of the settings in the “Host *” block, which apply to all outgoing ssh connections, in other items in this post, but basically you can use this file to create shortcuts for the ssh command, to control what username is used to connect to a given host, and to set the port number if you need to connect to an ssh daemon running on a non-standard port. See “man ssh_config” for more information.
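
For instance, with the configuration above, the “Host t” block makes the following two commands equivalent, and connections to dev.example.net automatically pick up the shared username and port 220:

ssh t
ssh test.example.org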

Control Master/Control Path

This is probably the coolest thing that I know about in SSH. Set “ControlMaster” and “ControlPath” as above in the ssh configuration. Any time you connect to a host that matches that configuration, a “master session” is created. Subsequent connections to the same host will then reuse the master connection rather than renegotiate and create a separate connection. The result is greater speed and less overhead.

This can cause problems if you want to do port forwarding, as that must be configured on the original (master) connection; otherwise it won’t work.
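
If you need to check whether a master connection is already running, or shut one down so you can re-establish it with the forwarding you need, OpenSSH can send control commands over the same socket (the exit command requires a reasonably recent OpenSSH):

ssh -O check user@example.com   # reports whether a master connection exists
ssh -O exit user@example.com    # asks the running master connection to close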

SSH Keys

While ControlMaster/ControlPath is the coolest thing you can do with SSH, key-based authentication is probably my favorite. Basically, rather than force users to authenticate with passwords, you can use a secure cryptographic method to gain (and grant) access to a system. Deposit a public key on servers far and wide, while keeping a “private” key secure on your local machine. And it just works.

You can generate multiple keys, to make it more difficult for an intruder to gain access to multiple machines by breaching a specific key or machine. You can specify which keys and key files are used to connect to specific hosts in the ssh config file (see above.) Keys can also be (optionally) encrypted locally with a passphrase, for additional security. Once I understood how secure the system is (or can be), I found myself thinking “I wish you could use this for more than just SSH.”
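
As a sketch, generating an additional key, copying it to a host, and telling ssh to use it for that host looks something like the following; the file and host names are examples:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_example
ssh-copy-id -i ~/.ssh/id_rsa_example.pub user@example.com

# then, in ~/.ssh/config:
Host example.com
    IdentityFile ~/.ssh/id_rsa_example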

SSH Agent

Most people start using SSH keys because they’re easier and it means that you don’t have to enter a password every time you want to connect to a host. But the truth is that in most cases you don’t want unencrypted private keys with meaningful access to systems, because once someone has a copy of the private key they have full access to the system. That’s not good.

But the truth is that typing in passphrases is a pain, so there’s a solution: the ssh-agent. Basically, you authenticate to the ssh-agent locally, which decrypts the key and does some magic, so that whenever the key is needed for connecting to a host you don’t have to enter your passphrase. The ssh-agent manages the local encryption on your key for the current session.
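
If your environment doesn’t start an agent for you, the manual version is roughly this (the key path is an example):

eval `ssh-agent`         # start an agent and point this shell at it
ssh-add ~/.ssh/id_rsa    # enter the passphrase once; the agent holds the decrypted key
ssh-add -l               # list the keys the agent currently holds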

SSH Reagent

I’m not sure where I found this amazing little function, but it’s great. Typically, ssh-agents are attached to the current session, like the window manager, so that when the window manager dies, the ssh-agent loses the decrypted bits from your ssh key. That’s nice, but it also means that processes that exist outside of your window manager’s session (e.g. screen sessions) lose the ssh-agent and get trapped without access to one, so you end up having to restart would-be-persistent processes or run a large number of ssh-agents, which is not ideal.

Enter “ssh-reagent.” Stick this in your shell configuration (e.g. ~/.bashrc or ~/.zshrc) and run ssh-reagent whenever you have an agent session running and a terminal that can’t see it.

ssh-reagent () {
  # look through agent sockets left by other sessions and adopt the
  # first one that still has keys loaded
  for agent in /tmp/ssh-*/agent.*; do
      export SSH_AUTH_SOCK=$agent
      if ssh-add -l > /dev/null 2>&1; then
         echo Found working SSH Agent:
         ssh-add -l
         return
      fi
  done
  echo Cannot find ssh agent - maybe you should reconnect and forward it?
}

It’s magic.

SSHFS and SFTP

Typically we think of ssh as a way to run a command or get a prompt on a remote machine. But SSH can do a lot more than that, and the OpenSSH package (probably the most popular implementation of SSH these days) has a lot of features that go beyond just “shell” access. Here are two cool ones:

SSHFS uses FUSE to create a mountable file system out of the files located on a remote system, over SSH. It’s not always very fast, but it’s simple and works great for quick operations, particularly on local networks where the speed issue is much less relevant.
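
A minimal SSHFS session, assuming the sshfs package is installed and using example host and paths, looks like this:

mkdir -p ~/mnt/example
sshfs user@example.com:/srv/files ~/mnt/example
# work on the files as if they were local, then unmount:
fusermount -u ~/mnt/example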

SFTP replaces FTP (which is plagued by security problems) with a similar tool for transferring files between two systems that’s secure (because it works over SSH) and just as easy to use. In fact, most recent OpenSSH daemons provide SFTP access by default.
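
Grabbing a single file with sftp is as simple as the following (the host and path are, again, examples):

sftp user@example.com:/srv/files/report.txt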

There’s more, like a full VPN solution in recent versions, secure remote file copy, port forwarding, and the list could go on.

SSH Tunnels

SSH includes the ability to connect a port on your local system to a port on a remote system, so that to applications on your local system the local port looks like a normal local port, but when accessed the service running on the remote machine responds. All traffic is really sent over ssh.

I set up an SSH tunnel from my local system to the outgoing mail server on my server. I tell my mail client to send mail to the localhost port (without mail server authentication!), and it magically goes to my personal mail relay, encrypted over ssh. The applications of this are nearly endless.
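
A sketch of that setup, with placeholder names: forward a local port to the SMTP port on the mail machine, then point the mail client at localhost:

# local port 2525 now reaches port 25 on mail.example.com, over ssh
ssh -f -N -L 2525:localhost:25 user@mail.example.com

The mail client is then configured to send through localhost on port 2525.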

Keep Alive Packets

The problem: unless you’re doing something with SSH, it doesn’t send any packets, and as a result the connections can be pretty resilient to network disturbances. That’s not a problem in itself, but it does mean that unless you’re actively using an SSH session, it can go silent, causing your local network’s NAT to eat a connection that it thinks has died but hasn’t. The solution is to set “ServerAliveInterval [seconds]” in the SSH configuration, so that your ssh client sends a “dummy packet” at a regular interval and the router thinks the connection is active even when it’s particularly quiet. It’s good stuff.
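
In config terms this is just a line under the relevant Host block; the example configuration above uses 600 seconds, and something shorter also works. The values below are arbitrary, and ServerAliveCountMax (the number of unanswered keep-alives ssh tolerates before disconnecting) is an optional companion setting:

Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3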

/dev/null .known_hosts

A lot of what I do in my day job involves deploying new systems, testing something out, and then destroying that installation and starting over in the same virtual machine. So my “test rigs” have a few IP addresses, I can’t readily deploy keys on these hosts, and every time I redeploy, SSH’s host-key checking tells me that a different system is responding for the host. In most cases that’s the symptom of some sort of security problem, and knowing about it is a good thing, but in cases like this it’s just very annoying.

These configuration values tell your SSH session to save host keys to /dev/null (i.e. drop them on the floor) and not to ask you to verify an unknown host:

UserKnownHostsFile /dev/null
StrictHostKeyChecking no

This probably only saves me a little annoyance and a minute or two every day, but it’s totally worth it. Don’t set these values for hosts that you actually care about.
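
One way to honor that warning is to scope these options to a Host pattern that only matches your test rigs; the address range here is, of course, an example:

Host 10.0.3.*
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no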


I’m sure there are other awesome things you can do with ssh, and I’d love to hear more. Onward and Upward!