Decreasing Emacs Start Time

One oft-made complaint about emacs is that it takes forever to start up; particularly if you’ve got a lot of packages to load, it can take a few seconds for everything to come up. In a lot of respects this is an old problem that isn’t as relevant on contemporary hardware. Between improvements to emacs and the fact that computers these days are incredibly powerful, it’s just not a major issue.

Having said that, until recently my emacs instances took as much as 7 seconds to start up. I’ve beaten that down to under two seconds, and using emacsclient against an emacs started with “emacs --daemon” makes start up time much more manageable.

Step One: Manage your Display Yourself

I’ve written about this before, but even a 2 second start time would feel absurd if I had to start a new emacs session each time I needed to look into a file. “emacs --daemon” and emacsclient mean that each time you “run” emacs, rather than starting a new emacs instance, it just opens a new frame on the existing instance. Quicker start up times. It means you can open a bunch of buffers in one frame, settle into work on one file, and then open a second frame and edit one of the previous files you opened. Good stuff. The quirk is that if you’ve set up your emacs files to load the configuration for your window displays late in the game, the frames won’t look right. I have a file in my emacs files called gui-init.el, and it looks sort of like this:

(provide 'gui-init)

(defun tychoish-font-small ()
  (interactive)
  (setq default-frame-alist
        '((font-backend . "xft")
          (font . "Inconsolata-08")
          (vertical-scroll-bars . 0)
          (menu-bar-lines . 0)
          (tool-bar-lines . 0)
          (left-fringe . 1)
          (right-fringe . 1)
          (alpha 86 84)))
  (tool-bar-mode -1)
  (scroll-bar-mode -1))

(if (string-match "laptop" system-name)
    (tychoish-font-small))
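For reference, a second variant for a larger display might look like this: a sketch, assuming only the font size meaningfully differs (Inconsolata-14 is an arbitrary choice):

(defun tychoish-font-big ()
  (interactive)
  ;; identical to tychoish-font-small, with a larger face
  (setq default-frame-alist
        '((font-backend . "xft")
          (font . "Inconsolata-14")
          (vertical-scroll-bars . 0)
          (menu-bar-lines . 0)
          (tool-bar-lines . 0)
          (left-fringe . 1)
          (right-fringe . 1)
          (alpha 86 84)))
  (tool-bar-mode -1)
  (scroll-bar-mode -1))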

Modify, of course, the system name and the settings to match your tastes and circumstances. The (if) statement allows you to have a couple of these -font- functions defined and then toggle between them based on which machine you load emacs on. Then in your init file (e.g. .emacs), make sure the first two lines are:

(setq load-path (cons "~/confs/emacs" load-path))
(require 'gui-init)

Establish the load path first so that emacs knows where to look for your required files, and then use the (require) sexp to load in the file. Bingo.

Package Things Yourself

We saw this above, but as much as possible avoid using the load function. When you use load, emacs has to do a fairly expensive file system search and then read and evaluate the whole file, every single time load is called. This takes time. The require function is not without its own cost, but it saves time compared to load, because require checks whether the feature is already loaded and only reads the file if it isn’t. At least in my experience.

In your various .el files, insert the following statement:

(provide 'package)

And then in your .emacs, use the following statement

(require 'package)

To load it in. You’re probably already familiar with using these to configure packages that you download. Better yet, don’t require at all, but use the autoload function. This just creates a little arrow inside of emacs that says “when this function is called, load this file, and hopefully the ‘real’ function by this name will be in there.” This lets you avoid loading packages that you don’t use frequently until you actually need them. The following example provides an autoload for identica-mode:

(autoload 'identica-mode "identica-mode.el" "Mode for Updating Identi.ca Microblog" t)
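Since the autoload stub is marked interactive (the final t in the declaration), you can even bind the not-yet-loaded function to a key, and the file loads on the first press. A sketch, with a keybinding that is purely my own arbitrary choice:

;; the first C-c i loads identica-mode.el, then runs the command
(global-set-key (kbd "C-c i") 'identica-mode)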

Byte-Compile Files as Much as You Can

Contrary to whatever you’ve been told, emacs isn’t a text editor so much as it is a virtual machine with a good deal of low level functions established for interacting with text and textual environments, and some editing-based interfaces. But really, at the core, it’s just a virtual machine that interprets a quirky Lisp dialect.

The execution model is pretty simple and straightforward, particularly to people who are used to Java and Python: emacs reads source files and can compile them part of the way down, into byte-code. Byte-compiled files aren’t the kind of thing you could read on your own or would want to write, but they’re not machine code either: they’re easier for the machine to read and quicker to process, just not human intelligible. When you call a byte-compiled function, the emacs virtual machine interprets the byte-code directly. Usually this all happens so fast that we don’t really notice it.

One tried and true means of speeding up emacs load times is to byte-compile files manually so that emacs doesn’t have to do the work itself when it loads. The emacs-lisp libraries are byte-compiled when emacs is installed, but your files probably aren’t. Generally, only byte-compile files that you’re not going to be editing regularly: byte-compiled files have an .elc extension, and as soon as there’s a .el file and a .elc of the same name in a directory, emacs will load the .elc even if the .el has newer changes. To byte-compile an emacs-lisp file, simply type M-x to get the execute-extended-command prompt, and then run the function byte-compile-file (i.e. “M-x byte-compile-file”). Voila!
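If you keep your configuration in one directory, you can also recompile everything stale in one pass, which helps avoid the out-of-date .elc problem. A minimal sketch, using the ~/confs/emacs directory from earlier:

;; byte-compile every .el under ~/confs/emacs; the 0 means "also
;; compile files that don't have a .elc yet," rather than only
;; refreshing existing .elc files
(byte-recompile-directory "~/confs/emacs" 0)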

I hope these tips help, and lead to a slightly more efficient emacs experience.

Winter Break in Reality

I meant to write a more thorough overview of what I was doing with the “extra time” over the holidays. But I don’t think I had as much extra time at the end of the year as I expected to have. What follows is a brief overview of what I did do, how the new year has begun, and what I’ve been thinking about.

In years past, the end of the year was a time to catch up on lost sleep and on connections that had fallen by the wayside in recent months. I also used the time, in some years, to get a lot done: one year I knit about 10 hats. Another, I wrote about a quarter of a novel on a binge. Some years I just vegged.

This year is different. I haven’t been in school full time for years, and I haven’t received any college credit in a year. I didn’t have significant time off of work. Even so, there’s a way in which the holidays were incredibly relaxing. I still have a bunch of friends who are in the later stages of being students, and there’s something awesome about not being a student. I mean, working a regular job is not all sunshine and rainbows, but it’s pretty swell, and there’s something about the structure of regular work and the mostly even routine that makes it--to my mind--have a greater potential for productivity than “the academic routine.”

In a lot of ways, while I looked forward to the holiday time off, and saved up countless projects for it, not only did I not make “epic headway” on my projects, but I came into the new year feeling sort of behind and tired. Weird. I blame this on the holidays themselves. It’s as if the entire world slows down and everything gets more difficult for a month, as if the planet is slowly careening toward this thing that we don’t really enjoy (if we’re being honest,) but that we pretend we really love.

And there’s no getting away from it. You can’t really opt out of the holidays: even if you’re not particularly festive, you can’t control the celebration of other people. You can’t control the fact that the same four songs play on endless repeat in public spaces, you can’t control that everyone wishes you a good holiday, you can’t control all of the federal holidays, you can’t escape tacky decorations, you just can’t escape. And after like 3 days of this, you get tired.

In previous years, the break, the chance to take time off from the big projects I’d been working on (school, applying to graduate school, etc.) was a great opportunity to get “other things done.” Now, there’s no real break from the daily grind, just modulations and finding good balance. That’s an ongoing project, and one that’s better served by a good routine than by a few extra days off during a stressful time of year. In any case, I’m glad to have gotten back into things, and I look forward to getting things done.

Onward and Upward!

Wikish and the Personal Public Wiki

First, an announcement. I’ve started a tychoish.com wiki. I’m calling it, appropriately enough, “wikish.” You can see a brief introduction and note about my intentions there.

I’ve written a bunch here about the peculiarities of building communities and practices around “the wiki,” as I think it represents a new paradigm for thinking about collaboration and “the text.” I’m, slowly, working on building a community around the cyborg institute wiki, and that’s an ongoing (and fairly specific) project. I’ve also, in much smaller ways, done things with wikis in a couple of other situations: for some group projects I’ve been involved with, a few things for work, and so forth. Perhaps more relevantly, I also used a wiki--much like this one and the others I am responsible for--as the system for storing everything in my brain. From these experiences I’ve come to the following conclusions:

  • In any given wiki, most of the “work,” particularly at the beginning, is accomplished by a very small number of contributors. Potentially only one contributor.

    Critical mass is a difficult thing to manage or predict, and if you start a wiki and you want it to succeed, you have to be ready to do all of the work of getting it to critical mass, which could take a long time. Fair warning.

  • Wikis are incredibly unstructured. It’s easy to impose structure on a new wiki, but in many cases structure will actually hinder growth and development rather than promote it, particularly if the kind of content you hope to develop is genuinely wiki-like. For personal organization tasks, wikis are often not the right answer, even if they appear to work for a long time.

  • Creating a page in a wiki is often better and more effective than writing an email of some length (say, more than 250 words), particularly when more than two people are involved in the correspondence.

  • I need another wiki like I need a hole in the head. But, I like that wikish is both public--you all can watch and contribute to what I’m working on--and focused on what I’m working on. The personal wiki, the one that was just for internal use, suffered from a lack of audience, even an imagined audience.

  • I think putting the novella that I wrote in late 2007 into a wiki and working on revisions and tweaks in that context makes a great deal of sense, and I think wikish feels like the “right place” to put that work.

So that’s the plan. I’ll probably post from time to time about new things that I’m posting there, and I’m perfectly happy to have you all make pages in wikish as you want. I’ve also decided that wikish will require OpenID as the only means of authentication. Just ’cause. See you there!

Independent Web Services

So much of the time, when we talk about network services, technological/software freedom, and this idea of “Cloud” computing, there’s a bunch of debate: “is it a good idea?” “are we giving up too much freedom?” “how does this work out economically?” “what about privacy in the cloud?” While these are important questions, without doubt, I fear that they’re too ethereal, and we end up tussling with a bunch of questions about the future and present of computing that might not be entirely worth debating (at least for the moment.)

Let’s take two assertions, to start:

1. There are some applications--things we do with technology--that work best when they run on high performance servers with consistent connections to the Internet, which we can access regardless of where we are in the world.

2. The only way to have control over your data and computing experience is to be responsible for the administration and maintenance of these services yourself.

Huh?

I mean to say that if we care about our autonomy and our freedom as we use computers in the contemporary age (i.e. in the era of cloud computing), the only thing to be done is to run our own services. If the fact that Google has all of your data scares you: run your own mail server. If it bothers you that all of your microblogging output is on twitter: run your own status.net instance. And so forth.

If we really care about having power over our technological experiences, we must take responsibility for services on the Internet. We can say “wouldn’t it be nice if service providers weren’t such dicks with our data,” or “wouldn’t it be nice if software developers wrote networked software that respected our freedom.” And while it would be nice, these conveniences don’t, in and of themselves, come to pass.

Control over technology and autonomy in the networked context ultimately means that we as users have to:

  • Administer networked servers that provide us with the services that we want and need to do whatever it is that we do.
  • Participate in some exchange for networked services (i.e. pay for service, either in cash or by way of access to data.)

That’s hard! Computers should get easier to use not harder, right?

Leading question there, but…

Yes. One of the leading arguments for consumer-“Cloud Computing” is that by accessing computer services (software) in the browser, developers can provide a more structured and “safe” user experience. At least that’s how I understand it.

While this is a great thing in terms of making computers more accessible, no argument from me, I think we must be careful to avoid confusing “ease of use” with “technologically limiting.” I fervently believe that it’s possible to design powerful software that is also easy to use, and I think that as often as not, a confusing technology is an opportunity to provide a teaching experience as much as it presents an opportunity to improve a given technology.

And if it comes down to it, it doesn’t matter so much who enters the commands into the server: whether you manage it yourself, or whether you’ve hired someone to configure it for you. As I think about it, there’s probably something of a niche here for people to offer management services in a very boutique sort of style.

If we have to contract with people to do our administration for us, is that really a step in the right direction?

I think it is. At the moment we pay for our networked computing services (i.e. gmail) by looking at Google’s ads next to our mail and giving Google access to the aggregate of our mail spools so that they can mine it for whatever data they need. The other price that we pay for these services is “lock in”: once we commit to using a service it’s quite difficult to change to an alternate provider. Since these are real costs, it seems reasonable to expect and want to pay (money) for services that don’t have these costs. Which is where cooperative and boutique-style services make a lot of sense.

I’m not a systems administrator, I just want to do [the thing that I do] and not have to tinker with my computer. This is a lousy idea.

And that’s a lousy question.

To dig in a bit further: I don’t think that doing “the [whatever you do]” would necessarily require a lot of tinkering. It might, of course, and the chances are that we’ve all had to tinker with our technology at one point or another; but in most cases tinkering is an upfront rather than an ongoing cost. Ideally, the other benefit of having full control over your network services is that you end up with services that are more tailored to [the thing you do] than the one-size-fits-all application provided by a third party.

Ok, so what does the stack look like?

I’m not sure. There’s clearly a common set of tasks that we currently perform in the networked context. I’m not sure what the full application list is, exactly, but here’s a beginning of what this “application stack” looks like:

  • An XMPP Server like Prosody.im, with PyAIMt and other conventional IM network transports.
  • Some sort of email service: Citadel springs instantly to mind as an “all in one solution,” but some combination of postfix+procmail+fetchmail+horde/squirrelmail also seems to make sense.
  • A web server, either for hosting personal websites, or with some sort of authentication scheme (digest?) for sharing files with yourself. The truth is that web servers are pretty darn lightweight, and it doesn’t make sense to not install one. Having said that, people see “web hosting,” and probably often think “well, I don’t really need web hosting,” when that’s almost beside the point.
  • SSH and some system for FUSE (or FUSE-like) mount points, so that you can use and store remote files (see the sketch after this list).
  • There’s probably a host of web-based applications that would need to be installed as a matter of course: some sort of web-based RSS reader, wiki-like note taking, bookmarking, some sort of notification service, etc.
  • [your suggestion here.]
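As a concrete sketch of the SSH-plus-FUSE item, assuming the sshfs package is installed and “myserver” is a host you can already SSH into:

# mount a remote home directory over SSH
mkdir -p ~/mnt/myserver
sshfs myserver:/home/you ~/mnt/myserver

# work with the files as if they were local, then unmount
fusermount -u ~/mnt/myserver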

Bash Loops

I was talking with bear, probably two years ago, about programming, and how I’m not really a programmer, but I understand what’s going on when programmers talk, and that any time I got close to code, I sort of kludged things together until they worked. This was probably long enough ago that I was just on the cusp of getting into using Linux full time and becoming a command line guru.

Of shell scripting, he said something that left an impression on me. Something like, “The great thing about the shell is that once you figure out how to do something, you never have to figure out how to do it again, because you just make it into a script and run it again when you need to.”

Which now seems incredibly straightforward, but it blew my mind at the time. The best thing, I think, about using computers in the way that I now tend to, is that any time I run across a task that is in any way repetitive I can save it as a macro (in a non-technical sense of the word macro) and then call it back in the future. Less typing, less reading over help text, more doing things that matter.

One thing that got me for a while was the “loop” in bash. I had a hell of a time making them work. And then a few weekends ago I had a task that required a loop, and I wrote one on the command line, and it worked the first time through. Maybe I’ve learned something after all. For those of you who want to learn how to build a loop in shell scripting, let’s take the following form:

for [variable] in `[command]`; do

   [command using] $[variable];

done

Typically these are all mashed up onto one line, which can be confusing. Conventionally [variable] is just the letter i, for “item.” Note that the semicolons are crucial, and so are the backticks around [command]: they’re what substitute the command’s output into the list the loop iterates over.
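Before the real thing, here’s a trivial, hypothetical instance of the form (the path and glob are arbitrary):

# print the size of each matching file, one loop pass per item
for i in `ls /var/log/*.log`; do
   du -h $i;
done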

So, the loop I wrote: I noticed that there were a number of attempted SSH logins against my server, and while these sorts of SSH probes aren’t a huge risk… better not to risk it. I wanted to add rules to the firewall to block the offending IP addresses. Here’s what I came up with:

for i in `sudo egrep 'Invalid user.*([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}' /var/log/auth.log -o | \
  egrep '([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}' -o | sort | uniq`; do

   sudo iptables -I INPUT -s $i -j DROP;

done

Basically, search the /var/log/auth.log for invalid login attempts, and return only the string captured by the regex. Send this to another egrep command which strips this down to the IP address. Then put the IP addresses in order, and throw out duplicates. Every item in the resulting list is then added to an iptables rule that blocks access. Done. QED.

It’s inefficient, sure, but not that inefficient. And it works. Mostly this just cleans up logs, and I suppose using something like fail2ban would work just as well, but I’m not sure what kind of added security benefit that would offer, and besides it wouldn’t make me feel quite so smart.

I hope this is helpful for you all.

git magic

The following, mostly accurate conversation (apologies for any liberties) should be a parable for the use of the git version control system: As I was about to leave work the other day…

tycho: I pushed today’s work to our repository, have at, I’m headed out.

Coworker A: Awesome. I did too. (pause) wait. It’s screwed up. I deleted a file I didn’t mean to. (pastes link to diff into chatroom)

tycho: Oh, that’s easy to fix. You can reset back to before that commit, add all the changes that are in your repository, except the deletion of the file, commit, and then “git reset --hard” and then publish that.

Coworker A: But your changes…

(as an aside, the original solution should still work, I think)

tycho: Oh. Hrm. Right. Well… Rebase to remove the bad commit and then add the file in question back on top of my changes.

Coworker A: Wait, what?

tycho: (looks at clock). Shit, I’ll do it. (turns to Coworker P), have you pulled recently?

Coworker P: Nope I’ll do that no--

tycho: Don’t!

Coworker P: Alright then!

tycho: (mumbles and works)


At this juncture, I pull out crazy git commands and rebase the repository back a few commits to pull out a single changeset, and then recommit the file with the changes worth saving (which I had copied into ~/ before beginning this operation.)

One thing I’ve learned about using git rebase is that you always have to go back a commit or two further than you think you need to, and pick out the hash of the last good commit. Also, when using “git rebase -i” I find that the commits are listed in the reverse order I expect: oldest first, the opposite of git log.
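For concreteness, here’s a sketch of the sequence (the hash and file name are hypothetical stand-ins):

git log --oneline          # find the hash of the last good commit, say abc1234
git rebase -i abc1234      # in the editor, delete the line for the bad commit
cp ~/saved-changes.txt .   # restore the changes worth keeping
git add saved-changes.txt
git commit -m "recommit the rescued changes"
git push -f                # rewrite the already-published history (see below)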

Another great hint: Issue the following command if you’re an emacs user and you don’t want git to open rebase editing sessions in vim.

git config --global core.editor "emacsclient -t -a emacs"

The one issue here is that I had to rewrite the history of an already published series of changes. This is why I didn’t want P to pull. When I was done, and the state of my repository was as it should have been, my next push (predictably) failed, and needed to be a “git push -f”, which is something of a scary operation. It worked out, and when everyone pulled the next time everything was fine. I knew it would be for P, because their local repository never knew about the first iteration of the history; I was less sure if A’s would adjust so seamlessly, but it did.


tycho: Ok, done. Pull A.

Coworker A: All better! I have no clue what happened.

tycho: It’s cool, don’t sweat it. There’s very little that isn’t fixable. As long as you don’t hard reset changes, and don’t do crazy rebasing stuff, you should be ok.

Coworker A: Like what you just did?

tycho: Pretty much.


Here are the lessons:

  • “git push” and “git pull” would seem like parallel operations, but they’re not. Pull with abandon; it never hurts to pull. But if lots of people are pulling from the same repository and you push a change that you don’t mean to, it’s really hard to take that change back in a logical and productive way. So push with caution.
  • Rebasing is a tool with great power that shouldn’t be feared, even though theoretically you can screw stuff up with it. The git way of “commit your changes early and often” is great, but it can be sort of anti-social, as individual commits become sort of meaningless and change logs get hard to manage. Rebasing, though scary, makes it possible to commit as often as you need to, and then rebase the history into something presentable.
  • Fear forced pushes.
  • Everything in git can be changed, so play with things, and then only publish changes when the repository is in a good working state.

Onward!

Beyond Lists in Org Mode

I’ve written before about this problem in org-mode, the emacs outlining and organization tool that I use, but I’m readdressing it for my benefit as well as yours.

Org mode is an outlining tool, fundamentally. It provides a nice interface for editing and manipulating information arranged in an outline format. Additionally, and this is the part that everyone is drawn to, it makes it very easy to mark and treat arbitrary items in the outline as “actionable”: todo items in need of doing. The brilliance of org-mode, I think, is that you spend all your time building useful outlines, and then it has a tool which takes all this information and compiles it into a useful todo list. How awesome is that? For more information on org-mode, including good demonstrations, check out this video.

The problem is a common and recurring one for me. I basically live in the agenda mode--that compiled list of todo items--and I don’t so much use org-mode for making outlines. Truth is, I have a “Tasks” heading in most org files, and I use the automatic capture option (e.g. org-remember) to stuff little notes into the files, and beyond that, I mostly don’t interact with the outlines themselves.

This isn’t a bad thing, I suppose, but it means that org-mode can’t really help you: you’ve short-circuited org-mode’s ability to improve your organization. Under ideal circumstances, org allows you to embed and extract todo lists from the recorded record of your thought process. If you’re not actively maintaining your thoughts in your org-mode files, it’s just another todo list. That isn’t without merit, but it doesn’t allow the creation of tasks and the flow of a project to spring organically from your thoughts about the project, which is the strength of org mode.

Intermission: I took a break from writing this post to go and reorganize my org files. What follows are a list of “things I’ve been doing wrong” and “things I hope to improve.”

  • I don’t think I had enough org-files. There are lots of approaches to organizing information in org: one giant file, lots of small files for individual projects, a few mid to large files for each “sphere” of your life.

    Initially I took the “medium sized files for major ongoing projects” approach. I had a writing file, and a work file, and files for the fiction projects that I’m working on, and a notes file, and a clippings file, and so forth. Say about 8-10 files. It works, but I think it caused me to use the org-remember functions to just dump things in a “tasks” heading, work from the agenda buffer, and never really touch the files themselves. Org files need to be specific enough that you would want to keep them open in another window while you’re working on a project. I think the point where you know you’ve gone too far is when the first level headings start to replicate organization that might better be handled by the file system.

  • Use the scheduling and deadline functions to filter the todo list into something that is workable (see the sketch after this list). It’s easy to just look at the task list and say “oh no, I don’t want to work on this task right now because it depends on too many things that aren’t done, and there are other things that I could work on.” Scheduling an item, if not setting a deadline, forces me (at least) to think practically about the scope of a given project, what kind of time I’ll have to work on it, and what other tasks depend upon it.

  • When you’re using org to manage huge blocks of text--or any system, really--it can be difficult to work with multiple hierarchies at depths greater than two or three. It just gets hard to manage and keep track of things and figure out where things are, particularly given how useful and prevalent search tools are.

    Having said that, when you’re organizing tasks in org, that limitation--one that I find myself imposing on myself--doesn’t really work terribly well, and leads to files that might actually be more difficult to read and to work out of.

  • I started using the “org-archive-subtree” function for archiving content when I was through with parts of the outline. This sends the archived content to a separate file, and while it works, I find it… less than useful. I’ve since discovered “org-archive-to-archive-sibling”, which is a great deal of awesome, and I recommend using it exclusively.

  • Write content in org mode when possible. Some people (hi Matt!) are keen on using org as a publication system, and I’m not sure if that’s the right answer, but I do think it’s good, during very creative phases of projects, to do the work in org: it facilitates focusing on the current problem (through collapsing the tree to show just what you’re working on,) and also working non-linearly, since you can leave yourself TODO items for later action.

At the same time, if an org file contains planning for more than one project, I find it cumbersome to also draft in that file. So I think: keep smaller, very focused org files, and maybe do drafting in them if appropriate.
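For reference, here’s what scheduled and deadlined items look like in the file itself, as promised above: a minimal sketch, with arbitrary dates. In org, C-c C-s (org-schedule) and C-c C-d (org-deadline) insert these stamps on the heading at point.

* Writing
** TODO Draft the introduction
   SCHEDULED: <2010-01-18 Mon>
** TODO Send the chapter out to readers
   DEADLINE: <2010-01-29 Fri>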

That’s a start at least. I’ve made these changes--which are really quite subtle--and I like the way it feels, but we’ll have to see how things shake down in a few weeks. As much as I want to avoid tinkering with things--because tinkering isn’t the same as getting things done--I really do find it helpful to review processes from time to time and make sure that I’m really working as effectively as I can.

The Two Year Sweater

I finished knitting a sweater. I posted pictures of this to twitter, so I guess in a way, I’ve scooped myself.

But I did it. This sweater has a special story…


I think it’s worth mentioning--if there are any knitters left reading this--that I’m sort of haphazardly working on a collection of knitting patterns and stories/essays. Patterns in the sense that you could get a bunch of yarn and some needles and read, and end up with a sweater that probably looks like the one I have. But not patterns in the sense of step-by-step instructions; rather, stories about my life and the creative process that embed the instructions for knitting a sweater. This post isn’t exactly one of those, though I do hope to get to the sweater in question at some point soon.


But then don’t they all.

I initially called this sweater “Latvian Dreams” and the idea was that I’d blog about the sweater as I knitted it as a sort of adventuresome knit along.

It turned into a nightmare.

And I never did really blog about it in the way that I might have liked to….

I was working in a yarn store at the time, and knitting a lot. I was pretty seriously into blogging at that point, and it seemed like a good idea.

It wasn’t.

In an effort to create a pattern that would be easy for other people to pick up, especially people who might not have been adept at the kind of stranded two color knitting I find so entrancing, the patterns I chose were almost too simple, and I never really got into them.

It’s not, I suppose terribly fair to say that the patterns were too simple…

The patterns were all symmetrical, both top to bottom and side to side. I chose three different patterns, arranged things to be reminiscent of an Aran sweater, and they even synced up with each other so that there was a regular repeat that I thought would help people memorize the stitches.

And the whole thing was sort of like pulling teeth.

I mean, it all worked out in the end, so I suppose I can’t complain about anything but the time that it took to make the blasted thing. It’s a good sweater. Even though I haven’t blocked it yet, I’m struck by how well it works. The yarn is quite fine--hence part of the scope of the project--and it fits really well. I must know a thing or two about how to knit sweaters.

And somehow it’s a bit bittersweet.

In a lot of ways this is the kind of sweater that I don’t really have the attention or focus to even ponder making now. Too much attention even in the planning, not to mention the scope of the follow through. It’s not that I’ve lost the technical ability to knit a sweater like this; it’s as if my life has moved on and took those kinds of sweaters with it, and that’s sort of hard.

No lies, I’m glad to be done. For sure.