open source competition

I’ve been flitting about the realm of political economics, technological infrastructure, and cyborg-related topics for a number of weeks, maybe months, and I haven’t written very much about open source. This post is hopefully a bit of a return to this kind of topic, mostly because I’ve been staring at a blog post for weeks, and I finally have something nearly cogent to say about an article that kind of pissed me off. Here goes.

The article in question seeks to inform would-be software entrepreneurs how they ought to compete against open source software, and to my mind makes a huge mess of the whole debate. Let’s do some more in-depth analysis.

“Open source is only cheap if you don’t care about time,” is an interesting argument that sort of addresses the constant complaint that open source is “fussy.” Which it is, right? Right. One of the best open source business models is to provide services around open-source that make it less fussy. Also I think Free Software is often “a work in progress,” and is thus only occasionally “fully polished,” and is often best thought of as a base component that can be used to build something that’s fully customized to a specific contextual set of requirements. That’s part of the value and importance of free software.

I don’t think we can have our cake and eat it too on this one, (the cake is a lie!) and in a lot of ways I think this is really a positive attribute of free software.

The complaints regarding open source software seem to boil down to: “open source software doesn’t come with support services, and installation polish” (we’re working on it, but this is a commercial opportunity to provide support around open source products in general.)

So to consolidate the argument, the author seems to suggest that: “in order to beat open source software, which sucks because it’s not polished enough and doesn’t have support, I’m going to write a totally different code base, that I’ll then have to polish and support.”

My only real response is: “Have fun with that.”


Before I lay this to rest, I want to give potential “Commercial Software Vendors” (proprietary software vendors?) the following qualifications on the advice in the original article.

1. Save your users time: Sound advice. Though I think the best way to save users time is probably to integrate your product with other related tools. Make your product usable and valuable. Provide support, and take advantage of skilled interaction designers to provide intuitive interfaces. Don’t, however, treat your users like idiots, or assume that because your product might have a learning curve it’s flawed. The best software not only helps us solve the problems we know we have, but also solves problems we didn’t know we had, and in the process creates tremendous value. Don’t be afraid to innovate.

Also, *save yourself time*: you can create more value for your customers by not reinventing the proverbial wheel. Use open source software to bootstrap your process, and if the value you create is (as it always is) in support and polish, you can add that to open source just as well as you can to your own software.

2. Market hard: this might work, but it’s all hit and miss. Open source might not be able to advertise, or send people on sales calls to enterprises, but open source has communities that support it, including communities of people who are often pretty involved in IT departments. Not always, mind you, but sometimes.

If you’re a “Commercial Software Vendor” you’re going to have a hell of a time building a community around your product. True fact. And word of mouth, which is the most effective driver of sales, is killer hard to generate without a community.

3. Focus on features for people who are likely to buy your product: a great suggestion, and really, sort of the point of commercial software, as far as I can see. Custom development and consulting around open source, if you can provide it, achieves the same goal. At the same time, I think a lot of open source enterprise software exists and succeeds on the absence of licensing fees, and so I think would-be software vendors should be really wary of thinking of the enterprise as a “cash cow,” particularly in the long run.

So in summary:

  • Create value, real enduring value. Not ephemeral profitability, or in-the-moment utility.
  • Be honest about what your business/endeavor really centers on, and do that as best you can.
  • Understand the social dynamics of open source, not simply the technological constraints of the user experience.

And…. done.

fact files

I wrote a while back about wanting to develop a “fact file,” or some way of creating a database of notes and clippings that wouldn’t (need to be) project-specific research, but that I would nonetheless like to keep track of. Part of the notion was that I was gathering lots of information and reading lots of stuff, but didn’t really have any good way of retaining it beyond whatever I just happened to remember.

I should note that this post is very org-mode focused, and I’ve not subtitled very much. You’ve been warned.

Ultimately I developed an org-remember template, and I documented that in the post linked to above.

Since then, however, I’ve changed things a bit, and I wanted to publish that updated template.

(setq org-remember-templates '(
  ("annotations" ?a
    "* %^{Title} %^g \n  :PROPERTIES:\n  :date: %^t\n  :cite-key: %^{cite-key}\n  :link: %^{link}\n  :END:\n\n %?"
    "~/org/data.org" "Annotations and Notes")
  ("web-clippings" ?w
    "* %^{Title} %^g \n  :PROPERTIES:\n  :date: %^t\n  :link: %^{link}\n  :END:\n\n %x %?"
    "~/org/data.org" "Web Clippings")
  ("fact-file" ?f
    "* %^{Title} %^g \n  :PROPERTIES:\n  :date: %^t\n  :link: %^{link}\n  :END:\n\n %x %?"
    "~/org/data.org" "Fact File")
  ))
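For illustration, after filling in the prompts, a fact-file capture produces an entry in data.org shaped like this (the title, tag, date, and link here are made-up example values):

```
* An Example Fact                                               :research:
  :PROPERTIES:
  :date: <2009-09-07 Mon>
  :link: http://example.com/article
  :END:

  The clipped text from the X11 clipboard lands here, with the cursor after it.
```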

What this does reflects something I noticed in the way I was using the original implementation. I was collecting quotes from a variety of both Internet and published sources. Not everything had a cite-key (a key that references the entry in my bibtex database), and I found that I also wanted to save copies of blog posts and other snippets that I found useful and interesting, but that still didn’t seem to qualify as a “fact file entry.”

So now there are three templates:

  • First, annotations of published work, all cross referenced against cite-keys in the bibtex database.
  • Second, web clippings, this is where I put blog posts, and other articles which I think will be interesting to revisit and important to archive independently for offline/later reading. Often if I respond to a blogpost on this blog, the chances are that post has made it into this section of the file.
  • Third, miscellaneous facts, these are just quotes, in general. Interesting facts that I pull from wikipedia/wherever, but nothing teleological, particularly. It’s good to have a place to collect unstructured information, and I’ve found the collection of information in this section of the file to be quite useful.

General features:

  • Whatever text I select (and therefore add to the X11 clipboard) is automatically inserted into the remember buffer (via the %x escape).
  • I make copious use of tags and tag completion, which makes it easier to use the “sparse tree by tag” functionality in org-mode to display just the headings which are tagged in a certain way, so that I can see related content easily. Tags include both subject and project-related information for super-cool filtering.
  • All “entries” exist on the second level of the file. I’m often sensitive to using too much hierarchy, at the expense of clarity or ease of searching. This seems to be particularly the case in org-mode, given the power of sparse trees for filtering content.
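As a sketch of the “sparse tree by tag” workflow (the keybinding here is hypothetical, chosen to fit the rest of my config; org-sparse-tree is the standard org-mode command, normally reachable with C-c /):

```lisp
;; Hypothetical binding: prompt for a sparse-tree view (by tag, regexp,
;; property, etc.) so only matching headings in the file stay visible.
(global-set-key (kbd "C-c o s") 'org-sparse-tree)
```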

So that’s what I’m doing. As always, alternate solutions and feedback are more than welcome.

writing like a programmer

I’m unique among my coworkers, in that I’m not a developer/programmer. This is a good thing, after all, because I’m the writer and not a programmer; but as a “workflow” guy and a student of software development, one thing that I’ve been particularly struck by since taking this job is how well I’ve been able to collaborate with coworkers who come from a completely different background/field, and furthermore how helpful this has been to my work and development as a writer. This post is going to contain some of those lessons and experiences.

For starters, we’re all pretty big fans of git. As git is one of the most interesting and productive technologies that I use regularly, this is really nice. Not only does everyone live in plain text format, but they mostly use the same version control system I do. I’ve definitely had jobs and collaborations in the past few years (since I made the transition to pure text) where I’ve had to deal with .doc files, so this is a welcome change.

I’ve long thought that working in plain text format has been a really good thing for me as a writer. In a text editor there’s only you and the text. All of the bullshit about styles and margins and the like that you’re forced to contend with in “Office” software is a distraction, and so by interacting with exactly (and only) the text I write in the file, I’ve been able to concentrate on the production of text, leaving only “worthwhile distractions” in the writing process.

Working with programmers makes this “living in plain text” thing I do not seem quite so weird, and that’s a good thing for the collaboration, but--for me, at least--it represents an old lesson about writing: use tools that you’re very comfortable with, and deal with output/production only when you’re ready for it. Good lesson. I might have taken it to the extreme with the whole emacs thing, but it works for me, and I’m very happy with it.

But, using git, with other people has been a great lesson, and a great experience, and I’m getting the opportunity to use git in new ways, which have been instructive for me--both in terms of the technology, but also in terms of my writing process.

For instance, whenever I do a git pull (which asks the server for any new published changes and then merges them, often without help from me, with my working copy) and see that a coworker has changed something, I tend to inspect the differences (i.e. diffs) contained in the pull. Each commit (a set of changes; indeed each object, but that’s tangential) in git is assigned a unique identifier (a cryptographic hash), and you can, with the following command, generate a visual representation of the changes between any two objects:

git diff 6150726..956bc46
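If you’re wondering where those identifiers come from, git log (or git rev-parse) will show them. Here’s a minimal, self-contained sketch in a throwaway repository (file names and commit messages are made up) that produces two commits and diffs them by hash:

```shell
#!/bin/sh
# Build a throwaway repository with two commits, then diff them by hash.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email you@example.com
git config user.name "You"

echo "first draft" > chapter.txt
git add chapter.txt
git commit -q -m "first draft"

echo "second draft" > chapter.txt
git commit -q -am "second draft"

# Resolve the two commit hashes, then diff old against new.
old=$(git rev-parse HEAD~1)
new=$(git rev-parse HEAD)
git diff "$old..$new"
```

The output is the same as what you’d see after a pull: removed lines prefixed with -, added lines with +.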

If you have colors turned on in git (to colorize output; only the first line affects diffs, but I find the others are nice too):

git config --global color.diff auto
git config --global color.status auto
git config --global color.branch auto

This generates nicely colorized output of all the changes between the two specified revisions, or points in history. The diff is just the output of the format that git uses to apply a set of changes to a base set of files: it displays the lines as they appeared at the first point in time, the new lines as they appear at the second point in time, and, when needed, unchanged contextual lines that anchor the changes. Colorized, the old content is darker (red), the new content is brighter (green), and the contextual anchors are white.

The result is that when you’re reviewing edits you can see exactly what was changed, and what it “used to be” without needing to manually compare new and old files, and also without the risk of getting too wound up in the context.

Not only is this the best way I’ve ever received feedback, in terms of ease of review and clarity (when you can compare new to old, in very specific chunks, the rationale for changes is almost always evident), but also in what it teaches me about my writing. I can see what works and what doesn’t work, and I can isolate feedback on a specific line from feedback on the entire document.

While I’ve only really been able to do this for a few weeks, not only do I think that it’s productive in this context, but I think it might be an effective way for people to receive feedback and learn about writing in general. People involved in the polishing of prose (professional editors, writers, etc.) often have all sorts of ways to trick themselves into attending to the mechanics of specific texts (on the scale of 7-10 words): reading paragraphs or sentences out of order, reading from beginning to end but with each sentence reversed, and so forth. Reviewing diffs allows you to separate big-picture concerns about the narrative from structural concerns, and somehow the lesson--at least for me--works.
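For prose in particular, git can compare at the word level instead of the line level, which maps nicely onto the sentence-scale mechanics described above. A self-contained sketch (file name and text are made up):

```shell
#!/bin/sh
# Word-level diffs: with prose, a "line" is often a whole paragraph,
# so --word-diff highlights only the words that actually changed.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email you@example.com
git config user.name "You"

echo "The quick brown fox jumps over the lazy dog." > essay.txt
git add essay.txt
git commit -q -m "draft"

echo "The quick red fox leaps over the lazy dog." > essay.txt
# Changed words appear inline as [-old-]{+new+}.
git diff --word-diff essay.txt
```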

Programmers, of course, use diffs regularly to “patch” code and communicate changes, and the review of patches and diffs is a key part of the way programmers collaborate. I wonder if programmers learn by reviewing diffs in the same sort of way.

This will probably slowly develop into a longer series of posts, but I think that’s enough for now. I have writing to do, after all :)

Cheers!

the mainframe of the future

It seems really popular these days to say, about the future of computing, that “in a few years, you’ll have a supercomputer in your pocket."1 And it’s true: the computing power in contemporary handheld/embedded systems is truly astounding. The iPhone is a great example of this: it runs a variant of a “desktop operating system,” it has applications written in Objective-C, and it’s a real computer (sans keyboard, and with a small screen). But the truth is that Android phones and Blackberries are just as technically complex. And let’s not forget how portable and powerful laptops are these days. Even netbooks, which are “underpowered,” are incredibly powerful in the grand scheme of things.

And now we have the cloud, where raw computing power is accessible and cheap: I have access to an always-on quad-core system for something like 86 cents a day. That’s crazy cheap, and the truth is that while I get a lot for 86 cents a day, I never run up against the processor limitations, or even get close. Unless you’re compiling software or doing graphics (gaming), the chances of running into the limits of your processor for more than a few seconds here and there are remarkably slim. The notable exception to this rule is that the speed of USB devices is almost always processor-bound.

All this attention on processing power leads to predictions about “supercomputers in your pockets,” and the slow death of desktop computing as we know it. While this is interesting and sexy to talk about, I think it misses some crucial details.

The thing about the “supercomputer in your pocket” is that mobile gear is almost always highly specialized, task-specific hardware. Sure, the iPhone can do a lot of things, and it’s a good example of a “convergence” device as it combines a number of features (web browsing/email/http client/phone/media viewer), but as soon as you stray from these basic tasks, it stops.

There are general purpose computers in very small packages, like the Nokia Internet tablets, and the Fujitsu ultra mobile PCs, but they’ve not caught on in a big way. I think this is generally because the form factor isn’t general purpose and they’ve not yet reached the commodity prices that we’ve come to expect for our general purpose computing gear.

So while I think the question of how we’ll use pocket-sized supercomputers still needs to be worked out, I think the assertion that computing power will continue to rise while size continues to shrink will hold, at least for a few more years. There are physical limits to Moore’s Law, but I think we have a few more years (10?) before that becomes an issue.

The question that I’ve been asking myself for the past few days isn’t “what are we going to do with new supercomputers,” but rather, “what’s that box on your desktop going to be doing?”

I don’t think we’re going to stop having non-portable computers. Indeed, laptops and desktops have functionally converged in the last few years: the decision between getting a laptop and a desktop is mostly about economics and “how you work.” While I do think that a large part of people’s “personal computing” is going to happen on laptops, I don’t think desktops are going to just cease to exist in a few years, replaced by pocket-sized supercomputers.

It’s as if we’ve forgotten about mainframe computing while we were focused on supercomputers.

The traditional divide between mainframes and supercomputers is simple: while both are immensely powerful, supercomputers tend to be suited to computationally complex problems, while mainframes are designed to address comparatively simple problems on massive data-sets. Think “supercomputers are processors” and “mainframes are input/output.”

My contention is that, as the kind of computing that day-to-day users of technology do starts to level off in terms of computational complexity (or at least is overtaken by Moore’s Law), the mainframe metaphor becomes the more useful perspective to extend into our personal computing.

This is sort of the side effect of thinking about your personal computing in terms of “infrastructure.”2 While we don’t need super-powerful computers to run our Notepad applications, finding better ways to isolate and run our tasks in parallel seems to make a lot of sense. From the perspectives of system stability, resource utilization, and security, parallelizing functionality offers end users a lot of benefits.

In point of fact, we’ve already started to see this in a number of contexts. First, multi-core/multi-processor systems are the contemporary standard for processors. Basically, we can make processors run insanely fast (4 and 5 gigahertz clock speeds, and beyond), but no one is ever going to use that much, and you get bottlenecks as processes line up to be computed. So now, rather than making one insanely fast processor (even for servers and desktops), we make a bunch of damn fast processors (2 or 2.5GHz is still pretty fast) that are all accessible in one system.

This is mainframe technology, not supercomputing technology.

And then there’s virtualization, where we run multiple operating systems on a given piece of hardware. Rather than letting one operating system address all of the hardware as one big pool, we divide the hardware up and run isolated operating system “buckets.” So rather than administering one system that does everything with shared resources, with the headache of making sure that the processes don’t interfere with each other, we create a bunch of virtualized machines which are less powerful than the main system but each have a few dedicated functions, and (for the most part) don’t affect each other.

This is mainframe technology.

Virtualization is huge on servers (and mainframes, of course), and we’re starting to see some limited use-cases take hold on the desktop (e.g. Parallels Desktop, VMware Workstation/Fusion), but I think there’s a lot of potential and future in desktop virtualization. Imagine desktop hypervisors that allow you to isolate the functions of multiple users, or to isolate stable operations (e.g. file serving, media capture, backups) from specific users' operating system instances and from more volatile processes (e.g. desktop applications). Furthermore, such a desktop hypervisor would allow users to rely on stable operating systems when appropriate and use less stable (but more feature-rich) operating systems on a per-task basis. There are also nifty backup- and portability-related benefits to running inside of virtualized containers.

And that is, my friends, really flippin' cool.

The technology isn’t yet there. I’m thinking about putting a hypervisor and a few guest operating systems on my current desktop sometime later this year. It’s a start, and I’ll probably write a bit more about this soon, but in any case I’m enjoying this little change in metaphor and the kinds of potentials that it brings for very cool cyborg applications. I hope you find it similarly useful.

Above all, I can’t wait to see what happens.


  1. Admittedly this is a bit of a straw-man premise, but it’s a nifty perception to fight against. ↩︎

  2. I wrote a series of posts a few weeks ago on the subject in three parts: one, two, and three ↩︎

Ongoing Projects

I’ve been talking with people recently about “what I’m working on,” and I’ve realized two things. First, that I’m beginning to get spread thin; and second, that I haven’t really used this blog as an effective tool to track these projects and facilitate ongoing work on these projects. So I’m going to write an “ongoing projects update.” So there.

While I don’t think there’s sense in making this a “weekly feature,” I think it’s worth taking the opportunity to check in with you all about my projects, and to mention cool things that are going on with them.

  1. The Novel

I’ve not managed to make this into the habit that I want it to be. Having totally missed my goal of finishing the draft in August, I’ve set a more tentative goal of getting it done in time for NaNoWriMo this year. I don’t know if I’ll do a NaNo project this year--probably not, I’m too contrary--but it seems like a good and doable goal.

What has me hung up at the moment is that I have a few scenes that need to be written around a particular character whom I’ve come to despise; not because he’s a bad character, I just find him frustrating to write. This is mostly interesting insofar as I initially thought that he’d be the easy character to write in the story.

Despite this hang up, I’m really quite close to being done with this monster. Three or four more chapters, and some editing across the board. Not a huge deal. I just need to do it. That’s a lot of what this Labor Day weekend has been about.

  2. This Blog

You’re all aware of this project, I trust. I’ve been able to keep up my “mostly daily” schedule for a long time now--two or three years and counting. Since I’ve started the new job, and since posting entries (if not actually writing them) is a manual task (with Wordpress, I could queue things to auto-post), I’m not as good as I once was about getting entries posted in the morning as I would like to be. But it gets there.

Also, while I’m not cruising toward the A-List like I might have dreamed about when I was a teenager and getting started with this whole blogging thing, I’m actually pretty pleased with how this blog is going. Most entries evoke some sort of response that I see: on identi.ca, on facebook, or in comments. I get to have cool email conversations with you all. I’m pretty pleased. I’m still trying to figure out how to do a little better, because I think it’ll be awesome for all of us, if there are more voices and conversations going on, but I love blogging, and I’m really pleased with this project.

  3. Cyborg Institute and Sygn System

This is the project that I’ve started with deepspawn, to create a distributed social networking and “user generated database” engine. Notes and other work related to this project are starting to come together on the Cyborg Institute Wiki, and it’s something that I put a lot of work into a few weeks ago, but I haven’t really given it the kind of love in the past two weeks that it’s needed.

My list at the moment, for Sygn related projects is to do some reorganization of the wiki (the constant struggle), to announce and promote the xmpp muc for the sygn project (a chat room), to help people develop a basic reference implementation (and maybe learn some Python in the process?), and generate a few more use cases, to help folks understand the implications and possible utility of the project.

  4. Cyborg Institute Systems Administration

One of my contentions about the future (of technology specifically, but I think it’s generalizable to some extent) is that as “previously scarce resources” like data connectivity, storage space, and software become less scarce, the one thing that will continue to have concrete value is systems administration: having people in the world who are really good at keeping larger systems running, at making sure all of the pieces talk to each other, and at making sure the people who need technological services get the right kind of service. There’s real value in that.

And that’s a huge part of what the “Cyborg Institute” project is about. Sure, there’s a lot of cyborg-related content and theorizing that I’m interested in working on and developing, but really I can do that here on tychoish; what Cyborg Institute lets me (and you!) do is make this conversation much larger. It lets us work together, and it allows me to help people do awesome things.

While the product of this work isn’t particularly visible, and I don’t really have the ability to say “I did X, Y, and Z for CI” this week, there are a lot of little things, and I think it’s definitely a worthwhile project.

  5. Critical Futures (http://criticalfutures.com) Relaunch (http://wiki.criticalfutures.com/)

This is definitely a Cyborg Institute project: it’s running on CI servers, we’re using CI tools, and I think the project--a collaborative fiction wiki--is very much one of these new technology-things that makes the whole “cyborg moment” so interesting.

I should point out that brush is largely spearheading this. I’m just doing a bit here and there, and making sure the system runs well. I’m excited about this, and I’m glad that Critical Futures is going to get some love. There’ll be some other projects of mine--the novel, and so forth--on Critical Futures as well someday, but that’s down the road, I think. Good to do something here, no?

  6. Knitting

I think it’s a good day when you can be like, “You know tycho, you should watch more TV.” My current knitting project is very much a “do it whilst watching television” kind of project, and I’d very much like to be able to create a space in my day(s) to get more work on this done.

That seems about good for now. What are you working on? :)

useful emacs and org-mode hacks

After a long time of intentionally avoiding tweaking my emacs file, I’ve gotten back into tweaking and hacking on my setup a bit in emacs land. Rather than wax philosophical about emacs and plain text, I thought I’d share a few things with you all in the hopes that this will prove helpful for you.

I’ve given some thought to publishing a git repository with my emacs files, my awesome config, and the useful parts of my bashrc files. My only hesitation is that all of these files aren’t in one repository right now, and I’d need to do some clean up to avoid publishing passwords and the like. Encouragement might be helpful in inspiring me to be a little more forthcoming.

Keybinding “Name Spaces”

I’ve begun reorganizing key-bindings in a standard pattern, in order to avoid collision of bindings in certain spaces. The problem with the “C-x C-[a-z]” bindings is that it’s hard to get really good mnemonic bindings for whatever you’re trying to do, and there are few of them. I’ve taken to putting all of my custom bindings (mostly) under “C-c [a-z],” and then grouping them together, based on mode or function.

(global-set-key (kbd "C-c o a") 'org-agenda-list)
(global-set-key (kbd "C-c o t") 'org-todo-list)
(global-set-key (kbd "C-c o p") 'org-insert-property-drawer)
(global-set-key (kbd "C-c o d") 'org-date)
(global-set-key (kbd "C-c o j") 'org-journal-entry)
(global-set-key (kbd "C-c r") 'org-remember)
(global-set-key (kbd "C-c a") 'org-agenda)

(global-set-key (kbd "C-c w s") 'w3m-search)
(global-set-key (kbd "C-c w t") 'w3m-goto-url-new-session)
(global-set-key (kbd "C-c w o") 'w3m-goto-url)
(global-set-key (kbd "C-c w y") 'w3m-print-this-url)
(global-set-key (kbd "C-c w l") 'w3m-print-current-url)

You can see here the org-mode-related bindings and the w3m-related bindings. “C-c o” is wide open, and I haven’t yet found anything in that space that I’ve overwritten. Same with “C-c w”. Even though the command key-chains are a bit longer than they might be if I piled things on more sporadically, I can remember them more quickly.

Org-journal is something I got from metajack, and I don’t use it as much as I should. Everything else is standard org or w3m functionality.

I suppose I should make mode-specific key-bindings so that I’m not eating away global name space for mode-specific functionality, but I’m not sure that would make things too much clearer or easier to remember.
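For what it’s worth, a mode-local version of one of the bindings above might look something like this (a sketch; the point is just that define-key on a mode’s keymap leaves the global “C-c” space untouched):

```lisp
;; Bind "C-c o p" only in org-mode buffers, once org has loaded,
;; instead of claiming the sequence globally.
(eval-after-load 'org
  '(define-key org-mode-map (kbd "C-c o p") 'org-insert-property-drawer))
```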

Also I really like the (kbd "...") syntax for specifying key sequences. Much easier to read and edit.

Custom File settings

A while back I pulled my customize-set variables out of my main init-file and gave them their own file, which means my init-file isn’t quite so long, and isn’t cluttered with variables that I’m not setting by hand.

Nevertheless, I like to set as many variables as I can by hand with setq, just so that I can be in better touch with what settings I’m changing. This code moves the custom-set variables out of the main file:

(setq custom-file "~/path/to/emacs.d/custom.el")
(load custom-file 'noerror)

Window Transparency and Font Settings

At the top of my init file, I have the following four lines to set font and window transparency.

(add-to-list 'default-frame-alist '(font . "Monaco-08"))
(set-default-font "Monaco-08")
(set-frame-parameter (selected-frame) 'alpha '(86 84))
(add-to-list 'default-frame-alist '(alpha 86 84))

Note that this depends on running a compositing manager like xcompmgr, and the transparency is quite subtle. Happily, running this code at the beginning of the init file means that emacs looks and behaves correctly when I start it using a plain,

emacs --daemon

command from a regular bash prompt. I’m running fairly recent (but perhaps not the actual release?) builds of emacs 23. Note that I’d had trouble getting daemonized versions of emacs to start with the right font and transparency settings; that seems to be resolved.

Aliases

Here are the aliases I use to make commands less work to type. It’s sort of a middle ground between creating a key binding and just calling the function from M-x. Here’s the current list:

(defalias 'wku 'w3m-print-this-url)
(defalias 'wkl 'w3m-print-current-url)

(defalias 'afm 'auto-fill-mode)
(defalias 'mm 'markdown-mode)
(defalias 'rm 'rst-mode)
(defalias 'wc 'word-count)
(defalias 'wcr 'word-count-region)
(defalias 'qrr 'query-replace-regexp)
(defalias 'fs 'flyspell-mode)
(defalias 'oa 'org-agenda)
(defalias 'uf 'unfill-region)
(defalias 'ss 'server-start)
(defalias 'se 'server-edit)
(defalias 'nf 'new-frame)
(defalias 'eb 'eval-buffer)
(defalias 'mbm 'menu-bar-mode)
(defalias 'hs 'hs-org/minor-mode)

There are a number of these that I don’t use much any more, but it’s not worth it to edit the list down.

New Modes

A few new modes that I’ve been using:

yasnippet

I’ve started using yasnippet more, and I’m quite fond of it for managing and inserting little templates into files as I’m working. There’s not a lot of example code that I can share with you, as it just works, but I do have a couple of notes/complaints:

  • I have to use C-i to expand snippets; the “tab” key never seems to trigger expansion.
  • The organization of the snippets directory is absurd. I understand how the structure of the hierarchy mirrors the way modes are derived from one another, and having the expansion triggers as file names also makes sense, but it’s really hard to organize things. Do people use modes that aren’t derived from “text-mode”? Are there any? There should be a “global” directory in the snippets folder (next to text-mode) where all of the files, in any number of folders beneath “global,” are available in all modes.
  • It’s amazingly useful, and there are some things that I need to create snippets for but haven’t. This is on my list of things to do.
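For readers who haven’t seen the format: a yasnippet template is just a small file in the snippets directory whose name is the expansion trigger. Here’s a hypothetical example of one I might write for the defaliases above, with `$1` and `$2` as tab-stop fields; the names here are mine for illustration, not anything shipped with yasnippet:

```
# -*- mode: snippet -*-
# name: defalias
# --
(defalias '$1 '$2)
```

Saved under the `emacs-lisp-mode` folder, typing the trigger and expanding it drops in the skeleton and jumps between the two fields.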

w3m

w3m is an external text-mode browser for which emacs hackers have written a good bridge. What this means is that you get a text-mode browser that works inside emacs, but it’s speedy because page rendering happens outside of emacs.
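For anyone wanting to try it, a minimal setup sketch might look like the following; it assumes the emacs-w3m package and the external w3m binary are installed, and the `C-c w` binding is my own hypothetical choice, not a default:

```elisp
;; load the emacs-w3m bridge
(require 'w3m)

;; hand links opened from other modes to w3m instead of an external browser
(setq browse-url-browser-function 'w3m-browse-url)

;; hypothetical global binding for visiting a URL; pick whatever suits you
(global-set-key (kbd "C-c w") 'w3m-goto-url)
```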

It works, and it’s immensely usable, though the key-bindings are a bit hard to remember, and there are too many of them to change all at once without completely driving yourself crazy.

I read a thread on the emacs-devel list a few months back about embedding something like uzbl inside of emacs (making emacs more like a window manager). The project presents an interesting possibility, but I think w3m succeeds because it makes the text of a website accessible within emacs.

Embedding a “real” browser in emacs would just duplicate window-manager functionality and add complication. Better, I think, to make an emacs-friendly uzbl config file; some sort of “create emacs buffer with selected uzbl text” bridge would also be nice, but anything more than that seems foolish.

My (few) w3m key-bindings are above.

nxml mode

With all this web-design work I’ve been doing (e.g. Cyborg Institute), I’ve needed to stray into HTML and CSS modes. There’s a newer mode called nxml-mode, which is delightful because it validates your HTML/XHTML/XML file on the fly (great!), but I’ve found it less than helpful for files that contain only a snippet of HTML/XML destined to be included elsewhere. Nonetheless, powerful stuff.
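If your emacs doesn’t already route XML-flavored files there, a sketch along these lines does the trick; the particular extensions are chosen for illustration and can be extended to taste:

```elisp
;; send xml-ish files to nxml-mode via auto-mode-alist
(add-to-list 'auto-mode-alist
             '("\\.\\(xml\\|xsd\\|rng\\|xhtml\\)\\'" . nxml-mode))
```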


That’s about it for now. There are a few other things, but I don’t feel ready to really explore them at this point, mostly because I haven’t gotten familiar enough to know whether my modifications have been useful. Muse-mode, etc.

Any good emacs code that I should be looking at?

return of five things

Ways I’ve Injured Myself Recently

  1. The tip of my right index finger, carrying a server.
  2. My left knee (recurrent, minor), dancing, probably jumping. Design flaw, I’m convinced.
  3. Some sort of strain/dislocation of the first knuckle of my right ring finger, and it’s oddly sore.
  4. My right shoulder, because I sleep on it funny.
  5. My left wrist, occasionally stiff from typing and whatnot. I think it’s sort of interesting that three of the five are hand-related.

Things I Would Change about my Cell Phone if I Could

  1. I would be able to SSH into my cell phone, for the purpose of sending libnotify-esque notifications to it.
  2. It would have a plethora of hardware keys, potentially some sort of keyboard (on a slider), and only require touch-screen interfaces when intuitive.
  3. I would want it to run emacs, at least to be able to check on things.
  4. I would like some sort of native terminal client on the cell phone.
  5. It would be unlocked.

Objects that I would like to Combine

  1. eBook reader/music player/tablet-or-pocket computer.
  2. A tea kettle and yogurt maker.
  3. My network router and my computer.
  4. A keyboard and desk chair.
  5. The mouse (and my computer’s dependence on it) and /dev/null

Things I wish I spent more time doing

  1. Knitting
  2. Reading
  3. Editing things I’ve written
  4. Spinning
  5. Being social

Abilities of which I am Jealous in Others

  1. Musical talent, mostly playing the violin/viola and melodeon.
  2. Singing and leading songs effectively (including remembering lyrics completely).
  3. The facility to function on much less sleep than I seem to require.
  4. The appetite to drink coffee without wanting to retch.
  5. The ability to write computer programs with skill and grace.

Revolutionary Communities

I began to get at this in my post on health care, cooperatives, and governmental reform, but I think the point is important enough to deserve its own post.

I guess what I’ve been getting at (whether or not I realized it) is “the shape of social/political change” in the contemporary world. What does change look like? What mechanisms can we use to create change? How do the existing ways we think of revolutionary change fail to address the world we live in?


Samuel R. Delany, in his essays Times Square Red, Times Square Blue, presents what he calls “Contact” as a potential instrument of social reform, of social “activism.” Contact boils down to unstructured, seemingly random intermingling of people in urban contexts. He argues for direct relationships, for an increase in cross-class and cross-race relationships, achieved by avoiding “gentrification” and social segregation. And he illustrates the efficacy of these methods with a number of pretty compelling examples.

When I read this the first time, as well as the second and third, I remember thinking: “wow, that’s the first social critique I’ve read that doesn’t just present an overwhelming critique of a cultural phenomenon (gentrification, the sequestering of public sexuality) but also presents a mechanism for social change.”

The problem with presenting mechanisms to promote social and political change is that the details are incredibly difficult to clarify, and it’s easy to present a valid critique without any idea of how to effect change. It’s easy to call for action and leave the nature of that action up to the in-the-moment activists. It’s far too easy to point out a social problem, even a superstructural issue, and then default to the methodology of previous generations (and previous issues) to attempt to solve it. Here’s an example:

We see a lot of “recourse to Marxist-inspired methodology” without much thinking (I’d say) about the industrial/material implications of Marx. This happens to varying degrees in a number of areas: in some more casual Marxist feminism, in (some) environmental movements, and in other movements that present “revolutionary” social/political critique. Revolutionary moments are indeed important times for some renegotiation of social values and systems, but it’s too easy to say “after the revolution…,” get all misty-eyed, and forget that the critique at hand may have very little to do with the disconnect between the ownership of resources, labor, and social power.

Furthermore, I think there are a lot of contemporary civil rights movements (Gay and Lesbian, Women, Immigrant) that refer back to the American Civil Rights Movement in a way that ignores the complexities of the current issue, or the complexity of the earlier issue. In any case, interlude over, I think I’m gunning for a way to get past this trap of casting contemporary struggles in the methodological terms of past struggles.


My contention is that in the next 20 or 30 years,1 the biggest force of social change won’t be (exactly) the mustering of revolutionary regiments; it won’t be about who we elect to legislatures and executive offices; it won’t be about where we march. Rather, it will be about the communities we form, about the relationships we develop in these communities.

But tycho, I know you’re interested in communities, but *revolution?*

Indeed, it’s a stretch, but here’s the argument: when people get together, we make things. We see this in free software, we see this in start-ups, we see this in fan communities on the Internet. This production is going to be an increasingly important part of our economic, political, and social activity, and so are the conversations and the cross-class contact that occur when people get together to work on something of common interest. Communities are the substrate for the transmission of ethical systems, and they are the main way ideologies are transmitted to people. This is all incredibly important.

But tycho, materialism isn’t dead, you’re ignoring *things* which continue to have great importance!

Technology won’t make material things matter less, at least not in the way this statement assumes. What technology will almost certainly do is make it possible for fewer people to do the work that once required great infrastructure and capital outlay. Technology will allow us to coordinate collaboration over greater distances. Technology will lower the impact of large economies of scale on the viability of industries (smaller production runs, etc.). The end result is that the things that today take huge multi- and trans-national institutions (corporations) to produce could become the domain of much smaller cooperatives.


We’ll realize, I think only somewhat after the fact, that the world has changed, and that all the things we used to think “mattered” don’t really. And I think, largely, we can’t plan for this. The “work” ahead of us is to make things and do work with other people, to collaborate and draw connections across traditional boundaries (nation, class, race, discipline, gender, skill set) in the present, and let the future attend to itself. These kinds of ad-hoc institutions are already forming, already making things. And that’s incredibly cool.

Thoughts? I need to improve the history section of this a good bit, and come up with more examples of the kinds of communities that exemplify this sort of organization, but this is a start.


  1. These are rough dates; let’s just say “until the singularity hits.” ↩︎