Breaking up with the Web

I really don't want to use the web anymore. This should come as no great surprise to most of you, but I think it's worth pondering a bit, particularly because, like all break-ups, it's a bit difficult. To recap, the reasons for the break-up:

  • The software we use to browse the web is awkward and difficult to use efficiently. I'm talking here about things like Firefox, Safari, and Chrome. While "webkit" generation browsers are better than everything that's come before (even if their lack of compatibility with the Firefox platform makes them less useable to me), every browser I've interacted with is a huge program that just feels unwieldy.
  • There are too many distractions in the browser. I've managed to find ways to assimilate and interact with nearly all of the information that comes at me in the course of a day or a week in a sane, balanced, and efficient way. Except for the browser, where I find myself refreshing Facebook or Twitter endlessly. I don't even like Facebook or the Twitter website all that much.
  • The web is too sensitive to the availability of data connectivity. While I have an Internet connection nearly all of the time that I'm in front of a computer, I don't really like to rely on it to do my work. I don't want to use applications that depend on connectivity, and I hate situations where I have a few moments to do something, and I have a computer with me, and I get started and then I have to check a fact, or read a little bit about {{something}} on wikipedia, and I can't because I don't have a connection.
  • I don't like that the presentation layer of the web provides so much flexibility to make websites unreadable and difficult to comprehend. Web browser interfaces like emacs-w3m improve this somewhat, but even they are somewhat lacking. This isn't a problem with software, but rather a problem with designers, design, and the "way the web works."

So, to end on a somewhat positive note, here's what I think we really need in the next generation of digitally connected applications.

  • Some sort of very smart predictive caching software that would run locally. We have the hard-drive space in contemporary machines to dedicate as much as 100 gigabytes--in some cases even more--to a cache of network data and never really feel a space crunch. I think most people's digital music collections tend to top out in the 75-100 gig range, and "small" desktop hard drives have at least 500 gigs. Nothing else--well, aside from video--takes up much space. This would make the offline web a much more realistic proposition, it would speed things up, we could work on ways of only sending diffs between the cache and the servers, and it would rock.
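
    No such transparent browser cache exists, but rsync's delta-transfer algorithm is an existing example of the send-only-diffs idea; a rough sketch (the host and paths here are hypothetical):

    ```shell
    # Mirror a remote document tree into a local cache, transferring only
    # the changed portions of files (rsync's delta-transfer algorithm):
    rsync -az --delete example.org::docs/ ~/cache/docs/

    # Work from the local copy while offline; re-running the same command
    # when connectivity returns pulls down only the diffs.
    ```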

  • Databases need to mostly move off of the server and onto local boxes. This is an extension of the point above. Content doesn't change that much, and local machines are now fast and smart enough to really be able to handle this. This is in HTML5, but having said that, I worry a bit. Because I'm me.

  • We can import a lot of the "intelligence" of computing onto clients. There are moves toward this already, with Adobe AIR and its competitors, but this seems to be all about adding "bling" to the web experience, and about using the cross-platform nature of web technologies, even the proprietary ones like Flash, to reinvent desktop application development. I think we can go even further with this. Let's think about the next generation of desktop RSS clients. Offline wiki/wikipedia software.

    I'm not trying to buck the "software in the 21st century is social and connected" trend that we're in the middle of, but rather seriously rethink the interface and work-flow paradigms of the web.

  • I hope that the next generation of web-document standards (of which I think sygn is an example) will focus on structure and organization and a much more limited set of "features" (less is more) that will let content creators make content more useful rather than better looking.

    Take design out of the content, and put all of the display logic (aside from headings and meta-data) on the client. Don't like how a site displays? Use a different client. And so forth.

Anyone with me?

The Odd Cyborg Out

I said to my office mate this week, "I'm switching to zsh," and I believe he said something to the effect of "oh dear, what's next."

I should back up. I'm something of an odd duck when it comes to the way I use computers. I'm a geek, even in the context of my coworkers who are (also) huge geeks. I'm the only one who uses emacs. We're an OS X shop (for the desktop, at least) but I run Arch Linux inside of a virtual machine. Because I'm like that. And now, I'm switching away from the by-now unix standard "bash" shell to "zsh." I'm a bit weird. I'm ok with this.

So zsh. Why should you care? Well...

I'm no expert, having only really used it for a few days, but there are a few things that have won me over:

  • It's mostly backwards compatible with bash. So, except for the stuff that configured my prompt, I was able to copy over my old .bashrc file pretty much as is. There's been no real "brain adjustment" from all my old bash habits.
  • It's faster. This is the kind of claim that you don't believe: "my terminal is faster than your terminal" is kinda lame, because bash is pretty peppy compared to GUI stuff. I mean, bash is 300-400 kb; how slow can it be? The answer is, zsh just feels faster. This seems to be a near-universal experience.
  • It does tab-completion within commands. This is seriously amazing, because while command completion and path completion are awesome in bash, you still have to remember all of the sub-commands. This is particularly rough for big commands like "git" and "apt-get" or "apt-cache". Very awesome.
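
To make the first and third points concrete, here's a minimal .zshrc sketch; the alias, export, and prompt are illustrative, not my actual config:

```shell
# Initialize zsh's completion system; this is what enables completion
# of sub-commands for things like git and apt-get:
autoload -Uz compinit
compinit

# Settings copied from an old .bashrc generally work unchanged:
export EDITOR=emacs
alias ll='ls -l'

# The prompt is the one piece that needs rewriting in zsh's syntax:
PROMPT='%n@%m %~ %# '
```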

Getting up the courage to switch and to rewrite my prompt took a little bit of doing, but now I'm happy, and I strongly recommend it. If you, like me, live in the terminal, or have thought about using the command line more, give zsh a try; it's good stuff.


The other thing, almost certain to provoke an "Oh dear" reaction on the part of my geeky friends is the fact that I'm strongly considering switching from the Awesome Window Manager to the Stump Window Manager, or more practically StumpWM or just Stump. Here's some background on my adventures with tiling window managers:

When I started using Awesome, everything I did with the computer lived in its own little window. I was coming from the mac, so I lived with ten or fifteen open TextMate windows, a like number of open tabs in my terminal emulator, and a browser with a gazillion open tabs. I thought that this was sort of "the way I worked," and so I replicated this kind of workflow in Awesome.

And here's the thing. Awesome is great for managing a huge number of windows. With 9 workspaces/tags (or more!) it was possible to keep twenty or thirty windows afloat... a few browsers, a few chat windows, a dozen terminals, a few emacs frames, and the like all happening at once. And the window manager made it possible for me to only have to look at two or three windows at a time.

Then I progressed. With emacs' server/daemon mode, I only have one instance of emacs and 20 or so buffers; in an extreme moment I sometimes have as many as 4 frames open at once, but more often I just have two or three (org-mode, writing, and a spare for something). And terminals? I've taken to using screen, which multiplexes an untabbed terminal, so I typically have a single screen session with 8 screen-windows, and I keep a couple of instances of that open at once for different contexts--let's say another three windows. I have a remote screen session for IM and chat now that I connect to, and a single web browser.
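
For reference, the screen setup described above boils down to a few commands (the session names are made up):

```shell
# Start a named session for a given context; C-a c creates a new
# screen-window inside it, and C-a n / C-a p cycle between them:
screen -S writing

# Detach with C-a d, and reattach later from any terminal:
screen -r writing

# For the remote IM session: reattach over ssh, detaching any other
# connection (-d) and creating the session if it doesn't exist (-R):
ssh chat-host -t screen -dR chat
```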

Frankly, it's gotten to the point where I don't really need to manage very many windows, and I probably never use more than 4-5 tags/workspaces. My needs for a window manager changed, and one of the core problems that Awesome solves is one that I've solved by using multiplexed applications. And that leads me to Stump.

I see that I probably need to spend a little more time talking about this tiling window manager stuff again. Stay tuned!

My Phone is Smarter Than Yours

I got a Blackberry last December. I blogged about it then, but I haven't really talked much about it. There's been a bunch of hubbub recently about the iPhone finally getting Multimedia Messaging Service (MMS) support, and this has spurred some thought on my part about smartphones and mobile technology, and all that jazz. It's a big space in the technology world, and most of the time I just ignore all of it, because I don't much care about it. I'm a "big computing" kind of guy, and I don't much like the whole "talking on the phone" thing, but this doesn't--you're surely not surprised to learn--mean that I don't have opinions on the subject.

Despite my disdain for telephones, I really like the whole Blackberry thing. The physical keyboard means I'm way faster at typing up messages and notes than I would be otherwise, and that's incredibly useful. Blackberries aren't "sexy" as smartphones go, and frankly the software is sort of insane with regards to how it all works, but in comparison to how other phones work, I'm pretty happy with the way things are. Here are the pros:

  • I like that I can run applications in the background on the Blackberry. Being able to get alerts when emails come in. Being able to leave a message that I'm writing, and go respond to another message, or make a call, or get an instant message or twiddle with Google maps, is really great.
  • I enjoy that the phone is messaging centric. Furthermore, I really like that all messaging: Blackberry Messaging (IM), GoogleTalk (jabber), SMS texting, and email all appear in one great queue. There's one big list of things to check, and that's it. The key to making this work is good filtering, but that's another point.
  • I really enjoy the ecosystem of applications available for the phone. Blackberries, like many smartphones (including the Android platform, after a fashion,) use the J2ME (java) platform, which means the platform is rather established. Sure, the sexy things that people do with iPhones aren't there for my phone, and there are applications that I wish I had (better SSH, a text editor, some sort of file syncing ability,) but the apps I have all work well, are stable, and integrate well with the system (i.e. the messaging thing.)
  • There are a host of little things that are great. The charging cradle is an awesome thing. The phone is "smart" enough to alter its behavior based on whether or not it's in its case, so that if it's on your belt it does something different than if it's lying on your desk. It also has a "bedside" mode which I think is similarly brilliant. Not a huge feature, but exceedingly useful.
  • So Google does this thing with their Sync Tools where your contacts from Gmail end up on your phone, and the sync is pretty seamless. No more futzing around with adding people by hand, no more worrying about backing up your database. I'm not thrilled about this reliance on Google, but it just works, and that is an intensely good thing. I do kind of wish that more things on the phone were like this.

What I don't like?

  • The twitter apps don't integrate well into the messaging, and I can't think of a sane way to use twitter with my phone.
  • There is no real XMPP/Jabber application aside from Google Talk that I've found to be useable. (Though I'd love to be proven wrong.) It would be nice to be able to connect to my general use XMPP account under a different resource and go from there.
  • I think, as an interaction modality, the trackball is a horrible idea, and I think something more joystick-like would be much more useful and quick. Or perhaps something that used the keyboard more effectively. As it is, all navigation and system operation uses the trackball, and that's kind of annoying. It's done as well as it could be, but I think it could be better.
  • Email filtering is non-intuitive and difficult. Possible, certainly, but difficult. I'd like an interface to be able to exclude and block various senders on the phone itself.
  • Configuration options are Byzantine and difficult to navigate. There are so many options, particularly around the various noises that the phone will make, that I've not bothered to really modify any of them. I might load up the beginning of "Thick as a Brick" for my ring tone (and part two for the alarm clock?), but for the most part there are so many chirps and chatters that the damn thing makes that it's hard to modify it in any real way. It makes it interesting to be in close proximity to other Blackberry users for any length of time, because those noises get embedded in your consciousness.
  • The Blackberry is pretty unfriendly to Free Software stuff, which is a shame, partly because of the whole lack-of-freedom issue, but almost more because everything else I do with technology uses free software, so it's annoying that my existing stuff doesn't work right with the phone.

Would I get another Blackberry? Probably. The lack of a good SSH client is a bother, and I'd like something that did a bit better with things like PDF/electronic-text reading, but all in all I'm pretty happy.

The interesting thing is that at this point I can't fathom going back to some sort of "non-smartphone:" this just seems, to me, to be "the way a phone should work." That's a pretty strong endorsement, I'd say.

Onward and Upward!

Links on Knitting, Emacs, Free Software, Cultural Studies, and the Future of Media

I have an absurd number of tabs open, and I'd like to present some interesting reading that I've had on my plate for a while. Nothing incredibly current, but all of it's good stuff. For your consideration:

  • Interlaced Knitting Chart from Kim Salazar who is a master knitter/crafter. I've enjoyed her blog for years, and I keep coming back to this pattern and I'm interested in figuring out how to integrate it best into the project I'm thinking of working on next/soon.
  • This Thread about Package Management in Emacs, which is an incredibly esoteric subject, but I think it's a useful conversation, and something that will--if it's implemented--make emacs even more awesome, and make it easy to spin off specialized "emacs distributions," which I think will help emacs be more helpful to more people. I'd like multi-threaded support though.
  • I've had this article about Open Source Business Models open in my browser for weeks, and my mind boggles at it. I tend to think that Free Software and Open Source have pretty much the same business models as all software businesses. There are companies that make money on licensing free software (i.e. Red Hat, Novell), there are a bunch of companies that provide services and custom development around open source software (too numerous to cite,) and there are scads of companies whose businesses offer services that are enabled by open source software (i.e. every Internet company, but Amazon is a great example of this.) So I'm not really sure how to respond to this. But it's there, and now I'm closing that tab.
  • Open Source: The War is Over, or so one blogger thinks. I actually think there's some truth to the idea that proprietary software is mostly a failed project, and most people realize that--moving forward--open source methods and practices are ideal for technology. But I think "winning the argument and beginning to move toward open source" and "the war being over" are two different things. Furthermore, I'm not sure I'm comfortable treating "enterprise adoption of open source" as the singular marker of success for Open Source (let alone Free Software).
  • Michael Berube on Cultural Studies in the Chronicle. I guess it's hard to really take me out of the academy; I'm still a huge geek for this kind of stuff. My thoughts:
    1. Michael Berube might be a great blogger, and I think the things he's thinking about in this piece are quite useful and worthwhile, but as a piece of writing, this article is too short to really get into a lot of depth about anything, and too long to be easily read.
    2. American Academic Marxism is a mostly failed project, and I think the "inter-discipline" of Cultural Studies has been a poor steward of it.
    3. While Cultural Studies is a liberating interdisciplinary proposition, it's pretty unbalanced (English+Sociology) and I think a bit more economics and anthropology would be helpful. Berube is on the right side of this argument but I think he's too kind to CS on this point.
  • Gina Trapani's Smarterware got a new look; it's amazing, and I think it points out the importance of leaving design to the professionals. Good stuff.
  • Against Micropayments and the Media Industry. An interesting post that gets it right. The future of media and publishing of all forms is something that I think about more than a little bit. If people are ever going to pay for content again, it's going to have to be tied into the way that people pay for connectivity, which is also a non-scarce resource, but one that we've grown used to paying for. There's some unpacking and investigating to be done here, for sure.

microsoft reconsidered

I've been thinking about Microsoft recently, and thinking about how the trajectory of Microsoft fits in with the trajectory of information technology in general.

A lot of people in the free software world are very anti-Microsoft, given some of the egregious anti-competitive activities they engage in, and the general crappiness of their software. And while I agree that MS is no great gift to computing, it's always seemed to me that they're a johnny-come-lately to the non-free software world (comparatively speaking, AT&T and the telecom industry have done way more to limit and obstruct software and digital freedom than Microsoft, I'm thinking.) But this is an awkward argument, because there's no real lost love between me and Microsoft, and to be honest my disagreement with Microsoft is mostly technological: Microsoft technology presents a poor solution to technical problems. But I digress.

One thing that I think is difficult to convey when talking about Microsoft is that "The Microsoft We See" is not "The Core Business of Microsoft;" which is to say, the lion's share of Microsoft's business is in licensing things like Exchange servers (email and groupware stack) to big organizations, and then there's the whole ASP.NET+SQL-Server stack which a lot of technology is built upon. And Microsoft works licensing in ways that are absurd to those of us who don't live in that world. A dinky instance (ten users?) of Windows Server+Exchange for small corporations easily starts at a grand (per year? bi-annually?) and goes up from there depending on the size of the user-base. I would, by contrast, be surprised if Microsoft saw more than 50 or 60 dollars per desktop installation of Windows that consumers buy. [1] And I suspect a given installation of Windows lasts three to five years.

I don't think it's going to happen tomorrow or even next year, but I think netbooks--and the fact that Microsoft won't put anything other than XP on them--and the continued development of Linux on embedded devices, and the growing market share of Apple in the laptop market (and the slow death of the desktop computing market as we know it,) all serve to make any attention that we give to the market share of Windows on the desktop increasingly less worthwhile. This isn't to say that I think people will flock in great numbers to other platforms, but...

I think what's happening, with the emergence of all these web-based technologies, with Mono, with Flash/Flex/Silverlight/Moonlight, with web-apps, with Qt running cross platform, with native GTK+ ports to windows and OS X, is that what you run on your desktop is (and will continue to become) more and more irrelevant. There won't be "the next Microsoft," because whatever you think of the future of IT, there isn't going to be a future where quality software is more scarce, or harder to produce than it is today.


So this brings us back to server licensing, and something that I realized only recently. In the Linux world, we buy commodity hardware, sometimes really beefy systems, and if you have a scaling problem you just set up a new server and do some sort of clustered or distributed setup, which definitely falls under the heading of "advanced sysadmining," but it's not complex. With virtualization it's even easier to fully utilize hardware and create really effective distributed environments. At the end of the day, what servers do is not particularly complex work in terms of number crunching, but it is massively parallel. And here's the catch about Windows: developers are disincentivized to run more than one server, because as soon as you do that, your costs increase disproportionately with regard to the hardware. Say the cost of a production server (hardware) is 4k, and you pay 2k-3k for the software. If at some point this server isn't big enough for your needs, do you buy an almost-twice-as-good 8k dollar server with a single license, or just shell out another 6k-7k and have a second instance? Now let's multiply this times 10. Or more? (I should point out that I'm almost certainly lowballing software licensing costs.)

At some point you do have to cave and pay for an extra Microsoft license, but it makes a lot of sense from an operations perspective to throw money at hardware rather than distributed architectures, because not only is it quicker, but it's actually cheaper to avoid clusters.
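
The scale-up versus scale-out trade-off works out like this (using the guessed figures from above, not real pricing):

```shell
hw=4000   # commodity server hardware, per the guess above
sw=2500   # per-server Windows licensing, almost certainly lowballed

# Scale up: one twice-as-big box, still only one license:
scale_up=$(( 8000 + sw ))
echo "scale up:  \$$scale_up"         # 10500

# Scale out: a second identical box plus a second license:
scale_out=$(( 2 * (hw + sw) ))
echo "scale out: \$$scale_out"        # 13000

# With license-free Linux, scaling out costs only the second box:
linux_out=$(( 2 * hw ))
echo "linux scale out: \$$linux_out"  # 8000
```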

Microsoft, the company that made its money in microcomputer software, has backed itself into the "big iron" computing business. Which is risky for them, as it would be for anyone. Sun Microsystems couldn't make it work, and IBM kills in this space (and Linux mainframes are in the 50k-100k range, which doesn't look as absurd in light of the calculations above.)

Anyway, this post has been all over the place, and I'm not sure I can tie it all together in a neat bow, but I think it's safe to say that we live in interesting times, and that this whole "cloud thing," combined with the rapidly falling price of very high-powered equipment, changes all of the assumptions that we've had about software for the past twenty or thirty years. For free software as well as proprietary software...

[1] There's a line in the Windows EULA that says if you don't agree with the terms and aren't going to use the Windows that comes installed on your computer, you can get a refund if you call the right people for your machine's distributor. I've heard reports of people getting ~130 USD back for this, but it's unclear how much of that goes to Microsoft, or to the support for MS products that OEMs have to provide.

the day wikipedia obsoleted itself

Remember how, in 2006 and 2007, there was a lot of debate over wikipedia's accuracy and process, and people thought about creating alternate encyclopedias that relied on "expert contributors?" And then, somehow, that just died off, and we never hear about those kinds of projects and concerns anymore? The biggest news regarding wikipedia recently has been with regards to a somewhat subtle change in their licensing terms, which is really sort of minor, and not even particularly interesting for people who are into licensing stuff.

Here's a theory:

Wikipedia reached a point in the last couple of years where it became clear that it was as accurate as any encyclopedia had ever been before. Sure there are places where it's "wrong," and sure, as wikipedians have long argued, wikipedia is ideally suited to fix textual problems in a quick and blindingly efficient manner, but The Encyclopedia Britannica has always had factual inaccuracies, and has always reflected a particular... editorial perspective, and in light of its competition wikipedia has always been a bit better.

Practically, where wikipedia was once an example of "the great things that technology can enable," the moment when it leapfrogged other encyclopedias was the moment that it became functionally irrelevant.

I'm not saying that wikipedia is bad and that you shouldn't read it, but rather that even if Wikipedia is the best encyclopedia in the world it is still an encyclopedia, and the project of encyclopedias is flawed, and in many ways runs counter to the great potential for collaborative work on the Internet.

My gripe with encyclopedias is largely epistemological:

  • I think the project of collecting all knowledge in a single resource obscures the fact that the biggest problem in the area of "knowing" in the contemporary world isn't simply finding information, or even finding trusted information, but rather what to do with knowledge when you do find it. Teaching people how to search for information is easy. Teaching people the critical thinking skills necessary for figuring out if a source is trustworthy takes some time, but it's not terribly complicated (and encyclopedias do a pretty poor job of this in the global sense, even if their major goal is to convey trust in the specific sense.) At the same time, teaching people to take information and do something awesome with it is incredibly difficult.

  • Knowledge is multiple and comes from multiple perspectives, and is contextually dependent on history, on cultural contexts, on sources, and on ideological concerns, so the project of collecting all knowledge in a value-neutral way from an objective perspective provides a disservice to the knowledge project. This is the weak spot in all encyclopedias regardless of their editorial process or medium. Encyclopedias are, by definition, imperialist projects.

  • The Internet is inherently decentralized. That's how it's designed, and although this runs counter to conventional thought in information management, information on the Internet works best when we don't try to artificially centralize it; arguably, that's what wikipedia does: it collects and centralizes information in one easy-to-access, easy-to-search place. So while wikipedia isn't bad, there are a lot of things that one could do with wikis, with the Internet, that could foster distributed information projects and work with the strengths of the Internet rather than against them. Wikis are great for collaborative editing, and there are a lot of possibilities in the form, but so much depends on what you do with it.

So I guess the obvious questions here are:

  • What's next?
  • What does the post-wikipedia world look like?
  • How do we provide usable indexes for information that let people find content of value in a decentralized format, and preferably in a federated way that doesn't rely on Google Search?

Onward and Upward!

fact files

I wrote a while back about wanting to develop a "fact file," or some way of creating a database of notes and clippings that wouldn't (need to) be project-specific research, but that I would nonetheless like to keep track of. Part of the notion was that I felt like I was gathering lots of information and reading lots of stuff, but didn't really have any good way of retaining this information beyond whatever I just happened to remember.

I should note that this post is very org-mode focused, and I've not subtitled very much. You've been warned.

Ultimately I developed an org-remember template, and I documented that in the post linked to above.

Since then, however, I've changed things a bit, and I wanted to publish that updated template.

(setq org-remember-templates
      '(("annotations" ?a
         "* %^{Title} %^g \n  :PROPERTIES:\n  :date: %^t\n  :cite-key: %^{cite-key}\n  :link: %^{link}\n  :END:\n\n %?"
         "~/org/data.org" "Annotations and Notes")
        ("web-clippings" ?w
         "* %^{Title} %^g \n  :PROPERTIES:\n  :date: %^t\n  :link: %^{link}\n  :END:\n\n %x %?"
         "~/org/data.org" "Web Clippings")
        ("fact-file" ?f
         "* %^{Title} %^g \n  :PROPERTIES:\n  :date: %^t\n  :link: %^{link}\n  :END:\n\n %x %?"
         "~/org/data.org" "Fact File")))

What this does reflects something I noticed in the way I was using the original implementation. I noticed that I was collecting quotes from a variety of both Internet and published sources. Not everything had a cite-key (a key that tracks the information in my bibtex database,) and I found that I also wanted to save copies of blog posts and other snippets that I found useful and interesting, but that still didn't seem to qualify as a "fact file entry."

So now there are three templates:

  • First, annotations of published work, all cross referenced against cite-keys in the bibtex database.
  • Second, web clippings, this is where I put blog posts, and other articles which I think will be interesting to revisit and important to archive independently for offline/later reading. Often if I respond to a blogpost on this blog, the chances are that post has made it into this section of the file.
  • Third, miscellaneous facts, these are just quotes, in general. Interesting facts that I pull from wikipedia/wherever, but nothing teleological, particularly. It's good to have a place to collect unstructured information, and I've found the collection of information in this section of the file to be quite useful.

General features:

  • Whatever text I select (and therefore add to the X11 clipboard) is automatically inserted into the remember buffer (with the %x escape), and the cursor lands at the %? escape.
  • I make copious use of tags and tag completion, which makes it easier to use the "sparse tree by tag" functionality in org-mode to display only headings which are tagged in a certain way, so that I can see related content easily. Tags include both subject and project-related information for super-cool filtering.
  • All "entries" exist on the second level of the file. I'm often sensitive to using too much hierarchy, at the expense of clarity or ease of searching. This seems to be particularly the case in org-mode, given the power of sparse trees for filtering content.

So that's what I'm doing. As always, alternate solutions and feedback are more than welcome.

writing like a programmer

I'm unique among my coworkers in that I'm not a developer/programmer. This is a good thing, after all, because I'm the writer and not a programmer; but as a "workflow" guy and a student of software development, one thing that I've been particularly struck by since taking this job is how well I've been able to collaborate with coworkers who come from a completely different background/field, and furthermore how helpful this has been to my work and development as a writer. This post is going to contain some of these lessons and experiences.

For starters, we're all pretty big fans of git. As git is one of the most interesting and productive technologies that I use regularly, this is really nice. Not only does everyone live in plain text format, but they mostly use the same version control system I do. I've definitely had jobs and collaborations in the past few years (since I made the transition to pure text) where I've had to deal with .doc files, so this is a welcome change.

I've long thought that working in plain text has been a really good thing for me as a writer. In a text editor there's only you and the text. All of the bullshit about styles and margins and the like that you're forced to contend with in "Office" software is a distraction, and so by interacting with exactly (and only) what I write in the file, I've been able to concentrate on the production of text, leaving only "worthwhile distractions" to the writing process.

Working with programmers makes this "living in plain text" thing I do not seem quite so weird, and that's a good thing for the collaboration. But--for me, at least--it represents an old lesson about writing: use tools that you're very comfortable with, and deal with output/production only when you're ready for it. Good lesson. I might have taken it to the extreme with the whole emacs thing, but it works for me, and I'm very happy with it.

But, using git, with other people has been a great lesson, and a great experience, and I'm getting the opportunity to use git in new ways, which have been instructive for me--both in terms of the technology, but also in terms of my writing process.

For instance, whenever I do a git pull (which asks the server for any new published changes and then merges them, often without help from me, with my working copy) and see that a coworker has changed something, I tend to inspect the differences (i.e. diffs) contained in the pull. Each commit (set of changes; indeed each object, but that's tangential) in git is assigned a unique identifier (a cryptographic hash), and you can, with the following command, generate a visual representation of the changes between any two objects:

git diff 6150726..956bc46

If you have colors turned on in git (to colorize output; only the first line affects diffs, but I find the others are nice too):

git config --global color.diff auto
git config --global color.status auto
git config --global color.branch auto

This generates a nice colorized output of all the changes between the two revisions, or points in history, as specified. The diff is just the output of the format that git uses to apply a set of changes to a base set of files, so it displays a full copy of what the lines looked like at the first point in time, then new lines which represent what the lines look like at the second point in time, as well as contextual unchanged lines to anchor the changes to, when needed. Colorized, the old content is darker (orange?) and the new content is brighter (yellow? green?); contextual anchors are in white.

The result is that when you're reviewing edits you can see exactly what was changed, and what it "used to be" without needing to manually compare new and old files, and also without the risk of getting too wound up in the context.
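
To illustrate, a hunk from such a review might look like this (the file name, hashes, and prose are invented):

```shell
# Show the changes a coworker made to one file between two commits:
git diff 6150726..956bc46 -- chapter-two.txt

# A typical hunk: '-' lines are the old text, '+' lines the new text,
# and unprefixed lines are the unchanged context anchoring the change:
#
#   @@ -12,3 +12,3 @@
#    The deployment process has three stages.
#   -Each stage are run in sequence.
#   +Each stage is run in sequence.
#    The final stage restarts the service.
```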

Not only is this the best way I've ever received feedback, in terms of ease of review and clarity (when you can compare new to old, in very specific chunks, the rationale for changes is almost always evident), but also in what it teaches me about my writing. I can see what works and what doesn't work, and I can isolate feedback on a specific line from feedback on the entire document.

While I've only really been able to do this for a few weeks, not only do I think that it's productive in this context, but I think it might be an effective way for people to receive feedback and learn about writing. People involved in the polishing of prose (professional editors, writers, etc.) often have all sorts of tricks for attending to the mechanics of specific texts (on the scale of 7-10 words): reading paragraphs and sentences out of order, reading from beginning to end but reading each sentence backwards, and so forth. Reviewing diffs allows you to separate big-picture concerns about the narrative from structural concerns, and somehow the lesson--at least for me--works.

Programmers, of course, use diffs regularly to "patch" code and communicate changes, and the review of patches and diffs is a key part of the way programmers collaborate. I wonder if programmers learn by reviewing diffs in the same sort of way.

This will probably slowly develop in to a longer series of posts, but I think that's enough for you. I have writing to do, after all :)

Cheers!