Materialist SF

I was talking about the recent events in the US “economy” and my latest fiction-writing project the other day, and while the connection between the two seemed more direct in the moment and doesn’t bear repeating, I found myself uttering the following statement:

“You know, there isn’t a lot of materialist space opera/science fiction out there.”

And as soon as I had spoken the words, I knew that I had to be wrong. Or at least I hoped I was.

It’s understandable that science fiction often isn’t terribly materialist. I suspect many SF writers are attracted to the genre because SF is a great platform for exploring new (and old) ideas, in a setting that’s just different enough from our contemporary one to make us think. While the spaceships and aliens certainly helped draw me in when I was a kid, the ideas are what have kept me around as a bigger kid/adult.

And let’s be honest: dealing with ideas about material is difficult in the space-opera setting. To assume that goods--food, clothing, fuel, technology--will continue to be scarce in universes that have faster-than-light travel and amazingly decked-out spaceships can be a bit tough to swallow. Similarly, with that kind of technology, might we just assume that no one has to work in the factories or the mines? That really does seem to be the more logical assumption.

Often, even stories that have economic themes or plots (about trade or political intrigue) aren’t particularly grounded in a material understanding of economics. By which I mean, we don’t often read stories that deal with how capital is created (labor), transported, or consumed. Even when stories have some sort of in-world money, we so rarely see where the value of that currency comes from. Right?

I suppose I should clarify that my thought process started with space opera and expanded outward from there. Clearly, space opera is probably the most prone to these sorts of non-materialist stories, but other corners of the genre suffer to varying degrees.

My next project was to think of stories and novels that were materialist in some way. So here’s what I came up with:

Empire Star by Samuel Delany turns on a very materialist plot. The story would be hard to summarize, and I wouldn’t want to spoil it for anyone who hasn’t read it, but in a way it’s all about the alienation of labor and rebuilding capital after a war. In contrast, Babel-17 (also Delany) isn’t particularly materialist at all: while there is a war that certainly has effects on capital, labor, and trade, that’s not particularly relevant to the story, except in the abstract. I mention them in the same breath because Empire Star is the companion to Babel-17, and they were published together (and they’re both great, if very different, stories). Empire Star is a novella and gets anthologized with some regularity, as it is both awesome and a great early example of the Space Opera revival.

“Who’s Afraid of Wolf 359?” by Ken MacLeod was my second idea. The story was published in 2007 in the Strahan/Dozois anthology The New Space Opera, and was also on Escape Pod this year. The story tracks the redevelopment of a fallen colony, and in doing so manages to trace the economic development of the galactic civilization. I would expect nothing less from MacLeod.

There has to be more. I’m sure of it. I’d like to use the comments of this post to collect other examples of materialist science fiction, space opera or otherwise, and I/we can collect the results and add them to the Feminist SF Wiki, or publish them as a follow-up post.

Cheers,

tycho

GNU Screen

This is an “isn’t this really old piece of software pretty darn cool” kind of post. GNU Screen is a terminal multiplexer that dates from the late eighties or thereabouts, and it provides a sort of text-based windowing environment inside of a command line. Sort of.

Before I started using it, I read statements like that and had no clue what Screen really did, so I think a brief (and basic) overview of how it works might be worth something. Basically, you start a screen instance in a terminal window, and you’re brought to a blank terminal. The commands are all bound, by default, to Control-A (C-a) followed by another key. So you have a terminal open in which you can run console applications or other shell commands. You can also hit C-a C-c to open a second “window” in the same terminal emulator; C-a C-a swaps between the present and most recent window, and C-a " presents a list of open windows. All of this runs within one instance of a terminal window, so you don’t have to resort to tabs, awkward key bindings, any of it. Everything is there.
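
Most of those defaults can be adjusted in ~/.screenrc. As a purely illustrative sketch--these are common options, not my actual configuration--a minimal file might look like:

```
# ~/.screenrc -- an illustrative minimal configuration
startup_message off               # skip the copyright splash on launch
defscrollback 5000                # keep more scrollback per window
hardstatus alwayslastline "%w"    # show the window list on the bottom line
# escape ^Jj                      # uncomment to move the command key off of C-a
```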

There are a lot of additional features, most of which, I must admit, I don’t use. The truth is that the basic idea--taking a terminal window, which is by nature single-purpose and single-task, and making it possible to perform many different tasks inside of one window--isn’t a great technological or user-facing innovation in 2008. But there are a few nifty things that make Screen incredibly useful.

First, screen instances run as daemons (actually, I’m not sure that’s the correct term, but nevertheless), so you can detach a screen instance from the terminal it’s running in and reattach it later. We can imagine this being useful in a number of situations. If you’re working over SSH, you can not only have multiple tasks running over a single connection (multiplexed), but if the connection drops, or you need to move computers… your state is saved. Similarly, if you switch terminal emulators (xterm and urxvt, or gnome-terminal) you can save where you are. Screen makes it possible to log in and out of your system without losing where you were. Commands that are useful in these workflows: screen -ls to list existing screen instances, and screen -r ## to reattach a detached screen (if there’s more than one detached screen, specifying a unique PID number or fragment will let you pick between them). You can also specify a -D flag to detach the screen, and -RR to “force detach/reattach,” though I often run screen -DRR just for good measure.
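
To make that workflow concrete, here’s roughly what a detach/reattach cycle looks like (the session name here is just an example):

```
# start a new, named screen session; detach later with C-a d
screen -S writing

# later, possibly from a different machine over SSH
screen -ls              # list running/detached screen sessions
screen -r writing       # reattach by name (or by PID fragment)
screen -DRR writing     # force-detach from anywhere else, then reattach here
```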

Basically, the upshot of this functionality is that all of my terminal applications and work can be disassociated from a specific session or terminal emulator. While this might be my own particular oddity, there’s something that I rather enjoy about being able to separate the processes from their environments, whether physical (what hardware I use, given SSH) or specific environments on the hardware that’s in front of me (which also has certain stability and security benefits that are appealing).

In the past few months I’ve taken to running several network-connected console apps (the mcabber Jabber client and irssi, the IRC client) in screen instances, so that if I need to restart X for some reason, I can do so without popping off- and back online. More recently I’ve been using it to cut down on the number of terminal emulators I have running at any given time, as terminal emulators are rather bulky programs in comparison to Screen and the shell interpreter.
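
If you want to try the same trick, the sessions can be started detached and named so they’re easy to find again; something along these lines (the session names are arbitrary):

```
# start irssi and mcabber in their own detached, named sessions
screen -dmS irc irssi
screen -dmS jabber mcabber

# pull one up from any terminal, before or after restarting X
screen -r irc
```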

That old technology--it’s worth something. I know there are other Screen folks out there; there must be. What--to you--is the “killer feature” that I left out?

novel progress

I’ve not been writing very much or very regularly in the new year, unfortunately. I’ve had a lot of obligations at work, and I went to visit my grandmother last weekend, which took a lot of time (both while there and in preparation). And let’s not talk about Critical Futures, which is a full-length post unto itself at this point. In any case, I’ve been doing some work on the new novel, and I’ve managed to put the finishing touches on its first third. I’m working on getting things together--as preemptive as this might be--for a podcast publication sometime later this year. The progress on the story continues, and I like everything about the story. So there. Anyway. I thought it would be fun to mark the “one third done” milestone. Here’s to the next third!

3 Odd Properties of Blogging

Blogs are really awesome: they’re powerful tools for publishing and communication, and more than anything they represent the world wide web “coming into its own.” In recognition of this, it seems that everyone who’s interested in the internet is out there trying to “crack” blogging and understand what makes for a really great blog, and if you listen to them, they’ll tell you that successful blogging requires a solid niche focus, dynamic content (including video, audio, and pictures), strong, clear headlines with interesting hooks, regular posting, and keyword-optimized content.

Or something.

Actually, all of those suggestions sound pretty clever, but to be honest, I’m not sure they’re particularly likely to lead someone who “wants to be a more successful blogger” to actually, you know, be a more successful blogger. So, because I’m one of those folks who’s interested in the internet and trying to “crack” blogging, I’ll offer a short list of three things that I think make a big difference in “blogging success.” Whatever that means.

1. Location matters: I’d wager that the most successful blogs in America are written by people who live in New York City or San Francisco, with a small but respectable minority of successful blogs being generated out of Washington DC, Chicago, and Los Angeles. I theorize that real-life social networking remains very powerful on the internet: people read the blogs of people they know, and the blogs they learn about from their friends, and all of this happens in “meatspace.” If you don’t live in one of these cities, either move there, attend events in one of them, or be very active in a relevant local community.

2. Relationships are more important than audience: Remember how in high school composition class the teacher was always going on about how you should “be mindful of your audience”? Well, you should, but as a blogger you should be more mindful of your relationship with your audience than you’re likely to be in any other forum. Blogging is about conversations, about saying “hey friends, what do you think would happen if…” Work on building your relationships with the people who read your blog, or who might read your blog; that is likely to have the greatest single impact on your readership.

3. Volume is more important than brilliance: Fundamentally, blogging is an experimental medium. It’s more important to post every day, and maybe get a post every week or two that you think is brilliant and clever, than to post one brilliant and clever thing every week or two. There are of course exceptions to this, but as long as you’re trying to be brilliant, the volume will work in your favor. Every post can’t be a home run. The corollary is that regular posting alone isn’t the key to success, but it is part of almost every successful blog.

So that’s what I have for you. Thoughts?

Onward and Upward!

Digital Collections

I have a lot of digital stuff, certainly more than any of my computers have storage space for at this point, though to be fair we’re talking about 500 gigabytes of various collections (music, video, backups, documents). Particularly given the price of disks at the present moment, we’re not talking about anything too absurd. During my recent computer-juggling interlude, I realized that hard drive space wasn’t nearly the issue I used to think it was, and I suspect my situation isn’t terribly unique among geeks: space crunches are often not a technological problem but a user problem.

During numerous conversations with other geeks, it’s become clear that while we all have massive collections of files that take up lots of space (on the order of hundreds of gigabytes), most of us use only a small fraction of that space for about 98% of our computing. Everyone seems to have *ahem* come into large collections of files: copies of *ahem* our DVD collections, recordings of television shows, music collections, and the like. All these things take up a lot of space. The files that we use day in and day out? Much less. Even if you count my email, the full backup of my blog, my personal wiki, and everything else, we’re talking under half a gigabyte, give or take.
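
If you’re curious how your own numbers break down, a quick check is easy; the paths here are just placeholders for wherever your collections and working files live:

```
# the big media collections
du -sh ~/music ~/video ~/backups

# versus the files you actually touch every day
du -sh ~/mail ~/documents ~/wiki
```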

So what gives?

There are a couple of mediating factors that bear consideration:

First, the emergence of the “netbook” recognizes this fact. Netbooks have hard drives which float between 4 and 16 gigabytes. It’s not a lot of space, but as long as you’re just working with email and a selection of documents, it’s enough to keep you busy for a few days, if not months (I mean, really now, folks). We’re also at a point where storage is getting cheaper and more plentiful faster than our collections can grow.

The second factor is that “content” (video and music) doesn’t have particularly reliable distribution channels. If a TV show *uh* appears on your hard drive, the chance of it appearing again is pretty slim, and getting it again is often tedious. So our inclination is to save it. This ties into all sorts of multifaceted issues about data ownership, copyright, and digital distribution, but it’s also partially a user issue.


A few months ago, I was convinced that, given fears about backups, backup reliability, and verification, and the dropping price of online storage, something like Amazon S3--or hell, even something like Dreamhost (not in a web-accessible folder)--would probably be as effective as keeping the disks yourself. I’m not sure this is still the case, particularly with how cheap disks are (which makes redundancy easier). Also, I think there’s a point somewhere between one and two terabytes where even the worst digital pack-rats recognize that such archives are reasonably pointless.

In light of this, I guess the lingering questions are: do you have a big digital collection? How big? And do you have any particular strategy for dealing with these files?

Onward and Upward!

A Catalog of Open Source and Free Culture

I’ve been writing recently here about open source and free software, and what happens to the practices and ideologies of these projects when they “jump species” and start affecting the world outside of open source. This is, I suppose, part of a larger response/digestion of Christopher Kelty’s *Two Bits* monograph.

Having said that, this post isn’t a response per se but rather a catalog of all the various kinds of software and non-software projects that are connected in some way to open source and free software. I hope that such a catalog will be helpful in thinking more concretely about these issues. Without further ado:

Open Network Services

I’m using this as a banner for service-based software that derives inspiration from the free software movement but is based on network services (web sites, web applications, and so forth). The AGPL is a free software approach to dealing with the code, but I don’t think that a well-executed open network service is something that can--exactly--be conveyed with a license. Examples: identi.ca and gitorious.

Standard Network Protocols (e.g. IETF)

I wouldn’t have been particularly inclined to include this on my own, but from Kelty’s (2008) book *Two Bits* I realized that it fits. The Internet Engineering Task Force (IETF) is responsible for defining and maintaining the standards that make the Internet go, so that Linux developers and Apple developers and Blackberry developers can all write software that can “talk” to each other via the network. It’s not open source, exactly, but the most successful standards will be the ones that are most accessible and that a community feels at least partly responsible for (i.e., has input into), which is, in the end, a lot like open source.

Creative Commons

Though Creative Commons (CC) is in some ways the most obvious umbrella of non-software free software projects, I am almost a bit hesitant to include it here. Though I don’t have a very clear idea of the history, it seems like CC takes a “copyleft” (like the GNU GPL) approach to “hack” a very different problem. Where free software “hacks” an understanding of the collaborative nature of software, and the ability to tweak it, into copyright law, CC “hacks” an understanding of digital distribution and post-scarcity digital reality into copyright law. Similar, particularly at first blush, but underneath? Maybe not as much.

Un-conferences/BarCamps

These conferences are intended to be very ad hoc, and they tend to provide very open access to organizational information and participation. While these conferences aren’t anarchist in the contemporary sense, they practice openness in a way that resonates at least a little with the free software/open source movement.

Not-For-Profits/Community Organizations

I’m thinking of things like BucketWorks. While NFPs aren’t a new thing, I think increasingly they’ll be connected with the logic of open source. There are a lot of “businesses” that will never be capable of generating a huge return (coffee shops, yarn stores, book stores) that I think will be more likely to operate in an “open source” manner, led by communities, with “business” decisions being made by the community of users.

Wiki Projects

While I’m not sure that wiki projects like Wikipedia and Wikitravel are truly the non-software equivalents of open source/free software, their collaborative nature is familiar. There’s something about wikis that inspires their editors and contributors to be “exhaustive” in a way that I don’t think shares much with open source and its centralization on Unix-like systems.

Free Culture

Really, this is another huge category; to my mind it represents the “activist” types who “port” some of the ideas about software freedom to other domains, like music, or art, or writing. It tends to be explicitly ideological, rather than keeping the implicit quasi-agnosticism that free software itself often has.

Crypto-anarchists/Security Researchers

Having top-to-bottom control over the software you run on your computer would seem to appeal to the paranoid and cryptographically informed set. There’s obviously a lot of overlap with the typical software freedom hacker here, but I think the reason for using open source software (and other related tools) is distinct.

Revolutionaries

I include this because I think it’s important to note a distinction. While I think many “software freedom” people would argue that leftists/radicals/revolutionaries should use free software because it might embody the freedom they’re fighting for, I think it’s probably more realistic to expect that said revolutionaries would use open source tools because they are more available and powerful. Not that this argument is of much consequence; I include “revolutionaries” in this list because the rationale is different from the others.

Monolithic Content Management

When I was writing about the redesign of tychoish.com, I mentioned that I needed to write an essay here about “monolithic vs. micro (kernel) approaches to content management.” Referencing an old (and mostly settled) debate in operating system design1, I think it might be productive to bring some of these systems-design perspectives to the problem of content management systems for websites.

The analogy is imperfect for a lot of reasons, not least because it can operate on two basic levels. The most direct level would be the software itself: is the content management system (CMS) modular? If the database driver doesn’t work with the system you want to use, can you change it? If you don’t like the template engine, is it a simple matter to replace it? If you need additional functionality, can you drop in modules or plug-ins to provide these features?

In truth most systems are at least a little hybrid/microkernel-ish in their approach. Wordpress is pretty monolithic on the whole, but it provides a lot of access via the plugin system. While Drupal is very modular/microkernel, it still has a lot of functionality in the core system (and de facto core modules) that shape the way that most Drupal sites are developed. b2 and greymatter were almost entirely monolithic. I guess MediaWiki has some modularity, but it strikes me as a pretty monolithic system. And so forth.

The less direct level would be the relationship between what the content management system can provide and the website as a whole. Does the content management system handle all of the content internally, or does the system only handle one aspect of the content, like a blog or a wiki? For the record, I don’t think the interactions/relationships between these two levels are particularly important.

Tychoish.com and Critical Futures are both very monolithic sites--maybe a better term for this is monolithic architecture. This is to say that the entire content of the site is stored in the database and managed by the system. Monolithic architectures seem to be the preference these days: Drupal prefers these kinds of designs, and I think for a lot of sites “single system” has some appeal, particularly if there’s a large editorial staff or a desire to avoid editorial bottlenecks, as single systems make that a bit easier.

At the same time--though this is unpopular--I’d like to suggest that monolithic architecture might not be the best solution for all sites across the board. Why? Because it leads to more complex and abstracted tools that are harder for people to customize, and it makes it more likely that people will be forced into using a tool that almost does what they need and works with their chosen platform, rather than a tool that really does what they need but doesn’t work with their platform. Monolithic architectures also lead to a very non-UNIX-ish approach to software tools, even in cases where a more UNIX-ish approach might be appropriate.

To be clear, I think there are a lot of situations and individuals/organizations that do benefit from very monolithic site architectures (and users that benefit from monolithic tools), but it’s a decision that shouldn’t be made lightly, nor should people who build websites (myself included) assume that the more content you put in a system, the better the system is.

Thoughts?

Cheers!


  1. As I understand it, the debate is: microkernels have sophisticated and elegant designs that are highly modular and flexible, but they suffer from poor performance (increased communications overhead) and more places where faults can occur (as there are more “places”). In contrast, monolithic kernels are “old school” and not innately flexible or modular, but they’re fast, and once sufficiently developed, they either work well or they don’t work at all. The design difference is that monolithic kernels provide all sorts of services (networking, file systems, display drivers) themselves, in their own “space,” while microkernels just handle the lowest level internally and rely on other programs to do things like networking, file systems, and device drivers. As I understand it. ↩︎

Macbook Ubuntu

So after a lot of teeth gnashing over the past few weeks about what to do with my macbook since I’ve basically abandoned OS X for Linux1, I realized that I could probably just install Ubuntu on this hardware and be done with it.

So really, pretty much as soon as I had the thought, I began backing things up and started the install process. While my backup wasn’t perfect (I forgot the Applications folder, which is pretty replaceable, and dropping in ~/Library and /Library is pretty ineffective, so there’s some work there), the whole process was pretty quick. I assume that part of this is that I’m getting better at rolling up “my” Ubuntu installation, and being able to copy my config files over the network made it even quicker.
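
The config-file copying is the part that really speeds things up; something like the following is all I mean (the hostname and file list are purely illustrative):

```
# pull dotfiles from the other machine
scp thinkpad:.bashrc thinkpad:.screenrc ~/

# sync a whole config directory (e.g. for the Awesome window manager)
rsync -av thinkpad:.config/awesome/ ~/.config/awesome/
```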

There were a couple of things that are somewhat less than ideal. First, hibernation doesn’t work right, but that’s okay; I don’t expect I’d use it very much anyway. Second, right-clicking is a pain that I’ve not yet managed to fully resolve, and while the touchpad works, it’s too sensitive, and I haven’t figured out how to crank that down. I’ve managed to get right-click emulated with three-finger taps, which is functional if not ideal. I’ve also discovered how to turn off the touchpad with a shell command. Because the Awesome window manager requires the mouse only minimally, this actually works well and makes me a bit more efficient.
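
For the curious, the shell-command approach I’m talking about looks something like this, assuming the Synaptics driver and its synclient tool (the exact property names may differ on other setups):

```
# turn the touchpad off (and back on) -- easy when the window manager is keyboard-driven
synclient TouchpadOff=1
synclient TouchpadOff=0

# map a three-finger tap to button 3 (right click)
synclient TapButton3=3
```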

After not using the MacBook for several weeks and coming back to it, I’ve realized a few things. First, the screen looks really good. The ThinkPad has a digitizer and a significantly lower pixel density, and the monitors for my desktop both have lower pixel density and don’t have the glossy screen.2 The second thing is that the build quality on the MacBook is noticeably inferior to the ThinkPad: there are little case squeaks and flexes that I had dismissed initially.

Jack asked about external monitors (for projectors) and I don’t have anything to report on that front. I still have OS X installed and usable if there’s a pressing need, but I don’t even have the video adapter for the new MacBooks, so I’m not terribly worried. There are probably other things that I’ve also not had occasion to test.


  1. While I intended to just get the linux box for the desktop and keep the macbook, I quickly came to the realization that switching between modalities wasn’t terribly effective for the way that I worked, and after spending a week on the road using just my MacBook, I felt as if OS X was more distracting than it was useful. No matter how much I adored TextMate. ↩︎

  2. I hated the glossy screen at first because of the way it collects fingerprints and dirt, but by gum it looks really pretty. The desktops, by contrast, are huge, which is nice, but sometimes when I’m writing something more focused, all the extra space can be distracting. ↩︎