the future of universities

One element that has been largely missing from my ongoing rambling analysis of economies, corporations, co-ops, and institutions has been higher education and universities. Of course universities are institutions, and they function in many ways like large corporations, so, nostalgia notwithstanding, I don’t think it’s really possible to exempt universities from this conversation or to dismiss them from it.

Oh, and there was this rather interesting--but remarkably mundane--article that I clipped recently that addressed where universities are “going” in the next decade or two. I say mundane because I think the “look, there’s new technology that’s changing the rules of the game” approach is crappy futurism, and it really fails to get at the core of what kinds of developments we may expect to see in the coming years.

Nevertheless… Shall we begin? I think so:

  • The expansion of universities in the last 60 years or so has been fueled by the GI Bill and the expansion of the student-loan industry. With the “population bubble” changing and the credit market changing, universities will have to change. How they change is, of course, up in the air.
  • There aren’t many alternatives to “liberal arts/general education” post-secondary education for people who don’t want, need, or have the preparation for that kind of education at age 18. While I’m a big proponent of (and product of) a liberal arts education, there are many paths to becoming a well-rounded and well-educated adult, and they don’t all lead through a traditional four-year college education (or its equivalent), particularly at age 18.
  • Technology is already changing higher education and scholarship, in all likelihood faster than it is changing other aspects of our culture (publishing, media production, civic engagement, etc.). Like all of these cultural developments, however, the changes in higher education are probably not as revolutionary as the article suggests.
  • There will probably always be a way in which degree-granting institutions are a “useful” part of our society, but I think “The College” will change significantly, and the forthcoming changes probably have less to do with education and the classroom than with the evolving role of the faculty.
  • As part of the decline of tenure systems, I expect that we’ll eventually see a greater separation (but not a total disconnect) between the institutions that employ and sponsor scholarship and the institutions that educate students.
  • It strikes me that most of the systems that universities use to convey education online (Blackboard, Moodle, etc.) are hopelessly flawed. Whether because they’re difficult and “gawky” to use, because they’re proprietary systems, or because they’re not designed for the task at hand, all of the systems that I’m aware of are as much roadblocks to the adoption of new technology in education as anything else.
  • Although quality information (effectively presented, even) is increasingly available online for free, the things that make this information valuable in the university setting--interactivity, feedback on progress, individual attention, validation and certification of mastery--are exactly the things that universities (particularly “research”-grade institutions) perform least successfully.
  • We’ve been seeing research and popular-press coverage of the phenomenon of “prolonged adolescence,” where young people tend to have a period of several years post-graduation in which they have to figure out “what next”: sometimes there’s graduate school, sometimes there are odd jobs. I’ve become convinced that, in an effort to fill the gap between “vocational education” and “liberal arts/gen ed.,” we’ve gotten to the point where we ask people who are 18 (and who, for the most part, don’t have a clue what they want to do with their lives) to make decisions about their careers that are pretty absurd. Other kinds of educational options should exist that might help resolve this issue.

Interestingly, these thoughts didn’t have very much to do with technology. I guess I mostly feel that the changes in technology are secondary to the larger economic forces likely to affect universities in the coming years. Unless the singularity comes first.

Your thoughts, as always, are more than welcome.

dweebishness of linux users

I ran across a smear piece about Ubuntu users, written from the perspective of a seasoned Linux user, which I think resonates both with the problem of treating your users like idiots and, differently, with the kerfuffle over Ubuntu One, though this post is a direct sequel to neither post.

The article in question makes the critique (sort of) that a little bit of knowledge is a terrible thing, and that by making Linux/Unix open to a less technical swath of users, the quality of the discourse around the Linux world has taken a nosedive. It’s a “grumble grumble, get off my lawn, kid” sort of argument, and while the elitist approach is off-putting (though totally par for the course in hacker communities), I think the post does resonate with a couple of very real phenomena:

1. Ubuntu has led the way in making Linux a viable option for advanced-beginner and intermediate computer users, particularly since the beginning of 2008 (i.e., the 8.04 release). Ubuntu just works, and a lot of people who know their way around a keyboard and a mouse are, or can be, comfortable using Linux for most of their computing tasks. This necessarily changes the makeup of the typical “Linux user” quite a bit, and I think welcoming these people into the fold can be a challenge, particularly for the more advanced users who have come to expect something very different from the “Linux community.”

2. This is mostly Microsoft’s fault, but for people who started using (likely Windows-powered) computers in the nineties--which is a huge portion of the people out there--being an “intermediate” user means a much different kind of understanding than the one “old school” Linux users have.

Using a Windows machine effectively revolves around knowing which controls are where in the control panel, being able to “guess” where various settings are within applications, knowing how to keep track of windows that aren’t visible, understanding the hierarchy of the file system, and knowing to reboot early and often. By contrast, using a Linux machine effectively revolves around understanding the user/group/file permissions system, understanding the architecture of the system/desktop stack, knowing your way around a command-line window and the package manager, and knowing how to edit configuration files if needed.

In short, skills aren’t as transferable between operating systems as they may have once been.
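To make that contrast a bit more concrete, here’s a minimal sketch (in Python, assuming a Unix-like system and nothing beyond the standard library) of the user/group/other permission model that “effective” Linux users end up internalizing; the path in the last line is purely illustrative.

    # Show who owns a file and what its permission bits say.
    # pwd and grp are Unix-only modules; this won't run on Windows.
    import grp
    import os
    import pwd
    import stat

    def describe_permissions(path):
        """Print the permission string, owner, and group for a file."""
        st = os.stat(path)
        owner = pwd.getpwuid(st.st_uid).pw_name  # user that owns the file
        group = grp.getgrgid(st.st_gid).gr_name  # group that owns the file
        # stat.filemode() renders st_mode as the familiar "-rw-r--r--" style
        # string: read/write/execute bits for user, group, and everyone else.
        print(f"{stat.filemode(st.st_mode)} {owner}:{group} {path}")

    describe_permissions("/etc/passwd")

The point isn’t the code itself; it’s that a Linux user’s mental model is built around exactly these kinds of primitives, while a Windows power user’s mental model is built around a very different set of landmarks.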

Ubuntu, for all its flaws (a tenuous relationship with the Debian Project, a peculiar release cycle), seems to know what it takes to make a system usable with very little upfront cost: how the installer needs to work, how to provide and organize the graphical configuration tools, and how to provide a base installation that is familiar and functional to a broad swath of potential users.

While this does change the dynamic of the community, it’s also the only way that Linux on the desktop is going to grow. The transition from Windows power user to Linux user is not a direct one (while, arguably, the transition between OS X and Linux is reasonably straightforward). The new people who come to the Linux desktop are, by and large, going to be users who are quite different from the folks who have historically used Linux.

At the same time, one of the magical things about free software is that the very act of using it educates users about how their software and their machines work. This is partly intentional, partly a byproduct of the fact that much free software is designed to be used by the people who wrote it, and partly a consequence of free software’s adoptive home on UNIX-like systems. Regardless of the reason, however, we can expect even the most “n00bish” of users to eventually become more skilled and knowledgeable.


Having said that, and in direct response to the article in question: even though I’m a huge devotee of a “real” text editor, might it be the case that the era of the “do everything text editor” is coming to an end? My thought is not that emacs and vi are no longer applicable, but that building specialized, domain-specific editing applications is now easy enough that building such applications inside of vi/emacs doesn’t make the same sort of sense it made twenty or thirty years ago. Sure, a class of programmers will probably always use emacs, or something like it, but the prospect of emacs being supplanted by things-that-aren’t-editors is something that isn’t too difficult to imagine.

If the singularity doesn’t come first, that is.

a short story

About a week ago, by your reading, I finished writing a short story. The fact that I was writing a short story when I should have been working on the novel is perhaps a bit distressing, but I’ve come to the opinion that any work on short fiction--particularly short fiction where I’m excited about the project and reasonably happy with the results--is worth whatever attention and love I can spare for it.

So I took a break from my novel to write a short story. Most of my attempts at short fiction are so abortive that I was hesitant to even talk about it on the blog lest I jinx myself in some way.

But nevertheless, I got to a first draft. A first draft that has an ending which doesn’t suck. This is a major accomplishment.

I’m not going to talk too much about it now, as it still has to pass muster with my reviewers and get edited into something a bit less rambling, but for right now I’ve chosen to take pleasure in the accomplishment.

I will, however, say that the story is basically a compression of a lot of the ideas in the novel I’m writing. The short story is set about 10-15 years before the novel, but it has many of the same core characters, and--I guess--it reformulates the core issues of the novel’s story in a different context.

Oh, and it’s a pretty cool space-adventure at the same time.

Because that’s how I swing.

short story lengths

I write this post as I am (theoretically) putting the finishing touches on a short story that I’ve written.

“But shouldn’t you be working on a novel?” You ask.

Well, yes. But this short story is related to the novel, and any time I have the overwhelming urge to write a short story, I’m prone to indulge it, because I’m not much of a short story writer by temperament and I think it’s a good practice/skill to encourage.

Anyway, so I’m writing this short story. And it’s coming in at right around 6,500-7,500 words for the first draft, which isn’t bad. Actually, the whole thing isn’t terribly wretched, which is kinda awesome.

In any case, I wanted to explain one part of my challenge in writing and thinking about writing short form.

You see, for a long time, for some reason, I thought that “short stories” were all around 2,000 words, and that longer things were really novellas, or novelettes at the very least.

Which is totally false: the shortest (non-flash) short stories are at least 2,000 words (typically), and novelettes don’t start (according to SFWA) until 7,500 words. This was so embedded in my brain that I would read (or listen to) stories that were clearly 5,000-10,000 words and think to myself, “Isn’t it amazing how they fit that much story into 2,000 words?”

sigh

And while I’m not sure having a realistic notion of short story length has made me a better writer of short forms, it’s made it possible to write shorter forms.

And that, my friends, is a start.

[Edit: I totally finished the short story and twittered about it yesterday morning, as of this posting; I’ll post about that later. Anyway, there’s impending big news that I think I’ll be ready to talk about on Monday or so. Stay tuned and have a good weekend.]

the dark singularity

I read a pretty cool interview with Vernor Vinge in H+ Magazine, where he talked about the coming technological singularity, and I thought it was really productive. I’ve read and participated in a lot of criticism of “Singularity Theory,” where people make the argument that the singularity is just a mystification of the process of normal technological development, that all this attention to the technology distracts from “real” issues, and/or that the singularity is too abstract, too distant, and will only be recognizable in retrospect.

From reading Vinge’s comments, I’ve come to several realizations:

  • Vinge’s concept of the singularity is pretty narrow, and relates to the effect of creating human-grade information technology. Right now, there are a lot of things that humans can do that machines can’t; the singularity, then, is the point where that changes.
  • I liked how--and I find this to be the case with most “science theory”--the scientists often have very narrow theories while the popular press forces a much broader interpretation. I think we get too caught up in thinking about the singularity as this cool, amazing thing that is the nerd version of “the second coming,” and forget that the singularity would really mark the end of society and culture as we know them now--that it’s a rather frightening proposition.
  • Vinge’s comparison of the singularity to the development of the printing press is productive. He argues that the printing press was conceivable before Gutenberg (they had books; the effects, admittedly, were unimaginable) in a way that the singularity isn’t conceivable to us given the current state of our lives and technology. In a lot of ways, we tend to focus on the technological developments required for the singularity without attending to the social and cultural facts. The singularity is really about the outsourcing of cognition (writing, computers, etc.) rather than about cramming more computing power onto our microchips.

As I begin to understand this a bit better--and it’s pretty difficult to grok--I’ve begun to think about the singularity and post-singular experience as a much darker possibility than I had heretofore imagined. There are a lot of problems with “the human era,” and I think technology, particularly as humans interact with technology (e.g., as cyborgs), is pretty amazing. So why wouldn’t the singularity be made of awesome?

Because it wouldn’t be--to borrow an idea from William Gibson--evenly distributed. The post-human era might begin with the advent of singularity-grade intelligences, but there will be a lot of humans left hanging around in the post-human age. Talk about class politics!

Secondly, the singularity represents the end of our society in a very real sort of sense. Maybe literature, art, journalism, manufacturing, farming, computer terminals and their operating systems (lending a whole new meaning to the idea of a “dumb terminal”), and the Internet will continue to be relevant in a post-human age. But probably not exactly. While the means by which these activities and cultural pursuits might be obsoleted (tweaking metabolisms, organic memory transfer, inboard computer interfaces) are interesting, the death of a culture is often a difficult and trying process, particularly for the people who are most invested in it (academics, educators, writers, artists, etc.). “Unintelligible” is sort of hard to grasp.

And, I think, frightening as a result. Perhaps that’s the largest lesson I took from Vinge’s responses: the singularity is, on many levels, something to be feared. When you think about the singularity, the response on some visceral level should be “I’d really like to avoid that,” rather than “Wouldn’t it be cool if this happened?”

And somehow that’s pretty refreshing. At least for me.

the future of content

I finally listened to John Gruber and Merlin Mann’s podcast of their talk at the 2009 SXSWi conference on “how to succeed at blogging/the Internet,” and this, in combination with my ongoing discussion with enkerli about the future of journalism and an article about Gawker Media, has prompted a series of loosely connected thoughts:

  • Newspapers are dead, dead, dead. This isn’t particularly groundbreaking news, but I think it’s interesting to make note of this fact because of the following corollary:

  • The media/content industry on the Internet has been unable to develop a successful business model for funding the creation of content to replace the business model of the newspapers (where newspapers fund websites and writers) with a model that doesn’t revolve around advertising.

  • I’ve been trying to figure out what constitutes success at this “content creation thing” for a while, and I don’t think I have a good answer for what those markers of success are. Page views are certainly a part of it, and the volume of comments and/or the number of Twitter followers you have may be markers of success as well, but I think we need to get to a place where we think of success as something a bit less concrete.

    Success might be landing a cool new job because your blog impresses someone. Success might be having enough of a following to be able to sell enough copies of your book/CD/etc. to support yourself. Success might be having enough page views to support the site with advertising. Success might be five people whose opinion you care about reading your site. Success might be steady progress in the direction of having a readership that eclipses the circulation of the print publications in your field.

    If we use these kinds of standards to judge our work, rather than the standards of old-school publishing (page views), it becomes easier to make meaningful qualitative judgments of success.

  • Though I think they’re largely correct about success, Gruber and Mann’s suggestions fail to explain their own success.

    I think Merlin Mann is successful because he was friends with people like Cory Doctorow and Danny O’Brien at the right moment, because the GTD thing happened, because he’s pretty funny, and because MacBreak Weekly emerged at the right time and he played a big role in making that podcast successful. At the same time, I think Gruber is successful because he took Apple Computer seriously at a time when no one really did. And he wrote this thing called Markdown. This isn’t to say that either is undeserving of their success--hardly--but while their advice to “passionately do your thing and embrace the niche-yness and uniqueness of what you do” is good, I don’t think that’s all it’s going to take to be successful in the next five years.

Additionally, I think there are a couple of unnecessary assumptions that we’ve started to make about the future of content on the Internet that are worth questioning. They are, quickly:

  • Blogging as we have known it will endure into the future.
  • Blogging is being fragmented by the emergence of things like Twitter and Facebook.
  • User-generated content (e.g., YouTube and Digg) will destroy professional content producers (i.e., NBC and Slashdot/the AP).
  • (Creative) Content will be able to survive in an unstructured format.
  • MediaWiki is the best software to quickly run a wiki-based site.
  • Content Management Systems (Drupal, WordPress, MediaWiki, etc.) and web programming frameworks (Django, Rails, Drupal) are stable and enduring in the way that we’ve come to expect operating systems to be stable and enduring.
  • Content Management Systems, unlike the content they contain, can mostly survive schisms into niches.
  • The key to successful content-based sites/projects is “more content,” followed by “even more content” (i.e., quantity trumps all).

If the singularity doesn’t come first, that is.


ps. As I was sifting through my files, I realized that this amazing article by Jeff VanderMeer also influenced this post to some greater or lesser extent, but I read it about a week before I listened to the podcast, so I wasn’t as aware of its influence. Read that as well.

Midwest Morris Ale Round Up

I wrote a post about the 2008 Midwest Morris Ale as a series of vignettes of great moments and memories from that ale.

This year I don’t have quite the same kinds of stories, or new stories, really:

  • There was a killer cool ad-hoc set of “Queen’s Delight” (Bucknell), my ongoing favorite dance. I handpicked the set after the organized portion of the tour ended, and we did well. Very fun.

  • During dinner Sunday night, there was a little ad-hoc moment where a bunch of people sang some songs in a hallway with good acoustics. This is one of my favorite things to happen upon; it’s hard to plan, and you just have to be lucky enough to be in the right place at the right time. Songs sung included the ever-favorite “Let Union Be In All Our Hearts” and (at my request, mostly) “When We Go Rolling Home/Round Goes the Wheel of Fortune.” Brilliance.
  • There’s a dance called “Flowers of Edinburgh” (something more or less like this, except we do it double time, and current midwestern trends in the Bampton tradition are a bit different). Anyway, while the choreography is simple, the dance is physically challenging in the extreme. It’s one of those dances that doesn’t get done much in daylight. In any case, someone came up to me and said “Sam! We should do Flowers!” and I of course said yes, and both did the dance and called it. My legs are still sore from the experience, as I think there are several muscle groups that humans need only for this dance, and for nothing else. In any case, I find this disturbing/hilarious mostly because I’ve become the guy you go to when you want to do this dance. Sigh.
  • On the injury front, I think I’m doing pretty well, and I definitely think that all of the exercise and stuff I did this year has helped my ability to dance better and longer in pretty noticeable ways. I wasn’t totally unharmed: I basically used up my voice too quickly (calling dances, singing), and I sprained my knee (or something) fairly seriously on the last night doing Queen’s Delight (again), which put the kibosh on my dancing. Thankfully that happened near the end, and I hope a few days of rest, stretching, ice, and anti-inflammatories will have me back in dancing condition.

Spending a weekend away with “my people,” people I don’t get to see very much, was (and is) an incredibly powerful experience. I think that many folks have “going and hanging out with our peeps” moments (academic conferences, science fiction conventions, various retreats), and beyond this comparison I don’t have a very good way of articulating why this Morris dance gathering I attend is so amazing for me.

In other news, I’ll be putting up some videos that my mother took on YouTube and flickr in the next few days. So stay tuned for that, and I’ll get back to (and continue) posting things here.

Cheers!

knitting progress ahoy

I have knitting progress to report. In three parts.

1. I have finished my second Pi shawl of the year. The first I completed in March for my grandmother. The second I completed a couple of weeks ago, and while it was a shawl created without a goal (or particular purpose), it is an accomplishment of some note. I have yet to block the shawl, but, as I’m not sure where it’s supposed to end up, I’m in no rush. It’s also supremely huge, so I’m not sure I have a good place to block it.

2. I finished my part of my contest entry for my knitting camp. Watch out, the rest of you camp-3ers: it’s going to be massively awesome and weird. Lots of weird.

3. This leads us to the most exciting knitting-related conclusion I’ve had to announce in quite a while: my works-in-progress list is way down. I have two sweaters on the needles, a sock (no rush, plain knitting), and a cobweb shawl (which I don’t have particular need or inspiration to work on). One of the sweaters just has one sleeve left to go, and the other is almost to the armholes.

This is incredibly exciting. While I would like to get both of these sweaters done by the time I go to camp (which will be a bit of a stretch), it’s not a requirement; my show-and-tell is something else entirely. And I get to knit sweaters. I love knitting sweaters.


Pictures are forthcoming. Also, the sweater with only a sleeve left to go is indeed the “Latvian Dreaming” sweater, which I started designing and working on a year ago (or more). It’s good to be closing in on that so I can get the instructions up on the web. While I did have a big knitting hiatus this year, and while I have been back knitting in some form for several months now, I’ve felt more like I’ve been in “production” mode rather than “enjoyment” mode… until now.

So it’s good to be back.