How Hacker News and Social Editing Have Jumped the Shark

How’s that for a Malcolm Gladwell-style title?

Alex Payne posted a piece on why he thought that, basically, Hacker News had jumped the shark. He’s right, of course, but I think that his analysis of the root cause of the problem, and therefore the solution he proposes, is a bit too optimistic.

It seems to me that all of these socially edited, link-based sites work the same way: lots of people submit links, the community votes on those stories, and the result is a set of headlines and a filtered, sorted selection of content shaped by the appetites of the community.

Hacker News tried this, and succeeded for a time, because the pool of contributors (people who voted on, submitted, and commented on items) was small and focused enough that the content managed to stay coherent overall. There wasn’t enough content to overwhelm either the readers or the potential contributors, and most things were interesting to most people. It was a golden age.

I think the problem with this model of generating content is that the golden ages don’t last very long. Sites “jump the shark” as the tightly focused content of the early days gives way to looser, less specialized, and more self-interested submissions and selections. The factors that I think lead to this are:

  • There’s too much content. Such sites should be “filters,” and their fundamental service is to take the whole internet and tell readers “you should be reading this because we think it’s interesting/important.” As communities of editors grow, as the marketing power of filter sites increases, and if the “cost” of submitting a link remains constant, then the use of the filter breaks down and everyone gets overwhelmed.
  • There’s not enough focus/responsibility on the part of the editors. When you have a few dedicated and professional editors, you begin to see consistency of perspective and approach in the content that is curated. When this function is distributed among a large group of community members, amazing things can happen, but that’s not a guarantee.
  • Community-edited sites tend to become incredibly self-interested after a certain point. There’s a certain kind of story, either about the community itself or one that strikes submitters as being “about the kind of people who participate in the community” or “the kind of thing that people who read the site might like,” rather than something that actually interests them. Perhaps some of this comes from the game-based dynamic of rating systems and karma; perhaps it’s just a thing that happens.

In many ways, Trivium is probably the best example of a link filter blog. Alex Payne points to MetaFilter as an example of a site which has solved this problem, and although I have had a MeFi account for years, I’ve never been able to really get into it. Long story short, we still need editors and editorial vision, and the issues we’re seeing aren’t about the “focus” of a community so much as they are a property of communities themselves.

Java and Me

Stan reminded me recently that I have now written two posts about Java programming and software development. For something that I admittedly don’t particularly care for, and don’t know a great deal about, I’ve sure ranted a lot about it. I think I keep returning to think about Java because of how incredibly important Java is to the technology we use and how prevalent Java development remains.

Maybe I’ve read too much by the RedMonk folks, but they tend to take a very productive approach to these kinds of things. For reference, my posts on the subject are:

Most “end-users” don’t really care much about things like Java except when it doesn’t work, as is the case when some component of the Java platform isn’t present when you want to run a Java program, or when you run a “cross platform” Java application that doesn’t really work as intended on your platform. For a long time these two issues were prevalent enough that being written in Java was a discernible quality of an application. In most situations, computer programs are just computer programs.

With one exception.

The way software developers use computers and interface with technology leads and constrains the technological reality for the rest of us. For instance, in Google Reader you can scroll up and down using the “j” and “k” keys, which is derived from the interaction paradigm of the Vi family of text editors. It’s great, but it was almost certainly put into the software because a developer on the project was a Vi user. While most features are driven by formal design processes, so much of the way software works is led by the way developers think about software.

Ultimately, Java developers are what make Java important, not just for me but for everyone.

Searching for Known Results

(Note: I was going through some old files earlier this week and found a couple of old posts that never made it into the live site. This is one of them. I’ve done a little bit of polishing around the edges, but this is as much a post for historical interest as it is a reflection of the contemporary state of my thought.)

This post is a follow up to my not much organization post, and as part of my general reorganization, I’ve been toying with anything for emacs, which is a tool, or set of tools, that provides search-based interaction with common tasks (opening files, finding files, accessing other information, etc.) in real time. Mmmm, buzzwords. Think of it as being like Quicksilver or Launchy, except for emacs. I’ve come to a conclusion that I think is generalizable, but made particularly obvious by this particular problem space.
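To make the “search-based interaction” idea concrete, here is a minimal, hypothetical sketch (in Python, purely for illustration; anything itself is Emacs Lisp, and this is not drawn from its code) of the narrowing behavior these tools share: every keystroke filters the candidate list in real time.

```python
# Hypothetical sketch of incremental "narrowing" search, the interaction
# paradigm behind tools like anything, Quicksilver, and Launchy.
# The file names below are made up for the example.

def narrow(candidates, query):
    """Return candidates containing every space-separated query term."""
    terms = query.lower().split()
    return [c for c in candidates if all(t in c.lower() for t in terms)]

if __name__ == "__main__":
    files = [
        "~/notes/journal.org",
        "~/projects/novel/chapter-12.txt",
        "~/wiki/technical-writing/filters.mdwn",
    ]
    # Each successive "keystroke" narrows the visible list further.
    for query in ("wri", "wri fil", "wri fil mdwn"):
        print(query, "->", narrow(files, query))
```

The real tools fuzzy-match and rank candidates, of course, but the basic interaction is this narrowing loop.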

Search, as an interface to a corpus, is only more effective than other organizational methods when you don’t know where the thing you’re looking for is located, or when you don’t understand the organizational system that governs the collection where your object lives. When you do know where the needed object is, search may be more cumbersome.

This feels obvious, when put this way, but it is counter to contemporary practice. Take the Google search use case where you find websites that you already know exist. You’d be surprised at how many people find this site by searching for “tychoish” or “tycho garen blog.” These are people who already know that the site exists and are probably people who have visited the site already. Google is forgiving in a way that typing an address into the address bar is not.

This works out alright in the end for websites: there’s no organizing standard for mapping domain names to websites. This is mostly because we don’t, in present practice, use the domain name system the way it was originally intended; domain names are “brands” rather than descriptions of a domain of systems and services. In the end this is not a huge problem, since Google is around to help sort things out.

Similarly, “desktop search” tools are helpful when you have a bunch of files scattered throughout file systems with lots of hierarchy (directories and sub-directories). When you know where files are located, search is less helpful. This is not to say that it’s ineffective: you’ll find what you’re looking for, it’ll just take longer.
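As a rough, hypothetical illustration of the difference (the paths and helper names here are made up): opening a file you can already locate is a single operation, while “resorting to search” means walking the whole hierarchy.

```python
import os

def open_known(path):
    # You already know where the file lives: one operation.
    return open(os.path.expanduser(path))

def open_by_search(root, name):
    # You only know the name: walk the entire tree until it turns up.
    for dirpath, _dirnames, filenames in os.walk(os.path.expanduser(root)):
        if name in filenames:
            return open(os.path.join(dirpath, name))
    return None

# Both calls (assuming the file exists) return the same file handle;
# the second simply does more work, and the gap grows with the collection.
# open_known("~/notes/journal.org")
# open_by_search("~", "journal.org")
```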

I think this theory on the diminishing utility of search tools holds up, though I don’t exactly know how to do the research to develop the idea in a more concrete direction. Having said that, I think the following questions are important.

  • Are there practical ways to organize our files, without too much over-thinking, that make “resorting to search” less necessary before a collection grows unmanageable?
  • Is building search tools for people who work with a given body of data (and are therefore familiar with it, and less likely to need search) different from building search for people who aren’t familiar with a given corpus?

Onward and Upward!

Anti-Social Media

I’ve been playing with this idea for a Critical Futures blog post for a few days, so you’ll probably see this again at some point. Still, I wanted to pose a couple of questions that have been nagging at me for a while:

  • Does the fact that we think of content as something that is becoming increasingly user generated, or generated outside of traditional professional structures, affect writers' ability to survive from an economic perspective?
  • Does this “crowd sourcing” (if you’ll indulge me) mean that everyone will think of themselves as writers henceforth? While that’s potentially inspiring from the perspective of democracy, it feels hard to maintain from a literary/textual culture perspective. If everyone is a writer, is there an audience of readers for any kind of writing (fiction, critical, or non-fiction) separate from writers? If not, is there enough audience amongst fellow writers to support the project of writing? (Answer: doubtful.)
  • I’m totally willing to accept that the publishing industry as we’ve come to know it is undergoing (and will continue to undergo) great change. At the same time, “great change” means (I think) that some practices will need to change in a fundamental sort of way. It is not enough to manipulate the length of periodicals' publishing schedules in order to get them to appear profitable for a while, and it’s not enough to publish lots of small runs of books on tight budgets that break even really fast.

These strategies delay the inevitable, but don’t address several fundamental problems:

  • The group of readers (i.e., the audience) is significantly smaller than the public at large. If we want to grow audiences for our books, blogs, wikis, and other (hyper?)textual products, we need to enlarge the group of people who read.
  • Most people who fail to read any given text on any given day do so because they didn’t know it existed, and probably because they felt like they didn’t have time for it.

Largely, I think these issues can be extrapolated to other forms of media; I’m just a writer and think of things in terms of essays, articles, stories, and novels.

In any case, I think the way to save the “media industry,” and media creators in particular, is to figure out how to get more people to read and how to improve the discovery process. Hefty challenges, for sure.

Onward and Upward!

Caring about Java

I often find it difficult to feign interest in the discussion of Java in the post-Sun Microsystems era. Don’t get me wrong, I get that there’s a lot of Java out there, and I get that Java has a number of technological strengths and advantages in contrast to some other programming platforms. Consider my post about worfism and computer programming for some background on my interest in programming languages and their use.

I apologize if this post is more in the vein of “a number of raw thoughts,” rather than an actual organized essay.

In Favor of Java

Java has a lot of things going for it: it’s very fast, and it runs code in a VM that executes in a mostly isolated environment, which increases the reliability and security of the applications that run on the Java Platform. I think of these as “hard features,” or technological realities that are presently implemented and available for users.

There are also a number of “soft features” that inspire people to use Java: an extensive and reliable standard library, a large expanse of additional library support for most things, a huge developer community, and inclusion in computer science curricula, so people are familiar with it. While each of these aspects is relatively minor, and could theoretically apply to a number of different languages and development platforms, together they represent a major rationale for its continued use.

One of the core selling points of Java has long been the fact that because Java runs on a virtual machine that abstracts the differences between operating systems and architectures, it’s possible to write and compile code once and then run that “binary” on a number of different machines. The buzzword/slogan for this is “write once, run anywhere.” This doesn’t fit easily into the hard/soft feature dichotomy I set up above, but it is nevertheless an important factor.

Against Java

Teasing out the history of programming language development is probably a better project for another post (or career?), but while Java might once have had greater support for many common programming tasks, I’m not sure that its sizable standard library and common tooling continue to overwhelm its peers. At best this is a draw with languages like Perl and Python, but more likely the fact that the JDK is so huge and varied increases the potential for incompatibility, to say nothing of needing to download the whole JDK to run even minimalist Java programs. Other languages have addressed tooling and library support in different ways, and I think the real answer to this problem is to write with an eye toward minimalism and to make sure that there are really good build systems.

Most of the arguments in favor of Java revolve around the strengths of the Java Virtual Machine, which is the substrate where Java programs run. It is undeniable that the JVM is an incredibly valuable platform: every report that I’ve seen concludes that the JVM is really fast, and the VM model does provide a number of persuasive features (e.g., sandboxing, increased portability, performance gains). That’s cool, but I’m not sure that any of these “hard” features matter these days:

Most programming languages use a VM architecture these days. Raw speed, of the sort that Java has, is less useful than powerful concurrent programming abilities, and is offset by the fact that computers themselves are absurdly fast. It’s not that Java fails because others have been able to replicate the strengths of the Java platform, but it does fail to inspire excitement.

The worth of Java’s “cross platform” capabilities is probably negated by service-based computing (the “cloud”) and the fact that cross-platform applications, GUI or otherwise, are probably an ill-conceived dream anyway.

The more I construct these arguments, the more I keep circling around the same idea: while Java pushed a lot of programmers and language designers to think about what kinds of features programming languages need, the world of computing and programming has changed in a number of significant ways, and we’ve learned a lot about the art of designing programming languages in the meantime. I wonder if my lack of enthusiasm (and yours as well, if I may be so bold) has more to do with a set of assumptions about the way programming languages should be that haven’t aged particularly well. Which isn’t to say that Java isn’t useful, or that it is no longer important, merely that it’s become uninteresting.

Thoughts?

Philosophy of the Present, Egypt

I’ve been watching the Egyptian revolution, off and on, since it started. There’s so much interesting stuff going on: the pragmatics of political organization, the foundations of revolutionary movements, the evolving state of American political power, and the way that Egyptians are racialized, particularly in contrast to Iranians and Tunisians.

The aspect I’m most interested in is what Western analysis of “January 25th” tells us about how the West has made sense of past revolutionary moments of the last 50 years, notably the Iranian revolution and May '68.

Largely I fear that it is way too soon to really say anything terribly useful on the subject. That hasn’t stopped people, of course.

I’m not sure what to make of either Ken MacLeod’s or Slavoj Žižek’s articles on the subject. There are aspects of this kind of theorizing that I really like and that really appeal to me. At the same time, optimism seems foolhardy.

There’s work still to be done: both theoretical and revolutionary. But isn’t there always?

The Rise and Fall of Netbooks

“I’m old fashioned,” R. said to me in an email, with a link to an article about how tablets have replaced and supplanted netbooks.

In many ways, you have two netbooks: the little one that’s been broken since May that I’m fixing, and your real laptop. Which is to say: the advance of netbooks was not so much the small form factor as the fact that they were underpowered computer systems meant to be used mostly with web-based applications.

Initially, netbooks were to have low-capacity solid state drives and run Linux-based OSes. That was cool, for a while, but the cheap solid state drives turned out to perform more poorly than people expected, and conventional hard drives became very cheap and available. Also, Microsoft got scared and, having seen that small-form-factor computing was a real thing, adapted its strategy to seriously target these kinds of devices.

At which point everyone realized that there wasn’t a lot of point in making really small laptops: they were hard to type on, and people fundamentally wanted them to do everything for which you needed “big computers.” So companies started making “big netbooks.” The end result is that most 14"-15" laptops are basically big netbooks (including having similar resolutions). The extra size is nice for most mundane uses, and for most people the “mobility niche” is filled by smartphones anyway, rendering netbook-sized devices superfluous. Except they’re still around, in different packaging.

Technology, I think, rarely fails. Rather, it gets reimplemented and reabsorbed by the next iteration of the technology. If we don’t pay attention we may miss the connections between iterations, but they are there.

Interestingly, and perhaps orthogonally, Linux led the development of netbooks. Though tablets are different, and the history is less easily accessible, I think the same thing is happening there. It’ll be interesting to see how that pans out.

Progressions

I’ve been somewhat remiss in posting here. Nevertheless, I’ve managed to get rather a lot of things done in the last couple of weeks, which I think merits an update post.

I’ve posted two new things to Critical Futures: my final post--for now--about dexy, called make all dexy, and a post about new media, What we Learn from WikiLeaks.

I’ve also updated the Critical Futures Archive, particularly the posts in the technical writing series and the new media series. I think hand-crafted archives are incredibly valuable, but they’re hard to maintain, which I suppose is the point, and I’ve been a poor steward over the past few years. At the moment, the full archives of Critical Futures (nee tychoish.com, nee tealart.com) are listed on the archive page. The content has never gone anywhere, nor do I think the old content is any better than I used to think it was, but it’s good to not have it totally hidden. I’m also trying to keep the hand-crafted archives more up to date. It’s a struggle.

I’ve also been doing some wiki work around these parts, mostly in the technical writing section, including new pages about automicity, automation, compilation, dexy (tag), and /technical-writing/filters. I’m also working on ways of marrying this wiki with the blog, by way of the critical futures section. It’s in progress, of course, but there’s a blurb at the bottom of all CF posts that says:

Disqus comments are provided to support legacy comments on old posts, and as a last resort in cases where other means of communication are ineffective. Otherwise please consider leaving a comment.

The link doesn’t work as well as I want it to, and maybe I need to set up a specific comments page adding widget, as on the submissions page. Couldn’t hurt. Also on my list of things to do: make a more fitting index page that draws attention to all sorts of content on the wiki, not just rhizome. That’s on the list.

Also on the list:

  • Writing fiction like crazy: I’m working on the penultimate chapter of the novel.
  • Something with the Cyborg Institute. It needs to happen.
  • More regular blogging here. It’s on the list!