Advice for Blogging Successfully

Although I sometimes forget it, the following video is probably the single best piece of advice for better blogging. Watch the video:

Cory Doctorow - How to be an Uber Blogger

Attention and time are scarce while content is plentiful. “If you write it, they’ll read” cannot, therefore, be true of blogging. Interesting and important content is necessary but not sufficient: success lies in managing the attention economy and focusing on output that is easy to read and easy to skim. It’s not glamorous, and it requires giving up a fair amount of pride, but writers must write with readers in mind.

I’ll skip the meandering analysis and get to a couple of key questions that I think remain open, even 4 years after this video was posted:

  • What about non-newsreel blogs? Blogs that are more analysis and less regurgitations of boingboing/metafilter/slashdot?
  • What about non-blog content? Are books and articles subject to the same overload (yes?), and is the solution always “write easy-to-process, easy-to-skip content”? (Maybe?)
  • How do the “attention” economy, evolving search-engine use patterns, and mobile technology change the way that we interact with and compensate for overload?

Discuss!

On Wireless Data

It’s easy to look around at all of the “smart phones,” iPads, and wireless modems and think that the future is here, or even that we’re living on the cusp of a new technological moment. While wireless data is amazing, particularly compared to where it was a few years ago--helped along by a better understanding of how to make use of it--it is also true that we’re not there yet.

And maybe, given a few years, we’ll get there. But it’ll be a while. The problem is that too much of the way we use the Internet these days assumes high quality connections to the network. Wireless connections are low quality regardless of speed, in that latency is high and dropped packets are common. While some measures can be taken to speed up the transmission of data once connections are established, the resulting sense of better quality is mostly illusory.

Indeed, in a lot of ways the largest recent advancements in wireless technology have been in how applications and platforms are designed for the wireless context, rather than in the transmission technology itself. Much of the development in the wireless space in the last two or three years has revolved around making a little bit of data go a long way, using the (remarkably powerful) devices for more of the application’s work, and figuring out how to cache data for “offline use” when it’s difficult to use the radio. These are problems that can be addressed and largely solved in software, although limitations and inconsistencies in approach continue to affect user experience.

As a result, we have a couple of conditions. First, we can transmit a lot of data over the air without much trouble, but data integrity and low latency are things we may have to give up on. Second, application development paradigms that can take advantage of this will succeed. Furthermore, I think it’s fairly safe to say that successful mobile technology will develop in this direction rather than against these trends. Truly real-time mobile technology is dead in the water, although I think some simulated real-time communication works quite well in these contexts.

Practically, this means applications that tap an API for data that is mostly processed locally. Queue-compatible message passing systems that don’t require persistent connections. Software and protocols that don’t assume you’re always “on-line,” and that can store transmissions gracefully until you come out of the subway or get off a train. Of course, this also means that applications and systems designed to use data efficiently will be more successful.

The notion that fewer transmissions consisting of bigger “globs” of data will yield better performance than a large number of very small intermediate transmissions still seems terribly foreign. It shouldn’t be: this stuff has been around for a while, but nevertheless here we are.
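
To make that concrete, here’s a minimal sketch of the store-and-forward pattern in Python. The endpoint URL and message format are hypothetical stand-ins, and a real implementation would want persistence and retry backoff:

import json
import urllib.request
from collections import deque

class StoreAndForwardQueue:
    """Accumulate outgoing messages locally; flush them as a single
    batched request whenever the radio is actually usable."""

    def __init__(self, endpoint):
        # hypothetical batch-ingest URL, e.g. "https://example.com/ingest"
        self.endpoint = endpoint
        self.pending = deque()

    def send(self, message):
        # Never touch the network for one message; just enqueue it.
        self.pending.append(message)

    def flush(self):
        # One big "glob" of data instead of many small transmissions.
        if not self.pending:
            return True
        body = json.dumps(list(self.pending)).encode("utf-8")
        request = urllib.request.Request(
            self.endpoint, data=body,
            headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(request, timeout=10)
        except OSError:
            return False  # still offline; keep the batch and retry later
        self.pending.clear()
        return True

The caller invokes flush() on a timer or on a connectivity-change event, so the user never has to wait on the radio.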

Isn’t the future grand?

Jekyll and Automation

As this blog ambles forward, albeit haltingly, I find that the process of generating the site has become a much more complicated proposition. I suppose that’s the price of success, or at least the price of verbosity.

Here’s the problem: I really cannot abide dynamically generated publication systems: there are more things that can go wrong, they can be somewhat inflexible, they don’t always scale very well, and they seem like horrible overkill for what I do. At the same time, I have a huge quantity of static content on this site, and it needs to be generated and managed in some way. It’s an evolving problem, and perhaps one that isn’t of great specific interest to the blog, but I’ve learned some things in the process, and I think it’s worthwhile to do a little bit of rehashing and extrapolating.

The fundamental problem is that rebuilding tychoish.com takes a long time, mostly because of the time it takes to convert the Markdown text to HTML: a couple of minutes for the full build. There are a couple of solutions. The first would be to pass the build script some information about when files were modified and have it rebuild only those files. This is effective but ends up being complicated: version control systems don’t tend to version mtime, and importantly there are pages in the site--like archives--which can become unstuck without some sort of metadata cache between builds. The second solution is to provide very limited automatically generated archives, regenerate only the last 100 or so posts, and supplement the limited archive with more manual archives. That’s what I’ve chosen to do.

The problem is that even the last 100 or so entries take a dozen seconds or more to regenerate. This might not seem like a lot to you, but the truth is that at an interactive terminal, 10-20 seconds feels interminable. I’d spent a lot of time recently trying to fix the underlying problem--the time it took to regenerate the HTML--when I realized that the problem wasn’t really that the rebuilds took forever; it was that I had to wait for them to finish. The solution: background the task and send a message to my IM client when the rebuild completes.

The lesson: don’t optimize anything that you don’t have to optimize, and if it annoys you, find a better way to ignore it.

At the same time, I’ve purchased a new domain, and I would kind of like to be able to publish something there more or less instantly, without hacking on it like crazy. But I’m an edge case. I wish there were a static site generator, like my beloved jekyll, that provided great flexibility and generated static content in a smart and efficient manner. Most of these site compilers, however, are crude tools with very little logic for smart rebuilding; and really, given the profiles of most sites they’re used to build, this makes total sense.


I realize that this post comes off as a lot of complaining; even so, I’m firmly of the opinion that this way of producing content for the web is the sanest method that exists. I’ve been talking with a friend for a little while about developing a way to build websites, and we’ve more or less come upon a similar model. Even my day-job project uses a system that runs on the same premise.

Since I started writing this post, I’ve taken this one step further. In the beginning I had to watch the process build. Then I kicked off the build process in the background and had it send me a message when it was done. Now I have rebuilds scheduled in cron, so that the site does an automatic full rebuild (the long process) a few times a day, and quick rebuilds a few times an hour.
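
For the curious, the crontab looks something like this. The schedule is approximate and the build commands are hypothetical stand-ins for my actual scripts:

# full (slow) rebuild a few times a day
0 6,12,18 * * * cd $HOME/tychoish.com && make rebuild >/dev/null 2>&1
# quick rebuild of the most recent entries a few times an hour
*/20 * * * * cd $HOME/tychoish.com && make quick >/dev/null 2>&1

Redirecting the output keeps cron from mailing me about every run.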

Is this less efficient in the long run? Without a doubt. But processor cycles are cheap, and the builds are only long in the subjective sense. In the end I’d rather not even think about builds going on, and let the software do all of the thinking and worrying.

The Successful Failure of OpenID

Just about the time I was ready to call OpenID a total failure, something clicked, and if you asked me how I thought “OpenID was doing,” I’d have to say that it’s largely a success. But it certainly took long enough to get here.

Let’s back up and give some context.

OpenID is a system for distributing and delegating authentication for web services to third-party sites. Basically, rather than signing into a website with your username and password, you sign in with your profile URL on some secondary site that you actually log into. The site you’re trying to log into asks the secondary site “is this legit?”; the secondary site prompts you (usually just the first time, though each OpenID provider may function differently here), and then you’re good to go.

Additionally--and this is the part that I really like about OpenID--you can delegate the OpenID of a given page to a secondary host. So on tychoish.com you’ll find the following tags in the header of the document:

<link rel="openid.server" href="http://www.livejournal.com/openid/server.bml" />
<link rel="openid.delegate" href="http://tychoish.livejournal.com/" />

So I tell a third-party site “I wanna sign in with http://tychoish.com/ as my OpenID,” it goes and sees that I’ve delegated tychoish.com’s OpenID to LiveJournal (incidentally the initiators of OpenID, if memory serves), and LiveJournal handles the authentication and validation for me. If at some point I decide that LiveJournal isn’t doing what I need, I can point these tags at a new provider, and all the third-party sites talk to the new provider as if nothing happened. And it works because I control tychoish.com and thus maintain a provider-independent identity, while still making use of these third-party servers. Win.

The thing is that OpenID never really caught on--or, I should say, it took a very long time to be taken seriously. Though managing a single set of authentication credentials and a common identity across a number of sites has a lot of benefits for users, adoption lagged. There are a number of reasons for this, in my understanding:

1. Third-party vendors wanted to keep big user databases with email addresses. OpenID means, depending on the implementation, that you can bypass the traditional sign-up method. This isn’t a technological requirement, but it can be confusing in some instances. By giving up the “traditional” value associated with sponsoring account creation, OpenID seemed like a threat to traditional web businesses. There were ways around this, but as is often the case, a dated business model trumped an inspiring one.

2. There was, and is, some FUD around security. People thought that if they weren’t responsible for the authentication process, they wouldn’t be able to ensure that only the right people could get into a given account, particularly since the only identifying information associated with an account was a publicly accessible URL. Nevertheless it works, and I think people used these details to make the system feel less secure than it was.

3. There are some legitimate technological concerns that need to be sorted out, particularly around account creation; this is the main confusion cited above. If someone signs up for an account with an OpenID, do they get a username and have to enter that, or do we just use the OpenID URL? Is there an email address or password associated with the account? What if they get locked out and need to get into the account but there’s no email? What if they need to change their OpenID provider/location at some point? These are legitimate concerns, but they’re solvable problems.

4. Some users have had a hard time grokking it. Because it breaks with the conventional usage model, even though it makes signing into sites simpler, it can be hard to wrap your head around.

What’s fascinating about this is that eventually it did succeed. Even more than my joy at finally getting to use OpenID, I think OpenID presents an interesting lesson in the eventual success of emergent technological phenomena. Google accounts, Flickr accounts, and AIM accounts all provide OpenID. And although “Facebook Connect” doesn’t use OpenID technology, it’s conceptually the same. Sites like StackOverflow offer OpenID-only authentication, and it’s becoming more popular.

OpenID succeeded not because of a campaign to teach everyone that federated identity by way of OpenID was the future and the way we should interact with web services, but rather because the developers of web applications learned that this was the easier and more effective way to do things. And I suspect that in as many as 80% or 90% of cases, when people use OpenID they don’t have a clue that that’s the technology they’re using. And that’s probably an okay thing.

The question that lingers in my mind as I end this post is: does this parallel any other optimistic technology that we’re interested in right now? Might some other “Open*” technology take a strategic lesson from the tactical success of OpenID? I’d love to see that.

Onward and Upward!

Putting the Wires in the Cloud

I’m thinking of canceling my home data connectivity and going with a 3G/4G wireless data connection from Sprint.

Here’s the argument for it:

  • I’m not home very much. I work a lot (and there is plenty of internet there), and I spend about two thirds of my weekends away from home. This is something that I expect will become more--rather than less--intense as time goes on. It doesn’t make sense to pay for a full Internet connection here that I barely use.
  • My bandwidth utilization is, I think, relatively low. I’ve turned on some monitoring tools, so I’ll know a bit more later, but in general most of my actual use of the data connection is keeping an SSH connection to my server alive. I download email, refresh a few websites more obsessively than I’d like (but I’m getting better about that), and that’s about it. I’ve also started running a reverse proxy, because that makes some measure of sense.
  • I find it difficult to use the data package on my cellphone. The fact that I get notified of all important emails on my phone has disincentivized me from attending to my email in a useful way, and other than the occasional use of Google Maps (and I really should get an actual GPS to replace that…) I don’t use the data much. If I get the right wireless modem, however, it would be quasi-feasible to pipe my phone through the wireless Internet connection, so this might be a useful consolidation.

The arguments against it are typical:

  • The technology isn’t terribly mature, or particularly well deployed.
  • Metered bandwidth is undesirable.
  • Sprint sucks, or has in my experience, and the other providers are worse.

The questions that remain in my mind are:

  • How well do these services work in moving vehicles? Cars? Trains?
  • How much bandwidth do I actually use?
  • Is this practical?

Feedback is, as always, very much welcomed here. I’m not in a huge rush to act, but I think it makes sense to feel things out. It also, I think, poses an interesting question about how I (and we) use the Internet. Is the minimalist thing I do more idealistic than actual? I know that we have a pretty hard time conceptualizing how big a gigabyte of data actually is in practical usage. Further research is, clearly, indicated.


Edit: This plan would rely on the fact that I might be spending a large amount of time in a city with unmetered 4G access from Sprint. I’ve used a gig and a half of transfer to my laptop’s wireless interface in 5 days, which extrapolates to roughly 9 gigabytes a month. I think that period would coincide with my heaviest traffic anyway. I wonder how unlimited the unlimited is…

End User RSS

I’m very close to declaring feed reader bankruptcy. And not just a simple “I don’t think I’ll ever catch up with my backlog,” but rather pulling out of the whole RSS reading game altogether. Needless to say, because of the subject matter--information collection, utilization, and cultural participation on the Internet--and my own personal interests and tendencies, this has prompted some thinking… Here goes nothing:

Problems With RSS

Web 2.0 in a lot of ways introduced the world to ubiquitous RSS. There were now feeds for everything. Awesome, right?

I suppose.

My leading problem with RSS is probably the lack of good applications for reading it. It’s not that there aren’t some good RSS applications; it’s that RSS is too general a format, and there are too many different kinds of feeds, so we get generic applications that simply take the chronology of RSS items from a number of different feeds and present them as if they were emails, or one giant feed with some basic interface niceties. RSS readers, at the moment, make it easier to consume media in a straightforward manner without unnecessary mode switching; although RSS is accessed by way of a technological “pull,” the user experience is essentially “push.” The problem, then, is that feed-reading applications don’t offer a real benefit to their users beyond a little added efficiency.

Coming in a close second is the fact that the publishers of RSS sometimes have silly ideas about user behavior with regard to RSS. For instance, there’s some delusion that if you truncate the content of posts in RSS feeds, people will click on links, visit your site, and generate ad revenue. Which is comical. I’m much more likely to stop reading a feed if full text isn’t available than I am to click through to the site. This is probably the biggest single problem that I see with RSS publication. In general, I think publishers should care as much about the presentation of their content in their feed as they do about the presentation of content on their website. While it’s true that it’s “easier” to get a good-looking feed than a good-looking website, attending to the feed is important.

The Solution

Web 2.0 has allowed (and expected) us to have RSS feeds for nearly everything on our sites. Certainly there are many more RSS feeds than anyone really cares to read. More than anything this has emphasized the way RSS has become the “stealth data format of the web,” and I think it’s pretty clear that, for all its warts, RSS is not a format normal people are really meant to interact with.

Indeed, in a lot of ways the success of Facebook and Twitter is a result of the failure of RSS-ecosystem software to present content to us in a coherent and usable way.

Personally, I still have a Google Reader account, but I’m trying to cull my collection of feeds and wean myself from consuming all feeds in one massive stew. I’ve been using notifixlite for any feed where I’m interested in getting the results in near-real time: Google Alerts, microblogging feeds, etc.

I’m using the planet function in ikiwiki, particularly on the Cyborg Institute wiki, as a means of reading collections of feeds. This isn’t a lot better than the conventional feed reader, but it might be a start. I’m looking at Plagger for the next step.

I hope the next “thing” in this space is a set of feed readers that add intelligence to the process of presenting the news. “Intelligent” features might include:

  • Noticing the order in which you read feeds and items, and attempting to present items to you in that order.
  • Removing duplicate, or nearly duplicate, items from presentation (sketched after this list).
  • Integrating--as appropriate--with the other ways that you typically consume information: reading email and instant messaging, in my case.
  • Providing notifications for new content in an intelligent way. I don’t need an instant message every time a flickr tag that I’m interested in watching updates, but it might be nice if I could set these notifications up on a per-folder or per-feed basis. Better yet, the feed reader might be able to figure this out.
  • Integrating with feedback mechanisms in a clear and coherent way, both via commenting systems (integration with something like Disqus might be nice, or the ability to auto-fill a comment form) and via email.
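
As a proof of concept for the de-duplication item above, here’s a rough sketch in Python using the feedparser library. The feed URLs are placeholders and the title normalization is deliberately crude:

import feedparser

FEEDS = [
    "http://example.com/feed.rss",   # placeholder feed URLs
    "http://example.org/index.xml",
]

def normalize(title):
    # Crude key: lowercase and strip everything but letters and digits.
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicated(feed_urls):
    # Yield entries across all feeds, skipping near-duplicate titles.
    seen = set()
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            key = normalize(entry.get("title", ""))
            if key and key not in seen:
                seen.add(key)
                yield entry

for item in deduplicated(FEEDS):
    print(item.get("title"), item.get("link"))

A smarter version would compare links and content hashes as well, but even this catches the obvious cross-posts between aggregators.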

It’d be a start, at any rate. I look forward to thinking about this more with you. How do you read RSS? What do you wish your feed reader would do that it doesn’t?

Thinking Like a Web Developer

I’ve been reading a lot about web development in the last few weeks. I’m not exactly sure why. There are some interesting things going on in terms of the technology: frameworks that provide for interesting possibilities abound, and while I don’t know if the web is the only future for programming, it’s certainly a big part of the future of the way we interact with computers.

So what are you working on developing, tycho?

A whole lot of nothing. I know how the technology works. I know--more or less--how to build things for the web, and yet I’ve never said “you know what I need to build? A web app that does [this awesome thing].”

Maybe it’s because I’m unimaginative in this regard, or that I tend to think of web applications as a nearly universally wrong solution to any given problem.

I think it’s quite possible that both of these things are true. It’s also likely that when approached with a problem with technology or with data, I don’t instinctively think about how to solve it programmatically, much less with some sort of web-based system. As I think about it, it might be the fact that my mind is intensely qualitative. In my psych-major days I always had problems coming up with ideas for non-hokey quantitative studies (insofar as such things exist).

In a lot of ways the questions I ask of technology aren’t (particularly) “how can I manage this data better,” but rather “how can I interact with this technology more efficiently?” While I don’t think data interaction is a solved problem, I feel like I’m pretty far ahead of the game, and the things I do to improve how I work have more to do with tweaking my system to shape the content and the way that I’m working. While there are often little bits of code involved, it’s not the kind of thing that’s generalizable in the way that an application or web site might be.

The Imperative Tense

Most of the time, if you put me in a room with programmers and tell us to talk about our work, the conversation will be really lively. Aside from the fact that I use programmers’ tools to write and take a very iterative approach, though, my process differs from theirs in one striking way.

One thing I notice many of my coworkers doing is saying “I’m going to write a program that’s going to do these four things, and it’s going to be written in such a way as to make these other things possible” (insert words of awesomeness in this sentence). And I think “Cool! I can’t wait!”

For a long time this way of talking confused me and almost put me on edge. When I have an idea for a new project, I get these images and an interesting concept to toy with; I have little conversations in my head with the characters, and I see their world through their eyes. It’s sort of an absurd experience, and I don’t tell people about it. I mean, I might say “I got an awesome idea for a new book,” but usually not more than that. And the truth is that I get ideas for stories all the time and I know that I’ll never really be able to write most of them.

I’m okay with the way programmers plan projects, and I’m pretty happy with my own methodology. Having said that, I suspect the difference in how we talk about our plans has a lot to do with the difference in how we think about them.

Onward and Upward!

Web Frameworks

I’m not a web developer. I write the content for (a couple of) websites, and I’m a fairly competent systems administrator. Every once in a while someone will need a website, or I’ll need my site to do something new that I haven’t needed before, and I’ll hack something together; but for the most part I try to keep my head out of web development. Indeed, I often think that designing applications to run in the web browser is the wrong solution to most technological problems. Nevertheless, my work (and play) involves a lot of tinkering with web applications, and I do begrudgingly concede their relevance.

In any case, I’ve been reading through the following recently, and I (unsurprisingly) have a few thoughts:

The Trouble With Frameworks

I really enjoyed how this post located “web frameworks” in their larger context: what they’re good for, what they’re not good for, and why they’re so popular. I often see a lot of writing about why FrameworkA is better or worse than FrameworkB, which doesn’t really answer a useful question. While I wouldn’t place my gripe with web-based applications entirely on the shoulders of frameworks, it’s interesting to think of “the framework problem” as a problem with the framework (and its limitations) rather than a problem with the web itself.

This isn’t to say that frameworks are inherently bad. Indeed, there is a great deal of work that websites require in order to function: HTML is a pain to write “by hand,” and consistent URLs are desirable but undesirable to manage by hand. If you need dynamic content, particularly content that is database-backed, there is all sorts of groundwork that is basic and repetitive even for the most minimal functionality. Eliminating this “grunt work” is the strength of the framework, and in this frameworks provide a great utility.

However, from an operations (rather than development) perspective, frameworks suck. By producing tools that are broadly useful to a large audience, framework authors by nature cannot tune for high-performance operations, and the frameworks don’t always enforce the most efficient patterns (particularly with regard to databases). Thankfully this is the kind of issue that can be safely delegated to our future selves; premature optimization is its own trap.

Thoughts on Web.py

Though I’m not much of a Python person, I have a great deal of respect for Python tools; I swear, if I were going to learn a language of this type it would almost certainly be Python. Having said that, web.py looks really interesting: it’s minimal and stays out of the way for the most part. It does the really “dumb” things that you don’t want to have to fuss with, but doesn’t do a lot of other stuff. And that’s a great thing.

I’m not sure how accurate this is, but one of the things that initially intrigued me about web.py is that it feels like it allows for a more “UNIX-y” approach to web applications. Most frameworks and systems for publishing content to the web work really well as long as you don’t try to use anything but the application or framework; Drupal, Wordpress, and Rails seem to work best this way. Web.py, by contrast, seems to be mostly a set of hooks around common web-programming tasks for Python developers, so that they can build their apps in whatever way they need to; the monolithic content-management approach doesn’t feel very UNIXy by comparison. I think this is something that I need to explore in greater detail.
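
For a sense of scale, a complete “hello world” application in web.py is about ten lines; this is more or less the stock example:

import web

# map a URL pattern to a handler class
urls = ("/(.*)", "Hello")

class Hello:
    def GET(self, name):
        # the captured path segment becomes the argument
        return "Hello, %s!" % (name or "world")

if __name__ == "__main__":
    app = web.application(urls, globals())
    app.run()

Everything else--templates, database access, sessions--is opt-in, which is what makes the approach feel composable rather than monolithic.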

Having said that, I’m not terribly sure that there are problems I see in the world that need to be addressed in this manner. So while I can sort of figure out how to make it work, I don’t find myself thinking “wow, the next time I want to do [this], I’ll definitely use web.py.”

But then I’m just a dude who writes stuff.