iPad Reactions

Ok, as a self-respecting geek, I think I have to say something about this whole "iPad" thing.

I'm not as much of an Apple geek as I used to be, that's for sure. It's not that I don't think Apple is doing something quasi-innovative and useful in the world of consumer technology and the business around it. I think bringing UNIX to the hands of "mass market" desktop users was a great move. Although the iTunes Store is not without its issues and concerns, the fact that Apple was able to create a viable environment and market that allows people to exchange money for software and content is probably a good thing. And in recent years Apple has brought to the general public a number of hardware configurations that I think make a lot of sense: the mini-tablet (iPhone/iPod Touch), laptops with usable battery lives, the pocket jukebox (iPod Classic), and so forth.

Deep breath. Having said that... I'm not terribly impressed with the iPad, or moved by its potential at the moment. I know Apple often takes a few generations to make something really work, so I think it's important not to say "this implementation sucks, and so the whole notion sucks and is without merit." Of all the things that I've heard (or said) about the iPad in the past few weeks, the following two threads have stood out:

  • I'm really quite interested to see what other makers are going to do in this space. What's the Lenovo tablet going to look like? HP's? One of the leading complaints about the iPad (and iPhone) is Apple's total lock-down of the platform, and I think an Android tablet, or a souped-up Nokia N900, is likely to be much more open and killer awesome.
  • I'm interested to see what the iPad platform looks like in a revision or two. Add an SD slot? Multi-tasking? Additional input methods? It could look really awesome, and while I have misgivings (see below), I don't want to write it off entirely.

At the same time, I don't really feel like there's an in-between device that I don't currently have that I'd like to have. In a lot of ways, even I feel like I have too many devices, too many inboxes (of one sort or another), and too much technology to manage. I'm not complaining. The truth of the matter is that my laptop gets great battery life, isn't very big at all, and does everything I need of a computer, and almost everything I need of technology in general. iPods are better for playing music if I'm moving around or in the car, the Kindle is great for what it is, and I do sort of have a Blackberry habit... but...

My technological challenge at the moment is that I don't have enough time to get done that which I would like to get done, not that I could be more productive if only I had a device that did something more. That's not a thought that crosses my mind very much. It might be nice to have a slightly more accessible emacs instance that I could use to enter snippets of text and work on things in those in-between moments. I'm thinking a Nokia N900 might fit that bill pretty well, but I'm not sure.

If you're thinking about getting an iPad, what's the niche that you see it filling? Do you have a niche that seems like it might be iPad-sized?

Putting the Wires in the Cloud

I'm thinking of canceling my home data connectivity and going with a 3G/4G wireless data connection from Sprint.

Here's the argument for it:

  • I'm not home very much. I work a lot (and there is plenty of internet there), and I spend about two thirds of my weekends away from home. This is something that I expect will become more--rather than less--intense as time goes on. It doesn't make sense to pay for a full Internet connection here that I barely use.
  • My bandwidth utilization is, I think, relatively low. I've turned on some monitoring tools, so I'll know a bit more later, but in general, most of my actual use of the data connection is in keeping an SSH connection with my server alive. I download email, refresh a few websites more obsessively than I'd like (but I'm getting better with that), and that's sort of it. I've also started running a reverse proxy because that makes some measure of sense.
  • I find it difficult to make good use of the data package on my cellphone. Getting notified of every important email on my phone has disincentivized me from actually attending to my email in a useful way, and other than the occasional use of Google Maps (and I really should get an actual GPS to replace that), I don't use the data connection for much else. If I get the right wireless modem, however, it would be quasi-feasible to pipe my phone through the wireless Internet connection, so this might be a useful consolidation.

The arguments against it are typical:

  • The technology isn't terribly mature, or particularly well deployed.
  • Metered bandwidth is undesirable.
  • Sprint sucks, or has in my experience, and the other providers are worse.

The questions that remain in my mind are:

  • How well do these services work in moving vehicles? Cars? Trains?
  • How much bandwidth do I actually use?
  • Is this practical?

Feedback is, as always, very much welcomed here. I'm not in a huge rush to act, but I think it makes sense to feel things out. It also, I think, poses an interesting question about how I (and we) use the Internet. Is the minimalist thing I do more idealistic than actual? I know that we have a pretty hard time conceptualizing how big a gigabyte of data actually is in practical usage. Further research is, clearly, indicated.


Edit: This plan would rely on my spending a large amount of time in a city with unmetered 4G access from Sprint. I've used a gig and a half of transfer on my laptop's wireless interface in the last 5 days, and I think that would coincide with when I'd be doing the heaviest traffic anyway. I wonder how unlimited the unlimited is...

Decreasing Emacs Start Time

One oft-made complaint about emacs is that it takes forever to start up; particularly if you've got a lot of packages to load, it can take a few seconds for everything to come up. In a lot of respects this is an old problem that isn't as relevant on contemporary hardware. Between improvements to emacs and the fact that computers these days are incredibly powerful, it's just not a major issue.

Having said that, until recently my emacs instance took as much as 7 seconds to start up. I've beaten that down to under two seconds, and using emacsclient and starting emacs with "emacs --daemon" makes the start-up time much more manageable.
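
If you want to measure this for yourself, emacs keeps track of how long initialization took; the built-in emacs-init-time function reports it, no configuration required:

;; Report how long init took; evaluate with M-x eval-expression
;; or in the *scratch* buffer. Returns a string like "1.8 seconds".
(emacs-init-time)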

Step One: Manage your Display Yourself

I've written about this before, but really, even a 2-second start time would feel absurd if I had to start a new emacs session each time I needed to look at a file. "emacs --daemon" and emacsclient mean that each time you "run" emacs, rather than starting a new emacs instance, it just opens a new frame on the existing instance. Quicker start-up times. It means you can open a bunch of buffers in one frame, settle into work on one file, and then open a second frame and edit one of the previously opened files. Good stuff. The quirk is that if you've set up your emacs files to load the configuration for your window displays late in the game, the new frames won't look right. I have a file in my emacs configuration called gui-init.el, and it looks sort of like this:

(provide 'gui-init)

(defun tychoish-font-small ()
  (interactive)
  (setq default-frame-alist
        '((font-backend . "xft") (font . "Inconsolata-08")
          (vertical-scroll-bars . 0) (menu-bar-lines . 0)
          (tool-bar-lines . 0) (left-fringe . 1)
          (right-fringe . 1) (alpha 86 84)))
  (tool-bar-mode -1)
  (scroll-bar-mode -1))

(if (string-match "laptop" system-name)
    (tychoish-font-small))
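
The (if) call above invokes the one function defined here, but the point is to have more than one. A hypothetical companion, sketched by analogy with the function above (only the font size differs; it's not from my actual configuration), which you could swap into the (if) form for machines with bigger displays:

;; A hypothetical companion to tychoish-font-small for machines
;; with bigger displays; swap it into the (if) form as needed.
(defun tychoish-font-big ()
  (interactive)
  (setq default-frame-alist
        '((font-backend . "xft") (font . "Inconsolata-12")
          (vertical-scroll-bars . 0) (menu-bar-lines . 0)
          (tool-bar-lines . 0) (left-fringe . 1)
          (right-fringe . 1) (alpha 86 84)))
  (tool-bar-mode -1)
  (scroll-bar-mode -1))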

Modify, of course, the system name and the settings to match your tastes and circumstances. The (if) statement allows you to have a couple of these -font- functions defined, and to toggle between them based on which machine you load emacs on. Then, in your init file (e.g. .emacs), make sure the first two lines are:

(setq load-path (cons "~/confs/emacs" load-path))
(require 'gui-init)

Establish the load path first so that emacs knows where to look for your required files, and then use the (require) sexp to load the file. Bingo.
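
As an aside, if you'd rather not remember to launch emacs with "emacs --daemon", your init file can start the server itself. A minimal sketch, assuming Emacs 23's server.el:

;; Start the emacs server from init so that emacsclient can
;; connect to this instance; skip it if one is already running.
(require 'server)
(unless (server-running-p)
  (server-start))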

Package Things Yourself

We saw this above, but as much as possible, avoid using the load function. When you use load, emacs has to (I'm pretty sure) do a fairly expensive file system operation to find the file, and then read and evaluate it. This takes time. Using the require function is not without its own cost, but it does save some time compared to load, because require won't reload a library that has already been loaded. At least in my experience.

In each of your various .el files, insert a statement like the following, naming the feature after the file (my-package here is just a placeholder):

(provide 'my-package)

And then in your .emacs, use the corresponding statement:

(require 'my-package)

This loads it in. You're probably already familiar with using these to configure packages that you download. Better yet, don't require at all, but use the autoload function. This just creates a little arrow inside of emacs that says "when this function is called, load this file, and hopefully the 'real' function by this name will be in there." This lets you avoid loading packages that you don't use frequently until you actually need them. The following example provides an autoload for identica-mode:

(autoload 'identica-mode "identica-mode" "Mode for Updating Identi.ca Microblog" t)

Byte-Compile Files as Much as You Can

Contrary to whatever you've been told, emacs isn't a text editor so much as it is a virtual machine with a great number of low-level functions for interacting with text and textual environments, plus some editing-based interfaces. At its core, it's just a virtual machine that interprets a quirky Lisp dialect.

The execution model is pretty simple and straightforward, particularly to people who are used to Java and Python: emacs loads source files and compiles them half way, into byte-code. Byte-compiled files aren't the kind of thing that you could read on your own or would want to write, but they're not quite machine code either: they're easier for the machine to read and quicker to process, though not human intelligible. When emacs needs to do something with a function that's been byte-compiled, its byte-code interpreter executes it. Usually this all happens so fast that we don't really notice it.

One tried-and-true means of speeding up emacs load times is to byte-compile files manually so that emacs doesn't have to process the source each time it loads them. The emacs-lisp libraries are byte-compiled when emacs is installed, but your files probably aren't. Generally, only byte-compile files that you're not going to be editing regularly: byte-compiled files have an .elc extension, and as soon as there's a .el file and an .elc of the same name in a directory, emacs will use the .elc even if changes have been made to the .el. To byte-compile an emacs-lisp file, type M-x to get the execute-extended-command prompt, and then run the function byte-compile-file (i.e. "M-x byte-compile-file"). Voila!
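
If you keep all of your configuration in one directory, as in the load-path example above, you can recompile everything in one step; a sketch, assuming your files live in ~/confs/emacs:

;; Byte-compile every .el file in the directory; the 0 tells emacs
;; to also compile files that don't have an .elc counterpart yet.
(byte-recompile-directory "~/confs/emacs" 0)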

I hope these tips help you all, and lead to a slightly more efficient emacs experience.

Independent Web Services

So much of the time, when we talk about network services, technological/software freedom, and this idea of "Cloud" computing, there's a bunch of debate: "Is it a good idea?" "Are we giving up too much freedom?" "How does this work out economically?" "What about privacy in the cloud?" While these are important questions, without doubt, I fear that they're too ethereal, and we end up tussling with a bunch of questions about the future and present of computing that might not be entirely worth debating (at least for the moment).

Let's take two assertions, to start:

1. There are some applications--things we do with technology--that work best when they run on high-performance servers with consistent connections to the Internet, which we can access regardless of where we are in the world.

2. The only way to have control over your data and computing experience is to be responsible for the administration and maintenance of these services yourself.

Huh?

I mean to say that if we care about our autonomy and our freedom as we use computers in the contemporary age (i.e. in the era of cloud computing), the only thing to be done is to run our own services. If the fact that Google has all of your data scares you, run your own mail server. If the fact that all of your microblogging output lives on Twitter scares you, run your own status.net instance. And so forth.

If we really care about having power over our technological experiences, we must take responsibility for services on the Internet. We can say "wouldn't it be nice if service providers weren't such dicks with our data," or "wouldn't it be nice if software developers wrote networked software that respected our freedom." And while it would be nice, these wishes don't, in and of themselves, change anything.

Control over technology and autonomy in the networked context ultimately means that we as users have to:

  • Administer networked servers that provide us with the services that we want and need to do whatever it is that we do.
  • Participate in some exchange for networked services (i.e. pay for the service, either in cash or by way of access to our data).

That's hard! Computers should get easier to use, not harder, right?

Leading question there, but...

Yes. One of the leading arguments for consumer "cloud computing" is that by delivering computing services (software) in the browser, developers can provide a more structured and "safe" user experience. At least that's how I understand it.

While this is a great thing in terms of making computers more accessible, no argument from me, I think we must be careful to avoid confusing "ease of use" with "technologically limiting." I fervently believe that it's possible to design powerful software that is also easy to use, and I think that, as often as not, a confusing technology presents an opportunity for teaching as much as an opportunity to improve the technology itself.

And if it comes down to it, it doesn't much matter whether you're the one entering the commands into the server, or whether you've hired someone to manage and configure it for you. As I think about it, there's probably something of a niche here for people to offer management services in a very boutique sort of style.

If we have to contract to people to do our administration for us, is that really a step in the right direction?

I think it is. At the moment we pay for our networked computing services (e.g. Gmail) by looking at Google's ads next to our mail and by giving Google access to the aggregate of our mail spools so that they can mine it for whatever data they need. The other price that we pay for these services is "lock-in": once we commit to using a service, it's quite difficult to change to an alternate provider. Since these are real costs, it seems reasonable to want to pay (money) for services that don't carry them. Which is where cooperative and boutique-style services make a lot of sense.

I'm not a systems administrator, I just want to do [the thing that I do] and not have to tinker with my computer. This is a lousy idea.

And that's a lousy question.

To dig in a bit further: I don't think that doing [whatever you do] would necessarily require a lot of tinkering. It might, of course, and the chances are that we've all had to tinker with our technology at one point or another, but in most cases tinkering is an upfront rather than an ongoing cost. Ideally, the other benefit of having full control over your network services is that you'll be able to use services which are more tailored to [the thing you do] than the one-size-fits-all application provided by a third party.

Ok, so what does the stack look like?

I'm not sure. There's clearly a common set of tasks that we currently accomplish in the networked context. I'm not sure exactly what the application is, but here's the beginning of what this "application stack" might look like:

  • An XMPP server like Prosody.im, with PyAIMt and other conventional IM network transports.
  • Some sort of email service: Citadel springs instantly to mind as an "all-in-one" solution, but some combination of postfix+procmail+fetchmail+horde/squirrelmail also seems to make some sense.
  • A web server, either for hosting personal websites or, with some sort of authentication scheme (digest?), for sharing files with yourself. The truth is that web servers are pretty darn lightweight, and it doesn't make sense not to install one. Having said that, people see "web hosting" and probably often think "well, I don't really need web hosting," when that's almost beside the point.
  • SSH and some system for FUSE (or FUSE-like) mount points, so that users can access and store remote files.
  • There's probably a host of web-based applications that would need to be installed as a matter of course: some sort of web-based RSS reader, wiki-like note taking, bookmarking, some sort of notification service, etc.
  • [your suggestion here.]

Beyond SQL and Database Technology

People have been thinking about databases recently. Even I've been thinking about databases, and I'm not particularly prone to thinking about databases. It's fair, given the ongoing drama of the Oracle/Sun deal and even the mainstream press coverage of the NoSQL movement. I'd like to take a step back and think a bit more honestly and holistically about databases: both this "NoSQL" phenomenon, and the evolving role of relational database management systems in our technology "ecosystems."

(Seriously folks this is what I think about for fun in my free time.)

I've been mulling over the notion that databases, like MySQL and PostgreSQL and Oracle's RDBMS products, are not particularly "Unix-like." Sure, they run on Unix systems, and look and feel like Unix applications, but the niche they fill--providing quick access to structured data with a specialized query language--doesn't jibe with the Unix philosophy: small specialized tools for precise tasks, "plain text" as the lingua franca of system tools, and so forth.

Databases solve a problem. Indeed, they solve a problem in a very functional and workable manner. I don't want to suggest that the relational database model is somehow broken; however, I would like to suggest that industrial-strength database systems are over-utilized, and have become the go-to solution for storing and interacting with data of any kind, even in cases where they're not a good fit for the job at hand.

I'm not the first person to suggest this, not by a long shot. The NoSQL "movement" addresses this issue from a couple of different directions. NoSQL refers to a collection of practices and approaches for storing data outside the type and model of the relational database system. In the end, NoSQL is about addressing the scaling problem: what happens when we have so much data that it can't easily fit in one database system, or when a centralized model is unmaintainable for any number of reasons. I think NoSQL is also relevant as we think about storing data that doesn't fit easily into RDBMSes: I've seen a lot of very poorly architected database systems that suffer from a "square peg in round hole" problem.

Indeed, as we try to put all of our data in these RDBMS systems, particularly data that doesn't fit very well, the databases lose their ability to scale. The complex logic required to pull that data back out of a database and reassemble it for use and analysis is computationally expensive and doesn't scale particularly well.

But let's focus for a moment on the scaling question, apart from the data modeling and storage question. The real problem at the core of the scaling question is: we need a way, a thing, that allows multiple systems to access a shared data store in a reliable and consistent manner.

The ongoing work around clustered file systems seems to address this issue from a much different direction, and perhaps a more interesting perspective. Beyond a certain point--and it's a fuzzy point--database systems basically become file system replacements. So rather than work on making databases more like file systems, the thought is (I assume): let's make file systems a bit more "database-like." Like I said, I don't know a lot about the ins and outs of clustered file systems, but I think, in addition to worrying about the future of current database systems, we also need to think about the future of data storage in this very scalable and clustered mode.

I'm not sure what the next-generation data storage technology really looks like; the NoSQL stuff is a step in the right direction, but I'm not sure it's a large enough step, as its focus is a bit narrow. To be honest, I'm not incredibly familiar with the work that's going on in the clustered file system space. Nonetheless, I think it's important to think not just about the future of relational database platforms as such, but about the model and the underlying problems that these kinds of data storage methods address, and about other possible ways of addressing the original issues.

End User RSS

I'm very close to declaring feed reader bankruptcy. And not just a simple "I don't think I'll ever catch up with my backlog," but rather pulling out of the whole RSS reading game altogether. Needless to say, given the ultimate subject matter--information collection and utilization, and cultural participation on the Internet--and my own personal interests and tendencies, this has prompted some thinking... Here goes nothing:

Problems With RSS

Web 2.0, in a lot of ways, introduced the world to ubiquitous RSS. There were now feeds for everything. Awesome, right?

I suppose.

My leading problem with RSS is probably the lack of good applications to read RSS with. It's not that there aren't some good applications for RSS; it's that RSS is too general a format, and there are too many different kinds of feeds. So we get these generic applications that simply take the chronology of RSS items from a number of different feeds and present them as if they were emails, or one giant feed, with some basic interface niceties. RSS readers, at the moment, make it easier to consume media in a straightforward manner without unnecessary mode switching, and although RSS is accessed by way of a technological "pull," the user experience is essentially "push." The problem, then, is that feed reading applications don't offer a real benefit to their users beyond a little bit of added efficiency.

Coming in a close second is the fact that the publishers of RSS sometimes have silly ideas about user behaviors with regard to RSS. For instance, there's some delusion that if you truncate the content of posts in RSS feeds, people will click on links, visit your site, and generate ad revenue. Which is comical. I'm much more likely to stop reading a feed if full text isn't available than I am to click through to the site. This is probably the single biggest problem that I see with RSS publication. In general, I think publishers should care as much about the presentation of their content in their feed as they do about the presentation of content on their website. While it's true that it's "easier" to get a good-looking feed than a good-looking website, attending to the feed is important.

The Solution

Web 2.0 has allowed (and expected) us to have RSS feeds for nearly everything on our sites. Certainly there are far more RSS feeds than anyone really cares to read. More than anything, this has emphasized the way that RSS has become the "stealth data format of the web," and I think it's pretty clear that, for all its warts, RSS is not a format that normal people are really meant to interact with.

Indeed, in a lot of ways the success of Facebook and Twitter is a result of the failure of RSS-ecosystem software to present content to us in a coherent and usable way.

Personally, I still have a Google Reader account, but I'm trying to cull my collection of feeds and wean myself from consuming all feeds in one massive stew. I've been using notifixlite for any feed where I'm interested in getting the results in very-near-real time: Google alerts, microblogging feeds, etc.

I'm using the planet function in ikiwiki, particularly on the Cyborg Institute wiki, as a means of reading collections of feeds. This isn't a lot better than the conventional feed reader, but it might be a start. I'm looking at plagger for the next step.

I hope the next "thing" in this space is a crop of feed readers that add intelligence to the process of presenting the news. "Intelligent" features might include:

  • Noticing the order in which you read feeds and items, and attempting to present items to you in that order.
  • Removing duplicate, or nearly duplicate, items from presentation (a minimal sketch of this follows the list).
  • Integrating--as appropriate--with the other ways that you typically consume information: reading email and instant messaging, in my case.
  • Providing notifications for new content in an intelligent sort of way. I don't need an instant message every time a flickr tag that I'm watching updates, but it might be nice if I could set these notifications up on a per-folder or per-feed basis. Better yet, the feed reader might be able to figure this out.
  • Integrating with feedback mechanisms in a clear and coherent way, both via commenting systems (so integration with something like Disqus might be nice, or the ability to auto-fill a comment form) and via email.
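
On the duplicate-removal point, the core of the idea is simple enough to sketch in Emacs Lisp. Everything here is invented for illustration: items are assumed to be alists with a title key, and titles are normalized before comparison, so this only catches near-duplicates to the extent that the normalization does.

;; Hypothetical sketch: keep only the first item seen for each
;; normalized title; a real reader would want fuzzier matching.
(defun my-feed-dedup (items)
  (let ((seen (make-hash-table :test 'equal))
        (result '()))
    (dolist (item items (nreverse result))
      (let ((key (downcase (replace-regexp-in-string
                            "[^[:alnum:]]+" " "
                            (cdr (assq 'title item))))))
        (unless (gethash key seen)
          (puthash key t seen)
          (push item result))))))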

It'd be a start, at any rate. How do you read RSS? What do you wish your feed reader would do that it doesn't? I look forward to thinking about this more with you.

The Blog, Next in Lisp

Here's a crazy idea: in addition to posting an RSS feed, say I start posting the content of the blog as Common Lisp code. Not to replace any format that I currently publish in, but as an additional output option. Entries might look something like this:

(tychoish:blog-post
  (tychoish:meta-data
    :title "The Blog, Next in Lisp"
    :author "tycho garen"
    :pubtime (get-universal-time)
    (tychoish:blog-tags '(lisp cyborg crazy))
    (tychoish:archive-collection '(programing)))
  (tychoish:blog-content (markdown)
    "Here's a crazy idea: in addition to posting an RSS feed, say I start
    posting the content of the blog as Common Lisp code. Not to replace
    any format that I currently publish in, but as an additional output
    option. Entries might look something like this: [...]"))

That's pretty, in a lispy sort of way. I'm not sure that it's actually correct, and it makes calls to functions that don't exist, of course. But I hope you can get the gist enough to see where I'm going with this, and maybe enough to correct my newbish mistakes.

By my count there need to be functions for: blog-post, meta-data, blog-tags, blog-content, and markdown. Of course, the sketch is missing some notion of what these functions might actually do. I'm not terribly sure what they could do: build a better indexing system for the site (lord knows I need it), or more easily create a Lisp-based content reader/browser that's like a feed reader, but more, in some way that I haven't yet envisioned.
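
As a gesture in that direction, here's a minimal, hypothetical sketch of the simplest possible versions of some of these functions. Every name and the property-list representation are my own invention, and it simplifies the calling convention from the example above by folding tags into meta-data:

;; A post is just a property list, so a Lisp "feed reader" could
;; inspect it with ordinary list operations.
(defun meta-data (&key title author pubtime tags archives)
  (list :title title :author author :pubtime pubtime
        :tags tags :archives archives))

(defun blog-content (format text)
  (list :format format :content text))

(defun blog-post (metadata content)
  (append metadata content))

;; For example:
;; (getf (blog-post (meta-data :title "The Blog, Next in Lisp")
;;                  (blog-content :markdown "Here's a crazy idea..."))
;;       :title)
;; => "The Blog, Next in Lisp"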

In a lot of ways, this isn't any different from RSS. It is RSS, basically, except you don't have to parse it into some format that your programming language can understand (assuming you're programming with Lisp, of course, but you are, aren't you?), because it is your programming language. At least in my mind, this has a lot in common with the Sygn project, in that both focus on providing some sort of loose standard that allows us to share and use data openly and freely, using formats that are easy (enough) to construct by hand, are human readable, and are easy to process and use programmatically.

In any case, it shouldn't be terribly hard to generate this format. The question is: does seeing the data like this present possibilities to anyone? And, while we're at it, if anyone wants to help define some of the more basic functions, that might be awesome. I look forward to hearing from you all.

Common Lisp, Practically

So in the emacs session running on my laptop (13 days and counting) I have a number of buffers open, a great many of which contain the entirety of Practical Common Lisp, thanks to emacs-w3m; I've been working through the book slowly. I've written here before about how I find Lisp to be intriguing and grok-able in a way that other programming languages aren't, really.

My exposure to lisp isn't great. I hack about with my emacs code, and I do a little bit of tweaking on the window manager that I use, StumpWM (written in Common Lisp), but other than that I don't actually have much experience. What follows is a series of reflections that I have with regard to lisp:

Although there are a lot of really amazing capabilities in Common Lisp, and a lot of open source energy behind Lisp... Lisp isn't flourishing.

This shouldn't be a great surprise to anyone; lisp is sort of the epitome of "programming languages that don't get enough respect." Having said that, there are a lot of lisp projects that aren't really well maintained at all. Even things that would just be standard and maintained for other languages (various common libraries and the like) haven't been touched in a few years. It's not a huge worry, but it does give me pause. Then again, I don't think lisp is ever really going to go away, and Common Lisp seems like a pretty darn good spec. But I don't have any real exposure to Scheme, and Arc isn't really real yet, I guess.

Lisp works funny, particularly for people who only have a passing familiarity with programming.

We're used to programming languages that either pass the source code through an interpreter (e.g. Python, Ruby, PHP, Perl) or, as I suppose Java and C# do, compile into some sort of intermediate bytecode and then run that code on a virtual machine; conversely, there are languages which compile down to some sort of native binary and then execute directly on the hardware. Examples of this second class include C, C++, and Haskell. Sorry if my examples or descriptions of the execution models aren't particularly precise.

When you run lisp code, you define stuff and load it into the memory of a lisp process, and then stuff happens as the program runs. It's compiled to native code (I'm pretty sure, at least), but there aren't binaries in the conventional sense. To get a "binary," you have to dump the memory of the program, and pretty much the entire lisp process, into a blob, so the base size for executables is way bigger than one might expect. I've also had some success at running scripts with sbcl shebangs from the terminal. That's pretty nifty; not that I've really done very much of that, but it's nice to know that it's possible.
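
For the curious, such a script can be tiny. A sketch, assuming sbcl is installed at /usr/bin/sbcl; the --script option tells sbcl to treat the file as a program and skip the shebang line:

#!/usr/bin/sbcl --script
;; sbcl reads the rest of this file as ordinary Common Lisp.
(format t "Hello from ~a~%" (lisp-implementation-type))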

Web programming in Lisp? I'm not so sure about that.

So you might see lisp code and think: "So. Many. God. Damn. Parentheses." And you'd be right. But even well-formatted HTML is considerably less "human readable" than Lisp, and I don't think there's a lot of room for debate there. When you think about it, Lisp actually makes a fair amount of sense for the web.

I've actually done a little bit of poking around, and from what I can see, the actual architecture and deployment options for lisp aren't terribly bad. There are Apache modules that will pass requests back to a single lisp process (mod_lisp, which works similarly to FastCGI), and there's always the option of running high-performance CL-specific web application servers and just proxying requests to them from Apache. Lisp is, or can be, pretty damn fast by contemporary standards, and although there's a lot of under-maintained lisp infrastructure, the basics are covered, including database connectors and JavaScript facilities. That might not be incredibly enticing, but all the parts are there.

Having said all that, I'm not a web developer, or really much of a developer in general. But it's fun to think about, and even if I only use Lisp to hack on various things here and there, I'm still learning a bunch from the book, and that seems more than worthwhile.