the future of content

I finally listened to the podcast of John Gruber and Merlin Mann's talk at the 2009 SXSWi conference, on "how to succeed at blogging/the Internet." This, in combination with my ongoing discussion with enkerli about the future of journalism and an article about Gawker Media, has prompted a series of loosely connected thoughts:

  • Newspapers are dead, dead, dead. This isn't particularly groundbreaking news, but I think it's interesting to note because of this corollary:

  • The media/content industry on the Internet has been unable to develop a successful business model to fund the creation of content: one that replaces the newspaper model (where print revenue funds websites and writers) and doesn't revolve around advertising.

  • I've been talking about trying to figure out what constitutes success at this "content creation thing" for a while, and I don't think I have a good answer for what the markers of success are. Page views are certainly part of it, and the volume of comments and/or the number of Twitter followers you have may be markers as well, but I think we need to get to a place where we think of success as something a bit less concrete.

    Success might be landing a cool new job because your blog impresses someone. Success might be having enough of a following to be able to sell enough copies of your book/CD/etc. to support yourself. Success might be having enough page views to support the site with advertising. Success might be five people whose opinion you care about reading your site. Success might be steady progress in the direction of having a readership that eclipses the circulation of the print publications in your field.

    If we use these kinds of standards to judge our work, rather than the standards of old school publishing (page views), it becomes easier to make meaningful qualitative judgments of success.

  • Though I think they're largely correct about success, Gruber and Mann's suggestions fail, I think, to explain their own success.

    I think Merlin Mann is successful because he was friends with people like Cory Doctorow and Danny O'Brien at the right moment, because the GTD thing happened, because he's pretty funny, and because MacBreak Weekly emerged at the right time and he played a big role in making that podcast successful. At the same time, I think Gruber is successful because he took Apple Computer seriously at a time when no one really did, and because he wrote this thing called Markdown. This isn't to say that either is undeserving of his success--hardly--but their advice to just "passionately do your thing and embrace the niche-yness and uniqueness of what you do" is good, but I don't think that's all it's going to take to be successful in the next five years.

Additionally, I think there are a number of unnecessary assumptions that we've started to make about the future of content on the Internet that are worth questioning. They are, quickly:

  • Blogging as we have known it will endure into the future.
  • Blogging is being fragmented by the emergence of things like Twitter and Facebook.
  • User-generated content (e.g., YouTube and Digg) will destroy professional content producers (i.e., NBC and Slashdot/the AP).
  • (Creative) Content will be able to survive in an unstructured format.
  • MediaWiki is the best software to quickly run a wiki-based site.
  • Content Management Systems (drupal, wordpress, MediaWiki, etc.) and web programming frameworks (django, rails, drupal) are stable and enduring in the way that we've come to expect operating systems to be stable and enduring.
  • Content Management Systems, unlike the content they contain, can mostly survive schisms into niches.
  • The key to successful content-based sites/projects is "more content," followed by "even more content." (i.e., quantity trumps all.)

If the singularity doesn't come first, that is.


P.S. As I was sifting through my files I realized that this amazing article by Jeff VanderMeer also influenced this post to some greater or lesser extent, but I read it about a week before I listened to the podcast, so I wasn't as aware of its influence. Read that as well.

canonical freedom and ubuntu one

Canonical Ltd., the company which sponsors the Ubuntu family of GNU/Linux distributions, recently announced the UbuntuOne service, which is at its core a service that allows users to synchronize files between multiple Ubuntu-based machines. Having your files sync between multiple machines is a huge feature, and the truth is that there aren't really good solutions that accomplish this task for any operating system. At the same time, there's been a lot of hubbub in the community over this release. It's complex, but the issues in the complaint are:

  1. UbuntuOne is a non-free project, in that the software powering the service (on the servers) is not being distributed (in source or binary) to the users of the service. While the client is being open sourced, the server component is crucially important to users' autonomy.
  2. Ubuntu, if we are to believe what Canonical says, is the name of a community-developed Linux distribution based on Debian. Canonical is a for-profit organization, and it's using the Ubuntu name (a trademark which it owns) for a non-free software project.
  3. Canonical has also gone back on a promise to release the software that powers Launchpad under the AGPL. While this isn't directly related to the flap surrounding UbuntuOne, it allows us to (potentially) contextualize Canonical's ongoing actions with regard to network services.

My response comes in three parts.

Part One, the Technology

File syncing services are technologically pretty simple, and easy to create for yourself. I use ssh and git to synchronize all of my files, data, and settings between machines. I keep the sync manual, but I could automate it pretty easily.  It's not free--I pay a small monthly fee for some space on a server--but it works, and I have total control over the process.
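
Automating my setup would amount to something like the following emacs lisp (a sketch under assumptions: the repository path, remote, and commit message are placeholders, and a real version would want to deal with merge conflicts):

(defun my-git-sync ()
  "Commit local changes, then pull from and push to the remote."
  (interactive)
  ;; assumes ~/org/ is a git repository with a remote already configured
  (let ((default-directory (expand-file-name "~/org/")))
    (shell-command
     "git add -A && git commit -m 'sync' ; git pull && git push"
     "*git-sync*")))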

Granted, my solution is a bit technical, requires some babying along, and works because 95% of my files are text files. If I had more binary files to sync, I'd probably use something like rsync, which is a great tool for keeping large groups of files synchronized.

In fact, rsync is so good that you can probably bet UbuntuOne is using rsync or some rsync variant (because it's GNU GPL software, and it's good). If you're running OS X or any GNU/Linux-based operating system, chances are you've already got rsync installed. Pulling together something to keep your files synced between more than one machine just requires a few pieces:

  • something that runs on your computer in the background and keeps track of when files change, so that it can send the changes to the server. Alternately, this component can just run on a timer and send changes every x amount of time (every five minutes, say, when the computer isn't idle).
  • something that runs on the server that can send changes to the other computers when they ask ("has anything changed?").

Done. I'm no programmer--as I'm quick to attest--but I think that I could probably (with some help) pull together a tutorial for how to get this to work in a few hours. The client half of the equation might look something like the sketch below.
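
A minimal sketch, in emacs lisp because that's where I live anyway; the server name and paths are placeholders, and a real version would want error handling and a matching pull step:

(defun my-sync-push ()
  "Push ~/notes/ to a (hypothetical) server with rsync, in the background."
  ;; -a preserves times and permissions, -z compresses over the wire,
  ;; --delete removes server-side copies of files deleted locally
  (start-process "file-sync" "*file-sync*"
                 "rsync" "-az" "--delete"
                 (expand-file-name "~/notes/")
                 "user@example.com:notes/"))

;; push whenever emacs has been idle for five minutes
(run-with-idle-timer 300 t 'my-sync-push)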

Part Two, Trademarks, Centralization and Community

I think a lot of people feel betrayed by the blurring of this "thing" that a community has built (Ubuntu) with Canonical Ltd.

Which is totally reasonable, but it's largely orthogonal to the problem with UbuntuOne, and I think it's a much larger problem within the free software/open source/crowdsourcing world. This is one of the hazards of entrusting trademarks and copyrights to single entities. In a very real way, Canonical--by using the name UbuntuOne--is trading on the social capital of the Ubuntu community, and that leaves a sour taste in a lot of people's mouths.

But ceding control over a name or a product to a centralized group is something we have a lot of experience with, with varying results. Some thoughts and examples:

Here's one example: there's a huge "open source" community that's built up around the commercial/proprietary text editor TextMate for OS X. While I think TextMate is really great software, and the TextMate community is made up of really great people, TextMate is largely valuable because of the value created by the community, and that value exists (tenuously) on the good graces of the owner of the TextMate intellectual property. While Allan is a great guy, for whom I have a great deal of respect, if anything were to happen to TextMate, a lot of people would find that they had nothing to show for their energy and efforts in the TextMate community.

Similarly, MySQL AB (and later Sun Microsystems, and now Oracle) owns the entire copyright for the MySQL database, which wasn't a major issue for most developers in the early days, but the sale of that company (and its copyright holdings) now puts the development of that code-base into some doubt. As a result, I've seen much greater buzz around the PostgreSQL project, and I think this kind of fallout serves as a good example of what can happen to a community when the centralized body fails to act in the community's interests, or even threatens to.

This is a huge issue in the whole "web 2.0"/mashup/social networking/social media space. The logic for the proprietors of these sites and services is: "build something, attract users, create a REST API that makes it easy for people to develop applications that use and add value to our service, attract more users, stomp out competition in the space, profit." This is basically the Twitter/Facebook/Ning business model, and while it works to some degree, it's all built upon stable APIs and the enduring good will of the community toward the proprietors of the service. Both are difficult to maintain, from what I've seen: the business model isn't very coherent, and it requires the proprietors to balance their own self-interest against their community's interests while finding some way to profit in spite of an unstable business model. It's tough.

Part Three, Business and Free Network Services

I've been turning over ideas related to free network businesses, cooperatives, and software freedom for weeks now, but I swear it all fell into my lap here. Some basic thoughts, as a conclusion to this already too lengthy essay:

  • The UbuntuOne service, like most free network services, is at its core providing a systems administration service rather than some sort of software product. The software is relatively trivial compared to making sure the servers are running/accessible/secure.
  • The way to offer users autonomy is to develop easy, free systems administration tools, and to educate users on how to run these systems.
  • Corporations, while important contributors to the free software community, also inevitably serve their own interests. While it's disappointing to see Canonical go down the proprietary track, it's neither surprising nor a betrayal. Canonical has put on a good show and accomplished a great deal, but in retrospect we can imagine a number of things it could have done differently from way back that would have changed the current situation (e.g., worked within the Debian Project, developed a tighter business model, etc.).
  • Free software is very pro-business, but it's not very pro-big-business, as "native free software business models" are built on personal reputations rather than tangible products. That translates to making an honest living pretty well, but it doesn't convert very well into making a lot of money quickly.

Anyway, I better get going. Food for thought.

file system databases

Joe has remarked that he finds it ironic that--on this blog--I sing the praises of emacs and of storing one's data in plain text files, largely as part of a crusade against databases, while I am also an ardent supporter of his haven project, which is basically a database project.

While I don't think this is that contradictory, I do understand how one could make that inference, so I think it might be wise to address the issue explicitly. Let's first do a little bit of recapping:

  1. Reasons why I don't like databases:
    • They're inflexible for many kinds of data, and they require users to adapt to the structure, rather than the other way around.
    • They require too much overhead, both during operation and in programming, to be worthwhile except in some large-scale edge cases.
    • They abstract control over data away from the owner/user of the data, handing it to systems administrators and programmers, rather than leaving data in a form that everyone can access and manage.
  2. Reasons why I like text files:
    • Everyone and every machine can read text files. They're a lingua franca.
    • We have many highly sophisticated options for editing and munging data in plain text files.
    • Plain text files are infinitely flexible, both in structure, and in the kinds of data they can store.
  3. Caveats
    • There are some kinds of data that are best stored in database systems.
    • Structure in plain text files depends on the self-control and education of the users, which may be a risky situation.
  4. Reasons why I like Haven:
    • It combines numerous features that I think are really powerful and key to the development of how we use computers: cryptographic security; flexibly structured data; distributed computing/data storage; versioned data stores; collaborative systems; non-hierarchical organization of data; etc.
    • Joe is awesome.
    • It expands and improves on the Project Xanadu idea.

So, to Joe's question: how does plain text coexist with haven, in my mind?

The answer is pretty simple, really.

At its core, haven isn't so much a database as it is a file system. We don't think "I'll set up a haven repository/system for this project," but rather "hang on, I can put my data for this into the haven system." Haven isn't a bucket that can be designed to hold anything; it's a total system that's meant to hold everything.

And it's just a low-level system. Joe's work on haven is focused on a server application and an API; everything else would just be applications that use haven. One such application would (inevitably) be a FUSE driver, which would expose a haven system as a file system. So the objects in your haven database would be, basically, plain text files.

Which kind of rocks.

Haven is just a concept right now, but, in general, FUSE is one of those technologies with amazing possibilities, because we have so many amazing tools and mature technologies for manipulating data in file systems. FUSE abstracts the mechanics of file systems and makes it easy to "think about" data in terms of files, even when it doesn't make a lot of sense to store that data in files. That's really quite cool, and powerful for the rest of us.

I've seen FUSE drivers for Wikipedia, for a non-hierarchical file system, for http (i.e., the web), for Blogger, and for structured data like RSS and other XML, all of which are really cool. I'm not sure whether any of these systems are finished, and I'm not sure that any of these creative uses of FUSE are ready for prime time, but I think it's a step in the right direction, generally.

notes from the fast

Several notes with regard to the information fast that I'm undertaking. And because this is the Internet and this is my blog... well, here goes:

  • I had initially suspected that the cause of my ailment was the special ThinkPad TrackPoint driver that deals with scrolling, which didn't get updated when I upgraded to jaunty. This turns out not to be the case, as I had a freeze (again in firefox) while just moving around with the arrow keys. So much for that theory.

  • C.K. and I determined that--counter to my supposition--the slight/occasional clunking noise is probably the drive head parking itself, and it doesn't seem to correspond with the problem. So replacing the drive is both awkward (weird form factor) and not likely to fix the problem.

  • I installed emacs-w3m on both computers. It's not entirely intuitive. There are debian/ubuntu packages, but if you use the emacs-snapshot package, the sequence is: upgrade to the latest emacs-snapshot, install w3m-el, uninstall emacs22, and then add the w3m setup code to your emacs init file (.emacs); see the snippet at the end of this post.

    It's remarkably nice, particularly for looking up links while I'm writing something and for reading content-rich pages. The key-bindings are, by default, excessively lame and require attention (which I haven't figured out yet). I always thought that emacs web-browsing was way too dweeby for me, so learning that it's actually really cool is a good thing indeed.

  • This isn't a real fast, as I am still using firefox a little bit, and I suspect that I'll always need to have it installed, but I think it's generally good not to have firefox be the default environment for everything that isn't emacs or the terminal.

  • I've basically been avoiding my RSS reader during the course of this experiment, which I need to spend some time tending to, at least so that I can start using some other reader. This has been an issue since I switched to Linux, and I've failed to find anything that I really like. I'm tempted to use the gnus news reader to read the RSS, but I fear this might be incredibly awkward/complicated for a very small payoff.

  • By moving web browsing, insofar as it needs to occur, into emacs, the windows I see are: stuff inside of emacs (mostly org-mode and writing), and stuff inside of terminals (mutt, Micawber, bash, etc.). As a result, I get the feeling that all of my windows look the same. I'm interested in how people might solve this problem themselves. How do you make an entirely text-driven, undecorated environment have texture? Have... variety between windows that might provide some context for specific tasks?

    This is an aesthetic/design question more than a programmatic one, I guess. I've tried playing around a little with colors in emacs, and I still use the default for emacs23 because the others seem difficult to read. I've tried different fonts (in both programs), and I'm quite wed to my current font. I've tried transparency (which doesn't run well for emacs on the laptop)... I'm thinking that adding Conky, or more informative widgets, might be helpful, but I'd love to get some feedback from you all...
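
For reference, the w3m setup code I mentioned above amounts to something like this (a sketch; exact package names and load paths vary between the debian/ubuntu packages and a by-hand install):

;; load emacs-w3m on demand, and use it for opening links
(autoload 'w3m "w3m" "Visit pages with emacs-w3m." t)
(autoload 'w3m-browse-url "w3m" "Browse a URL with emacs-w3m." t)
(setq browse-url-browser-function 'w3m-browse-url)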

free project xanadu

It's my hope that this post will combine the following ideas:

1. The concept of "General Information," as posited in Samuel Delany's 1984 novel Stars in My Pocket Like Grains of Sand.

2. The hypertext system Project Xanadu, as described by Theodor Holm Nelson in his book Literary Machines (and elsewhere), which I've discussed on this blog recently.

3. The contemporary idea of distributed network services, as described in the Franklin Street Statement, and enacted by technologies like git, xmpp, laconi.ca and open microblogging, and others.


We value the Internet--really the "web"--as it is today because it's diverse and flexible. Web pages can look like anything, and can do virtually anything, from presenting the text of a book or newspaper to fulfilling most of your desktop computing needs. What's more, all of this is indexed and made accessible with Google search. That's pretty cool.

While the web's ad-hoc and disorganized structure has made many things possible, there's no reason to assume that the future development of the web will continue in the current direction. Microformats and the proliferation of RSS in "Web 2.0," not to mention human-curated portals like Mahalo (or Google Knol, and even various Wikimedia Foundation projects), all seem to point to a larger trend toward more structured, hand-curated information.

As an aside, I think it's interesting that hand-curation means more human involvement in information networks, while structured data means less human involvement in those networks.

I should also clarify that by "more structured" I basically mean an end to web design as we know it now. Rather than allowing designers--and, well, people like me--to have a say in how pages are organized, information would be collected in containers with specific structures (headings, lists, tables, metadata, etc.), and the design or display would happen on the client side in the form of specialized browsers: site-specific browsers, but also domain-specific browsers (e.g., use this program to view blags and microblog pages, this program for reading pages from the news services, and this program to view the next new class of sites). In short, adding structure to content wouldn't limit the realm of possibility, but it would separate content from presentation.

Structure is one part of the Xanadu model of hypertext/content, and perhaps the part most lamented by those of us who are... professionally frustrated by the lack of structure in the contemporary web, but I think its distribution and federation concepts are too often overlooked, and they're quickly becoming relevant to contemporary technology.

Federation, to summarize, is the set of technologies that allow network services to function without an always-on, real-time network. Federation avoids two technical problems with distributed network services: first, it removes the need for centralized servers that provide canonical versions of content; second, in a distributed environment, it removes the need for local nodes to contain complete copies of the entire network. Xanadu had provisions for the first aspect but not the second, while the Internet (more or less) has provisions for the second but not the first, and free network services--in some senses--attempt to bring the first form of federation to the web and to the Internet.

Federation, for free network services, means finding ways of communicating data between websites so that networks of information can be built in the same way that networks of computers have already been built.


In Stars in My Pocket Like Grains of Sand, Delany's "Internet" is a service called "General Information," or GI, which exists in a neural link for some of the characters. GI isn't always complete or accessible in its most up-to-date form--and its users know this and accept it as a price of living in an interstellar society--but it is accessible on an interstellar level. GI, like free network services, is built (either implicitly or explicitly) on the notion that a node on the network can go offline, continue to develop and be useful, and then go back online later and "sync" with its peer nodes, thus creating some measure of resilience in the network.

The contemporary network uses a resilient routing system to "get around" nodes that drop offline, whereas a truly federated system would store diffs across time and use this "temporal" information to maintain a consistent network. This sort of consistency is going to be really useful, not only because it would allow individuals and small groups to provide their own networked computing services locally, but also because data connectivity that is free, always accessible, fault-tolerant, and high speed is unlikely to appear universally... ever, and certainly not for a long time.


I suppose the next step in this train of thought is to include some discussion of my friend Joe's project, "haven," which would tie this to the discussions I've been having with regard to databases. But that's a problem for another time.

writing in org mode

With any luck, I'll have most of a draft of the short story I've been working on done by the time this goes live, and if not, certainly rather soon thereafter. This is an exciting announcement in and of itself, but perhaps the more interesting thing is that, in the process, I sank into writing this story in org-mode.

My general M.O. for writing for the last several years has just been to write and store the files in markdown and use whatever text editor I fancy. I write the blog this way; I write papers this way. Everything seems to work fine: there are converters for LaTeX and HTML, and the plain text format is absolutely and completely readable to people who aren't as obsessive about text files as I am.

While I'm a huge org-mode proponent, I don't tend to think that org-mode makes a particularly good writing environment (or haven't, heretofore), because unless you're viewing them in org-mode, org files are sometimes a bit ugly, and the syntax is different enough from markdown to confuse me, and...

The general consensus, as far as I've seen, is that while org-mode is indeed a great boon to the intensive emacs user, it's not an ideal production editing environment; muse-mode, or my favored markdown-mode, might be better if you're actually writing text.

And then, as I got into the writing of this story, I realized that I was flipping rather seriously (and annoyingly) between my notes for the story and the story I was writing. Also, when I'm writing book-length (or conceptually book-length) work, I tend to break up the text into more manageable chapter-length or scene-length files, which is conceptually useful for me.

In a short story, it didn't seem to make sense to break things up into more than one file, but after I'd written a couple thousand words, I realized that something needed to be done. I created a file with some header meta-data (in the yaml form that jekyll uses), an org-mode statement to define custom status words that seem relevant to the writing/editing process, and first-level headers to define key scenes or breaks in the story. (I've never written--or read, to the best of my memory--a story that required more than one level of organization, but ymmv.) And then--and this is the clever part, as far as I'm concerned--property drawers for notes about what happens in each scene.

Property drawers stay folded by default and are intended to store a collection of key-value pairs, but they don't get exported by default, and so they're a good way to keep your notes and your writing together and then export clean copy when drafting is done.
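
To make that concrete, here's a stripped-down sketch of the structure (I've elided the yaml header, and the status keywords and property names here are invented for illustration; my actual template, linked below, has the full arrangement):

#+TODO: DRAFT REVISE | DONE

* DRAFT the first scene
  :PROPERTIES:
  :SUMMARY: who's in the scene, and what happens in it
  :END:

  The text of the scene goes here...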

Also, I've recently added the following to my key-binding list; it inserts a property drawer at the current heading, which is indeed a good thing:

;; bind M-p to insert a property drawer at the current heading
(global-set-key "\M-p" 'org-insert-property-drawer)

I've posted a copy of my template file for your review and edification.

Comments?

glitch and web experiments

So, my laptop (where I seem to be doing most of my writing these days) seems to have developed a wee glitch. It seems that (somewhat randomly) the system just freezes irrevocably whilst, get this, scrolling on twitter.com. No really. I'll be minding my own business, and suddenly firefox freezes, I can't interact with the window manager, I can't kill the window server and start over, I can't switch to another virtual terminal to fix things, nada. Hold down the power button and restart. Interestingly, throughout all of this the mouse still works, as if to taunt me.

I've not been able to reproduce the freezing in any other application, and I'm concerned that it might be hardware related (disk access has been sort of weird lately, and it's an older computer); it could also be related to some of the dependencies of Awesome 3.3. I'm waiting for things to sort of even out on a number of fronts before I assign blame. (And switch distributions of GNU/Linux.)

My response, of late, has been to just avoid the web entirely. This isn't a huge problem, as I try to avoid the web as much as possible anyway. I mean, I lead a very networked/digitally connected life, but it turns out that most of it isn't web-based in a day-to-day working sort of way.

The experiment, then, is to see just how far I can go in my avoidance of the web. The "information fast" isn't a startlingly new idea, and I'm sort of interested in seeing how this affects my computer usage on the whole. Information fasts work by forcing/allowing you to take a cold-turkey break from the information you consume, and then re-evaluating your information consumption habits to see what's worth sticking with and what's not. So basically I'm using this as an exercise to see: what changes if I say "ok, no web-browser"? What tools and workflows do I develop, and is this a better way to work?

Hints and suggestions would be helpful. There are some practices that I need to get set up with, and to use more effectively: Twitter and identi.ca via IM (check); offline, multi-computer RSS reading; offline access/browsing for common resources (e.g., WikipediaFS and other FUSE resources; YaOddMuseMode for the EmacsWiki; some way of reading the c2 wiki; and so forth).

We'll see where that leads me. Do people have suggestions for tools in this direction (or others)? Has anyone done this before? Would anyone else be interested in doing the fast with me?

I look forward to hearing from you!

Update: I had a non-twitter-related crash. I was browsing, loading a new page, and scrolling on the existing page. Bam! In response, I have upgraded the ThinkPad TrackPoint (or whatever) drivers to their jaunty versions, as those sources were disabled during the upgrade.

I've also, in this vein, installed and have a fairly effective copy of w3m, an emacs-accessible browser, running. While I don't think this is the way forward, forcing myself to use an editor-based browser might allow me to focus more effectively and rely on the web more for information than for entertainment. As it should be!

free network businesses

I've been reading the autonomo.us blog and even lurking on their email list for a while, so I've been thinking about "free network services" and what it means to have services that respect users' freedom in the way that we've grown to expect and demand from "conventional" software. This post explores issues of freedom in network services, business models for networked services, and some cyborg issues related to network services. A long list indeed, so let's dive in.

I've been complaining on this blog about how much web applications, the web as a whole, and networked services on the whole suck. Not the concepts, exactly--those are usually fine--but they suck for productive users of computers, and for the health of the Internet that first attracted me to cyberculture lo these many years ago. I still think this is the case, but I've come to understand that a lot of the reason I have heretofore been opposed to network services as a whole is that they're sort of brazen in their disregard for users' freedom.

This isn't to say that services which do respect users' freedom are--as a result--not sucky, but respecting freedom is a big step in the right direction. The barrier to free network services is generally one of business models. Non-free network services center around the provider deriving profit/benefit from collecting users' personal information (the reason why OpenID never caught on), from running advertising alongside user-generated content (difficult, but more effective than other forms of on-line advertising because the services themselves generally provide persuasive hooks to keep users returning), or, when all else fails, from charging a fee.

So, to back up for a minute, I suppose we should cover what it means to call a network service "free." Basically, free network services are ones where users fundamentally have control over their data: they can easily import and export whatever data they need from the provider's system, and they can choose to participate in the culture of networked computing by running the software on their own computers. There are ideas about copyleft and open source with regard to the code running on networked services that are connected to these ideas of freedom, but that is more a means to an end (as all copyleft is) rather than--I should think--an end in itself.

Basically: data independence, and network federation or distribution. Which takes all of the by-now-conventional business models and tears them to bits. If users are free to move their data to another service (or to their own servers), then advertising and leveraging personal information both go out the window. Even free software advocates look at this problem and say "we have a right to keep network services closed," which is understandable given that there aren't many business models in the free world. While a lot of folks in the free network service space are working to build the pillars of free network technologies, I think some theoretical work on the economics is in order. So here I am. Here are the ideas:

  • The primary "business" opportunity for free network service is in systems administration, and related kinds of tasks. If the software is (mostly) open source and design and implementation can't possibly generate enough income, then keeping the servers running, the software up-to date, and providing support to users is something that provides and generates real value and is a concrete cost that users of software can identify with and justify.
  • Subscription fees are the new advertising. In a lot of ways, what a particular service provides (in addition to server resources) is a particular niche community. While federation changes this dynamic somewhat, I think people are often going to be willing to pay some fee to participate in a particular community, so between entrance fees (like MetaFilter's) and subscription fees (like Flickr's) you should be able to generate a pretty good hourly rate for the work required.
  • Enterprise services. We could probably support free network services (and the people behind them) by using those networks as advertisements for enterprise services: a company sees a service on the Internet, deploys it for internal use on its intranet, and the developers behind it sell support contracts.
  • Leech money from the telecoms. This is my perpetual suggestion, but while most of us Internet folks and network service developers may or may not be making money from our efforts in cyberspace, the telecoms are making money in cyberspace hand over fist, largely on the merits of our work. It's not really possible to bully Ma Bell, but I think it's a part of the equation that we should be focusing on.
  • Your suggestion here. The idea behind business in the free network service space is that providers are paid for the concrete value they provide, rather than for speculation on their abstract value, and as a result we can all think about business models openly without harming the viability of any of them.