tychoish, a wiki


When did Emacs Lisp go Pro?

micro tycho 5 October 2013

Is it just me, or has the development environment for Emacs Lisp gotten really… pr0 in the last year?

There’s always been a lot of emacs lisp floating around, and it’s incredible how useful a lot of this is. At the same time, most of the emacs code I’ve run across hasn’t been… particularly high quality.

Which, you know, makes sense: no one gets paid to write emacs lisp, really, and for the most part this code is about solving very local problems in very specific ways.

I think that emacs 24 and package.el have driven emacs geeks to writing better and more useful code. So have elnode, not to mention ert and cask. I also think the emergence of some killer basic libraries helps, things like:

In some ways, this illustrates the point: if you build high quality tools to help programmers write great software, they will.

Posted 5 October 2013

Python Job Runner

micro tycho 30 September 2013

I wrote this buildcloth program that provides a simple Python-centric build system tool. The idea is that you can write build system code in Python and avoid having to wrangle external build systems that have too much domain specificity or general awfulness.

I plan to use Buildcloth in a few projects eventually, but in the meantime I needed some way to manage a frighteningly large number of tasks in our build system. So I wrote this function, called runner(), that takes an iterable of dictionaries that each define a single job, and executes each job’s function (as needed) in a multiprocessing worker pool.

Initially I intended this to be a transitional crutch on the way to using buildcloth, but the truth is that it works really quite well. Sometimes you have to write a few thousand lines of code to figure out what the correct 26 lines of code are, I guess. Here it is, discussion to follow.

  • jobs must be an iterable that returns dictionaries in the following form:

    { ‘target’: , ‘dependency’: , ‘job’: , ‘args’: }

    Typically this is a generator function, but you can use any kind of data source.

    target and dependency information is only used to determine whether a rebuild is necessary (i.e. whether the dependency is newer than the target). runner() will not order tasks based on the dependency graph; that’s what buildcloth is for. You can disable dependency checking and just rebuild everything by setting force to True.

    The included check_dependency() function can handle lists of targets and dependencies, if needed.

  • pool designates the size of the worker pool, which defaults to the number of logical processor cores. This is often a good number. A pool size of one is the same as setting parallel to False, which just runs each job serially without using the worker pool.

    Running things serially may yield better results for some classes of jobs, and is useful for testing.

  • By default runner() returns a count of the jobs run, so you can tell how many of the jobs you sent were actually run after dependency checking. If you set retvalue to results, the function instead returns a generator that holds the return values of each function run. There are a few other options here, which you should be able to suss out yourselves if you care.
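Since the gist itself isn’t reproduced here, here’s a minimal sketch of what runner() and check_dependency() might look like, reconstructed from the description above (the parameter names, defaults, and internals are assumptions, not the actual gist):

```python
import multiprocessing
import os


def check_dependency(target, dependency):
    """Return True if target needs rebuilding; handles lists of paths."""
    targets = target if isinstance(target, list) else [target]
    deps = dependency if isinstance(dependency, list) else [dependency]
    for t in targets:
        if not os.path.exists(t):
            return True
        for d in deps:
            # rebuild when any dependency is newer than any target
            if os.path.getmtime(d) > os.path.getmtime(t):
                return True
    return False


def runner(jobs, pool=None, parallel=True, force=False, retvalue='count'):
    """Run each needed job from an iterable of job dictionaries."""
    if pool is None:
        pool = multiprocessing.cpu_count()

    # dependency checking only decides *whether* to run a job,
    # not the order jobs run in
    needed = [j for j in jobs
              if force or check_dependency(j['target'], j['dependency'])]

    if parallel is False or pool == 1:
        results = [j['job'](*j['args']) for j in needed]
    else:
        p = multiprocessing.Pool(pool)
        async_results = [p.apply_async(j['job'], j['args']) for j in needed]
        p.close()
        p.join()
        results = [r.get() for r in async_results]

    if retvalue == 'results':
        return (r for r in results)
    return len(results)
```

With parallel=False the jobs run serially in-process, which is the testing-friendly mode described above.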

I hope it helps you build something awesome! I look forward to hearing about what you build.

Posted 30 September 2013

Emacs Configuration Party

micro tycho 29 September 2013

I spent the bulk of this weekend revising and refactoring my emacs configuration. Forgive the potential narcissism, but I thought a little bit of reflection would be worthwhile here.

I started using emacs in 2008 or 2009. I built the best configuration I could at the time, but I didn’t know very much about software development or managing complexity, and while I did an ok job maintaining it for a while, in the last year or so, I’d done almost no upkeep and had just thrown more crap into an overflowing folder.

The thing about a configuration for a program that you use every day is that it needs to be able to change with you as your needs change and the kinds of projects you work on grow and change. The old crufty configuration made me avoid making any changes to my setup, and a collection of annoying bugs cropped up that I had no idea how to fix.

In situations like this, the only thing to do is to set aside some time and fix things.

Before, I had a directory full of emacs lisp, most of which I’d downloaded from the internet. The directory is in a git repository that syncs to four computers: my main laptop and my work computer (these are virtually the same system, both Arch Linux based), a (work) mac laptop, and a server that runs Debian. There were also symbolic links to other emacs lisp packages scattered here and there on the system. ~/.emacs was a symlink to a file in this directory that had machine-specific configuration options and then loaded two files: a file that did some different initialization based on the name of the daemon process (I run three or four different emacs daemons at a time, with slightly different configuration requirements.) To cap it all off, there was a “central” file that held all of the actual configuration, which was itself just as awful as what I’ve described. Then, as if this wasn’t enough, there was a layer of files named tycho-<name>.el that contained my own customizations and settings wrapping specific areas of emacs functionality.

Somehow it all worked.

The revision process went well. I decided, for the moment, to avoid package.el (concerns about startup time, being able to edit packages more natively, and wanting to let the emacs packaging ecosystem settle down a little.) Instead I’m vendorizing things in a more consistent manner. Here’s the shakedown of my changes:

  1. I’ve created a number of folders in the directory for application-specific code. This is mostly third-party stuff. Large packages have their own folders; single-file modules are grouped into some basic categories: emacs-lisp libraries, apps, programming-language specific groups, navigation tools, cloud-service integrations, calendar tools, and so forth.

  2. The code that used to be in the tycho-<name>.el files is all that remains in the top level of my emacs directory. These files have different and better names. Files that customize emacs itself have simple names like display.el and keybindings.el and local-functions.el. Files with domain-specific customization have names like python-config.el.

    I went through this code and reduced the number of files drastically, moving global settings out of random files and organizing each file with requirements at the top, then setting declarations, followed by keybindings, followed by the relevant custom functions.

  3. The basic initialization code is much more straightforward as well. There’s one file that modifies the load-path, which is the first thing that gets loaded. There’s a small amount of system-specific configuration that remains, but it turns out that about 80% of what I thought was system specific wasn’t. Nifty.

    I also coalesced all of the emacs initialization code, and I refactored the functions to reduce the redundancy in the initialization process.

  4. In the process of moving things around, I was able to delete or archive a bunch of code that I was no longer using, and factor out a bunch of redundant crap. The feeling was spectacular.

The results are great:

  • I’m no longer afraid of editing the code that runs the software that I use pretty much constantly.

  • The most annoying bug ever, which broke using emacs sessions in the console, is fixed. I’m not sure what did it, but when I open a file from the command line in a console session, it actually opens properly. I’m very happy.

  • The start up time for an emacs session is now wicked fast. We’re talking 2-3 seconds depending on hardware. Ideally it’d be 1-2 but I’ll take what I can get for now.

What’s next?

  • I’d like to not have to think much about organizing my emacs configuration for a while. I’m particularly envious of good code folding at the moment.

  • I think eventually it’d be good to learn more about the package management tools and begin to use them, but I want to wait for them to mature, and also to figure out a way to reduce the init-time penalty of transitioning my working legacy system. I understand that it’s the future, but I want to get to know it a bit better first.

  • I’ve been trying to hone my chops in a second, non-python programming language. I’ve been flirting with common lisp, go, and C++, and while I can hobble around in each, I hope that actually adding stuff to my emacs config will make things like code navigation/completion and compiler integration not suck.

Comments? Suggestions?

Posted 29 September 2013

Reuse Python Test Code

micro tycho 4 August 2013

I’ve been writing test cases all day, and while I’m still a few dozen tests (or more!) away from getting coverage for the current project, I’m making progress and feel reasonably good about this project.

In doing so, I’ve revived a trick that I first tried on a toy project a few weeks ago.

The problem, in Python, is this: you have two sibling classes that both implement the same interface using different back-ends. It seems reasonable that you could use the same set of tests on both backends, but there’s no obvious way to do that. Right? Here’s the solution:

You create one class, descended from object that has a number of methods that implement tests.

You then create two child classes that inherit first from the test-holding class and then from unittest.TestCase, hook it up, run it with nose, and everything works.

Awesome!

Posted 4 August 2013

The Downfall of Chrome

micro tycho 22 July 2013

I’m always watching how my coworkers and friends are using computers, in part because people will always teach you something brilliant about how to use a computer if you give them a chance1. Also, being, as I am, interested in the future of technology, I think Stephen O’Grady is right that developers drive technological change.

One thing that I’ve noticed in the past 3 months, or so, is that folks are slowly moving away from Chrome and back to (mostly) Firefox. I don’t think this is emblematic of any greater shift in the way that programmers work and I don’t think that this is evidence that Chrome itself is unsuccessful…

I think there are a few interconnected reasons driving the move away from Chrome:

  1. Chrome can be flaky in some cases: tabs freeze, and memory use tends to run away with a large number of tabs.

  2. All of the “magic” that Chrome does to make things fast and efficient increases the actual heat generated by the machine, which is both uncomfortable and reduces battery life for laptops.

Firefox, by contrast, has the same good standards-compliant rendering, isolates tabs using a process model, has a more stable resource profile, an embedded PDF reader, an established extension ecosystem, and a regular incremental development cycle. If competition is good for innovation, Chrome gave Firefox the push it needed.

Second, I think the most technologically interesting and important aspects of Chrome (from Google’s perspective) aren’t actually the ones that would drive adoption of Chrome: notably the automatic “self-updating,” and having a viable runtime for projects like ChromeOS and Dart. Some people have to use it, but not everyone has to use it for it to succeed.

As you were.


  1. For one emacs-using coworker and me, anytime we watch each other use emacs, we inevitably ask each other “wait, how did you do that?”, notably for flyspell-auto-correct-previous-word and dired-maybe-insert-subdir, which are both amazing, but even after years of collective emacs use we were each only familiar with one.

Posted 22 July 2013

Get a Problem

micro tycho 21 July 2013

If you want to learn how to program, I think that Instant Hacking is probably the best resource around. Read it a few times before proceeding. I think if you understand what’s going on in that article you can probably learn everything you need to know about programming on your own given access to documentation and some common sense.

Once you know how to express units of work in code, there’s still a lot to learn about actually making software. I’ve been thinking about how people learn to make software. I’ve been collecting anecdotes/lessons on learning to make software and how to jump from “knowing how programs work” to “making awesome software.” Here’s one…

At a party once, someone with a computer science degree who’d been away from programming for a while asked me what they should do to get back into programming.

I think the desired answer was “you should really look at technology/framework,” but instead I said “you need a problem.”

If you begin a project, of any scope, without a firm grasp of the problem and the bounds of the project, you’ll never make anything of notable quality or utility.

You may recognize that you have a problem or a need, but not have a clear idea of how to translate a problem into a project or a specific set of functionality. This is ok, and common. I find that using the following solution as a “worksheet” helps turn identified problems into more actionable projects:

  • Create a list of the aspects of the current process that are repetitive or require manual intervention and action.

  • For each aspect define common mistakes or errors that currently exist in your process.

  • Define your audience or users.

Design decisions, tooling, and development priorities all flow from the answers to these questions.

Posted 21 July 2013

Blogging Progress and New Posts

micro tycho 20 July 2013

I have this bad habit of writing blog posts on the weekend and then sitting on them for many days, and sometimes many weeks.

I just posted two things to my real blog:

Both are about a work project (with implications for personal projects) that I’ve been thinking about a lot recently.

The tumblr has been a great inspiration to work on blogging more recently. I still have the feeling that blogging isn’t exactly something that I want to spend a lot of time working on: I don’t need to prove to myself that I’m a writer and if I need more practice writing something it’s not under-substantiated short-form polemics.

At the same time, as I discussed in A Day in the Life, it’s true that I don’t get to do a lot of sustained writing on a day-to-day basis. It’s nice to be able to collect thoughts, and having an easy way to start the momentum of doing work is great. Even if blogging isn’t the end goal, blogging is sometimes a great tool to jump-start the writing engine. As it were. Two things I’ve noticed:

  • I’m still not particularly good at splitting blogging focus between the “short” and “medium” forms. I like the whole tumblr thing, but there’s not much that distinguishes tychoish.com from m.tycho.co. Sigh.

  • I’m also aware that the experience of writing professionally for 4 years (and counting) has made me a much better writer. My sentences are a little longer than they ought to be. My unassisted spelling isn’t great (but it’s a lot better than it used to be.)

So the plan from here: write more often and don’t sit on tychoish.com posts for the hell of it!

Onward and Upward! (I guess.)

Posted 20 July 2013

Go++ / Go Reflections

micro tycho 13 July 2013

I did some hacking on a Go project last week, and while I was able to mostly achieve what I wanted, I remain somewhat dubious.

The good:

  1. The tools are easy to use, and from all appearances seem to work clearly and consistently.

  2. The Go community is doing a bang up job in framing a conversation about concurrency.

  3. It’s killer fast and doesn’t make any excuses for design decisions that are probably unpopular.

The bad/neutral:

  1. Learning Go seems to require a C++/Java/C background, and not having familiarity with that, I feel like everything takes me 100 times longer. This is probably as much of a documentation problem as it is a problem with my background.

  2. For non-systems work: gluing JSON APIs together, simple command line tools, CRUD-type applications, and simple orchestration work, Go is too much.

  3. In some ways the build tools are so good that it sort of feels like you could just do compilation/execution dynamically and (optionally) hide the fact that it generates a binary before running. At the very least, some REPL or SLIME-like interface feels missing.

Prediction: in a year or two there’ll be another Go-like language, with full-on dynamic typing, more batteries included, that will be easier to actually learn. Frankly, I think Go is close enough in so many ways that the would-be Go++ is likely to be directly derived from Go.

Time will tell.

Posted 13 July 2013

Beyond Hello World

micro tycho 6 July 2013

It’s been a long time since I did anything with Lisp, and I came back:

Ok, so it’s not much, but I got this feeling afterwards of actually having done something.

This, like my previous experiences with Common Lisp, has been about playing around with the configuration of my window manager (StumpWM.) There’s evidence in my config of more impressive hacking, but it’s been a while, and I’m better at programming in general now than I was the last time I dabbled.

Anyway.

This got me thinking about how we learn to program and the common pedagogy associated with programming.

The general consensus is that it’s important to get prospective programmers to do something, anything, very quickly. As a result, introductions to programming start with an obligatory “Hello World” program.

Except that getting systems to output “Hello World” isn’t terribly interesting or useful. It’s also not a terribly good introduction to doing things with software, and in most cases doesn’t convey a sense of how programs expressed in a given language work.

Which isn’t to say that it’s bad, but that it’s not a good starting point. Better I think to say “here’s the second example,” and then say “if you want to see the ‘hello world,’ [click here].” Our introductory examples need to include:

  • a code block (e.g. function, etc.)

  • variable assignment.

  • a data structure (if applicable, maps and lists of some kind.)

  • a conditional statement.

  • capturing information/state from something external to the program.
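A “second example” hitting all five of those points might look like this hypothetical Python snippet (everything in it is made up for illustration; the external state here is just an environment variable):

```python
import os


# a code block: a function definition
def greet(name, greetings):
    # a conditional statement
    if name in greetings:
        return greetings[name]
    return 'Hello, %s.' % name


# variable assignment, and a data structure (a map)
known_greetings = {'tycho': 'Welcome back, tycho!'}

# capturing state from outside the program: an environment variable
user = os.environ.get('USER', 'stranger')

print(greet(user, known_greetings))
```

Small as it is, it shows how functions, data, conditionals, and external input fit together in a way “Hello World” never does.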

As an example, I thought the following bit on strings in C from this C tutorial was pretty great.

Posted 6 July 2013

Toolkit Framework Buzzword

micro tycho 3 July 2013

I was talking with Ana Nelson (who rocks!) about Dexy (which is the coolest documentation toolkit framework there is.) …

Actually, let’s hold up there. I was going to write something about “release criteria” and the things we use to judge projects as “complete,” but then the words “documentation toolkit framework” stumbled out, so let’s unpack that.

I think this is interesting in part because it occupies the space between “buzzword” and “meaning,” and in part because it’s a cool idea.

It’s a buzzword because it uses words that tech people use, “toolkit” and “framework,” in an apparently technical sense, but it doesn’t really convey meaning.

I worked for a company a while back that marketed its core product as a “framework,” even though the “framework” was just a classic piece of enterprise software (nothing wrong with that, after all.) They used “framework” to convey that they’d developed a generic solution to a common and complex problem. Which they had. While calling the product a framework wasn’t true in any real sense, it was probably a good move, though I found it frustrating in the moment.

"Documentation toolkit" describes an actual class of software that documentation projects use to take writing output (source files) and transform them into feature filled documentation resources. It’s a hard problem, requirements vary a great deal, and there isn’t a vibrant industry making this kind of software.

Which brings us to “a toolkit framework.”

We’ve started to use the term “framework” to describe things that aren’t meant for end-users, things that provide non-integrated components that developers can use to build tailored solutions. Ruby on Rails is a web application framework, etc.

In the past we might have called them libraries, but today’s frameworks are more than a collection of routines and data structures. Also “library” isn’t terribly sexy. And to be fair most frameworks are really a collection of related libraries rather than a single library (as such.)

The emergence of the framework as a class of software is interesting because it lets users take a disorganized mess of code and ad-hoc practices and turn that into functional tools. Without having to reinvent the wheel. And without having to force your project to conform to someone else’s requirements or intentions.

More on this, perhaps, in the future.

Posted 3 July 2013

A Day in the Life

micro tycho 1 July 2013

When people ask me, “what do you do [professionally]?” I always seize up, and I never know how to respond. Sure, “technical writer” is one of those careers that you see in high school career aptitude tests (right?) but it’s not terribly well understood.

The worst thing is that there are a bunch of different technical writing sub-fields that don’t overlap. So if you know another technical writer, chances are they do something closer to documenting business practices for banks (or scientific equipment! or cheese extruders!) and don’t really do anything like what I do.

I’m sometimes tempted to say “I’m a software engineer” or “I make software,” which is true, but I feel like it’s disingenuous. Sure, I write code sometimes, but I’m not exactly writing software that’s running on hundreds of thousands of systems (like my coworkers who are “real engineers.”)

Maybe it’s just impostor syndrome. So be it.

The second hard part is that I don’t actually write that much, by volume.

The hard part of technical writing isn’t constructing English sentences. That’s the easy and fun part. As a result, any given day is mostly filled with other things. For all of you that wonder what kinds of things a technical writer working in software development is worrying about at any given moment, here’s a list:

  • Release schedules.

    If software isn’t improving or changing, it’s not being used. Depending on the product, releases happen at least once a year and sometimes as often as every six months, if release isn’t continuous. I spend some portion of my week figuring out what the new features are, and balancing those new features against other features and release dates to figure out when and how we should document them.

  • Feature planning.

    In some sense software development is all about considering context, and figuring out how to properly handle deeply nested situations. This makes great programmers great. It also makes great programmers terrible at documentation. Furthermore it makes programmers really poor designers of things that users interact with. Even when the users are other programmers.

    But it means that I end up spending some amount of time working with programmers saying, “why is this a noun, and all the other options like it are adjectives?” or “isn’t this option structurally similar to this other option that someone else wrote last year?”

  • Publishing.

    There’s really no good reason to publish documentation changes infrequently. If you know about bugs (typos, corrections, unclear passages, etc.) in the existing documentation, it makes sense to go ahead and fix them and get those fixes out in front of users. And while you can automate the publishing process to some extent, there’s work associated with planning, testing, and running deployments.

  • Building.

    At the beginning of the year I said to a coworker, “I think that most of the hard problems in documentation projects are really build engineering problems,” and I spend some amount of time making sure that the build system is faster (so that we can spend less time testing changes,) and that the build system automates common tasks (so that we spend less time mindlessly manipulating text.)

  • Review and editing.

    I read a lot of proposed changes and make comments on style, accuracy, organization, and so forth. This also involves some amount of direct editing and rewriting of existing content.

  • Bug Fixing.

    Documentation has a weird stabilization curve, and as a result, even after a release “stabilizes” it takes time for the documentation to calm down. Things like “we forgot to tell you about this feature,” or “actually, upgrades require an extra step,” or “people should be careful when using this feature.”

    And like software, there are other bugs that don’t get caught in the review process. Or bugs in the software that the documentation needs to reflect. Or pieces of text that became confusing because of changes, etc.

  • Managing organizational drift.

    While most documentation work is only a few hundred words at a time, the experience of documentation is much larger in scope, and I end up spending part of my time making sure that the macro-organization makes sense: that file names are idiomatic, that chapters have parallel structure, and that the navigational tools direct readers correctly.

So that’s what I do. Make more sense now?

Didn’t think so.

Posted 1 July 2013

dtf 0.4.0

micro tycho 30 June 2013

I just posted a new release of dtf (0.4.0), my documentation testing framework.

The big deal with this release is that it now handles output in a sane and usable way. I have (vague) plans to further extend the reporting features, but first I needed to make the inner bits sane.

The end goal is to be able to use dtf as a more generic testing platform (using cases that rely on external tools, for example.)

Not bad for a Saturday afternoon.

Posted 30 June 2013

Topic Based Authoring Failures

micro tycho 29 June 2013

I wrote, a long time ago, about atomic documentation, which (more or less) is the same as topic-based authoring. Both describe the process of breaking information into the smallest coherent blocks and then using the documentation toolkit to compile the finished resource.

Topic based approaches to documentation promise reduced maintenance costs and greater documentation reuse. I’m not sure if anyone’s used “ease of authorship,” as an argument in favor of topic based approaches (they’re conceptually a bit difficult for the author,) but you get the feeling that it was part of the intention.

The obvious parallel is object orientation in programming, and I think the comparison is useful: both present with optimism about reuse and collaboration through modularity and modern tool chains. While object-oriented programming predates topic-based authoring, both have been around for a while, and even if you aren’t an adherent of object orientation or topic-based authoring, I think it’s impossible to approach programming or documentation without being influenced by these paradigms.

Unless you’re working with a really small resource, without some topic-based approach you end up with redundant documentation that loses consistency and becomes a maintenance nightmare.

The downfalls of “topics” don’t negate their overall utility, but they are significant:

  • topic based authoring makes it harder for non-writers to contribute to the documentation. This makes it more challenging to keep documentation up to date and can hurt overall accuracy.

  • topics force writers to focus on the “micro” documentation at the expense of the “macro” documentation experience. The content is clear, the completeness is good, but the overall experience for users can be pretty awful.

  • topics don’t always reduce duplication of content, for two reasons: first, you often need to refactor the presentation of a piece of information for use in different cases. Second, it’s often difficult to identify duplication until you do a post-facto analysis.

Making better tools for topic-based authoring could address some of these problems, both by making single sourcing less onerous and by helping facilitate the writing process. This should be the subject of another post.

Posted 29 June 2013

Dynamic and Static Packaging Tradeoffs

micro tycho 24 June 2013

From tycho’s adventures in build engineering, a brief note on static/dynamic compilation and related topics in dependency resolution and packaging.

When packaging software for distribution to end users, developers face a choice, or a series of choices that boil down to:

  1. Should we provide the user with the entire build environment including all dependencies, or

    This method is easier for users, but takes more space and can be more expensive (slower) at runtime.

  2. Should we assume that the user’s environment will either have all dependencies or be able to correctly resolve all dependencies during installation or at run-time?

In short, the second option tends to win most of the time. It’s sensible, tends to be more efficient, leads to more robust software in the end, and there’s pretty good support in contemporary tools (e.g. dpkg/apt-get, pip, gem, rpm, et al.) for dependency resolution and management.

Or at least it used to.

Increasingly, as systems get faster and the absolute cost of capacity gets cheaper, the efficiency gains of #2 become much less compelling, and the simplicity (for users) becomes more convincing.

I think the use of static compilation in Go is a great contemporary example of this, and Apple’s use of statically compiled bundles in OS X is reasonably compelling too.

Open questions:

  • In a lot of ways the solutions to the static/dynamic debate boil down to a continuous integration/testing problem.

  • As I think about this more, the “simplicity” argument is really mostly about where and when errors appear. While you might be able to prevent some installation errors by distributing static binaries, the major effect of packaging dependencies with a program is that you’ll be able to detect incompatibilities during compilation/packaging rather than at runtime.

    This transcends the traditional compiled/interpreted (e.g. virtual-machine based) division; it’s not just about static compilation.

  • I wonder what effect pinning dependencies to specific versions has on the end-user experience. I’m thinking specifically of Python packages here. Similarly, if virtualenv is the primary deployment environment, do we lose some of the efficiency gains of dynamic dependency resolution?

  • Will environments that have used more dynamic models develop tooling for more static bundling? Examples: 1, 2, 3.
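On the pinning question above: for Python packages, pinning usually means a requirements.txt with exact versions. The package names and version numbers here are purely illustrative:

```
# requirements.txt: every dependency pinned to an exact version
Flask==0.10.1
PyYAML==3.10
requests==1.2.3
```

Running pip install -r requirements.txt into a fresh virtualenv then reproduces the same environment everywhere, which is effectively a static bundle assembled at install time rather than at compile time.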

Posted 24 June 2013

Don't Change Anything Ever

micro tycho 19 June 2013

Today I made two modifications to the way my computers work: I changed the network manager to the most state-of-the-art command-line network manager (netctl), and I configured emacs to start as systemd services.

I can tell that this is incredibly thrilling to you all.

The actual impact of the change is minimal at best; everything still works the same way it used to. The big change is that network connections and the applications I use automatically work and are ready for me without any start-up costs.

At least that’s pretty cool.

I think I’d held off on looking into systemd for a while because I was bitter: upstart, particularly in the early days, was a bunch of headache for not a lot of win; launchd is frustrating as hell to actually use; I’ve never really felt like old-school init systems were that broken (frustrating yes, but not really broken.)

On some level, I take a “don’t change anything, ever,” approach to my system, but it turns out that sometimes changes aren’t bad, and actually make life better.

And in the end it turns out that systemd is actually decent to use: the configuration files make sense, the command line tool is sensible, and it makes it easy to do the things you want to do.

Can’t argue with that!
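As an illustration of those sensible configuration files, a user unit for running the emacs daemon might look something like this; the paths and options here are a plausible sketch, not the configuration from this post:

```ini
# ~/.config/systemd/user/emacs.service
[Unit]
Description=Emacs daemon

[Service]
Type=forking
ExecStart=/usr/bin/emacs --daemon
ExecStop=/usr/bin/emacsclient --eval "(kill-emacs)"
Restart=on-failure

[Install]
WantedBy=default.target
```

Then `systemctl --user enable emacs.service` starts the daemon at login, which is what eliminates the start-up cost described above.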

Posted 19 June 2013

The Small Data Crisis

micro tycho 20 May 2013

I’ve realized two things in the process of writing more computer programs and trying to figure out how to organize my own data without going crazy.

  1. Even though you might look at the way people represent their information (lists, notes, etc.) and think, “This information is intrinsically unstructured,” you’d be wrong.

    It’s just that people are really bad at figuring out what the underlying structure of information is (no real surprise) and they’re particularly bad at figuring out what the structure of data is before collecting and collating that data.

  2. In this age of big data, it’s easy to forget that the vast majority of data is actually rather small.

    Big data, which is really a combination of “large machine generated data sets” and a general turn toward “data orientation,” is perhaps the defining aspect of the current technological moment. I’m not negating this.

    I am saying that most people create, interact with, and manage (albeit poorly) relatively small data sets: megabytes of data, and thousands or at most a couple tens of thousands of records.

Thoughts?

Posted 20 May 2013

Rainy Saturday Reports

micro tycho 12 May 2013

The thing with new blogging tools is that no matter how easy or awesome they are to use, if you don’t make a habit of using them… this happens: you set them up and then neglect them for weeks on end.

No promises if I’ll post more about any of these things, but:

  • I made some tweaks so that dtf now only optionally requires gevent and threadpool, which are mostly experimental and too heavyweight for general purpose use.

  • I rewrote the table module part of rstcloth so that it’s more usable programmatically.

    The table module is awesome, but predates the rest of RstCloth, and was originally designed to generate reStructuredText tables from input files maintained by humans. Recently I’ve started to generate table content programmatically, and this was incredibly painful. So I’m slowly reworking the guts.

  • A year or two ago, I read a lot of the tutorials for popular web development frameworks and was completely overwhelmed: everything made sense, but I had no framework for assimilating the information.

    Fast-forward to the present. I tried again to read through these tutorials, and lo, I actually was able to make sense of them.

    I don’t have any new projects up my sleeve, really, but I’m starting to feel increasingly comfortable with my skills as a programmer, and at least being familiar with web development seems worthwhile.

    Nothing more to report there.
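As a rough sketch of the kind of programmatic table generation described in the rstcloth item above (this is a generic illustration, not RstCloth’s actual API):

```python
def rest_list_table(header, rows, widths=None):
    """Render a reStructuredText ``list-table`` directive.

    A generic sketch of programmatic table generation; not RstCloth's
    actual interface. ``header`` and each item of ``rows`` are lists of
    cell strings; ``widths`` is an optional list of column widths.
    """
    lines = [".. list-table::", "   :header-rows: 1", ""]
    if widths:
        # Directive options must come before the blank line.
        lines.insert(1, "   :widths: " + " ".join(str(w) for w in widths))
    for row in [list(header)] + [list(r) for r in rows]:
        prefix = "   * - "  # first cell of a row starts a new list item
        for cell in row:
            lines.append(prefix + cell)
            prefix = "     - "  # subsequent cells continue the item
    return "\n".join(lines)
```

A list-table directive is much easier to emit from code than a grid table, since none of the cell borders ever need to line up.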

Posted 12 May 2013

5 Ways Tumblr is Different From LiveJournal

micro tycho 30 April 2013

  1. Dashboard / Friends Page.

  2. Less fandom.

  3. Shorter posts.

  4. More porn.

  5. Fewer opportunities for group discussion.

    (i.e. no “communities,” less centralized commenting systems.)

Posted 30 April 2013

Integration Leads to Attention Diffusion

micro tycho 29 April 2013

Back before smartphones were so smart, there was some debate among geeks about the utility and ultimate viability of “convergence devices,” or multi-function portable technology. Would people want devices that just did one thing well, or would they want to have just one device that did most things well enough?

History has been pretty clear on what consumers and the industry decided. Being somewhat contrary by instinct, I’d like to ask us to take a step back and reopen this question, not because I’d like the industry or common practice to reverse, but because I think there’s more to convergence than having fewer things in your pocket.

Here’s a conjecture:

The greater the degree of functional convergence in a piece of technology, software or hardware, the greater the potential for diffused attention, cognitive overhead, fragmented user experiences, and distraction.

So while iPhones and Android devices are great, they’re probably inferior to blackberries at writing email.

Seem reasonable? Feel free to object.

Though cellphones are the most obvious example, there are others:

  • emacs is probably the world’s first convergence platform.

  • the modern web browser, and the web itself, have a great deal of functional convergence.

  • Some web-based applications, like gmail and facebook, have a pretty high level of internal functional convergence.

I wonder how this infinite integration affects people’s ability to be productive, and if there are good strategies for counteracting the attention diffusion and fragmentation.

Untested thoughts:

  • good UX and interaction methods could offset some of the cognitive overhead.

  • it might be possible that at a certain point, increased integration would decrease the context-switch cost.

  • recently, convergence devices/platforms have added centralized event or notification systems that act like a HUD, so that you can block out extraneous noise.

Thoughts?

Posted 29 April 2013

Fresh Starts

micro tycho 27 April 2013

I’ve been hosting my own blog, with varying amounts of attention to the details, for a dozen years or so, and while I still like tychoish and plan to continue using it for various projects, I feel a certain lack of spontaneity and engagement, not to mention a huge rut.

Not that these are new feelings or problems, and I have a history of reinventing my blogging system/location every few years, but it’s usually worthwhile.

So I deleted the 900 posts on this tumblr account that had accumulated from various automated means, and set up tychoish.com/micro to mirror content. Also, m.tycho.co will point to tumblr, and I think I have this tumblesocks thing all set up, so we’ll see how this goes.

Expect a few test posts, and what not while I get up to speed. Cheers!

Posted 27 April 2013