Common Gotchas

This is a post I wrote a long time ago and never posted, but I’ve started getting back into doing some work in Common Lisp and thought it’d be good to send this one off.

On my recent “(re)learn Common Lisp” journey, I’ve happened across a few things that I’ve found frustrating or confusing: this post is a collection of them, in hopes that other people don’t struggle with them:

  • When you implement an existing generic function for a class of your own and want other callers to pick up your method, you must import the generic function (rather than defining a new one with the same name in your package); otherwise other callers will (might?) fall back to another method. This makes sense in retrospect, but definitely wasn’t clear on the first go.

  • As a related follow-on, you don’t have to define a generic function in order to write or use a method, and I’ve found that using methods is actually quite nice for doing some type checking. At the same time, it can get you into a pickle if you later add the generic function and it’s not exported/imported as you want.

  • Property lists seem cool for a small, lightweight mapping, but they’re annoying to handle as part of public APIs, mostly because they’re indistinguishable from regular lists. Association lists are preferable, and maybe, with make-hash, even hash tables.

  • Declaring data structures inline is particularly gawky. I sometimes want to build a list, an alist, or a hash map inline, and it’s difficult to do that in a terse way that doesn’t involve building the structure programmatically. I’ve been writing (list (cons "a" t) (cons "b" nil)) sorts of things, which I don’t love.

    You could render this as:

    `(("a" . t) ("b" . nil)) 
    

    Having said that, I’ve always found the back-tick hard to read, so I tend to disprefer it.

  • If you have a variadic macro (i.e. one that takes &rest args), or really any kind of macro, and you have its arguments in a list, there’s no way, outside of eval, to call the macro, which is super annoying and makes macros significantly less appealing as part of public APIs (a minimal illustration follows). My current conclusion is that macros are great when you want to add syntax to make the code you’re writing clearer or to introduce a new paradigm, but for things that could also be a function, or are thin wrappers around a function, just use a function.
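
Here is a minimal illustration of the problem; the log-all names are made up for the example, and this is just a sketch of the general shape:

    ;; A function taking &rest args can be applied to a runtime list...
    (defun log-all-fn (&rest messages)
      (dolist (message messages)
        (format t "~a~%" message)))

    ;; ...but the equivalent macro cannot: apply and funcall only work on
    ;; functions, so there's no way to splice in arguments known only at
    ;; runtime, short of eval.
    (defmacro log-all (&rest messages)
      `(progn ,@(mapcar (lambda (m) `(format t "~a~%" ,m)) messages)))

    (defvar *messages* (list "one" "two" "three"))

    (apply #'log-all-fn *messages*)        ; works
    ;; (apply #'log-all *messages*)        ; error: log-all names a macro
    ;; (eval `(log-all ,@*messages*))      ; "works," but that's the problem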

Methods of Adoption

Before I started actually working as a software engineer full time, writing code was this thing I was always trying to figure out on my own; it was fun, and I could hardly sit down at my computer without learning something. These days, I do very little of this kind of work. I learn more about computers by doing my job, and frankly, the kind of software I write for work is way more satisfying than any of the software I would end up writing for myself.

I think this is because the projects that a team of engineers can work on are necessarily larger and more impactful. When you build software with a team, most of the time the product either finds users or you end up without a job. When you build software with other people and for other people, the things that make software good (more rigorous design, good test discipline, scale,) are more likely to be prevalent. Those are the things that make writing software fun.

Wait, you ask “this is a lisp post?” and “where is the lisp content?” Wait for it…

In Pave the On/Off Ramps1 I started exploring the idea that technical adoption is less a function of basic capabilities or sheer number of features, and more about the specific features that support and further adoption and create confidence in maintenance and interoperability. A huge part of the decision process is finding good answers to “can I use these tools as part of the larger system of tools that I’m using?” and “can I use this tool a bit without needing to commit to using it for everything?”

Technologies that demand ideological compliance are very difficult to move into with confidence, and their adoption depends on once-in-a-generation sea changes or significant risks.2 The alternate method--integrating into people’s existing workflows and systems, providing great tools that work for some use cases, and proving capability incrementally--is much more reliable, if somewhat less exciting.

The great thing about Common Lisp is that it always leans towards the pragmatic rather than the ideological. Common Lisp has a bunch of tools--both in the language and in the ecosystem--which are great to use but also not required. You don’t have to use CLOS (but it’s really cool), you don’t have to use ASDF, and there isn’t one paradigm of developing or designing software that you’re constrained to. Do what works.


I think there are a lot of questions that sort of follow on from this, particularly about lisp and the adoption of new technologies. So let’s go through the ones I can think of, FAQ style:

  • What kind of applications would a “pave the exits” approach support?

    It almost doesn’t matter, but the answer is probably a fairly boring set of industrial applications: services that transform and analyze data, data migration tools, command-line (build, deployment) tools for developers and operators, platform orchestration tools, and the like. This is all boring (on the one hand,) but most software is boring, and it’s rarely the case that the programming language actually matters much.

    In addition, CL has a pretty mature set of tools for integrating with C libraries and might be a decent alternative to other languages with more complex distribution stories. You could see CL being a good language for writing extensions on top of existing tools (for both Java with ABCL and C/C++ with ECL and CLASP), depending.

  • How does industrial adoption of Common Lisp benefit the Common Lisp community?

    First, more people would be writing Common Lisp for their jobs, which (assuming they have a good experience,) could proliferate into more projects. A larger community likely means a larger volume of participation in existing projects (and more projects in general.) Additionally, more industrial applications means more jobs for people who are interested in writing CL, and that seems pretty cool.

  • How can CL compete with more established languages like Java, Go, and Rust?

    I’m not sure competition is really the right model for thinking about this: there’s so much software to write that “my language vs. your language” is just a poor framing; there’s enough work to be done that everyone can be successful.

    At the same time, I haven’t heard about people who are deeply excited about writing Java, and Go folks (among whom I count myself) tend to be pretty pragmatic as well. I see lots of people who are excited about Rust, and it’s definitely a cool language, though it shines best at lower-level problems than CL and has a reasonable FFI, so there may be some exciting room for using CL for higher-level tasks on top of Rust fundamentals.


  1. In line with the idea that product management and design is about identifying what people are doing and then institutionalizing it--similar to the urban planning idea of “paving cowpaths”--I sort of think of this as “paving the exits,” though I recognize that this is a bit forced. ↩︎

  2. I’m thinking of things like the moment of enterprise “object oriented programming” giving rise to Java and friends, or the big-data watershed moment in 2009 (or so) giving rise to so-called NoSQL databases. Without these kinds of events, the adoption of these big paradigm-shifting technologies is spotty and relies on the force of will of a particular technical leader, for better or (often) worse. ↩︎

Pave the On and Off Ramps

I participated in a great conversation in the #commonlisp channel on libera (IRC) the other day, during which I found a formulation of a familiar argument that felt clearer and more concrete.

The question--which comes up pretty often, realistically--centered on adoption of Common Lisp. CL has some great tools and a bunch of great libraries (particularly these days,) so why don’t we see greater adoption? It’s a good question, and maybe 5 years ago I would have said “the libraries and ecosystem are a bit fragmented,” and this was true. It’s less true now--for good reasons!--Quicklisp is just great and there’s a lot of coverage for doing common things.

I think it has to do with the connectivity and support at the edges of a project, and as I think about it, this is probably true of any kind of project.

When you decide to use a new tool or technology you ask yourself three basic questions:

  1. “is this tool (e.g. language) capable of fulfilling my current needs” (for programming languages, this is very often yes,)
  2. “are there tools (libraries) to support my use so I can focus on my core business objectives,” so that you’re not spending the entire time writing serialization libraries and HTTP servers, which is also often the case.
  3. “will I be able to integrate what I’m building now with other tools I use and things I have built in the past.” This isn’t so hard, but it’s a thing that CL (and lots of other projects) struggle with.

In short, you want to be able to build a thing with the confidence that it’s possible to finish, that you’ll be able to focus on the core parts of the product and not get distracted by what should be core library functionality, and finally that the thing you build can play nicely with all the other things you’ve written or already have. Without this third piece, writing a piece of software with such a tool is a bit of a trap.

We can imagine tools that expose data only via quasi-opaque APIs that require special clients or encoding schemes, that lack drivers for common databases or integration with other common tools (metrics! RPC!), or that don’t fit into common runtime environments. Worrying about these things is entirely reasonable. For CL, addressing this might look like:

  • great support for gRPC

    There’s a grpc library that exists, is being maintained, and has basically all the features you’d want except support for TLS (a moderately big deal for operational reasons,) and async method support (not really a big deal.) It does depend on CFFI, which makes for a potentially awkward compilation story, but that’s a minor quibble.

    The point is not gRPC qua gRPC; the point is that gRPC is really prevalent globally, and it makes sense to be able to meet developers who have existing gRPC services (or might like to imagine that they would,) and give them confidence that whatever they build (in, say, CL) will be usable in the future.

  • compilation that targets WASM

    Somewhat unexpectedly (to me, given that I don’t do a lot of web programming,) WebAssembly seems to be the way to deploy portable machine code into environments that you don’t have full control over,1 and while I don’t 100% understand all of it, I think it’s generally a good thing to make it easier to build software that can run in lots of situations.

  • unequivocally excellent support for JSON

    I remember working on a small project where I thought “ah yes, I’ll just write a little API server in CL that will just output JSON,” and I completely got mired in various comparisons between JSON libraries and interfaces to JSON data. While this is a well-understood problem, it’s not a very cut-and-dried one.

    The thing I wanted was to be able to take input in JSON and handle it in CL in a reasonable way: given a stream (or a string, or equivalent,) can I turn it into an object in CL (a CLOS object? a hash map?)? I’m willing to implement special methods to support it given basic interfaces, but the type conversion between CL types and JSON isn’t always as straightforward as it is in other languages. Similarly with outputting data: is there a good method that will take my object and convert it to a JSON stream or string? There’s always a gulf between what’s possible and what’s easy and ergonomic. A sketch of the kind of interface I have in mind follows.
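
To make that concrete, here is roughly the shape of the interface I keep wishing for. This is a hypothetical sketch--none of these generic functions belong to an existing library--just an illustration of the kind of ergonomics I mean:

    ;; Hypothetical interface; parse-json and render-json are illustrative
    ;; names, not part of any real CL JSON library.
    (defgeneric parse-json (source)
      (:documentation "Read JSON from SOURCE (a string or stream) into CL data."))

    (defgeneric render-json (object stream)
      (:documentation "Write OBJECT to STREAM as JSON."))

    ;; As a user I want to implement a method or two for my own class and
    ;; get predictable CL <-> JSON type mapping everywhere else.
    (defclass widget ()
      ((name :initarg :name :reader widget-name)
       (stock :initarg :stock :reader widget-stock)))

    (defmethod render-json ((object widget) stream)
      (format stream "{\"name\":~s,\"stock\":~a}"
              (widget-name object) (widget-stock object)))

    ;; (render-json (make-instance 'widget :name "gear" :stock 3) *standard-output*)
    ;; prints: {"name":"gear","stock":3}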

I present these not as a complaint, or even as a call to action to address the specific issues that I raise (though I certainly wouldn’t complain if it were taken as such,) but more as an illustration of technical decision making and the things that make it possible for a team or a project to say yes to a specific technology.

There are lots of examples of technologies succeeding from a large competitive field mostly on the basis of having great interoperability with existing solutions and tools, even if the core technology was less exciting or innovative. Technology wins on the basis of interoperability and users’ trust, not (exactly) on the basis of features.


  1. I think the one real exception is runtimes that have really good static binaries and support for easy cross-compiling (e.g. Go, maybe Rust.) ↩︎

Programming in the Common Lisp Ecosystem

I’ve been writing more and more Common Lisp recently, and I reflected a bunch on the experience in a recent post, which I recently followed up on.

Why Ecosystems Matter

Most of my thinking and analysis of CL comes down to the ecosystem: the language has some really compelling (and fun!) features, so the real question is about the ecosystem. There are two main reasons to care about ecosystems in programming languages:

  • a vibrant ecosystem cuts down the time that an individual developer or team has to spend doing infrastructural work to get started. Ecosystems provide everything from libraries for common tasks to conventions and established patterns for the big fundamental application choices, not to mention things like easily discoverable answers to common problems.

    The time between “I have an idea” and “I have (proof-of-concept quality) code running” matters so much. Everything is possible to a point, but friction between “idea” and “working prototype” can be a big problem.

  • a bigger and more vibrant ecosystem makes it more tenable for companies/sponsors (of all sizes) to choose to use Common Lisp for various projects, and there’s a little bit of a chicken-and-egg problem here, admittedly. Companies and sponsors want to be confident that they’ll be able to efficiently replace engineers if needed, integrate lisp components into larger ecosystems, or get support when they have problems. These are all kind of intangible (and reasonable!) concerns, and the larger and more vibrant the ecosystem, the less risk there is.

    In many ways, recent developments in technology more broadly make lisp slightly more viable, as a result of making it easier to build applications that use multiple languages and tools. Things like microservices, better generic deployment orchestration tools, greater adoption of IDLs (including swagger, thrift and GRPC,) all make language choice less monolithic at the organization level.

Great Things

I’ve really enjoyed working with a few projects and tools. I’ll probably write more about these individually in the near future, but in brief:

  • chanl provides Go-style channels. As a current/recovering Go programmer, this library is very familiar and great to have. In some ways, the API provides a bit more introspection, and the flexibility that I’ve always wanted in Go.
  • lake is a buildsystem tool in the tradition of make, but with a few additional great features, like target namespacing, a clear distinction between “file targets” and “task targets,” and support for SSH operations, which makes it a reasonable replacement for things like fabric and other basic deployment tools.
  • cl-docutils provides the basis for a document processing system. I’m particularly partial because I’ve been using the python (reference) implementation for years, but this implementation is really quite good and quite easy to extend.
  • roswell is really great for getting started with CL, and also for making it possible to test library code against different implementations and versions of the language. I’m a touch iffy on using it to install packages into its own directory, but it’s pretty great.
  • ASDF is the “buildsystem” component of CL, comparable to setuptools in python, and it (particularly the latest versions,) is really great. I like the ability to produce binaries directly from asdf, and the “package-inferred” system is a great addition (basically giving python-style automatic package discovery.)
  • There’s a full Apache Thrift implementation. While I’m not presently working on anything that would require a legit RPC protocol, having the option to integrate CL components into larger ecosystems is useful.
  • Hunchensocket adds websockets! Web sockets are a weird little corner of any stack, but it’s nice to have the option of doing this kind of programming, and CL seems like a really good platform for it.
  • make-hash makes constructing hash tables easier, which is sort of needlessly gawky otherwise (the plain-CL baseline is shown just after this list).
  • ceramic provides bridges between CL and Electron for delivering desktop applications based on web technologies in CL.
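
To put the make-hash point in context, this is the plain-CL baseline it improves on--no libraries involved, just standard constructs:

    ;; Plain CL: make the table, then set each key by hand.
    (defvar *defaults* (make-hash-table :test #'equal))
    (setf (gethash "host" *defaults*) "localhost"
          (gethash "port" *defaults*) 8080
          (gethash "tls" *defaults*) nil)

    ;; Or fold an alist into a fresh table, which at least keeps the data literal.
    (defun alist->hash (alist &key (test #'equal))
      (let ((table (make-hash-table :test test)))
        (loop for (key . value) in alist
              do (setf (gethash key table) value))
        table))

    (alist->hash '(("host" . "localhost") ("port" . 8080) ("tls" . nil)))

It works, but compared to a literal alist it’s a lot of ceremony, which is exactly the gap make-hash fills.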

I kept thinking that there wouldn’t be good examples of various things (there’s a Kafka driver! there’s support for various other Apache ecosystem components!), but there are, and that’s great. There are gaps, of course, but fewer, I think, than you’d expect.

The Dark Underbelly

The biggest problem in CL is probably discoverability: lots of folks are building great tools and it’s hard to really know about the projects.

I thought about phrasing this as a kind of list of things that would be good for supporting bounties or something of the like. Also if I’ve missed something, please let me know! I’ve tried to look for a lot of things, but discovery is hard.

Quibbles

  • rove doesn’t seem to handle multi-threaded tests effectively. Multi-threading support is listed in the readme, but I was able to write really trivial tests that crashed the test harness.
  • Chanl would be super lovely with some kind of concept of cancellation (like contexts in Go,) and while it’s nice to have a bit more thread introspection, given that the threads are somewhat heavier weight, being able to avoid resource leaks seems like a good plan.
  • There doesn’t seem to be any library capable of producing YAML-formatted data. I don’t have a specific need, but it’d be nice.
  • it would be nice to have some way of configuring the quicklisp client to prefer quicklisp (stable) but also use ultralisp (or another source) when it’s available.
  • Putting the capacity to produce binaries easily into asdf is great, and the only thing missing from buildapp/cl-launch is multi-entry binaries. That’d be swell. It might also be easier, as an alternative, to have support for git-style sub-commands in a command-line parser (which doesn’t really exist at the moment), but one-command-per-binary seems difficult to manage.
  • there are no available implementations of a multi-reader single-writer mutex, which seems like an oversight, and yet, here we are (a sketch of what one could look like follows this list).
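
Since I brought it up, here’s a minimal sketch of what a multi-reader single-writer lock could look like on top of bordeaux-threads. This is illustrative only: no fairness, no timeouts, and a production version would want a condition broadcast rather than the cascading-notify trick used here.

    ;; Assumes bordeaux-threads is loaded, e.g. (ql:quickload "bordeaux-threads").
    (defstruct (rw-lock (:constructor make-rw-lock ()))
      (lock (bt:make-lock "rw-lock"))
      (condvar (bt:make-condition-variable))
      (readers 0)     ; number of active readers
      (writer nil))   ; true while a writer holds the lock

    (defun acquire-read (rw)
      (bt:with-lock-held ((rw-lock-lock rw))
        (loop while (rw-lock-writer rw)
              do (bt:condition-wait (rw-lock-condvar rw) (rw-lock-lock rw)))
        (incf (rw-lock-readers rw))
        ;; cascade the wake-up so other waiting readers get a chance too
        (bt:condition-notify (rw-lock-condvar rw))))

    (defun release-read (rw)
      (bt:with-lock-held ((rw-lock-lock rw))
        (when (zerop (decf (rw-lock-readers rw)))
          (bt:condition-notify (rw-lock-condvar rw)))))

    (defun acquire-write (rw)
      (bt:with-lock-held ((rw-lock-lock rw))
        (loop while (or (rw-lock-writer rw) (plusp (rw-lock-readers rw)))
              do (bt:condition-wait (rw-lock-condvar rw) (rw-lock-lock rw)))
        (setf (rw-lock-writer rw) t)))

    (defun release-write (rw)
      (bt:with-lock-held ((rw-lock-lock rw))
        (setf (rw-lock-writer rw) nil)
        (bt:condition-notify (rw-lock-condvar rw))))

    (defmacro with-read-lock ((rw) &body body)
      `(progn (acquire-read ,rw)
              (unwind-protect (progn ,@body) (release-read ,rw))))

    (defmacro with-write-lock ((rw) &body body)
      `(progn (acquire-write ,rw)
              (unwind-protect (progn ,@body) (release-write ,rw))))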

Bigger Projects

  • There are no encoders/decoders for data formats like Apache Parquet, and the protocol buffers implementation doesn’t support proto3. Neither of these is a particular deal breaker, but having good tools for dealing with common developments lowers the cost and risk of using CL in more applications.
  • No support for http/2 and therefore gRPC. Having the ability to write software in CL with the knowledge that it’ll be able to integrate with other components is good for the ecosystem.
  • There is no great modern MongoDB driver. There were a couple of early implementations, but there have been important changes to the MongoDB protocol since then. A clearer interface for producing BSON might be useful too.
  • I’ve looked for libraries and tools to integrate and manage aspects of things like systemd, docker, and k8s. The k8s gap seems easiest to close, as things like cube can be generated from updated swagger definitions, but there’s less for the others.
  • Application delivery remains a bit of an open question. I’m particularly interested in being able to produce binaries that target other platforms/systems (cross compilation,) but there’s also a class of problems related to being able to ship tools once built.
  • I’m eagerly waiting, and a bit concerned about, the plight of the current implementations around Darwin’s move to ARM in the intermediate term. My sense is that the transition won’t be super difficult, but it seems like a thing.

Learning Common Lisp Again

In a recent post I spoke about abandoning a previous project that had gone off the rails. I’ve since been doing more work in Common Lisp, and I wanted to report a bit more on some recent developments. There’s a lot of writing about learning to program for the first time, and a fair amount of writing about lisp itself; neither is particularly relevant to me, and I suspect there may be others who find themselves in a similar position in the future.

My Starting Point

I already know how to program, and have a decent understanding of how to build and connect software components. I’ve been writing a lot of Go (Lang) for the last 4 years, and wrote rather a lot of Python before that. I’m an emacs user, and I use a Common Lisp window manager, so I’ve always found myself writing little bits of lisp here and there, but it never quite felt like I could do anything of consequence in Lisp, despite thinking that Lisp is really cool and that I wanted to write more.

My goals and rationale are reasonably simple:

  • I’m always building little tools to support the way that I use computers. Nothing is particularly complex, but I’d enjoy being able to do this in CL rather than in other languages, mostly because I think it’d be nice to not do that in the same languages that I work in professionally.1
  • Common Lisp is really cool, and I think it’d be good if it were more widely used, and writing more of it and writing posts like this is probably the best way to make that happen.
  • Learning new things is always good, and I think having a personal project to learn something new will be a good way of stretching myself as a developer.
  • Common Lisp has a bunch of features that I really like in a programming language: real threads, easy to run/produce static binaries, (almost) reasonable encapsulation/isolation features.

On Learning

Knowing how to program makes learning how to program easier: broadly speaking programming languages are similar to each other, and if you have a good model for the kinds of constructs and abstractions that are common in software, then learning a new language is just about learning the new syntax and learning a bit more about new idioms and figuring out how different language features can make it easier to solve problems that have been difficult in other languages.

In a lot of ways, if you already feel confident and fluent in a programming language, learning a second language is really about teaching yourself how to learn a new language, which you can then apply to all future languages as needed.

Except realistically, “third languages” aren’t super common: it’s hard to get to the same level of fluency that you have with earlier languages, and often “third-and-later” languages are learned in the context of some existing code base or project, so it’s hard to generalize our familiarity outside of that context.

It’s also the case that it’s often pretty easy to learn a language well enough to perform common or familiar tasks, but fluency is hard, particularly in different idioms. I’m using CL as an excuse to do kinds of programming that I have more limited experience with: web programming, GUI programming, using different kinds of databases.

My usual method for learning a new programming language is to write a program of moderate complexity and size but in a problem space that I know pretty well. This makes it possible to gain familiarity, and map concepts that I understand to new concepts, while working on a well understood project. In short, I’m left to focus exclusively on “how do I do this?” type-problems and not “is this possible,” or “what should I do?” type-problems.

Conclusion

The more I think about it, the more I realize that “knowing a programming language” is inevitably linked to a specific kind of programming: the kind of Lisp that I’ve been writing has skewed toward the object-oriented end of the lisp spectrum, with less functional bits than perhaps average. I’m also still a bit green when it comes to macros.

There are kinds of programs that I don’t really have much experience writing:

  • GUI things,
  • the front-half of the web stack,2
  • processing/working with ASTs, (lint tools, etc.)
  • lower-level kind of runtime implementation.

There are lots of new things to learn, and new areas to explore!

Notes


  1. There are a few reasons for this. Mostly, I think in a lot of cases it’s right to choose programming languages that are well known (Python, Java+JVM friends, and JavaScript), easy to learn (Go), and fit in with existing ecosystems (which vary a bit by domain,) so while it might be the right choice, it’s a bit limiting. It’s also the case that putting some boundaries/context switching between personal projects and work projects could be helpful in improving quality of life. ↩︎

  2. Because it’s 2020, I’ve done a lot of work on “web apps,” but most of my work has been focused on areas including the data layer, application architecture, core business logic, and reliability/observability, and less on anything material to rendering web pages. Most projects have a lot of work to be done, and I have no real regrets, but it does mean there’s plenty to learn. I wrote an earlier post about the problems of the concept of “full-stack engineering” which feels relevant. ↩︎

Common Lisp Grip, Project Updates, and Progress

Last week I did a release, I guess, of cl-grip, which is a logging library that I wrote after reflecting on common lisp logging earlier. I wanted to write up some notes about it that aren’t covered in the README, and also talk a little bit about what else I’m working on.

cl-grip

This was a really fun and useful project, and it was the right size for me to really get into and practice a bunch of different areas (packages! threads! testing!), and I think it’s useful to boot. The README is pretty comprehensive, but I thought I’d collect some additional color here:

Really, at its core, cl-grip isn’t a logging library; it’s just a collection of interfaces that make it easy to write logging and messaging tools, which is a really cool basis for an idea (I’ve been working on and with a similar system in Go for years.) A rough sketch of what that interface-first shape can look like is below.
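
To be clear, the following is not cl-grip’s actual API--the names are invented for illustration--but it shows the kind of interface-first design I mean: a few generic functions, and concrete senders and message types that implement only what they need.

    ;; Invented names for illustration; not cl-grip's exported symbols.
    (defgeneric send-message (sender message)
      (:documentation "Deliver MESSAGE via SENDER if it meets the sender's threshold."))

    (defgeneric loggable-p (message threshold)
      (:documentation "Return true when MESSAGE's priority meets THRESHOLD."))

    (defgeneric resolve-output (message)
      (:documentation "Render MESSAGE to a string for an output target."))

    (defclass stream-sender ()
      ((output :initarg :output :reader sender-output)
       (threshold :initarg :threshold :reader sender-threshold)))

    (defclass basic-message ()
      ((priority :initarg :priority :reader message-priority)
       (payload :initarg :payload :reader message-payload)))

    (defmethod loggable-p ((message basic-message) threshold)
      (>= (message-priority message) threshold))

    (defmethod resolve-output ((message basic-message))
      (format nil "[~a] ~a" (message-priority message) (message-payload message)))

    (defmethod send-message ((sender stream-sender) (message basic-message))
      (when (loggable-p message (sender-threshold sender))
        (write-line (resolve-output message) (sender-output sender))))

New output targets or message types only need their own methods; the rest of the plumbing stays put, which is what makes this style of design feel so flexible.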

As a result, there are interfaces and plumbing for doing most logging-related things, but no actual implementations. I was very excited to leave out the “log rotation handling feature,” which feels like an anachronism at this point, though it’d be easy enough to add that kind of handler in if needed. Although I’m going to let it stew for a little while, I’m excited to expand upon it in the future:

  • additional message types, including capturing stack frames for debugging, or system information for monitoring.
  • being able to connect and send messages directly to likely targets, including systemd’s journal and splunk collectors.
  • a collection of more absurd output targets to cover “alerting” type workloads, like desktop notifications, SUMP, and Slack targets.

I’m also excited to see if other people are interested in using it. I’ve submitted it to Quicklisp and Ultralisp, so give it a whirl!

See the cl-grip repo on github.

Eggquilibrium

At the behest of a friend I’ve been working on an “egg equilibrium” solver, the idea being to provide a tool that, given a bunch of recipes that use partial eggs (yolks and whites), can provide optimal solutions that use up a fixed set of eggs.

So far I’ve implemented some prototypes that, given a number of egg parts, collect recipes until there are no partial eggs in use, so that there are no leftovers. I’ve also implemented the “if I have these partial eggs, what can I make to use them all?” mode, a rudimentary CLI interface (that was a trip!), and a really simple interface to recipe data (both parsing from a CSV format and an in-memory format that makes solving the equilibrium problem easier.) A toy sketch of the core idea follows.
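
For flavor, here’s a brute-force illustration of what “equilibrium” means here; this is not the code in the repo, just a toy sketch:

    ;; Toy sketch: find a combination of recipes whose yolk and white usage
    ;; balance, so whole eggs can be split with no leftovers.
    (defstruct recipe name yolks whites)

    (defun balancedp (recipes)
      (= (reduce #'+ recipes :key #'recipe-yolks)
         (reduce #'+ recipes :key #'recipe-whites)))

    (defun find-balanced-subset (recipes &key (max-size 4))
      "Brute-force search for a small balanced combination of RECIPES."
      (labels ((search-from (chosen remaining)
                 (cond ((and chosen (balancedp chosen)) chosen)
                       ((or (null remaining) (>= (length chosen) max-size)) nil)
                       (t (or (search-from (cons (first remaining) chosen)
                                           (rest remaining))
                              (search-from chosen (rest remaining)))))))
        (search-from nil recipes)))

    ;; Custard wants 3 yolks, meringue wants 3 whites: together they consume
    ;; exactly three whole eggs.
    (find-balanced-subset
     (list (make-recipe :name "custard" :yolks 3 :whites 0)
           (make-recipe :name "meringue" :yolks 0 :whites 3)
           (make-recipe :name "omelette" :yolks 2 :whites 2)))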

I’m using it as an opportunity to learn different things and to find a way into areas I’ve not yet touched in lisp (or anywhere, really,) so I’m thinking about:

  • building a web-based interface using some combination of caveman, parenscript, and related tools. This could include features like “user submitted databases,” as well as links to the sources of the recipes, in addition to the basic “web forms, APIs, and table rendering.”
  • storing the data in a database (probably SQLite, mostly) both to support persistence and other more advanced features, but also because I’ve not done database things from Lisp at all.

See the eggquilibrium repo on github; it’s still pretty rough, but perhaps it’ll be interesting!

Other Projects

  • Writing more! I’m trying to be less obsessive about blogging, as I think it’s useful (and perhaps interesting for you all too.) I’ve been writing a bunch and not posting very much of it. My goal is to mix sort of grandiose musing on technology and engineering, with discussions of Lisp, Emacs, and programming projects.
  • Working on producing texinfo output from cl-docutils! I’ve been toying around with the idea of writing a publication system targeted at producing books--long-form non-fiction, collections of essays, and fiction--rather than the blogs or technical resources that most such tools are focused on. This is sort of part 0 of this process.
  • Hacking on some Common Lisp projects; I’m particularly interested in Nyxt and StumpWM.

Common Lisp and Logging

I’ve decided to write all of my personal project code in Common Lisp. See this post for some of the background for this decision.

It didn’t take me long to say “I think I need a logging package,” and I quickly found this wonderful comparison of CL logging libraries, and it took only a little longer to be somewhat disappointed.

In general, my requirements for a logger are:

  • straightforward API for logging.
  • levels for filtering messages by importance
  • library in common use and commonly available.
  • easy to configure output targets (e.g. system’s journal, external services, etc).
  • support for structured logging.

I think my rationale is pretty clear: loggers should be easy to use because the more information that can flow through the logger, the better. Assigning a level to all log messages is great for filtering practically, and it’s ubiquitous enough that it’s really part of having a good API. While I’m not opposed to writing my own logging system,1 I think I’d rather not in this case: there’s too much that’s gained by using the conventional choice.

Configurable outputs and structured logging are stretch goals, but frankly they’re the most powerful features you can add to a logger. A lot of logging work is spent crafting well-formed logging strings when really you just want to pass some kind of arbitrary map, which makes it easier to make use of logging at scale, which is to say, when you’re running higher workloads and multiple copies of an application. A toy illustration of the difference follows.
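
This sketch assumes nothing about any particular CL logging library--it’s just plain format--but it shows the shape: the caller hands over a level and a map of fields, and the logger decides how to render them.

    ;; Structured logging sketch: the message is data (an alist), not a string.
    (defvar *log-level* 30
      "Messages with a level below this are dropped.")

    (defun log-structured (level fields &optional (stream *standard-output*))
      "Emit FIELDS (an alist) as a single key=value line when LEVEL passes."
      (when (>= level *log-level*)
        (format stream "level=~a~{ ~a=~s~}~%"
                level
                (loop for (key . value) in fields
                      append (list key value)))))

    ;; The caller passes data, not a hand-crafted string:
    (log-structured 40 '(("op" . "sync") ("records" . 124) ("ok" . t)))
    ;; prints: level=40 op="sync" records=124 ok=T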

Ecosystem

I’ve dug in a bit to a couple of loggers, sort of using the framework above to evaluate the state of the existing tools. Here are some notes:

log4cl

My analysis of the CL logging packages is basically that log4cl is the most mature and actively maintained tool, but beyond the basic fundamentals, its “selling” features are… interesting.2 The big selling features:

  • integration with the developer’s IDE (slime,) which makes it possible to use the logging system almost like a debugger. This is actually a phenomenal feature, particularly for development and debugging. The downside is that it wouldn’t be totally unreasonable to use it in production, and that’s sort of terrifying.
  • it attempts to capture a lot of information about logging call sites so you can better map back from log messages to the state of the system when the call was made. Again, this makes it a debugging tool, and that’s awesome, but it’s overhead, and frankly I’ve never found it difficult to grep through code.
  • lots of attention to log rotation and log file management. There’s not a lot of utility in writing log data to files directly. In most cases you want to write to standard out: the program is being used interactively and users may want to see the log of what happens, and when you are running in daemon mode (which, these days, you mostly aren’t,) systemd or similar just captures your output. Even then, you’re probably in a situation where you want to send the log output to some other process (e.g. an external service, or some kind of local socket for fluentd/etc.)
  • hierarchical organization of log messages is just less useful than annotation with metadata, in practice, and using hierarchical methods to filter logs into different streams or reduce logging volumes just obscures things and makes it harder to understand what’s happening in a system.

Having said that, the API surface area is big, and it’s a bit confusing to just start using the logger.

a-cl-logger

The a-cl-logger package is pretty straightforward, and has a lot of features that are consistent with my interests and desires:

  • support for JSON output,
  • internal support for additional output formats (e.g. logstash,)
  • a simpler API

It comes with a couple of pretty strong drawbacks:

  • there is limited testing.
  • it’s SBCL only, because it relies on SBCL fundamentals in collecting extra context about log messages. There’s a pending pull request to add ECL compilation support, but it looks like it wouldn’t be quite that simple.
  • collecting contextual information comes with overhead, and I think logging should err on the side of higher performance and make expensive things optional, just because it’s hard to opt into/out of logging later.

Conclusion

So where does that leave me?

I’m not really sure.

I created a scratch project to write a simple logging library, but I’m definitely not prioritizing working on that over other projects. In the meantime I’ll probably end up just not logging very much, or maybe giving log4cl a spin.

Notes


  1. When I started writing Go I did this: I wrote a logging tool, for a bunch of reasons. While I think it was the right decision at the time, I’m not sure that it holds up. Using novel infrastructure in projects makes integration a bit more complicated and creates overhead for would-be contributors. ↩︎

  2. To be honest, I think that log4cl is a fine package; it’s really a product of an earlier era, and it makes that era’s standard assumptions about the way logs are used, which made sense given a very different view of how services should be operated. ↩︎

A Common Failure

I’ve been intermittently working on a common lisp library to produce a binary encoding of arbitrary objects, and I think I’m going to be abandoning the project. This is an explanation of that decision and a reflection on my experience.

Why Common Lisp?

First, some background. I’ve always thought that Common Lisp is a language with a bunch of cool features and selling points, but I’ve never really had the experience of writing more than some one-off bits of code in CL, which isn’t surprising. This project was a good experience for really digging into writing and managing a conceptually larger project, and a good kick in the pants to learn more.

The things I like:

  • the implementations of the core runtime are really robust and high quality, and make it possible to imagine running your code in a bunch of different contexts. Even though it’s a language with relatively few users, it feels robust in a way. The most common implementations also have ways of producing fully self contained static binaries (like Go, say), which makes the thought of distributing software seem reasonable.
  • quicklisp, a package/library management tool that is relatively new (within the last decade or so,) has really raised the level of the ecosystem. It’s not as complete as I’d expect in many ways, but quicklisp changed CL from something quaint to something that you could actually imagine using.
  • the object system is really nice. There isn’t quite compile-time type checking on the values of slots (attributes) of objects, though you can opt in. My general feeling is that I can pretty easily get the feel of writing statically typed code with all of the freedom of writing dynamic code.
  • multiple dispatch, and the conceptual approach to genericism, is amazing and really simplifies flow control in a lot of cases. You implement the methods you need for the appropriate types/objects and then just write the logic you need, and the function call machinery just does the right thing. There’s surprisingly little conditional logic as a result (a tiny example follows this list).
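
A tiny example of what that looks like in practice; the encode protocol here is invented for illustration:

    ;; Methods dispatch on the combination of value and target types;
    ;; there's no conditional ladder at the call site.
    (defclass json-target () ())
    (defclass csv-target () ())

    (defgeneric encode (value target)
      (:documentation "Encode VALUE for the given output TARGET."))

    (defmethod encode ((value string) (target json-target))
      (format nil "~s" value))

    (defmethod encode ((value list) (target json-target))
      (format nil "[~{~a~^,~}]"
              (mapcar (lambda (v) (encode v target)) value)))

    (defmethod encode ((value string) (target csv-target))
      value)

    ;; The call site is identical everywhere; CLOS picks the method.
    (encode (list "a" "b") (make-instance 'json-target))  ; => "[\"a\",\"b\"]"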

Things I don’t like:

  • there are all sorts of things that don’t quite have existing libraries, and so I find myself wanting to do things that require more effort than necessary. This project to write a binary encoding tool would have been a component in service of a much larger project. It’d be nice if you could skip some of the lower level components, or didn’t have your design choices so constrained by gaps in infrastructure.
  • at the same time, the library ecosystem is pretty fractured and there are common tools around which there isn’t really consensus. Why are there so many half-finished YAML and JSON libraries? There are a bunch of HTTP server (!) implementations, but really, you need 2 and not 5.
  • looping/iteration isn’t intuitive, and it’s difficult to get common patterns to work. The answer, in most cases, is to use (map) and friends with lambdas rather than loops, but there’s this pitfall where you try to use a (loop) and really, that’s rarely the right answer (both styles are shown after this list).
  • implicit returns seem like an oversight; hilariously, Rust makes the same error. Implicit returns also make it quite hard to reason about what type a function or method returns.
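
For the looping point, here is the same small task both ways--standard CL only, nothing clever:

    ;; Collect the squares of the even numbers, two ways.
    (defun squares-of-evens/loop (numbers)
      (loop for n in numbers
            when (evenp n)
              collect (* n n)))

    (defun squares-of-evens/map (numbers)
      (mapcar (lambda (n) (* n n))
              (remove-if-not #'evenp numbers)))

    ;; (squares-of-evens/loop '(1 2 3 4 5 6)) => (4 16 36)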

Writing an Encoder

So the project was an attempt to write really object-oriented code while building an object encoder for a JSON-like format. Simple enough: I had a good mental model of the format, and my general approach to any kind of binary format processing is to write a crap ton of unit tests and work somewhat iteratively.

I had a lot of fun with the project, and it gave me a bunch of key experiences which make me feel comfortable saying that I’m able to write common lisp even if it’s not a language that I feel maximally comfortable in (yet?). The experiences that really helped included:

  • producing enough code to really have to think about how packaging and code organization work. I’d written a function here and there before, but never something where I needed to really understand and use the library/module/packaging infrastructure (e.g. systems and packages.)
  • writing and running lots of tests. I don’t always follow test-driven development closely, but writing lots of tests is part of my process, and being able to watch the layers of this code come together was a lot of fun and very instructive.
  • this project, for me, was mostly about API design, and it was nice to have a project that didn’t require much design in terms of the actual functionality, as object encoding is pretty straightforward.

From an educational perspective all of my goals were achieved.

Failure Mode

The problem is that it didn’t work out, in the final analysis. While the library I constructed was able to encode and decode objects and was internally correct, I never got it to produce encoding that other implementations of the same specification could reliably read, and the ability to read data encoded by other libraries only worked in trivial cases. In the end:

  • this was mostly a project designed to help me gain competence in a programming language I don’t really know, and in that I was successful.
  • adding this encoding format isn’t particularly useful to any project I’m thinking of working on in the short term, and doesn’t directly enable me to do anything in particular.
  • the architecture of the library would not be particularly performant in practice, as the encoding process didn’t use a buffer pool of any kind, and it would have been hard to backfill that in; I wasn’t particularly interested in doing that.
  • it didn’t work, and the effort to debug the issue would be more substantive than I’m really interested in doing at this point, particularly given the limited utility.

So here we are. Onto a different project!