Cloud computing, and with it most of tech, has been really hot on the idea of "serverless" computing, which is to say, services and applications that are deployed, provisioned, and priced separately from conventional "server" resources (memory, storage, bandwidth.) The idea is that we can build and expose ways of deploying and running applications and services, even low-level components like "databases" and "function execution," so that developers and operators can avoid thinking about computers qua computers.

Serverless is the logical extension of "platform as a service" offerings that have been an oft-missed goal for a long time. You write high-level applications and code designed to run in some kind of sandbox, with external services provided in an à la carte model via integrations with other products or services. The PaaS, then, can take care of everything else: load balancing incoming requests, redundancy to support higher availability, and any kind of maintenance of the lower-level infrastructure. Serverless is often just PaaS but more: provide a complete stack of services to satisfy needs (databases, queues, background work, authentication, caching, on top of the runtime,) and then change the pricing model to be based on requests/utilization rather than time or resources.
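
As a concrete illustration, a function on one of these platforms usually reduces to a single entry point. Here's a minimal sketch in the style of AWS Lambda's Python runtime; the event shape is a made-up example, not any particular platform's contract:

    import json

    # A minimal function-as-a-service handler. The platform provisions the
    # sandbox, routes requests, and scales instances; the application only
    # supplies this entry point.
    def handler(event, context):
        # 'event' carries the request payload; 'context' carries runtime
        # metadata like the request id and remaining execution time.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"greeting": f"hello, {name}"}),
        }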

Fundamentally, this allows a separation of concerns between "writing software" and "running software," and allows much if not all of the running of software to be delegated to service providers. This kind of separation is useful for developers, and in general runtime environments seem like the kind of thing that most technical organizations shouldn't need to focus on: outsourcing may actually be good, right?

Well, maybe.

Let's be clear: serverless platforms primarily benefit the providers of the services, for two reasons:

  • serverless models allow providers to build offerings that are multi-tenant, and give providers the ability to reap the benefits of managing request load dynamically and sharing resources between services/clients.
  • utilization pricing for services is always going to be higher than commodity pricing for the underlying components. Running your own servers ("metal") is cheaper than using cloud infrastructure over time, but capacity planning, redundancy, and management overhead make that difficult in practice. The proposition is that while serverless may cost more per unit, it has lower management costs for users (fewer people in "ops" roles,) and is more flexible if request patterns change; the back-of-the-envelope sketch after this list makes the tradeoff concrete.
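
To put rough numbers on that tradeoff, here's a quick sketch in Python. Every price and volume below is a made-up placeholder, not a quote from any provider:

    # Hypothetical monthly costs: serverless (per-request) vs. a fixed
    # server plus the people-time to run it. All numbers are placeholders.
    PRICE_PER_MILLION_REQUESTS = 5.00  # assumed utilization price, USD
    SERVER_COST_PER_MONTH = 200.00     # assumed fixed hardware cost, USD
    OPS_OVERHEAD_PER_MONTH = 800.00    # assumed management cost, USD

    def serverless_cost(requests: int) -> float:
        return requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS

    def server_cost(requests: int) -> float:
        # Flat regardless of traffic, which is the whole bet.
        return SERVER_COST_PER_MONTH + OPS_OVERHEAD_PER_MONTH

    for requests in (1_000_000, 100_000_000, 1_000_000_000):
        print(f"{requests:>13,}: serverless ${serverless_cost(requests):>8,.2f}"
              f" vs. server ${server_cost(requests):,.2f}")

With these placeholders serverless wins easily at low volume and loses badly at high volume: the per-unit premium dominates once traffic outgrows the fixed costs.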

So we know why the industry seems to want serverless to be a thing, but does it actually make sense?

Maybe?

Makers of software strive (or ought to strive) to make their software easy to run, and having very explicit expectations about the runtime environment makes software easier to run. Similarly, being able to write code without needing to manage the runtime, monitoring, or logging, while using packaged services for caching, storage, and databases, seems like a great boon.

The downsides to software producers, however, are plentiful:

  • vendor lock-in is real: it places your application at the mercy of an external provider, both as they do maintenance and as their API and platform evolve on their timeframe.
  • hosted systems mean that it's difficult to do local development and testing: either every developer needs their own sandbox (at some expense and management overhead), or you have to maintain a separate runtime environment for development.
  • applications cannot have service levels which exceed the service level agreements of their underlying providers. If your serverless platform has an SLA which is less robust than the SLA of your application, you're in trouble; availabilities of serial dependencies compose multiplicatively, as the sketch after this list shows.
  • when something breaks, there are few operational remedies available. Upstream timeouts are often not configurable, and most forms of manual intervention aren't possible.
  • pricing probably only makes sense for organizations operating at small scale (most organizations, particularly for greenfield projects,) and probably doesn't make sense in any situation at scale.
  • some problems and kinds of software just don't work in a serverless model: data sets that exceed reasonable RAM limits, data processing problems which aren't easily parallelizable, workloads with long-running operations, or workloads that require lower-level network or hardware access.
  • most serverless systems will incur some overhead over dedicated/serverful alternatives, and therefore have worse performance and efficiency, and potentially less predictable performance, especially in very high-volume situations.
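
On the SLA point specifically: the availabilities of serial dependencies multiply, so the platform sets a hard ceiling on what you can promise. A quick sketch, using illustrative SLA figures rather than any provider's actual numbers:

    # Best-case availability of an application is bounded by the product
    # of its serial dependencies' availabilities. Figures are illustrative.
    platform_slas = [0.999, 0.9995, 0.999]  # e.g. runtime, database, queue

    ceiling = 1.0
    for sla in platform_slas:
        ceiling *= sla

    hours_down = (1 - ceiling) * 24 * 365
    print(f"best case: {ceiling:.4%} (~{hours_down:.0f} hours down/year)")
    # ~99.75%, or roughly 22 hours of expected downtime a year, before
    # counting any failures in your own code.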

Where does that leave us?

  • Many applications and bespoke tooling should probably use serverless tools. Particularly if your organization is already committed to a specific cloud ecosystem, this can make a lot of sense.
  • Prototypes should unequivocally rely on off-the-shelf, potentially serverless, tooling, particularly for things like runtimes.
  • If and when you begin to productionize applications, find ways to provide useful abstractions between the deployment system and the application, along the lines of the sketch after this list. These kinds of architectural choices help address concerns about lock-in and make it easier to do development work without external dependencies.
  • Think seriously, and holistically if possible, about your budget for operational work, and about how you plan to manage serverless components (access, cost control, monitoring and alerting, etc.) alongside existing infrastructure.
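
One way to get that abstraction, sketched here with hypothetical names: define a narrow interface for each platform service you consume, and keep a trivial local implementation next to the serverless-backed one.

    from typing import Protocol

    class Queue(Protocol):
        # The narrow interface the application codes against.
        def enqueue(self, message: str) -> None: ...

    class InMemoryQueue:
        """Local development and testing: no cloud account required."""
        def __init__(self) -> None:
            self.messages: list[str] = []

        def enqueue(self, message: str) -> None:
            self.messages.append(message)

    class ManagedQueue:
        """Production: delegates to a provider's hosted queue via its SDK."""
        def __init__(self, client) -> None:
            self.client = client  # injected SDK client (hypothetical)

        def enqueue(self, message: str) -> None:
            self.client.send_message(message)

    def register_signup(queue: Queue, email: str) -> None:
        # Application logic depends only on the interface, so swapping
        # deployment targets is a wiring decision, not a rewrite.
        queue.enqueue(f"welcome-email:{email}")

In tests you construct InMemoryQueue; in production you wire up ManagedQueue with whatever client your provider's SDK gives you. The application code doesn't change.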

Serverless is interesting, and I think it's worth asking "what if application development happened in a very controlled environment with a very high-level set of APIs?" There are clearly a lot of cases where it makes sense, and a bunch of situations where it's a suspect call. And it's early days, so we'll see in a few years how things work out. In any case, thinking critically about infrastructure is always a good plan.