This is a follow-up to my New Go Modules post, about a project I've been working on for the past few months: github.com/tychoish/fun.
fun is a collection of simple libraries that use generics to do relatively mundane things, with a focus on well-built tools that make it easier for developers to solve higher-level problems without re-implementing low-level infrastructure or reaching for some of the rougher parts of the Go standard library. It has no dependencies outside of the standard library, and it contains a number of pretty cool tools, which were fun to write. Some of the basic structures were:
I wrote linked list implementations (singly and doubly linked), adapted a single-ended Queue implementation that a former coworker wrote on an abandoned branch of development, and wrote a similar Deque. (fun/pubsub)
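To give a sense of the shape of these structures, here is a minimal generic sketch with hypothetical names; it is not the actual fun/pubsub API, and the interesting work in a real queue or deque is in the synchronization and blocking semantics layered on top of a core like this.

```go
package main

import "fmt"

// element is a hypothetical doubly linked node; the real types in
// fun and fun/pubsub have different names and richer APIs.
type element[T any] struct {
	value      T
	next, prev *element[T]
}

// list keeps plain head/tail pointers for simplicity.
type list[T any] struct {
	head, tail *element[T]
	size       int
}

// PushBack appends a value to the end of the list.
func (l *list[T]) PushBack(v T) {
	e := &element[T]{value: v, prev: l.tail}
	if l.tail != nil {
		l.tail.next = e
	} else {
		l.head = e
	}
	l.tail = e
	l.size++
}

// PopFront removes and returns the first value, reporting whether
// the list was non-empty.
func (l *list[T]) PopFront() (T, bool) {
	if l.head == nil {
		var zero T
		return zero, false
	}
	e := l.head
	l.head = e.next
	if l.head != nil {
		l.head.prev = nil
	} else {
		l.tail = nil
	}
	l.size--
	return e.value, true
}

func main() {
	var q list[string]
	q.PushBack("hello")
	q.PushBack("world")
	for v, ok := q.PopFront(); ok; v, ok = q.PopFront() {
		fmt.Println(v)
	}
}
```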
I adapted and developed a "broker" that uses the queues above to provide one-to-many pubsub. (fun/pubsub)
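The one-to-many fan-out idea looks roughly like this when sketched with plain channels and hypothetical names; the broker in fun/pubsub is built on the queues described above and driven by contexts, so treat this as the concept rather than the implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// broker fans messages out to all current subscribers.
type broker[T any] struct {
	mu   sync.Mutex
	subs map[chan T]struct{}
}

func newBroker[T any]() *broker[T] {
	return &broker[T]{subs: make(map[chan T]struct{})}
}

// Subscribe registers a new subscriber channel.
func (b *broker[T]) Subscribe() chan T {
	ch := make(chan T, 16)
	b.mu.Lock()
	defer b.mu.Unlock()
	b.subs[ch] = struct{}{}
	return ch
}

// Unsubscribe removes a subscriber and closes its channel.
func (b *broker[T]) Unsubscribe(ch chan T) {
	b.mu.Lock()
	defer b.mu.Unlock()
	delete(b.subs, ch)
	close(ch)
}

// Publish delivers a message to every subscriber, dropping it for
// subscribers whose buffers are full.
func (b *broker[T]) Publish(msg T) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for ch := range b.subs {
		select {
		case ch <- msg:
		default: // slow subscriber; drop rather than block
		}
	}
}

func main() {
	b := newBroker[string]()
	one, two := b.Subscribe(), b.Subscribe()
	b.Publish("hello")
	fmt.Println(<-one, <-two)
}
```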
I made a few high-level standard library synchronization tools (sync.Map, sync.WaitGroup, sync.Pool, and atomic.Value) even higher level, with the help of generics and more than a few stubborn opinions. (fun/adt; atomic data types)
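As an illustration of what "even higher level" means here, consider a typed wrapper around atomic.Value; the names and details below are hypothetical, since fun/adt has its own types and semantics.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Value is a hypothetical typed wrapper over atomic.Value: callers
// get type safety and a sane zero value instead of interface{}
// assertions at every call site.
type Value[T any] struct{ val atomic.Value }

// Set stores the value atomically.
func (v *Value[T]) Set(in T) { v.val.Store(in) }

// Get returns the stored value, or the zero value of T if nothing
// has been stored yet.
func (v *Value[T]) Get() T {
	out, ok := v.val.Load().(T)
	if !ok {
		var zero T
		return zero
	}
	return out
}

func main() {
	var count Value[int]
	count.Set(42)
	fmt.Println(count.Get()) // 42

	var name Value[string]
	fmt.Printf("%q\n", name.Get()) // "" (nothing stored yet)
}
```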
I revisited an error aggregation tool that I wrote years ago and made it a bunch faster, less quirky, and more compatible with the current state of the art in error handling (e.g. errors.Is and errors.As). (fun/erc) I also wrote some basic error-mangling tools, including a more concise errors.Join, tools to unwind/unwrap errors, and a simple error type, ers.Error, that provides for constant errors. (fun/ers)
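The constant-error pattern is worth showing on its own, since it's small and broadly useful; this sketch uses illustrative names rather than quoting fun/ers.

```go
package main

import (
	"errors"
	"fmt"
)

// constError is the standard trick for declaring errors as
// constants: a string type with an Error method.
type constError string

func (e constError) Error() string { return string(e) }

const ErrNotFound constError = "resource not found"

func lookup(key string) error {
	// Wrap the sentinel so callers get context but errors.Is still matches.
	return fmt.Errorf("lookup %q: %w", key, ErrNotFound)
}

func main() {
	err := lookup("example")
	fmt.Println(errors.Is(err, ErrNotFound)) // true

	// errors.Join (Go 1.20+) aggregates errors while preserving
	// errors.Is/errors.As behavior for each member.
	joined := errors.Join(err, errors.New("secondary failure"))
	fmt.Println(errors.Is(joined, ErrNotFound)) // true
}
```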
I wrote an iterator interface and a collection of function wrappers and types (in the top-level package) for interacting with iterators (really, streams of one kind or another) and for decorating those handlers and processors with common operations, modifications, and simple semantics.
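Roughly, the shape of the idea is a pull-based, context-aware stream; the names below are illustrative, and the interface in fun has more to it than this.

```go
package main

import (
	"context"
	"fmt"
)

// Iterator is an illustrative version of the pattern: a pull-based,
// context-aware stream of values.
type Iterator[T any] interface {
	Next(context.Context) bool
	Value() T
}

// sliceIter adapts a slice to the Iterator shape.
type sliceIter[T any] struct {
	items []T
	idx   int
}

func (it *sliceIter[T]) Next(ctx context.Context) bool {
	if ctx.Err() != nil || it.idx >= len(it.items) {
		return false
	}
	it.idx++
	return true
}

func (it *sliceIter[T]) Value() T { return it.items[it.idx-1] }

func main() {
	ctx := context.Background()
	var it Iterator[int] = &sliceIter[int]{items: []int{1, 2, 3}}
	for it.Next(ctx) {
		fmt.Println(it.Value())
	}
}
```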
I don't know that it will catch on, but I've written (and rewritten) a lot of worker pools and stream-processing code to use these tools, and it's been a useful programming model. By providing the wrappers and the iteration, users can implement features almost functionally (which should be easy to test).
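A minimal version of that model, sketched with plain channels rather than the types in fun: the pool owns the concurrency, and the per-item function is plain, easily tested business logic.

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// workerPool runs fn over every item from in using n goroutines,
// stopping early if the context is canceled, and returns the first
// error encountered (if any).
func workerPool[T any](ctx context.Context, n int, in <-chan T, fn func(context.Context, T) error) error {
	var (
		wg       sync.WaitGroup
		mu       sync.Mutex
		firstErr error
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case <-ctx.Done():
					return
				case item, ok := <-in:
					if !ok {
						return
					}
					if err := fn(ctx, item); err != nil {
						mu.Lock()
						if firstErr == nil {
							firstErr = err
						}
						mu.Unlock()
					}
				}
			}
		}()
	}
	wg.Wait()
	return firstErr
}

func main() {
	in := make(chan int)
	go func() {
		defer close(in)
		for i := 0; i < 10; i++ {
			in <- i
		}
	}()
	err := workerPool[int](context.Background(), 4, in, func(_ context.Context, i int) error {
		fmt.Println("processed", i)
		return nil
	})
	fmt.Println("err:", err)
}
```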
I built a collection of service orchestration tools to manage long-running application services (e.g. HTTP servers, worker pools, long-running subprocesses, etc.) and a collection of context helpers to make it easier to manage the lifecycle of the kind of long-running applications I end up working on most of the time. Every time I've joined a new project... ever, I end up doing some amount of service orchestration work, and this is the toolkit I would want. (fun/srv)
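As a sketch of the kind of lifecycle management involved (fun/srv has a much richer model than this), the basic context-driven pattern with standard-library pieces looks like:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// runHTTPServer starts a server and shuts it down cleanly when the
// context is canceled; this only shows the core pattern that a
// service orchestrator generalizes.
func runHTTPServer(ctx context.Context, addr string) error {
	srv := &http.Server{Addr: addr, Handler: http.DefaultServeMux}

	errs := make(chan error, 1)
	go func() { errs <- srv.ListenAndServe() }()

	select {
	case err := <-errs:
		return err // server failed to start or crashed
	case <-ctx.Done():
		// Give in-flight requests a grace period to finish.
		shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		return srv.Shutdown(shutdownCtx)
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	if err := runHTTPServer(ctx, "127.0.0.1:8080"); err != nil {
		fmt.Println("server exited:", err)
	}
}
```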
I wrote some simple testing frameworks and tools: fun/assert provides halt-on-failure assertions with a testify-inspired interface and better reporting, mirrored by the fail-but-continue fun/assert/check; fun/testt ("test-tee" or "testy") has a bunch of simple helpers for using contexts; and fun/ensure is a totally different take on an assertion library.
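The halt-on-failure versus fail-but-continue distinction maps onto t.Fatalf versus t.Errorf; here is a hypothetical generic helper in each style, not the fun/assert API itself.

```go
package assert

import "testing"

// Equal is a halt-on-failure assertion: it calls t.Fatalf, so the
// test stops at the first mismatch.
func Equal[T comparable](t *testing.T, want, got T) {
	t.Helper()
	if want != got {
		t.Fatalf("expected %v, got %v", want, got)
	}
}

// Check is the fail-but-continue mirror: it records the failure and
// lets the rest of the test run.
func Check[T comparable](t *testing.T, want, got T) {
	t.Helper()
	if want != got {
		t.Errorf("expected %v, got %v", want, got)
	}
}

func TestExample(t *testing.T) {
	Check(t, 4, 2+2)
	Equal(t, "fun", "f"+"un")
}
```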
I don't want this post to be documentation for the project; there are a lot of docs in the README and in the Go documentation, and the implementations are meant to be pretty readable, so feel free to dive in. I did want to call out a few ways that I've begun using this library and the lessons it's taught me in my day-to-day work.
I set a goal of writing code that's 100% covered by tests. This is hard, and it's only possible in part because it's a stateless library without dependencies, but I learned a lot about the code I was writing and feel quite confident in it as a result.
I also set a goal of having no dependencies outside of the module and the standard library: I didn't want to require users to opt into a set of tools that I liked, or that would require ongoing maintenance to update and manage. Not only is this a good goal in terms of facilitating adoption, it also constrained what I could do and forced me to write things with external extensibility: there had to be hooks, interfaces and function types had to be easy to implement, and I decided to limit scope for other things.
Having a more complete set of atomic types and tools (particularly the map and the typed atomic value; the typed integer atomics in the standard library are also great) has allowed me to approach concurrent programming problems without doing crazy things with channels or putting mutexes everywhere. I don't think either channels or mutexes are a problem in the hands of a practiced developer, but having a viable alternative means one less thing can go wrong, and you can save the big guns (mutexes) for more complex synchronization problems.
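The standard-library typed atomics alone cover a surprising number of cases; for example, a shared counter needs neither a mutex nor a channel:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	// The typed atomics added to the standard library in Go 1.19
	// remove a whole class of "wrap this int in a mutex" code.
	var hits atomic.Int64
	var wg sync.WaitGroup

	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				hits.Add(1) // no mutex, no channel, no data race
			}
		}()
	}
	wg.Wait()
	fmt.Println(hits.Load()) // 8000
}
```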
Linked lists are super cool. I'd previously held the opinion that you shouldn't ever implement your own sequence types, and had mostly avoided writing one before now. Having done it, and having truly double-ended structures, means that things like "adding something to the beginning" or "inserting into/removing from the middle" of a sequence aren't so awkward.
The experimental slices library makes this a little less awkward with standard-library slices/arrays, and those operations are probably faster.
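For comparison, insertion and deletion with the slices package (experimental at the time, in the standard library as of Go 1.21) look like this:

```go
package main

import (
	"fmt"
	"slices" // golang.org/x/exp/slices before Go 1.21
)

func main() {
	s := []int{1, 2, 4, 5}

	// Insert 3 at index 2; this shifts the tail and may reallocate,
	// unlike a linked-list insert, which is O(1) once positioned.
	s = slices.Insert(s, 2, 3)
	fmt.Println(s) // [1 2 3 4 5]

	// Remove the element at index 0 (the half-open range [0, 1)).
	s = slices.Delete(s, 0, 1)
	fmt.Println(s) // [2 3 4 5]
}
```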
It was really fun to take the concept of an iterator and build out from it tools that make iterators easy to use; I ended up with some good filtering, parallel processing, and map/reduce tools. I definitely learned a bunch, and I've also found these tools genuinely useful.
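For a flavor of the composition style, here are filter and map helpers sketched over channels with hypothetical names, rather than the iterator types in fun:

```go
package main

import "fmt"

// Filter forwards only the values that keep() accepts.
func Filter[T any](in <-chan T, keep func(T) bool) <-chan T {
	out := make(chan T)
	go func() {
		defer close(out)
		for v := range in {
			if keep(v) {
				out <- v
			}
		}
	}()
	return out
}

// Map transforms each value with fn.
func Map[T, O any](in <-chan T, fn func(T) O) <-chan O {
	out := make(chan O)
	go func() {
		defer close(out)
		for v := range in {
			out <- fn(v)
		}
	}()
	return out
}

func main() {
	nums := make(chan int)
	go func() {
		defer close(nums)
		for i := 1; i <= 10; i++ {
			nums <- i
		}
	}()

	evens := Filter(nums, func(i int) bool { return i%2 == 0 })
	squares := Map(evens, func(i int) int { return i * i })

	for v := range squares {
		fmt.Println(v) // 4 16 36 64 100
	}
}
```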
I've long held that most Go applications should be structured so that you shouldn't really need to think too much about concurrency when writing business logic. I've previously posited that the solution was to provide robust queue-processing tools (e.g. amboy), but I think that's too heavyweight, and it was cool to be able to approach the same problem from a different angle.
Anyway, give it a try, and tell me what you think! Also pull requests are welcome!