For a long time I've used this Go library testify, and mostly it's been pretty great: it provides a bunch of tools that you'd expect in a testing library, in the grand tradition of jUnit/xUnit/etc., and it managed to come out on top in a field of similar libraries a few years ago. It was (and is, but particularly then) easy to look at the testing package and say "wouldn't it be nice if there were a bit more high-level functionality," but I've recently come around to the idea that maybe it's not worth it. [1] This is a post to collect and expand upon that thought, and also to explain why I'm going through some older projects to cut out the dependency.

First, and most importantly, I should say that testify isn't that bad, and there's definitely a common way to use the library that's totally reasonable. My complaints are basically:

  • The "suite" functionality for managing fixtures is a bit confusing: first it's really easy to get the capitalization of the Setup/Teardown (TearDown?) functions wrong and have part of your fixture not run, and they're enough different from "plain tests" to be a bit confusing. Frankly, writing test cases out by hand and using Go's subtest functionality is more clear anyway.
  • I've never used testify's mocking functionality, in part because I don't tend to do much mock-based testing (which I see as a good thing), and for the cases where I do want mocks, I tend to prefer either hand-written mocks or something like mockery.
  • While I know that "require" means "halt on failure" and "assert" means "continue on error," and it makes sense to me now, "assert" in most [2] other languages means "halt on failure," so this is a bit confusing. Also, while there are cases where you do want continue-on-error semantics for test assertions (I suppose), it doesn't come up that often.
  • There are a few warts with the assertions (including the requires), most notably that you can create an "assertion object" that wraps a *testing.T, which is really an anti-pattern and can cause assertion failures to be reported at the wrong level.
  • There are a few testify assertions with wonky argument order, notably that Equal wants arguments in (expected, actual) order but Len wants them in (object, expected length) order. I have to look that up every time (see the two-liner after this list).
  • I despise the failure reporting format. I typically run tests in my text editor and then use "jump to failure" when a test fails, and testify's failure output isn't well formed in the way that basically every other tool's is (including the standard library!) [3], so it's fussy to locate a failure when it happens.
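
For reference, the argument-order inconsistency looks like this (these are testify's actual signatures, minus the optional message arguments):

assert.Equal(t, expected, actual)     // the expected value comes first
assert.Len(t, object, expectedLength) // the object comes first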

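Here's a minimal, self-contained sketch of the subtest pattern I mean, using the standard library's t.Run; it exercises os file operations purely to have something concrete to test, and t.TempDir stands in for suite-style setup and teardown:

package example

import (
	"os"
	"path/filepath"
	"testing"
)

func TestScratchDir(t *testing.T) {
	// shared fixture in plain code rather than Setup methods;
	// t.TempDir registers its own cleanup, so there's no TearDown
	dir := t.TempDir()

	t.Run("WriteAndRead", func(t *testing.T) {
		path := filepath.Join(dir, "a.txt")
		if err := os.WriteFile(path, []byte("hello"), 0o644); err != nil {
			t.Fatal(err)
		}
		data, err := os.ReadFile(path)
		if err != nil {
			t.Fatal(err)
		}
		if string(data) != "hello" {
			t.Errorf("read %q, want %q", data, "hello")
		}
	})

	t.Run("ReadMissing", func(t *testing.T) {
		if _, err := os.ReadFile(filepath.Join(dir, "missing.txt")); err == nil {
			t.Error("expected an error reading a missing file")
		}
	})
}
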
The alternative is just to check the errors manually and use t.Fatal and t.Fatalf to halt test execution (and t.Error and t.Errorf for the continue-on-error case). So we get code that looks like this:

// with testify:
require.NoError(t, err)

// otherwise:
if err != nil {
	t.Fatal(err)
}

In addition to giving us better reporting, the second form looks more like the code you'd write outside of your tests, so it gives you a chance to exercise the production API, which can help you detect any awkwardness and also serves as a kind of documentation. Additionally, if you're not lazy, the failure messages that you pass to Fatal can be quite useful in explaining what's gone wrong.
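
To make that concrete, here's a minimal sketch of the kind of message I mean, using strconv.Atoi purely as a stand-in for your own code; note the split between halting and continuing:

package example

import (
	"strconv"
	"testing"
)

func TestAtoi(t *testing.T) {
	n, err := strconv.Atoi("42")
	if err != nil {
		// halt: the remaining checks are meaningless without a value
		t.Fatalf(`Atoi("42") should not error: %v`, err)
	}
	if n != 42 {
		// continue on error: report this and let later checks run
		t.Errorf(`Atoi("42") = %d, want 42`, n)
	}
}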

Testify is fine, and it's not worth rewriting existing tests to exclude the dependency (except maybe in small libraries), but for new code, give the plain standard library a shot!

[1] I must also confess that my coworker played some role in this conversion.
[2] I'd guess all of them, but I haven't done a survey.
[3] Ok, the stdlib failures have a related problem: failures are attributed to the bare filename (no path), which doesn't work great when you have a lot of packages with similarly named files and you're running tests from the root of the project.