The way I see most Go HTTP clients tested is: somebody has a Transport interface, they inject a fake implementation in tests, and they assert that the right http.Request was constructed. This works. It also catches roughly half the bugs real HTTP behavior can cause. The other half of the bugs live in the parts of your code that you stubbed out.

For years I’ve been using httptest.Server instead, and I think it’s underused. It’s a real HTTP server, backed by a real net.Listener, running in-process. Your code-under-test dials it over the loopback network, through the real Go HTTP client stack, through the real net/http parsing and serialization. The entire path is exercised.

Here’s the basic pattern:

func TestFetchUser(t *testing.T) {
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.URL.Path != "/users/42" {
            http.Error(w, "not found", http.StatusNotFound)
            return
        }
        w.Header().Set("Content-Type", "application/json")
        fmt.Fprint(w, `{"id": 42, "name": "Alice"}`)
    }))
    defer srv.Close()

    client := NewClient(srv.URL)
    user, err := client.FetchUser(context.Background(), 42)
    if err != nil {
        t.Fatalf("FetchUser: %v", err)
    }
    if user.Name != "Alice" {
        t.Fatalf("user.Name = %q, want Alice", user.Name)
    }
}

srv.URL is the server's base URL on the loopback interface, with a port assigned by the kernel. You pass that to your client and the client connects like it would to anything else.

What this catches that a mocked Transport doesn’t:

  • Timeout behavior. A mocked transport doesn’t block. A real server can. If you want to test context.Context cancellation on a slow server, you need a real one.
  • Connection pooling bugs. Mocked transports don’t share connections. Real ones do. If you’re leaking connections by not reading response bodies, httptest.Server behaves like production.
  • Header parsing. net/http has subtle parsing behavior around case-insensitivity, multi-value headers, Transfer-Encoding, etc. Mocked transports skip all of that.
  • TLS configuration. httptest.NewTLSServer gives you a server with a self-signed cert. Your client’s TLS config gets tested. The server’s srv.Client() returns a pre-configured *http.Client that trusts the self-signed cert.

The TLS one is the one I care most about. I’ve seen production outages caused by a client that worked in unit tests but failed against a real TLS server. With httptest.NewTLSServer, you catch certificate verification, SNI, ALPN negotiation, all of it.

A pattern I use a lot for testing retry logic:

func TestFetchUser_Retries(t *testing.T) {
    var calls atomic.Int32
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        n := calls.Add(1)
        if n < 3 {
            // first two attempts fail
            http.Error(w, "server error", http.StatusServiceUnavailable)
            return
        }
        w.Header().Set("Content-Type", "application/json")
        fmt.Fprint(w, `{"id": 42, "name": "Alice"}`)
    }))
    defer srv.Close()

    client := NewClient(srv.URL, WithMaxRetries(3))
    user, err := client.FetchUser(context.Background(), 42)
    if err != nil {
        t.Fatalf("FetchUser: %v", err)
    }
    if got := calls.Load(); got != 3 {
        t.Errorf("expected 3 calls, got %d", got)
    }
    if user.Name != "Alice" {
        t.Fatalf("user.Name = %q, want Alice", user.Name)
    }
}

The counter tells you your retry logic actually retried. You can extend this to test exponential backoff (by recording timestamps), circuit breakers (by toggling server state), and all sorts of things that mock transports make hard.

Another lovely thing: simulating partial responses and disconnects. You want to test how your client handles the server closing the connection mid-response?

srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    hj, ok := w.(http.Hijacker)
    if !ok {
        http.Error(w, "hijacking not supported", http.StatusInternalServerError)
        return
    }
    conn, _, err := hj.Hijack()
    if err != nil {
        return
    }
    defer conn.Close()
    // promise 100 bytes, deliver far fewer, then drop the connection
    conn.Write([]byte("HTTP/1.1 200 OK\r\nContent-Length: 100\r\n\r\n"))
    conn.Write([]byte("only part of the body"))
}))

http.Hijacker lets you take over the underlying TCP connection. This is wildly useful for simulating pathological server behavior: premature EOF, bytes arriving in weird chunks, connection resets, slow responses where each byte comes after a delay.

A few tips I’ve picked up:

  • Always defer srv.Close(). If you don’t, the listener and its goroutines leak, and in long test runs you can exhaust file descriptors.
  • Use srv.Client() for TLS servers. It’s already configured with the right roots. If you use your own client, you’ll need to add srv.Certificate() to a root pool or use InsecureSkipVerify, which I consider an anti-pattern in tests.
  • Set short read/write timeouts on the server (via srv.Config) if you’re testing timeout handling. The zero value means no timeout, and a handler that never returns will stall your test until go test’s own deadline kills it.
  • httptest.NewUnstartedServer returns a server that isn’t serving yet, so you can configure it (srv.Config, srv.TLS, srv.EnableHTTP2) before calling Start() or StartTLS().
  • Parallelism. httptest.Server is safe to use in parallel tests. Each one gets a different port. Don’t hardcode ports.

The one place I still mock is for unit testing very specific edge cases in my own request construction — “when the user passes null here, does my code produce the right JSON?” For that, I use a fake RoundTripper or just call the internal helper directly. But for integration-flavored tests of client behavior, httptest.Server has caught real bugs that mocks never would have.

If you’re not using it, try it on one test this week. You’ll probably never go back.