context.Context is not a cache
A team I used to consult with had a very specific pattern: anything that looked vaguely “request-scoped” got stuffed into context.Context via context.WithValue. User ID, tenant ID, feature flags, the logger, the database handle, a request-specific rate limiter, and — I kid you not — a fully-initialized pricing engine.
Their argument, when I asked, was that this was cleaner than passing these things explicitly. You didn’t need to thread seven parameters through every function. You just passed ctx and pulled what you needed with ctx.Value(someKey). It looked a lot like dependency injection, if you squinted.
It’s worth understanding what context.WithValue actually does. Every call to WithValue allocates a new valueCtx struct, which wraps the parent. Lookup is a linked-list walk — you call ctx.Value(key), which checks if the current context has that key, and if not, recurses into the parent. This means:
type valueCtx struct {
	Context
	key, val any
}

func (c *valueCtx) Value(key any) any {
	if c.key == key {
		return c.val
	}
	return value(c.Context, key)
}
A context with 15 values is a linked list with 15 nodes. A lookup for the value at the bottom of the list walks all 15 of them. It’s O(n) per lookup, where n is the number of values stacked on that particular context.
Normally this is fine. You attach one or two things to a context — a request ID, a trace span — and the linked list stays short. But if you’re putting everything in there, and you’re looking things up in a hot loop, you’re doing a whole lot of interface-typed linked-list walking on every iteration.
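If you want to see the gap on your own machine, here’s a quick sketch using testing.Benchmark. The key type and the depth of 15 are my own choices for illustration, not anything from the team’s code:

```go
package main

import (
	"context"
	"fmt"
	"testing"
)

type key int

// buildCtx stacks n values onto a background context; key(0) ends up
// deepest in the chain, so looking it up walks every node.
func buildCtx(n int) context.Context {
	ctx := context.Background()
	for i := 0; i < n; i++ {
		ctx = context.WithValue(ctx, key(i), i)
	}
	return ctx
}

func main() {
	ctx := buildCtx(15)

	// Lookup of the deepest key: walks all 15 valueCtx nodes.
	deep := testing.Benchmark(func(b *testing.B) {
		var v any
		for i := 0; i < b.N; i++ {
			v = ctx.Value(key(0))
		}
		_ = v
	})

	// The same data as a plain struct field: one load, no walk.
	type deps struct{ n int }
	d := &deps{n: 0}
	direct := testing.Benchmark(func(b *testing.B) {
		var v int
		for i := 0; i < b.N; i++ {
			v = d.n
		}
		_ = v
	})

	fmt.Printf("ctx.Value, depth 15: %d ns/op\n", deep.NsPerOp())
	fmt.Printf("struct field:        %d ns/op\n", direct.NsPerOp())
}
```

The absolute numbers will vary by machine; what stays constant is that the context lookup scales with depth and the field access doesn’t.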
This team had a function that, for each pricing decision, pulled the pricing engine out of context. It was called roughly a thousand times per request. Every one of those calls did a linked-list walk plus an interface type assertion. I pulled a CPU profile in a staging environment under load, and context.(*valueCtx).Value was the second-hottest function in the whole service. It was costing them about 14ms of p99 latency for no good reason.
The fix was, of course, just… pass the pricing engine as an argument. Or, if that’s too many arguments, bundle the actual request-scoped dependencies into a struct and pass that:
type RequestScope struct {
	UserID       string
	TenantID     string
	Pricing      *PricingEngine
	Log          *slog.Logger
	FeatureFlags FlagSet
}

func HandlePricing(ctx context.Context, scope *RequestScope, items []Item) (Bill, error) {
	// use scope.Pricing directly, no context lookups
}
context.Context stays lean: cancellation, deadline, request ID, trace span. Nothing else.
The rule I now use for whether to put something in context:
- Does it need to propagate across API boundaries where you can’t change the signature? (tracing, deadlines, cancellation — yes, context)
- Is it read once, at the edge? (auth, request ID — context is fine, you read it once and stash it in a local)
- Is it read in a hot loop? (pricing, database handles — NO, pass it explicitly)
The other thing I’ll say is that context.WithValue is type-unsafe. You hand an any in and you pull an any out, and you type-assert at the call site. If the caller passes the wrong key, or a library you depend on changes a key type, you get a silent nil back. I’ve debugged “why is my logger nil” more times than I’d like to admit. Explicit arguments give you compile-time errors. Context values give you nil at runtime. Choose wisely.
A small side point: context.WithValue is documented as “should be used sparingly,” but I wish the docs were more specific. “Sparingly” is a word that lets people write thirty values into a context and tell themselves they’re being sparing. Something like “should only carry request-scoped metadata, not dependencies or configuration” would have saved this team a lot of p99 latency.
If you take one thing away: context is a cancellation and metadata propagation mechanism. It is not a DI container. It’s not a map. It’s not a cache. If your function needs something, take it as an argument, where the compiler will protect you.