We tried buildpacks. I don't recommend them for most teams.
Cloud Native Buildpacks had been on my “look at this eventually” list for years. Last year I finally carved out time. We ran a pilot on three services for six months. Ended up ripping it out.
Here’s the honest evaluation. Buildpacks are good software made by smart people. They might be great for your team. They weren’t great for ours.
What buildpacks actually are
You have source code. Instead of writing a Dockerfile, you run pack build myimage --builder heroku/builder (or similar). Buildpacks detect what language you’re using, pick the right base image, install dependencies, produce a layered image. No Dockerfile.
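For a concrete sense of the workflow, a build looked roughly like this in our pilot (image name and builder tag are illustrative; we used Heroku's builder):

# build an OCI image straight from source, no Dockerfile in the repo
pack build myimage --builder heroku/builder:24 --path .
# later, swap in a patched run image without rebuilding the app layers
pack rebase myimage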
The selling points:
- No Dockerfile maintenance.
- Automatic rebase (update base image without rebuilding the whole thing).
- Security patching without app-owner involvement.
- Good layer caching.
- Reproducible builds.
All of those are real.
What we actually experienced
The “no Dockerfile” promise is 70% true. For simple services, yes, buildpacks just work. For services that need anything slightly weird — a system library, a custom C extension, a non-standard language version — you end up writing a “customization” in the buildpack system, which is its own TOML + shell combination, which is a Dockerfile by another name.
One of our services needed librdkafka at build time. In Dockerfile land: apt-get install librdkafka-dev. In buildpack land: write a custom buildpack that declares librdkafka as a dependency and figures out how to install it into the builder image. This took me most of a day. It is not shorter than a Dockerfile.
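The Dockerfile-land version of the same change, for comparison, is a couple of lines in an existing build stage:

# headers needed to compile the Kafka client against the system librdkafka
RUN apt-get update && apt-get install -y --no-install-recommends librdkafka-dev \
    && rm -rf /var/lib/apt/lists/*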
The layering is opaque. When buildpacks produce layers, the reasons for each layer aren’t obvious. “Why did my cache invalidate?” is a question I asked a lot and couldn’t always answer. For debugging “why is my CI slow,” I want transparency; buildpacks gave me a black box.
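The best visibility we found was generic image tooling, which enumerates buildpack layers without explaining them:

# lists each layer and its size; for buildpack-built images the "created by" column is mostly opaque
docker history --no-trunc myimage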
Cross-cutting changes are hard. We wanted to roll out a small ca-certificates update across all services. In Dockerfile land, you update a base image, rebuild. In buildpacks, you update the builder, and… well, the builders come from upstream. We forked the Heroku builder to add our cert. Now we’re maintaining a forked builder. This was not what I signed up for.
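For contrast, the Dockerfile-land version of that cross-cutting change is a shared base image that every service builds FROM (a sketch, not our actual setup):

# internal base image; bump this and rebuild downstream images to roll the change out
FROM python:3.12-slim-bookworm
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*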
Debugging images is weird. docker run -it myimage sh works on Dockerfile images. Buildpack images run as a non-root user by default, sometimes without a shell, and you can’t always exec in for debugging. You can work around this, but you need to learn buildpack-specific incantations.
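One workaround that generally works on buildpack-built images, assuming the run image actually ships bash (minimal ones may not):

# bypass the default process and start a shell through the CNB lifecycle launcher
docker run -it --entrypoint /cnb/lifecycle/launcher myimage bash
# when you need root to poke around, add --user root as well
docker run -it --user root --entrypoint /cnb/lifecycle/launcher myimage bash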
Where I think buildpacks are good
- Small teams with lots of services, all standard stacks. If you have 30 simple Python/Node services and no customization, buildpacks eliminate a lot of Dockerfile boilerplate.
- Teams that don’t have Docker expertise in-house. Buildpacks hide the container layer complexity.
- Policy-heavy environments. If your org wants to enforce “all images come from an approved builder,” buildpacks let the platform team centralize this.
For us — 40 services, mixed stacks, some with unusual dependencies, strong Docker expertise on the team — the cost of “learn the buildpack way” wasn’t paid back by the benefits.
Dockerfiles we ended up with
Our Dockerfiles are honestly not that scary:
FROM python:3.12-slim-bookworm AS base
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates build-essential && rm -rf /var/lib/apt/lists/*
FROM base AS deps
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
FROM base AS runtime
WORKDIR /app
COPY --from=deps /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=deps /usr/local/bin /usr/local/bin
COPY . .
USER 1000:1000
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
30-ish lines. Multi-stage. Understandable. Cacheable. The team can read and modify it. That’s the baseline I want.
The rebase question
The buildpacks feature that I really wanted was “update base image without rebuilding the app layer.” You can get the same result with a bit of work outside buildpacks, using crane from go-containerregistry:
# swap the base layers of an existing image for whatever the tag points at now
crane rebase myimage:v1 --old_base python:3.12-slim --new_base python:3.12-slim
Less magic than buildpacks, same result for the base OS patching use case. We use a nightly job that rebases all our images when upstream Python ships a patch.
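A sketch of that job, with illustrative image names (in practice you would pin --old_base to the digest each image was actually built from):

#!/usr/bin/env bash
# nightly: re-point every service image at whatever the upstream Python tag resolves to today
set -euo pipefail
NEW_BASE="python:3.12-slim@$(crane digest python:3.12-slim)"
for img in registry.example.com/orders:v1 registry.example.com/billing:v1; do
  crane rebase "$img" --old_base python:3.12-slim --new_base "$NEW_BASE"
done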
Reflection
Buildpacks have a clear philosophy: hide the container from the application developer. That’s the right choice for some teams and the wrong choice for others. It was the wrong choice for us because we weren’t actually fighting Dockerfile complexity — our Dockerfiles were fine. We were fighting base image discipline, which has other solutions.
Don’t adopt buildpacks because they’re cool. Adopt them if your actual problem is “we have too many Dockerfiles and we want to throw them out.” If your Dockerfiles are working, keep them.
Related: Dev containers at 30 engineers is another “this tool is good but not free” piece.