We rolled out dev containers across a team of about 30 engineers over the last year. The project was supposed to take a quarter and it took the whole year, because it turns out the middle of the dev container journey is full of weird paper cuts nobody writes blog posts about.

Here are my notes for anyone about to do this.

The good

Onboarding time for new engineers dropped from ~3 days to about 4 hours. That’s the whole “new laptop, git clone, get the test suite to pass” window. That alone would have justified the project.

The “works on my machine” class of bugs also dropped dramatically. When everyone’s python --version, node --version, and libssl versions match, those bugs just… disappear.

The less-good

Performance on macOS. Docker Desktop’s file sharing is slow even with VirtioFS. A test suite that ran in 22 seconds natively took 68 seconds in a dev container on the same Mac. We investigated for weeks. The fixes: bind-mount only what’s needed (with consistency: cached on the mounts that remain), move node_modules and .venv into named volumes rather than bind mounts, and accept that I/O-bound operations will just be slower. Most engineers now run their code in the container but run the test suite on the host for the inner loop.
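The volume layout above looks roughly like this in compose terms (service and path names are illustrative, not our actual config):

```yaml
services:
  app:
    build: .
    volumes:
      # Bind-mount only the source tree; "cached" relaxes host/container
      # consistency on Docker Desktop, which speeds up reads in the container.
      - .:/workspace:cached
      # Dependency directories live in named volumes so they never cross
      # the slow macOS file-sharing boundary.
      - node_modules:/workspace/node_modules
      - venv:/workspace/.venv

volumes:
  node_modules:
  venv:
```

The trade-off: named volumes are invisible from the host, so host-side tooling that wants to poke at node_modules has to go through the container.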

Rebuilds are slow. When someone updates the Dockerfile, every engineer has to rebuild. If they have 40GB of other images from previous projects, they run out of disk. If they let the Docker daemon run out of disk, builds and pulls start failing in confusing ways. We added a weekly “docker system prune” reminder to the team channel.
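A recipe in the same Justfile makes the reminder actionable — something like the sketch below (the one-week filter window is an assumption, not what we actually use):

```make
# Reclaim disk: remove stopped containers, unused images, and build cache
# that haven't been touched in the last week.
prune:
	docker system prune -af --filter "until=168h"
	docker builder prune -f --filter "until=168h"
```

The until filter keeps recent layers around, so the next rebuild after a prune isn’t a full cold build.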

The “one container, many services” question. Early on we had one monolithic dev container with Postgres, Redis, our app, etc. all stuffed in. It was easy to onboard but hard to keep updated. We moved to devcontainer.json + compose, with each service in its own image. This was the right call but the transition had friction — some engineers loved the monolith, some loved the split.
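For reference, the split shape is roughly this (names are illustrative): devcontainer.json points at the compose stack instead of baking every service into one image.

```jsonc
// .devcontainer/devcontainer.json — a sketch, assuming a compose file
// at the repo root with an "app" service.
{
  "name": "app",
  "dockerComposeFile": ["../docker-compose.yml"],
  "service": "app",                  // the container the editor attaches to
  "workspaceFolder": "/workspace",
  "shutdownAction": "stopCompose"    // stop db/redis too when the window closes
}
```

Postgres, Redis, etc. then live as ordinary compose services that can be versioned and updated independently of the app image.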

The things we over-engineered

Per-branch containers. We spent a month trying to make it so each git branch had its own rebuilt container, so switching branches between “last week’s Postgres 14 world” and “this week’s Postgres 15 world” was automatic. We shipped it, used it for three weeks, hated it, and removed it. Engineers preferred to manually docker compose down and docker compose up when the setup changed, because the automated version was slower and had edge cases when two branches shared a build cache.

VS Code remote extensions. We tried to enforce a list of “approved extensions” via the devcontainer config. It was a battle every time someone wanted a new extension. We backed off and now only specify a base set; engineers add their own.
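The base set lives in devcontainer.json under customizations; anything an engineer installs on top stays local to them. A minimal sketch (the extension IDs here are examples, not our actual list):

```jsonc
{
  "customizations": {
    "vscode": {
      // Installed in every dev container; engineers layer their own on top.
      "extensions": [
        "ms-python.python",
        "editorconfig.editorconfig"
      ]
    }
  }
}
```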

The things that worked

A make dev that does the right thing. We wrote a Makefile (actually a Justfile, but same idea) that’s the canonical “get me a working dev environment.” It wraps docker compose up, checks for common issues, seeds the DB, and prints a ready-to-use URL. If it stops working, we fix it in the repo, not in chat.

dev:
	docker compose up -d db redis
	docker compose run --rm app bash -lc "scripts/wait-for-db.sh && scripts/seed.sh && ./manage.py runserver 0.0.0.0:8000"

A devcontainer CI check. In CI, we rebuild the dev container from scratch and run the smoke test against it. This catches “you changed the Dockerfile but forgot to update the devcontainer.json base image” bugs before they hit 30 engineers’ laptops.
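A sketch of such a check using GitHub Actions and the devcontainers/ci action — our actual pipeline differs, and scripts/smoke-test.sh is a stand-in name:

```yaml
name: devcontainer-smoke
on: [pull_request]
jobs:
  smoke:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Builds the dev container from the repo's devcontainer config
      # and runs the command inside the freshly built container.
      - uses: devcontainers/ci@v0.3
        with:
          runCmd: scripts/smoke-test.sh
```

Because the container is rebuilt from scratch here, a Dockerfile/devcontainer.json mismatch fails this job instead of failing on someone’s laptop.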

A “restart me please” button. Our compose stack has a small web UI with a button that does a graceful restart of the app container without blowing away the DB volume. Sounds trivial, saves maybe 15 minutes a day per engineer.

The operational tail

A year in, I still spend maybe 2 hours a week on dev container maintenance across the team. Broken image builds, outdated base images, a new engineer with a weird CPU architecture (M4 Macs broke a few things for us), Docker Desktop auto-updates that changed networking. It’s not zero. It was never going to be zero. It’s less than helping 30 engineers each install 12 things manually, but only just.

Reflection

If you’re thinking about dev containers for a team, budget 2-3x the time you think it’ll take. The initial setup is the easy part. Keeping it working across team growth, OS updates, and ecosystem changes is where the time goes. Worth it, but don’t promise quick wins on week two.

Related: Justfile patterns for monorepos.