Our platform team is four people. We run maybe 40 services across two clusters. We ran Flux for a year, then switched to Argo CD for a quarter, then ended up back on Flux with one specific Argo tool in the mix. Here is what actually shaped the decision.

What we wanted

  • Deployments from git, with clear audit trail.
  • Visibility for devs who do not know kubectl.
  • Reasonable disaster recovery: cluster rebuilds in under an hour.
  • Low operational cost because we are four people.

Flux

Flux v2 is agent-focused: install it once, point it at a git repo, and it reconciles. The configuration is a couple of custom resources:

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: platform
  namespace: flux-system
spec:
  interval: 1m                 # how often to poll the repo for new commits
  url: https://git.example.com/platform/k8s.git
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: platform
  namespace: flux-system
spec:
  interval: 10m                # how often to re-apply, even with no new commits
  path: ./clusters/prod
  sourceRef:
    kind: GitRepository
    name: platform
  prune: true                  # garbage-collect objects removed from git

That is almost the whole setup. Services live under ./clusters/prod as kustomize layers.
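
For orientation, the layout under ./clusters/prod looks roughly like this (the service names are illustrative):

clusters/
  prod/
    kustomization.yaml
    app-core/
      kustomization.yaml
      deployment.yaml
      service.yaml
    app-billing/
      kustomization.yaml
      ...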

What I liked:

  • Reconciliation is silent when nothing changes. Flux logs are mostly uninteresting, which is the right amount of boring.
  • Helm support via HelmRelease is solid (sketch after this list).
  • Image automation (auto-bump tags based on a regex or semver) is built in. We use it for a subset of services.
  • No UI. For a small team this is fine.
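
For reference, a HelmRelease is just one more custom resource pointing a chart at a source. A minimal sketch, with a made-up chart and repo; exact API versions depend on your Flux release:

apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: bitnami
  namespace: flux-system
spec:
  interval: 1h
  url: https://charts.bitnami.com/bitnami
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: redis
  namespace: app
spec:
  interval: 10m
  chart:
    spec:
      chart: redis
      version: "19.x"          # pin to a chart version or range
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: flux-system
  values:
    architecture: standalone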

What I didn’t like:

  • No UI. For devs who were not on the platform team, there was no easy “why is my service not deployed?” button. They had to kubectl-and-pray.
  • Error surfacing is per-CRD. If your root Kustomization is fine but a child HelmRelease is not, you have to dig.
  • The CLI (flux) is fine but not fully featured; complex debugging still requires kubectl.

Argo CD

Argo CD is UI-first. Install it, log in, and you get a graphical tree of applications with health and sync status. It feels friendlier.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-core
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/k8s.git
    targetRevision: main
    path: clusters/prod/app-core
  destination:
    server: https://kubernetes.default.svc
    namespace: app
  syncPolicy:
    automated:
      prune: true              # delete resources removed from git
      selfHeal: true           # revert manual drift back to the git state

What I liked:

  • The UI. Devs self-served a lot of “is my thing deployed?” questions.
  • Sync waves for ordering (annotation-based; small example after this list).
  • ApplicationSets to template out similar apps.
  • Rollback button. Yes, you can do this in Flux via git revert, but a button makes it accessible.
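
Sync waves, for reference, are just an annotation on the resources Argo applies; lower waves sync and become healthy before higher ones. A made-up pre-deploy migration Job, for example:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                        # hypothetical migration job
  annotations:
    argocd.argoproj.io/sync-wave: "-1"    # applied before wave 0 (the default)
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.com/app-core/migrate:latest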

What I didn’t like:

  • The mental model of Application + Project + ApplicationSet carries more concepts than Flux’s. Each additional abstraction is one more thing to understand.
  • The UI becomes a second source of truth in subtle ways. Some changes made there do not round-trip cleanly. We had someone “fix” a bad deploy by editing the live manifest in Argo, who then wondered why git had diverged.
  • Resource footprint was noticeably larger. Two of the controllers ate memory in a way Flux’s did not.
  • RBAC for the UI is its own model (example after this list). Keeping it in sync with our k8s RBAC was a minor chore.
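
For the curious, the UI’s RBAC lives in the argocd-rbac-cm ConfigMap as Casbin-style policy lines; the role and group names here are made up:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    p, role:developer, applications, get, default/*, allow
    p, role:developer, applications, sync, default/*, allow
    g, example-org:devs, role:developer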

Why we moved back to Flux (mostly)

Three reasons:

  1. The Argo UI encouraged bad habits: people clicked buttons instead of opening PRs. This is fixable with RBAC but requires discipline.

  2. The resource footprint mattered on our smaller cluster. Flux uses around 200 Mi of memory total. Argo CD was closer to 1.2 Gi. On a prod cluster, fine. On our homelab-style staging cluster, annoying.

  3. Our devs got used to a Slack bot that reports deploy status from Flux webhooks. They mostly stopped asking for a UI.
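
The bot hangs off Flux’s notification-controller. If you skip the bot and just want raw Slack messages, the wiring looks roughly like this (channel and secret name are ours; the API version depends on your Flux release):

apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Provider
metadata:
  name: slack
  namespace: flux-system
spec:
  type: slack
  channel: deploys
  secretRef:
    name: slack-webhook        # Secret with an "address" key holding the webhook URL
---
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: deploy-status
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: info
  eventSources:
    - kind: Kustomization
      name: '*'
    - kind: HelmRelease
      name: '*'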

Why we kept ApplicationSet-style templating

The one thing Argo did better for us was the ability to template out “the same config for 20 similar microservices”. We had a lot of “just a Go service with a standard set of sidecars” services. ApplicationSet generated an Argo Application per service from a list.
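
Roughly like this, with made-up service names; a list generator stamps out one Application per entry:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: go-services
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - name: app-core
          - name: app-billing
  template:
    metadata:
      name: '{{name}}'
    spec:
      project: default
      source:
        repoURL: https://git.example.com/platform/k8s.git
        targetRevision: main
        path: clusters/prod/{{name}}
      destination:
        server: https://kubernetes.default.svc
        namespace: app
      syncPolicy:
        automated:
          prune: true
          selfHeal: true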

We replicated this with Flux using kustomize components and a generator:

# kustomization.yaml
components:
  - ../../components/go-service
  - ../../components/prom-scrape
patches:
  - path: name.yaml

Plus a small CI job that writes out the per-service kustomization.yaml from a list in a central config file. Works. Less magical than ApplicationSet, easier to debug.

The hybrid we ended up with

Flux for reconciliation. Argo CD’s argocd-image-updater for one specific auto-bump workflow we liked. That is it. The Argo CD UI is not installed. We use Weave GitOps as a thin dashboard for Flux, which covers 80% of the UI need at 20% of the cost.

Other considerations

  • If your team is bigger and devs interact with the cluster UI daily, Argo’s polish wins. Our team is four people.
  • If you need multi-cluster management where apps live in git but deploy to many clusters, Argo’s Application with destinations is nicer. We have two clusters and don’t need it.
  • If you use Helm heavily, both handle it. Flux’s HelmRelease lets you pin chart versions, manage values, and lifecycle precisely. Argo’s helm support is also fine.
  • If you do not have git but do have S3 or OCI, both support those; Flux’s OCIRepository in particular is elegant (snippet below).
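
For completeness, an OCIRepository sketch with a made-up registry path (the API version depends on your Flux release); a Kustomization’s sourceRef then points at kind: OCIRepository instead of GitRepository:

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: platform
  namespace: flux-system
spec:
  interval: 5m
  url: oci://registry.example.com/platform/manifests
  ref:
    tag: latest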

Reflection

The “correct” answer is workload-dependent. For small teams that are comfortable with kubectl and prefer boring controllers, Flux is usually the right call. For organizations where many people who are not platform engineers need to see deploy status, Argo’s UI pays for itself.

If you are starting fresh and do not know which you will prefer, I would install Flux first (the footprint is smaller, the mental model is simpler), and layer Weave GitOps on top if you miss a UI. Switch to Argo only if the UI becomes the deciding factor.

Related: see my post on four kubectl plugins I keep coming back to for the small-team tooling aesthetic I like.