jonsully1.dev

Deploying Argo CD to Minikube: A Local GitOps Playground

John O'Sullivan
Senior Full Stack Engineer
& DevOps Practitioner

📖 6 minute read

From a full Docker disk to a running Argo CD instance, every step, every gotcha.

Introduction

Argo CD is a declarative GitOps continuous delivery tool for Kubernetes. It watches Git repositories and automatically syncs your cluster state to match, so you can stop running kubectl apply by hand.

The official Argo CD Helm chart makes deploying it fairly straightforward, but getting it running locally on Minikube has a few sharp edges, especially on macOS with the Docker driver. This post walks through the full process and covers the real issues we hit along the way.

What You'll Need

Before starting, make sure you have Docker Desktop installed and running, Minikube (brew install minikube), and kubectl (brew install kubectl). You'll also want roughly 4 GB of free RAM and 10 GB of free Docker disk space.

We'll install Helm during the guide if you don't already have it.

Step 1 - Check Docker Disk Space

This one catches people out. Our first minikube start failed immediately:

❌  Exiting due to RSRC_DOCKER_STORAGE: Docker is out of disk space!
    (/var is at 99% of capacity)

Before doing anything else, check how much space Docker is using:

docker system df
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          9         7         11.78GB   7.71GB (65%)
Containers      7         7         125.9MB   0B (0%)
Local Volumes   88        6         36.78GB   34.18GB (92%)
Build Cache     174       0         5.772GB   5.772GB

88 volumes, 174 build cache entries, and only 6 of those volumes were actually in use. If you're in a similar state, just prune everything:

docker system prune -a --volumes -f

This reclaimed about 40 GB for us. It removes all unused images, volumes, and build cache. It's aggressive, but when Docker is at capacity you don't have much choice.

One thing to be aware of: this removes all unused Docker data. If you have images or volumes you need for other projects, use docker volume prune and docker image prune selectively instead.
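A more surgical cleanup along those lines might look like this (all standard Docker CLI commands; note that on Docker 23+ a plain volume prune only removes anonymous volumes):

```shell
# Remove only dangling (untagged) images, leaving tagged ones alone
docker image prune -f

# Remove unused anonymous volumes (add --all to include named ones)
docker volume prune -f

# Clear build cache entries older than 24 hours
docker builder prune -f --filter "until=24h"

# Re-check how much space Docker is using now
docker system df
```

This keeps anything you've tagged or named, at the cost of reclaiming less space than the full prune.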

Step 2 - Start Minikube

If you have an existing Minikube cluster that's misbehaving (we saw kubelet: Stopped and apiserver: Stopped after the disk-full event), it's easiest to just delete it and start fresh:

minikube delete
minikube start --memory=4096 --cpus=2 --kubernetes-version=v1.28.0

Argo CD runs 7 pods, so allocating 4 GB of RAM keeps things comfortable. The chart requires Kubernetes >=1.23.0, and v1.28 is a stable choice.

You should see:

🔥  Creating docker container (CPUs=2, Memory=4096MB) ...
🐳  Preparing Kubernetes v1.28.0 on Docker 28.1.1 ...
🏄  Done! kubectl is now configured to use "minikube" cluster

Verify everything is healthy:

kubectl get po -A
NAMESPACE     NAME                               READY   STATUS
kube-system   coredns-5dd5756b68-5zrnl           1/1     Running
kube-system   etcd-minikube                      1/1     Running
kube-system   kube-apiserver-minikube            1/1     Running
kube-system   kube-controller-manager-minikube   1/1     Running
kube-system   kube-proxy-wjb8r                   1/1     Running
kube-system   kube-scheduler-minikube            1/1     Running
kube-system   storage-provisioner                1/1     Running

All system pods should be Running and 1/1 Ready.

Step 3 - Install Helm and Enable Ingress

Install Helm if you don't have it:

brew install helm

We also need the NGINX Ingress controller, which we'll use later to access the Argo CD UI via a hostname:

minikube addons enable ingress

This deploys the ingress-nginx-controller pod into the ingress-nginx namespace. Give it a moment, then verify it's running:

kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS
ingress-nginx-controller-578cbb6f4b-hrlgk   1/1     Running
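Rather than polling by hand, you can block until the controller pod reports ready (the label selector below is the one used by the standard ingress-nginx deployment):

```shell
# Wait up to two minutes for the ingress controller to become ready
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```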

Step 4 - Clone the Chart and Build Dependencies

Clone the Argo CD Helm chart repository:

git clone https://github.com/argoproj/argo-helm.git
cd argo-helm

The chart has a dependency on redis-ha from the dandydeveloper Helm repo. We won't actually enable HA mode, but Helm still needs the dependency chart downloaded before it can render templates:

helm repo add dandydeveloper https://dandydeveloper.github.io/charts/
helm repo update
cd charts/argo-cd && helm dependency build && cd ../..
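To confirm the dependency actually resolved, you can ask Helm to list the chart's dependencies and check for the downloaded tarball:

```shell
# Show declared dependencies and whether each is satisfied ("ok")
helm dependency list ./charts/argo-cd

# The fetched redis-ha chart archive should now be present
ls charts/argo-cd/charts/
```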

Step 5 - Validate the Chart

It's worth validating the chart before deploying. This catches template issues in seconds rather than after a failed install:

# Lint - checks for template syntax issues
helm lint ./charts/argo-cd
# => 1 chart(s) linted, 0 chart(s) failed

# Template - renders all manifests without actually deploying
helm template argocd ./charts/argo-cd -n argocd | head -50

The template output should show correctly namespaced ServiceAccounts, Deployments, and Services. If either of these commands throws errors, fix them now rather than debugging a half-deployed release.
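For a slightly stricter check, you can also pipe the rendered manifests through kubectl's client-side dry run, which parses every object and catches malformed resources without touching the cluster:

```shell
# Render and validate the manifests locally; nothing is created
helm template argocd ./charts/argo-cd -n argocd \
  | kubectl apply --dry-run=client -f -
```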

Step 6 - Deploy Argo CD

Now for the good bit. Create the namespace and install the chart:

# Create the namespace
kubectl create namespace argocd

# Install the Helm release
helm upgrade --install argocd ./charts/argo-cd -n argocd

The install completes in seconds:

Release "argocd" does not exist. Installing it now.
NAME: argocd
NAMESPACE: argocd
STATUS: deployed
REVISION: 1

The default values are actually well suited for a local deployment. You get single replicas, no HA overhead, the server runs in insecure mode (no TLS, which is fine for local), and the CRDs are installed automatically.
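If you want to see at a glance what the release actually created, Helm can summarize the rendered manifest for you:

```shell
# List releases in the namespace
helm list -n argocd

# Count the kinds of objects this release deployed
helm get manifest argocd -n argocd | grep '^kind:' | sort | uniq -c
```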

Step 7 - Create an Ingress Resource

To access Argo CD via a hostname in the browser, we need an Ingress resource. Save this as argocd-ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  namespace: argocd
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              number: 80

Apply it:

kubectl apply -f argocd-ingress.yaml

Step 8 - Verify the Deployment

Check that all 7 pods are running:

kubectl get pods -n argocd
NAME                                                READY   STATUS
argocd-application-controller-0                     1/1     Running
argocd-applicationset-controller-5b75664654-srgxg   1/1     Running
argocd-dex-server-67c488786f-zfgxw                  1/1     Running
argocd-notifications-controller-79d4f66df4-h9b2t    1/1     Running
argocd-redis-85dfb46f6c-7trmh                       1/1     Running
argocd-repo-server-6d7fc66fbc-sjlv4                 1/1     Running
argocd-server-798f5b55f9-796qz                      1/1     Running

Verify the CRDs were created:

kubectl get crd | grep argoproj
applications.argoproj.io      2026-03-11T18:03:23Z
applicationsets.argoproj.io   2026-03-11T18:03:23Z
appprojects.argoproj.io       2026-03-11T18:03:23Z

Check the ingress:

kubectl get ingress -n argocd
NAME     CLASS   HOSTS          ADDRESS        PORTS
argocd   nginx   argocd.local   192.168.49.2   80

Step 9 - Access the UI

This is where macOS with the Docker driver gets a bit fiddly. The ingress isn't directly reachable at the Minikube node IP, so we need to do a couple of extra things.

First, add a hosts entry so your browser knows where to find argocd.local:

echo '127.0.0.1 argocd.local' | sudo tee -a /etc/hosts

Then, open a separate terminal and start the Minikube tunnel:

minikube tunnel

This creates a network route so that traffic to 127.0.0.1 actually reaches the NGINX ingress controller inside the cluster. You'll need to leave this running.
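Before reaching for the browser, you can sanity-check the routing from another terminal with curl while the tunnel is running; an HTTP status code (rather than a connection refused error) means traffic is reaching the ingress:

```shell
# Print just the HTTP status code returned via the ingress
curl -s -o /dev/null -w '%{http_code}\n' http://argocd.local/
```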

Now grab the admin password:

kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d

Open your browser at http://argocd.local and log in with the username admin and the password from the command above.
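If you prefer the terminal, the argocd CLI (brew install argocd) can log in through the same ingress; --insecure is needed because the server is running without TLS, and --grpc-web keeps the CLI happy behind NGINX:

```shell
# Capture the initial admin password
PASSWORD=$(kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d)

# Log in via the CLI over the ingress
argocd login argocd.local --username admin --password "$PASSWORD" \
  --insecure --grpc-web
```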

If you'd rather skip the ingress, tunnel, and hosts entry altogether, port-forwarding gets you to the UI in one command:

kubectl port-forward svc/argocd-server -n argocd 8080:80

Then just open http://localhost:8080.

Issues We Hit

Problem: RSRC_DOCKER_STORAGE on minikube start
Cause:   Docker at 99% disk capacity
Fix:     docker system prune -a --volumes -f freed 40 GB

Problem: Minikube kubelet/apiserver stuck Stopped
Cause:   Corrupted profile after the disk-full event
Fix:     minikube delete, then a fresh minikube start

Problem: zsh: command not found: helm
Cause:   Helm not installed
Fix:     brew install helm

Problem: DNS_PROBE_FINISHED_NXDOMAIN in browser
Cause:   No /etc/hosts entry for the ingress hostname
Fix:     Added 127.0.0.1 argocd.local to /etc/hosts

Problem: Ingress unreachable despite hosts entry
Cause:   macOS Docker driver doesn't expose node IPs
Fix:     minikube tunnel routes 127.0.0.1 into the cluster

The Full Command Sequence

For reference, here's every command in order:

# 1. Free Docker disk space (if needed)
docker system prune -a --volumes -f

# 2. Start Minikube
minikube delete  # only if existing cluster is broken
minikube start --memory=4096 --cpus=2 --kubernetes-version=v1.28.0

# 3. Install prerequisites
brew install helm
minikube addons enable ingress

# 4. Clone and build chart dependencies
git clone https://github.com/argoproj/argo-helm.git && cd argo-helm
helm repo add dandydeveloper https://dandydeveloper.github.io/charts/
helm repo update
cd charts/argo-cd && helm dependency build && cd ../..

# 5. Validate
helm lint ./charts/argo-cd
helm template argocd ./charts/argo-cd -n argocd

# 6. Deploy
kubectl create namespace argocd
helm upgrade --install argocd ./charts/argo-cd -n argocd

# 7. Create ingress (save the YAML from Step 7 as argocd-ingress.yaml)
kubectl apply -f argocd-ingress.yaml

# 8. Access
echo '127.0.0.1 argocd.local' | sudo tee -a /etc/hosts
minikube tunnel  # run in a separate terminal
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d
# Open http://argocd.local

What I Learned

Docker disk space is the thing that'll get you first. Old volumes and build cache pile up quietly, and then one day Minikube just refuses to start. Get into the habit of running docker system df before you start debugging anything else.

On macOS, minikube tunnel is non-negotiable if you want to use ingress. The Docker driver doesn't expose node IPs to the host, so without the tunnel your ingress resources exist inside the cluster but you can't actually reach them from a browser.

Running helm lint and helm template before deploying is a small step that saves a lot of pain. A broken template is much easier to debug before it hits the cluster than after.

The default chart values turned out to be surprisingly Minikube-friendly. Single replicas, no HA, no resource limits, insecure server mode. Everything just works on a local cluster without any overrides.

And if ingress, DNS, or the tunnel gives you grief, kubectl port-forward bypasses all of it. It's not as clean, but it gets you to the UI immediately.