Deploying Harbor Registry to Minikube: A Local Private Container Registry

DevOps Practitioner
6 minute read
A private container registry on your laptop, database and all, in one Helm install.
Introduction
Harbor is an open source container registry that handles image storage, signing, and vulnerability scanning. If you want a private registry running locally for development or testing, Harbor on Minikube is a solid option. The Helm chart bundles everything you need, including PostgreSQL and Redis, so there's nothing to provision externally.
This post walks through the full deployment on macOS with the Docker driver, covering the setup, the configuration, and the one thing that tripped us up.
What You'll Need
Before starting, make sure you have Docker Desktop installed and running, Minikube (brew install minikube), kubectl (brew install kubectl), and Helm (brew install helm).
Harbor deploys 7 pods and a few of them have real memory requirements, so you'll want at least 8 GB of RAM allocated to Minikube:
minikube start --memory=8192 --cpus=4 --kubernetes-version=v1.28.0
You'll also need the NGINX Ingress addon enabled and minikube tunnel running in a separate terminal:
minikube addons enable ingress
minikube tunnel # leave this running
If you followed our Argo CD Minikube guide, you've already done all of this.
Background: What Harbor Actually Deploys
Harbor isn't a single container. It's more like a small ecosystem. The Helm chart (v1.13.1, Harbor v2.9.1) creates up to 9 different workloads depending on your configuration:
- Core - the main API server that handles authentication, projects, and coordination
- Portal - the web UI, served by a static nginx container
- Registry - the actual Docker Distribution server plus a Harbor registry controller (two containers in one pod)
- Jobservice - handles async work like replication, garbage collection, and scan jobs
- Database - PostgreSQL, deployed as a StatefulSet
- Redis - cache and job queue, also a StatefulSet
- Trivy - vulnerability scanner for container images
- Nginx - reverse proxy, but only when you're not using ingress (so we skip it)
- Exporter - Prometheus metrics, only when metrics.enabled is true (we skip this too)
On Minikube we end up with 7 pods. Several of them (core, database, trivy) use a decent amount of memory, which is why the 8 GB RAM allocation matters.
Step 1 - Create the Values File
We created deployment/minikube-values.yaml with values tailored for a single-node local cluster. Here are the key settings:
Database is set to internal, which tells the chart to deploy PostgreSQL as a StatefulSet with its own persistent volume. No need to set up an external database. We set the password to changeit.
Ingress uses controller: default and className: nginx to work with the Minikube ingress addon. The hostname is harbor.johno.local and the externalURL matches it.
Storage uses empty storageClass values, which means Minikube's default standard class picks up the PVCs automatically.
Resource limits are set conservatively. We gave Trivy a 1 Gi memory limit to stop it eating all the available RAM, and set lower database connection pool sizes (maxIdleConns: 50, maxOpenConns: 100) since we're running on a single node.
IPv6 is disabled. Minikube is IPv4-only, so ipFamily.ipv6.enabled: false.
TLS is set to auto-generate self-signed certificates. Fine for local dev.
Redis runs internally as a StatefulSet, same as the database. No external Redis needed.
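Pulling those settings together, a minimal deployment/minikube-values.yaml looks something like this. Treat it as a sketch against the Harbor chart's values schema rather than a drop-in file; double-check key names against the chart version you're using:

```yaml
# deployment/minikube-values.yaml - sketch for a single-node Minikube cluster
expose:
  type: ingress
  tls:
    enabled: true
    certSource: auto          # self-signed cert, fine for local dev
  ingress:
    className: nginx
    controller: default
    hosts:
      core: harbor.johno.local

externalURL: https://harbor.johno.local

ipFamily:
  ipv6:
    enabled: false            # Minikube is IPv4-only

harborAdminPassword: Harbor12345

database:
  type: internal              # PostgreSQL StatefulSet with its own PVC
  internal:
    password: changeit
  maxIdleConns: 50            # lower pool sizes for a single node
  maxOpenConns: 100

redis:
  type: internal              # Redis StatefulSet, no external service

trivy:
  resources:
    limits:
      memory: 1Gi             # stop Trivy eating all the available RAM
```

Leaving the persistence storageClass values empty (the chart default) is what lets Minikube's standard StorageClass pick up the PVCs.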
Step 2 - Validate the Chart
Always worth checking before you deploy:
# Lint - catches template syntax issues
helm lint . -f deployment/minikube-values.yaml
# => 1 chart(s) linted, 0 chart(s) failed
# Template - renders all manifests without deploying
helm template harbor . -n harbor -f deployment/minikube-values.yaml | head -100
Harbor has no external chart dependencies, so there's no helm dependency build step. Everything is self-contained in the templates/ directory.
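For a stricter check than linting, you can pipe the rendered manifests through a client-side dry run. This assumes kubectl is already pointed at the Minikube cluster:

```shell
# Render the chart and validate the manifests client-side
# without creating anything on the cluster.
helm template harbor . -n harbor -f deployment/minikube-values.yaml \
  | kubectl apply --dry-run=client -f - > /dev/null && echo "manifests OK"
```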
Step 3 - Deploy
Create the namespace and install:
kubectl create namespace harbor
helm install harbor -n harbor -f deployment/minikube-values.yaml .
That's it. The chart handles everything: ingress, TLS, database, Redis.
NAME: harbor
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://harbor.johno.local
Step 4 - Wait for Pods
Harbor takes a few minutes to come up. The database StatefulSet needs to initialise PostgreSQL, and the core pod won't pass its startup probe until the database is ready. Give it 3 to 5 minutes:
kubectl -n harbor get pods -w
After about 10 minutes, everything was up:
NAME READY STATUS RESTARTS AGE
harbor-core-798588ddb9-j6fn9 1/1 Running 0 10m
harbor-database-0 1/1 Running 0 10m
harbor-jobservice-6b7488cdc4-kwznp 1/1 Running 4 (9m ago) 10m
harbor-portal-6c84fd9478-zwbzh 1/1 Running 0 10m
harbor-redis-0 1/1 Running 0 10m
harbor-registry-86f7d55cfd-2ks4c 2/2 Running 0 10m
harbor-trivy-0 1/1 Running 0 10m
Don't worry about the 4 restarts on jobservice. It depends on core, which depends on the database. The pod just retries until its dependencies are ready. This resolves itself and isn't a sign of misconfiguration.
We also checked that all the PVCs were bound:
kubectl -n harbor get pvc
Five PVCs (registry 5Gi, database 1Gi, redis 1Gi, trivy 2Gi, jobservice 1Gi), all Bound to dynamically provisioned volumes via the standard StorageClass.
Step 5 - Set Up DNS
Add the hostname to /etc/hosts:
echo '127.0.0.1 harbor.johno.local' | sudo tee -a /etc/hosts
Use 127.0.0.1, not the Minikube node IP (192.168.49.2). On macOS with the Docker driver, minikube tunnel routes traffic via localhost, not the node IP. If you use 192.168.49.2 you'll get ERR_CONNECTION_REFUSED in the browser.
A quick curl confirmed it:
# Works:
curl -sk https://127.0.0.1 -H "Host: harbor.johno.local" # => 200
# Doesn't work:
curl -sk https://192.168.49.2 -H "Host: harbor.johno.local" # => connection refused
Step 6 - Access the UI
Open https://harbor.johno.local in your browser. You'll get a "Not Secure" warning because the TLS certificate is auto-generated and self-signed. Click through it.
Log in with:
- Username: admin
- Password: Harbor12345
The password comes from the harborAdminPassword field in the values file. Change it after first login under Administration > Configuration.
After logging in you'll see the default library project, which is a public project that any authenticated user can push to. There's a blue banner at the top confirming the Trivy scanner has been installed automatically.
Step 7 - Test Docker Push/Pull (Optional)
If you want to actually push images to Harbor, you need to configure Docker to trust the self-signed certificate:
# Extract the auto-generated CA cert
kubectl -n harbor get secret harbor-ingress \
-o jsonpath='{.data.ca\.crt}' | base64 -d > harbor-ca.crt
# Option A: Add to the macOS trust store
sudo security add-trusted-cert -d -r trustRoot \
-k /Library/Keychains/System.keychain harbor-ca.crt
# Option B: Configure Docker Desktop to trust it
mkdir -p ~/.docker/certs.d/harbor.johno.local
cp harbor-ca.crt ~/.docker/certs.d/harbor.johno.local/ca.crt
# Test it
docker login harbor.johno.local -u admin -p Harbor12345
docker tag nginx:latest harbor.johno.local/library/nginx:latest
docker push harbor.johno.local/library/nginx:latest
Issues We Hit
| Problem | Cause | Fix |
|---|---|---|
| ERR_CONNECTION_REFUSED in browser | /etc/hosts pointed to 192.168.49.2 instead of 127.0.0.1 | Changed hosts entry to use 127.0.0.1 |
| Jobservice pod restarting 4 times | Normal startup ordering, waits for core and database | No fix needed, resolves itself in a few minutes |
| Browser shows "Not Secure" warning | Auto-generated self-signed TLS certificate | Expected for local dev, click through it |
The Full Command Sequence
Assuming Minikube and the ingress addon are already running:
# 1. Create namespace
kubectl create namespace harbor
# 2. Validate the chart
helm lint . -f deployment/minikube-values.yaml
helm template harbor . -n harbor -f deployment/minikube-values.yaml | head -100
# 3. Deploy
helm install harbor -n harbor -f deployment/minikube-values.yaml .
# 4. Wait for pods
kubectl -n harbor get pods -w
# 5. Verify PVCs
kubectl -n harbor get pvc
# 6. DNS resolution
echo '127.0.0.1 harbor.johno.local' | sudo tee -a /etc/hosts
# 7. Access the UI
# Open https://harbor.johno.local
# Login: admin / Harbor12345
To tear it all down:
helm uninstall harbor -n harbor
kubectl delete namespace harbor
sudo sed -i '' '/harbor.johno.local/d' /etc/hosts
Architecture on Minikube
Browser (https://harbor.johno.local)
|
127.0.0.1 (via minikube tunnel)
|
[NGINX Ingress Controller] (minikube addon)
|
┌─────┴─────────────────┐
│    harbor-ingress     │
└─────┬─────────────────┘
      ├── /           → harbor-portal (Web UI)
      ├── /api/       → harbor-core   (REST API)
      ├── /v2/        → harbor-core   (Docker Registry API)
      ├── /service/   → harbor-core   (Token service)
      ├── /chartrepo/ → harbor-core   (Chart museum)
      └── /c/         → harbor-core   (Misc routes)
Internal cluster traffic:
harbor-core ──▶ harbor-database   (PostgreSQL :5432)
harbor-core ──▶ harbor-redis      (Redis :6379)
harbor-core ──▶ harbor-registry   (Distribution :5000)
harbor-core ──▶ harbor-jobservice (Async jobs :8080)
harbor-core ──▶ harbor-trivy      (Vuln scanning :8080)
The ingress controller routes based on the Host header, so if you're running other services on the same cluster with different hostnames, they'll coexist without any additional configuration.
What I Learned
The internal database and Redis make this surprisingly self-contained. Setting database.type: internal tells the chart to deploy PostgreSQL as a StatefulSet with its own PVC. Same for Redis. No external services to provision, no connection strings to wire up. The entire deployment is one helm install command.
The hosts file IP is the one gotcha. With the Docker driver, minikube tunnel exposes services on 127.0.0.1, not the node IP. If you put 192.168.49.2 in your hosts file, nothing will connect. Always use 127.0.0.1.
Resource allocation matters. Harbor deploys 7 pods and several of them have real memory requirements. Anything less than 8 GB allocated to Minikube will likely result in OOMKilled pods or scheduling failures. If you're running other workloads on the same cluster, allocate more.
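If you want to see where the memory actually goes, enable the metrics-server addon and check per-pod usage. A sketch; metrics-server needs a minute or two to start collecting before top returns data:

```shell
minikube addons enable metrics-server
# Once metrics are flowing, show Harbor pods ordered by memory usage.
kubectl -n harbor top pods --sort-by=memory
```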
The Helm chart is well-designed. Ingress, TLS, storage, internal services are all configurable from the values file. You don't need to apply any extra manifests. For a chart that deploys 9 potential components, the experience of actually using it is clean.