# Horizontal Pod Autoscaler
Seven applications have HPA enabled for automatic scaling based on CPU utilization.
## HPA-Managed Applications
| Application | Min Replicas | Max Replicas | CPU Target |
|---|---|---|---|
| bookmarked | 1 | 10 | 80% |
| educationelly | 1 | 10 | 80% |
| educationelly-graphql | 1 | 10 | 80% |
| firebook | 1 | 10 | 80% |
| intervalai | 1 | 10 | 80% |
| portfolio-api | 1 | 10 | 80% |
| tenantflow | 1 | 10 | 80% |
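For illustration, enabling HPA for one of these apps could look like the following `values.yaml` fragment. Only `autoscaling.enabled` and `replicaCount` are confirmed by the chart template shown below; the layout of the `autoscaling` block mirrors the standard Helm chart scaffold and is an assumption here:

```yaml
# bookmarked/values.yaml -- hedged sketch, not the actual file
replicaCount: 1          # used only when autoscaling is disabled
autoscaling:
  enabled: true          # hands replica ownership to the HPA
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```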
## The HPA vs ArgoCD Flapping Problem
When HPA is managing replica counts, ArgoCD can enter an infinite sync loop:
### Root Cause
The portfolio-common deployment template always rendered spec.replicas, even when HPA was enabled. ArgoCD saw the Git-declared replica count as the desired state, while HPA continuously adjusted it.
### The Fix (Two-Part)
1. Conditional replicas in the library chart:
```yaml
# portfolio-common/templates/_deployment.tpl
{{- if not $.Values.autoscaling.enabled }}
replicas: {{ $.Values.replicaCount }}
{{- end }}
```
When autoscaling.enabled: true, the spec.replicas field is omitted entirely, letting HPA be the sole owner.
2. ArgoCD `ignoreDifferences`:
As a safety net, ArgoCD Application manifests include:
```yaml
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas
```
This tells ArgoCD to ignore the spec.replicas field when computing drift, preventing sync loops even if the field is present.
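In context, the safety net lives on the ArgoCD Application resource. A minimal sketch follows; the `ignoreDifferences` block is taken from the snippet above, while the repo URL, paths, namespaces, and sync policy are placeholders, not the actual configuration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bookmarked          # placeholder app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/portfolio.git  # placeholder
    path: charts/bookmarked                     # placeholder
  destination:
    server: https://kubernetes.default.svc
    namespace: bookmarked
  # Safety net: exclude HPA-owned replica counts from drift detection
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas
  syncPolicy:
    automated:
      selfHeal: true        # assumption: self-heal may be enabled
```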
### Symptoms
- ArgoCD shows an "OutOfSync" status alongside a "successfully synced" message
- Pods constantly recycling (terminate → create → terminate)
- ArgoCD sync history shows rapid successive syncs
## PodDisruptionBudgets
HPA-managed and multi-replica applications have PodDisruptionBudgets (PDBs) to protect availability during voluntary disruptions like node drains, cluster upgrades, or rolling updates.
The library chart includes a `_pdb.tpl` template, and each app includes a thin `pdb.yaml` wrapper:
```yaml
# bookmarked/templates/pdb.yaml
{{- include "portfolio-common.pdb" . }}
```
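The shared template would render something along these lines. This is a sketch only: the `minAvailable` value and the selector labels come from the chart and are assumptions here, not confirmed output:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: bookmarked              # placeholder app name
spec:
  minAvailable: 1               # assumption: likely exposed via values
  selector:
    matchLabels:
      app.kubernetes.io/name: bookmarked  # assumed label convention
```

With `minAvailable: 1`, the eviction API refuses to drain the last healthy pod of the service, which is the behavior the next paragraph relies on.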
### Applications with PDBs
| Application | Why |
|---|---|
| bookmarked | HPA-enabled, multi-replica |
| educationelly | HPA-enabled, multi-replica |
| educationelly-graphql | HPA-enabled, multi-replica |
| intervalai | HPA-enabled, multi-replica |
| portfolio-gatsby | Multi-replica static site |
PDBs ensure the cluster maintains a minimum number of available pods when a node is drained for maintenance. On a 3-node cluster this is critical: draining one node displaces roughly a third of the workloads, and without PDBs all replicas of a service could be evicted simultaneously.