# K3s Cluster Topology
The platform runs on a 3-node K3s cluster — a lightweight, certified Kubernetes distribution. The cluster consists of 1 server (control plane) node and 2 worker nodes.
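As a sketch of how this topology is typically brought up, the standard K3s install script can bootstrap one server and join two agents (the IP and token below are placeholders; the commands are the stock K3s installer invocations, not taken from this platform's provisioning scripts):

```shell
# On the server (control plane) node
curl -sfL https://get.k3s.io | sh -s - server

# Read the join token generated on the server
sudo cat /var/lib/rancher/k3s/server/node-token

# On each of the two worker (agent) nodes, join using the server URL and token
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 \
    K3S_TOKEN=<node-token> sh -
```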
## Node Architecture

## Scheduling & Availability
The cluster uses several mechanisms to distribute workloads and maintain availability across the 3 nodes:
- Soft node affinity: Applications use `preferredDuringSchedulingIgnoredDuringExecution` instead of a hard `nodeSelector`, allowing the scheduler flexibility when nodes are under pressure
- Topology spread constraints: Multi-replica deployments spread pods across nodes using `topologySpreadConstraints` with `maxSkew: 1`
- PriorityClasses: Cluster-wide priority classes ensure critical workloads (databases, ingress, monitoring) are scheduled before lower-priority pods during resource contention
- Resource quotas: Namespace-level resource quotas prevent any single namespace from consuming all cluster resources
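The affinity and spread mechanisms above can be sketched in a single pod template. This is a minimal illustration, not an actual manifest from the platform: the deployment name, labels, and PriorityClass name are placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                  # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      priorityClassName: platform-critical   # assumed PriorityClass name
      affinity:
        nodeAffinity:
          # Soft preference: the scheduler tries matching nodes first,
          # but may place the pod elsewhere when nodes are under pressure
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: node-role.kubernetes.io/worker
                    operator: Exists
      topologySpreadConstraints:
        # Keep per-node replica counts within 1 of each other
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: api
      containers:
        - name: api
          image: api:latest
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

Using `ScheduleAnyway` (rather than `DoNotSchedule`) keeps the spread constraint soft, matching the section's preference for scheduling flexibility over strict placement.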
## Namespace Layout

## Application Deployment Pattern
Each application follows a consistent pattern:
- Client deployment: Nginx serving the built frontend (React/Storybook)
- Server deployment (where applicable): Node.js Express/Apollo API server
- Service: ClusterIP service for internal routing
- Ingress: Standard Kubernetes Ingress with `ingressClassName: traefik`
- HPA: Horizontal Pod Autoscaler for auto-scaling
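The Service, Ingress, and HPA pieces of this pattern can be condensed into one sketch. All names and the hostname below are illustrative, not taken from the platform's real manifests:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-client           # hypothetical client deployment's service
spec:
  type: ClusterIP              # internal routing only
  selector:
    app: myapp-client
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: traefik    # Traefik is K3s's bundled ingress controller
  rules:
    - host: myapp.example.com  # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-client
                port:
                  number: 80
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-client
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-client
  minReplicas: 2               # two replicas let the spread constraints take effect
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```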
## Interactive Components
Explore extracted UI components from each application in the Storybook Showcase. See the Applications Overview for per-app story links.