To follow along, you will need two manifests: base-workload.yaml and high-priority.yaml.
Depending on which solution you want to try, you'll also want one or more additional files, which will be linked in each solution section. Start by applying the base workload:

$ kubectl apply -f base-workload.yaml
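The exact contents are in the linked file, but base-workload.yaml is presumably an ordinary Deployment whose pods each request 256 MiB of memory (it also appears to create the marker-pod that shows up in the pod listings below). A minimal sketch, in which the label, image, and starting replica count are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: base-workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: base-workload                 # label name is an assumption
  template:
    metadata:
      labels:
        app: base-workload
    spec:
      containers:
      - name: base-workload
        image: registry.k8s.io/pause:3.9 # placeholder image (assumption)
        resources:
          requests:
            memory: 256Mi                # one quarter GiB per replica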
After applying this manifest you'll want to scale the base-workload deployment. The number of replicas will depend on your node size: each replica requests 256 MiB of memory, and you want to scale the deployment so that there is just under 1 GiB (and at least 512 MiB) of unrequested memory left on the node running these pods. (For example, if the node currently has 2 GiB of memory not yet requested, five additional replicas would bring that down to 768 MiB, which is inside the target range.) To find the node the pods are running on, substituting the label selector used in base-workload.yaml:

$ kubectl get pods -o wide -l <label-selector>
Then, to find out the amount of free memory on the node, look at the "Allocated resources" section of the node description; the free memory is the node's allocatable memory minus the requests shown there:

$ kubectl describe node <node-name> | grep -A 10 'Allocated resources'
And finally, to scale the deployment:

$ kubectl scale deployment base-workload --replicas=<count>
Next, deploy the high-priority workload and take a look at the pods:

$ kubectl apply -f high-priority.yaml
$ kubectl get pods -l <label-selector>
You should notice that everything but the high-priority workload has scheduled; for example:

NAME                             READY   STATUS    RESTARTS   AGE
base-workload-9c4b4bf97-5mqvx    1/1     Running   0          5m20s
base-workload-9c4b4bf97-5rjf5    1/1     Running   0          5m20s
base-workload-9c4b4bf97-g7l86    1/1     Running   0          5m20s
base-workload-9c4b4bf97-rlqgj    1/1     Running   0          5m20s
base-workload-9c4b4bf97-t27kk    1/1     Running   0          5m2s
base-workload-9c4b4bf97-v8mw8    1/1     Running   0          8m40s
high-priority-569db858cc-q6gxd   0/1     Pending   0          7s
marker-pod-56d89b6d94-rw9zw      1/1     Running   0          8m41s
Describing the high-priority pod should tell you that preemption failed:
$ kubectl describe pods -l <label-selector>
Name:                 high-priority-569db858cc-q6gxd
Namespace:            default
Priority:             20241113
Priority Class Name:  kubecon-demo
Service Account:      default
[...]
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  6m18s  default-scheduler  0/1 nodes are available: 1 Insufficient memory. preemption: 0/1 nodes are available: 1 node(s) didn't match pod affinity rules.
  Warning  FailedScheduling  60s    default-scheduler  0/1 nodes are available: 1 Insufficient memory. preemption: 0/1 nodes are available: 1 node(s) didn't match pod affinity rules.
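The output above shows where the pieces come from: the pod's priority (20241113) is set by a PriorityClass named kubecon-demo, and preemption is refused because, according to the scheduler, evicting the lower-priority pods would leave the node unable to satisfy the pod's own required pod affinity. As a rough sketch of what the PriorityClass and high-priority.yaml might contain (the affinity target, labels, image, and memory request are all assumptions; the real manifests are the linked ones):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: kubecon-demo
value: 20241113
globalDefault: false
description: Priority class for the demo's high-priority workload
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: high-priority
spec:
  replicas: 1
  selector:
    matchLabels:
      app: high-priority
  template:
    metadata:
      labels:
        app: high-priority
    spec:
      priorityClassName: kubecon-demo
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: base-workload       # assumption: affinity toward the lower-priority pods
            topologyKey: kubernetes.io/hostname
      containers:
      - name: high-priority
        image: registry.k8s.io/pause:3.9 # placeholder image (assumption)
        resources:
          requests:
            memory: 1Gi                  # assumption: more than the free memory left on the node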
One fix is to make the high-priority workload small enough that it fits into the free memory without preempting anything. Delete the high-priority deployment and apply the resized version:

$ kubectl delete deployment -l <label-selector>
$ kubectl apply -f high-priority-resized.yaml
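Judging by its name, high-priority-resized.yaml presumably differs from high-priority.yaml only in the memory request, which needs to drop below the free memory you left on the node when scaling the base workload; for example (the exact value is an assumption):

        resources:
          requests:
            memory: 512Mi   # assumption: no larger than the node's remaining free memory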
(We could also downsize the base workload, but the math is easier for resizing just one pod.)

Another fix is to drop the pod-affinity requirement so that preemption can go ahead. Again, delete the high-priority deployment, then apply the no-affinity variant:

$ kubectl delete deployment -l <label-selector>
$ kubectl apply -f high-priority-no-affinity.yaml
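Likewise, high-priority-no-affinity.yaml is presumably the same high-priority deployment with the affinity stanza removed, so that nothing stops the scheduler from preempting the lower-priority pods; the pod template would then reduce to roughly this (image is an assumption):

    spec:
      priorityClassName: kubecon-demo
      # no podAffinity block, so preemption is no longer blocked by affinity
      containers:
      - name: high-priority
        image: registry.k8s.io/pause:3.9   # placeholder image (assumption)
        resources:
          requests:
            memory: 1Gi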
A third option is to give the base workload the same high priority. Delete both the high-priority and base-workload deployments, then apply the high-priority version of the base workload:

$ kubectl delete deployment -l <label-selector>
$ kubectl delete deployment -l <label-selector>
$ kubectl apply -f base-workload-high-priority.yaml
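Presumably base-workload-high-priority.yaml is the same base workload with the demo priority class added to its pod template, roughly (image is an assumption):

    spec:
      priorityClassName: kubecon-demo      # the base pods now have the same priority as the high-priority pod
      containers:
      - name: base-workload
        image: registry.k8s.io/pause:3.9   # placeholder image (assumption)
        resources:
          requests:
            memory: 256Mi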
Scale the deployment as you did before, then deploy the high-priority pod:

$ kubectl apply -f high-priority.yaml
Note that nothing is preempted, though the events on the pending pod are slightly different:

$ kubectl describe po -l <label-selector>
[...]
  Warning  FailedScheduling  6m19s  default-scheduler  0/1 nodes are available: 1 Insufficient memory. preemption: 0/1 nodes are available: 1 Insufficient memory.
Check the pods again:

$ kubectl get pods -l <label-selector>
You should notice that the high-priority pod is now scheduled. (Note that one or more of the base-workload pods may be Pending, depending on the solution you implemented and how many nodes you have.) When you're finished, clean up by deleting the deployments and the priority class:

$ kubectl delete deployment -l <label-selector>
$ kubectl delete priorityclass kubecon-demo