Oleg Atamanenko

thoughts about programming

Deploying in Kubernetes. Checklist.

While Kubernetes is easy to start with, it is quite challenging to master and know in detail. In this post I provide a checklist of important manifest stanzas that are applicable to most applications targeted to run in production and expected to have no downtime during cluster maintenance and/or application updates.

Deploying to Kubernetes is easy: create a manifest with your Deployment and then kubectl apply it.

The most basic deployment manifest looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: test-app
  name: test-app
  namespace: default
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: test-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: test-app
    spec:
      containers:
      - name: controller
        image: nginx

It works as is, but you can improve the reliability of this deployment by configuring a few additional fields in the manifest.


Metadata

Use the metadata field efficiently. You can add labels indicating who owns the deployment, whether it is part of a bigger project, and so on. This will allow you to discover, for example, all deployments owned by a specific department:

kubectl get deploy -n production -l department=marketing
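The labels in that query could be declared in the Deployment's metadata like this (the department and app.kubernetes.io/part-of values are illustrative):

```yaml
metadata:
  name: test-app
  namespace: production
  labels:
    app.kubernetes.io/name: test-app
    app.kubernetes.io/part-of: marketing-site   # hypothetical parent project
    department: marketing                       # matched by the kubectl query above
```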

CPU/Memory Requests/Limits

Correct configuration is important. Requests are used for scheduling: the kubelet reports the node's capacity to the scheduler, and the scheduler uses this information when deciding where a pod will be assigned. Limits are enforced at runtime.

Things to remember:

  1. If you go over the memory limit, the app will be OOMKilled.
  2. If you go over the CPU limit, the app will be throttled. In fact it is more complicated, and your app may be throttled before reaching the limit, but that is a topic for another post.
  3. The requests/limits configuration determines the Quality of Service (QoS) class of the pod.
  4. The QoS class affects what happens to your pod when the kubelet on the node runs out of resources.
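A sketch of what requests/limits could look like for the container from the manifest above (the numbers are illustrative, not recommendations):

```yaml
containers:
- name: controller
  image: nginx
  resources:
    requests:
      cpu: 100m        # used by the scheduler for placement
      memory: 128Mi
    limits:
      cpu: 500m        # exceeding this gets the container throttled
      memory: 256Mi    # exceeding this gets the container OOMKilled
```

Note that setting requests equal to limits yields the Guaranteed QoS class; the configuration above yields Burstable.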

Do not use latest tag in images

I spoke about this previously. Use an exact version, e.g. nginx:1.19.2. This is better than latest, but it is even better to pin the sha256 digest of the image:

image: nginx@sha256:9d660d69e53c286fbdd472122b4b583a46e8a27f10515e415d2578f8478b9aad

Update Strategy

The default strategy is RollingUpdate. If you run multiple replicas of the application, consider tuning maxUnavailable/maxSurge based on your requirements. If your app has multiple replicas and each replica requires a lot of CPU/memory, a non-zero maxSurge might require the autoscaler to provision extra nodes.
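One possible tuning, keeping the full replica count available during a rollout at the cost of one surge pod (values are illustrative):

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # create at most one extra pod during the rollout
```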

Service Accounts

By default, each deployment uses the default service account. If the app requires access to the Kubernetes API, consider creating a separate service account for it. This will improve your isolation and security:

  1. Use PodSecurityPolicy for fine-grained authorization of what a pod is allowed to do on the node.
  2. Use RBAC ClusterRoleBinding/RoleBinding to control permissions for resources in Kubernetes.
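A minimal sketch of a dedicated service account plus a namespaced Role granting read-only access to pods (all names and permissions here are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-app
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-app-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-app-reader
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-app-reader
subjects:
- kind: ServiceAccount
  name: test-app
  namespace: default
```

The Deployment then references it via serviceAccountName: test-app in the pod spec.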


Security Context

securityContext allows you to control the security context of the pod. It is recommended to enforce runAsNonRoot. See the documentation.
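A sketch of pod- and container-level security settings (the user ID is illustrative; note that the stock nginx image needs extra configuration to actually run as a non-root user):

```yaml
spec:
  securityContext:              # pod-level settings
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: controller
    image: nginx
    securityContext:            # container-level settings
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
```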

Pod Disruption Budget

If you have multiple replicas of the application, create a PodDisruptionBudget for the Deployment. See the documentation for more details.
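A PodDisruptionBudget for the example Deployment might look like this (minAvailable: 1 is illustrative; maxUnavailable is the alternative knob):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: test-app
  namespace: default
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: test-app
```

This prevents voluntary disruptions (e.g. node drains during cluster maintenance) from taking down more pods than the budget allows.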

Liveness & Readiness Probes

I cannot stress enough how important these are. I have seen applications taken down when they should not have been, and vice versa. Your biggest nightmare is a rollout where the new pods are crashlooping, yet the old pods are terminated because Kubernetes considers the new ones ready.
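A sketch of both probes for the nginx container (the paths, ports, and timings are illustrative; a real app should expose dedicated health endpoints):

```yaml
containers:
- name: controller
  image: nginx
  readinessProbe:              # gates traffic and rollout progress
    httpGet:
      path: /
      port: 80
    periodSeconds: 5
  livenessProbe:               # restarts the container when it is stuck
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 10
    periodSeconds: 10
```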

Lifecycle Hook - Post Start / Pre Stop

Lifecycle hooks allow you to gracefully terminate the application: for example, you can finish the current request, save state, and then terminate. See the documentation.
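A common preStop sketch is a short sleep that gives load balancers time to remove the pod from endpoints before the container receives SIGTERM (the 5-second delay is illustrative):

```yaml
spec:
  terminationGracePeriodSeconds: 30   # total time budget for shutdown
  containers:
  - name: controller
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 5"]
```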

Priority Classes

Not all apps are created equal; some are more important than others. Consider defining priority classes and using the appropriate priority class for each application. Read more in the official documentation.
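A PriorityClass might be defined like this (the name, value, and description are illustrative):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: business-critical
value: 100000                 # higher value means higher priority
globalDefault: false
description: "For customer-facing services that must not be preempted."
```

Pods opt in via priorityClassName: business-critical in the pod spec.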

Taints and tolerations

Sometimes there are specific requirements about where an application should or should not run. Taints allow you to mark nodes to prevent regular workloads from being scheduled on them. To allow a workload to be scheduled on tainted nodes, use tolerations. See the documentation.
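For example, assuming a node was tainted with kubectl taint nodes node-1 dedicated=gpu:NoSchedule (the key and value are illustrative), a pod opts in with a matching toleration:

```yaml
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
```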

Affinities and anti-affinities

Affinities and anti-affinities give you more control over where the workload is scheduled. For example, you might want to use pod anti-affinity to distribute replicas of the application across different nodes or availability zones. See the documentation for details. Another great feature you might need is the Topology Manager, for better allocation of the workload.
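A sketch of pod anti-affinity that spreads replicas across availability zones; the required... variant is a hard constraint, while preferredDuringSchedulingIgnoredDuringExecution would express a soft preference instead:

```yaml
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: test-app
        topologyKey: topology.kubernetes.io/zone   # one replica per zone
```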


This is must-read documentation to learn or refresh your knowledge: