– For an introduction to liveness probes and the Replication Controller, see: https://tech2fun.net/liveness-probe-and-replication-controller-for-managed-health-of-pod-in-k8s/

I/Replica Sets Overview:

– The ReplicaSet is the new generation of the Replication Controller and replaces it completely (Replication Controllers will eventually be deprecated).

– In practice, you rarely create ReplicaSet objects directly; they are usually created automatically by the higher-level Deployment resource defined in a YAML file. A ReplicaSet provides exactly the same benefits as a Replication Controller: it ensures that the desired number of pod instances is always running and makes scaling (up or down) easy.

– The difference between the two resources lies in their selectors. A Replication Controller only supports matching managed pods against exact labels (both the key and the value defined in the selector must match). A ReplicaSet’s selector supports more flexible matching. For example, a Replication Controller cannot select pods whose key env has either the value production or the value staging at the same time, but a ReplicaSet can group all pods carrying either of these values for that key.

– A ReplicaSet selector can also match pods based solely on the presence of a label key, regardless of its value (conceptually, selecting all pods with env=*). Note that this is done with selector operators, not regular expressions — ReplicaSet selectors do not support regex matching.
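As a sketch of that “env=*” behavior, the Exists operator (covered in section III) matches every pod that carries an env label, whatever its value:

  selector:
    matchExpressions:
    - key: env
      operator: Exists

For Exists (and DoesNotExist), the values field must not be specified.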

II/Defining a ReplicaSet:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: apache-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - image: httpd
        name: apache-container
        ports:
        - containerPort: 80
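The ReplicaSet above can be created and inspected with kubectl (the file name apache-rs.yaml is illustrative):

# kubectl apply -f apache-rs.yaml
# kubectl get rs apache-rs
# kubectl get pods -l app=apache

Scaling up or down is then a single command, e.g. kubectl scale rs apache-rs –-replicas=5.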

III/Using the Replica Set’s more expressive label selectors:

– Instead of the matchLabels selector, a ReplicaSet also supports the more powerful matchExpressions property in the selector section. Each expression must contain a key, an operator, and possibly (depending on the operator) a list of values. Valid operators are: In, NotIn, Exists, DoesNotExist.

– The In and NotIn operators check whether the value of a key is (or is not) in a given list of values. The Exists and DoesNotExist operators only check whether the key is present in the pod’s labels, regardless of its value.
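As an illustration, a selector that groups pods whose env label is either production or staging (the values from the earlier example) looks like this:

  selector:
    matchExpressions:
    - key: env
      operator: In
      values:
      - production
      - staging

If you specify several expressions, a pod must satisfy all of them to be matched.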

*Marking a node as unschedulable:

– Marking a node as unschedulable prevents the scheduler from placing new pods onto that Node, but does not affect existing Pods on the Node.

# kubectl cordon $NODENAME

– To mark the node as schedulable again, use:

# kubectl uncordon $NODENAME
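You can verify the change with kubectl get nodes: a cordoned node shows SchedulingDisabled in its STATUS column (e.g. Ready,SchedulingDisabled), and the flag disappears again after uncordoning:

# kubectl get nodes $NODENAME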

IV/Running exactly one pod on each node with Daemon Sets:

– A ReplicaSet runs a specific number of pods placed across the nodes of the K8s cluster wherever the scheduler chooses. But in some cases you want a pod to run on each and every node in the cluster, with each node running exactly one instance of the pod; for that you use a DaemonSet to deploy and manage the pods.

– Real-world cases that need exactly one pod instance per node are typically infrastructure pods for monitoring or operating the system (for example, log collectors and node performance monitoring agents).

– To run a pod on all cluster nodes, you create a DaemonSet object, which is much like a Replication Controller or a ReplicaSet, except that pods created by a DaemonSet already have a target node specified and bypass the Kubernetes Scheduler (since Kubernetes 1.12 they are in fact placed by the default scheduler via node affinity, but the effect is the same: one pod per eligible node).

– If a node goes down, the Daemon Set doesn’t cause the pod to be created elsewhere. But when a new node is added to the cluster, the Daemon Set immediately deploys a new pod instance to it.

– By default a DaemonSet deploys a pod instance on every node, but it can also run pods only on a certain subset of nodes by specifying the nodeSelector property in the pod template, which is part of the DaemonSet definition.
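As an illustration, to restrict a DaemonSet to nodes labeled disk=ssd (a hypothetical label), add nodeSelector inside the pod template’s spec and label the target nodes:

  template:
    spec:
      nodeSelector:
        disk: ssd

# kubectl label node $NODENAME disk=ssd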

Example: Using a DaemonSet object to run a pod with the cAdvisor image for monitoring the resource usage of each node in the K8s cluster

– YAML file for deploying DaemonSet object:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor-ds
spec:
  selector:
    matchLabels:
      app: monitor
  template:
    metadata:
      labels:
        app: monitor
    spec:
      containers:
      - image: gcr.io/cadvisor/cadvisor
        name: cadvisor-container
        resources:
          requests:
            memory: 200Mi
            cpu: 150m
          limits:
            memory: 1000Mi
            cpu: 300m
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /rootfs
          name: vol-mount1
          readOnly: true
        - mountPath: /var/run
          name: vol-mount2
          readOnly: true
        - mountPath: /sys
          name: vol-mount3
          readOnly: true
        - mountPath: /var/lib/docker/
          name: vol-mount4
          readOnly: true
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 30
      volumes:
      - name: vol-mount1
        hostPath:
          path: /
      - name: vol-mount2
        hostPath:
          path: /var/run
      - name: vol-mount3
        hostPath:
          path: /sys
      - name: vol-mount4
        hostPath:
          path: /var/lib/docker/

– Check that exactly one instance of the pod is running on each node.
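For example (the DaemonSet name and label are from the YAML above; -o wide adds a NODE column so you can see the placement):

# kubectl get ds cadvisor-ds
# kubectl get pods -l app=monitor -o wide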

– Configure port forwarding to access the cAdvisor dashboard of the pod running on a given node.
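A minimal sketch (the pod name cadvisor-ds-xxxxx is a placeholder; substitute a real name from kubectl get pods):

# kubectl port-forward cadvisor-ds-xxxxx 8080:8080

The cAdvisor dashboard of that node is then reachable at http://localhost:8080.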

