NodePlacement in ArgoCD operator

Use Case: How to run the default OpenShift GitOps control plane on infrastructure nodes

September 18, 2021

The NodePlacement Property

The nodePlacement property in the ArgoCD operator allows a user to set a nodeSelector and tolerations on the workloads deployed by the operator. This gives the user finer control over which nodes those workloads run on.

Importance of NodeSelectors

Kubernetes gives its users the advantage of automatic scheduling of pods onto nodes: the scheduler places pods on nodes that have spare resources. In most cases this is much better than any manual effort to select nodes.

However, there may be scenarios where the user wants pods to run on certain nodes: for example, on machines with particular hardware such as GPUs or SSDs, or on machines in the same availability zone. In these cases nodeSelectors help us immensely by allowing us to specify the nodes a pod is allowed to run on.

The two steps to use nodeSelectors

  1. Label the node that needs to be selected:

     kubectl label nodes <node-name> <label-key>=<label-value>

  2. Add a nodeSelector to the pod spec:

     apiVersion: v1
     kind: Pod
     metadata:
       name: example-pod
     spec:
       nodeSelector:
         label1: value1
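As a quick check, the label and the resulting placement can be verified from the CLI. This is a sketch against a live cluster; `example-pod` and `label1=value1` follow the example above:

```shell
# List the nodes carrying the label used by the nodeSelector
kubectl get nodes -l label1=value1

# After creating the pod, confirm which node it was scheduled on
kubectl get pod example-pod -o wide
```

If no node carries the label, the pod stays Pending, which is a common first thing to check when a nodeSelector seems to have no effect.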

Importance of Tolerations

In this blog, I explain how to use taints and tolerations for pod scheduling. Tolerations allow us to further control where pods should or should not run: a taint on a node repels all pods that do not have a matching toleration, while a toleration lets a pod be scheduled onto (but does not pin it to) a tainted node.
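As a minimal sketch of the mechanism (the taint key `dedicated` and value `special` are illustrative, not defaults of any component): after tainting a node with `oc adm taint nodes <node-name> dedicated=special:NoSchedule`, a pod needs a matching toleration to be scheduled there:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-example
spec:
  # Without this toleration, the NoSchedule taint above repels the pod
  tolerations:
  - key: dedicated
    operator: Equal
    value: special
    effect: NoSchedule
  containers:
  - name: app
    image: nginx
```

Note that the toleration alone does not steer the pod to the tainted node; combine it with a nodeSelector when the pod must land there.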

How to use the nodePlacement property in ArgoCD

The ArgoCD resource is a Kubernetes Custom Resource. The ArgoCD CRD allows the configuration of the components of the ArgoCD cluster.

To use the nodePlacement property, add the following lines to the ArgoCD manifest file.

apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: example-argocd
  labels:
    example: nodeplacement-example
spec:
  nodePlacement:
    nodeSelector:
      key1: value1
    tolerations:
    - key: key1
      operator: Equal
      value: value1
      effect: NoSchedule
    - key: key1
      operator: Equal
      value: value1
      effect: NoExecute
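Once the manifest is applied, the operator propagates the nodeSelector and tolerations into the workloads it manages. This can be spot-checked from the CLI; the commands below are a sketch that assumes the instance is named example-argocd and lives in the argocd namespace (the operator names its workloads after the instance, e.g. example-argocd-server):

```shell
# Inspect the nodeSelector the operator set on a managed workload
kubectl -n argocd get deployment example-argocd-server \
  -o jsonpath='{.spec.template.spec.nodeSelector}'

# Confirm the pods landed on the selected nodes
kubectl -n argocd get pods -o wide
```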

How to use taints and tolerations for running Gitops on Infrastructure Nodes

OpenShift allows users to run the GitOps control plane on infrastructure nodes. This lets customers reduce platform cost, as no compute subscription cost is added for these nodes.

  1. To designate a node as an infrastructure node, apply the infra label like so:

     oc label node <node-name> node-role.kubernetes.io/infra=""

  2. Then, in the gitopsservice CR, set the toggle runOnInfra: true. This adds the nodeSelectors to all the workloads, and they will be scheduled on the infrastructure nodes.

  3. Taints and tolerations can optionally be used in the above scenario to enhance the use case. Customers can add taints on their infrastructure nodes to repel any pod that shouldn't be scheduled on these nodes. OpenShift allows certain subscription components, for example Red Hat OpenShift Pipelines, Red Hat OpenShift GitOps, and the OpenShift registry, to run on infrastructure nodes without compute cost. Customers can therefore add taints so that only pods from these components run on the infrastructure nodes, repelling all others.

  Steps to run the GitOps control plane on infra nodes with taints and tolerations:
    • Add the taints on the infrastructure nodes:

      oc adm taint nodes -l node-role.kubernetes.io/infra infra=reserved:NoSchedule infra=reserved:NoExecute

    • For the GitOps operator, add the matching tolerations in the gitopsservice CR:

      oc edit gitopsservice -n openshift-gitops

      spec:
        runOnInfra: true
        tolerations:
        - effect: NoSchedule
          key: infra
          value: reserved
        - effect: NoExecute
          key: infra
          value: reserved

      After these steps, you can check the OpenShift console to verify that the pods are scheduled on the infrastructure nodes.
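The same verification can be done from the CLI; a sketch, with node names differing per cluster:

```shell
# List the infrastructure nodes
oc get nodes -l node-role.kubernetes.io/infra

# Check that the GitOps pods landed on those nodes
oc get pods -n openshift-gitops -o wide
```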