The NodePlacement Property
The nodePlacement property in the ArgoCD operator allows a user to set a nodeSelector and tolerations on the workloads deployed by the operator. This gives the user better control over which nodes those workloads are scheduled on.
Importance of NodeSelectors
Kubernetes gives its users the advantage of automatic pod scheduling: pods are placed on nodes that have spare resources. In most cases this is much better than any manual effort to select nodes.
However, there may be scenarios where the user wants pods to run on certain nodes, for example on machines with specific capabilities such as GPUs or SSD-backed storage, or on machines in the same availability zone. In these cases nodeSelectors help immensely by letting us specify which nodes a pod should be scheduled on.
The 2 steps to use NodeSelectors
- Label the node that needs to be selected
- Add NodeSelector to the podSpec
kubectl label nodes <node-name> <label-key>=<label-value>
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  nodeSelector:
    label1: value1
  # a minimal container so the manifest is valid
  containers:
  - name: example
    image: nginx
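For example, assuming a worker node named worker-1 (the node name and the example-pod.yaml file name are placeholders used only for illustration), you could label that node so it matches the nodeSelector above, create the Pod, and confirm where it landed:

kubectl label nodes worker-1 label1=value1
kubectl apply -f example-pod.yaml
kubectl get pod example-pod -o wide   # the NODE column should show worker-1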
Importance of Tolerations
In this blog, I explain how to use taints and tolerations for pod scheduling. Taints and tolerations allow us to further control where pods should or should not run: a taint on a node repels pods, and a matching toleration on a pod allows it to be scheduled onto that node.
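As a minimal sketch (the node name, taint key, and container image below are purely illustrative), a node can be tainted so that ordinary pods are repelled from it:

kubectl taint nodes worker-1 dedicated=example:NoSchedule

A pod that declares the matching toleration is still allowed to be scheduled onto that node, while every other pod is kept away:

apiVersion: v1
kind: Pod
metadata:
  name: tolerating-pod
spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: example
    effect: NoSchedule
  containers:
  - name: app
    image: nginx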
How to use NodePlacement property in ArgoCD
The ArgoCD resource is a Kubernetes Custom Resource. The ArgoCD CRD allows the configuration of the components of the ArgoCD cluster.
To use the nodePlacement property, add the following lines to the ArgoCD manifest file.
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: example-argocd
  labels:
    example: nodeplacement-example
spec:
  nodePlacement:
    nodeSelector:
      key1: value1
    tolerations:
    - key: key1
      operator: Equal
      value: value1
      effect: NoSchedule
    - key: key1
      operator: Equal
      value: value1
      effect: NoExecute
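To confirm that the operator propagated these settings, you can inspect one of the workloads it manages. The deployment name below assumes the operator's default naming convention (the ArgoCD instance name with a -server suffix) and is only illustrative:

kubectl get deployment example-argocd-server \
  -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}{.spec.template.spec.tolerations}{"\n"}'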
How to use taints and tolerations for running GitOps on Infrastructure Nodes
OpenShift allows users to run the GitOps control plane on Infrastructure Nodes. This helps customers save on platform cost, as no compute cost is added for these nodes.
- To specify a node as an infrastructure node, apply the infra label to it:

oc label node <node-name> node-role.kubernetes.io/infra=""

- Then, in the gitopsservice CR, set the toggle runOnInfra: true. This adds the nodeSelector to all of the workloads so that they are scheduled on the Infrastructure Nodes.
- Taints and tolerations can optionally be used in the above scenario to enhance the use case. OpenShift allows certain subscription components, for example Red Hat OpenShift Pipelines, Red Hat OpenShift GitOps and the OpenShift Registry, to run on Infrastructure Nodes without adding compute cost. Customers can add taints on their infrastructure nodes so that only pods from these components are allowed to run there and all other pods are repelled:

oc adm taint nodes -l node-role.kubernetes.io/infra infra=reserved:NoSchedule infra=reserved:NoExecute

Steps to run the GitOps control plane on Infra nodes with taints and tolerations:
- Add the taint on the node, as shown above.
- For the GitOps operator, add the matching tolerations in the gitopsservice CR:

oc edit gitopsservice -n openshift-gitops
spec:
  runOnInfra: true
  tolerations:
  - effect: NoSchedule
    key: infra
    value: reserved
  - effect: NoExecute
    key: infra
    value: reserved
After these steps, you can check the OpenShift console to verify that the pods are scheduled on the Infrastructure Nodes correctly.
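Besides the console, a quick CLI check (assuming the default openshift-gitops namespace) also shows which nodes the GitOps pods landed on:

oc get nodes -l node-role.kubernetes.io/infra
oc get pods -n openshift-gitops -o wide   # the NODE column should list the infrastructure nodes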