Support for nodeSelector, Labels, Taints, and Tolerations
You can use scheduling constraints (nodeSelector, labels, taints, and tolerations) in your Kubernetes cluster to schedule and segregate workloads on separate node pools. Although separating workloads this way is not a requirement for deploying PubSub+ Cloud in your Kubernetes cluster, specifying custom labels, taints, and tolerations gives you better control over how the Mission Control Agent and the event broker services are deployed in your infrastructure. Using scheduling constraints helps reduce cross-talk between workloads and is useful in scenarios such as creating dedicated nodes, distributing pods evenly across the cluster, or co-locating pods on the same machine. For more examples of how you might use scheduling constraints with PubSub+ Cloud, see Use Cases.
Regardless of whether scheduling constraints are available, or the type of node pool you choose, Solace works with you to use your infrastructure efficiently, ensuring that High Availability groups are spread across availability zones for the highest reliability and creating a successful deployment that meets your requirements.
If you choose to use nodeSelector, labels, taints, tolerations, or proxies, you must let Solace know at the time of deployment. This ensures that we generate a values.yaml file with the appropriate parameters for the Helm chart. The example below shows part of the values.yaml that we would provide for using a toleration and nodeSelector for the Mission Control Agent:
cloudAgentTolerations:
  - key: key1
    operator: Equal
    value:
    effect: NoSchedule
cloudAgentLabels:
  solace: test
  version: ta
cloudAgentNodeSelectors:
  test: version1
  solace: cloud-agent
The Mission Control Agent propagates these tolerations and nodeSelectors to the pods at creation time.
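For illustration, a pod created from these settings might carry the toleration and nodeSelector in its specification as sketched below (the pod name and image are placeholders; the actual specification is generated by the Mission Control Agent):

apiVersion: v1
kind: Pod
metadata:
  name: example-agent-pod    # placeholder name
spec:
  containers:
  - name: example
    image: example:latest    # placeholder image
  tolerations:
  - key: key1
    operator: Equal
    value: ""
    effect: NoSchedule
  nodeSelector:
    test: version1
    solace: cloud-agent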
For more information about scheduling constraints and proxies, see the Kubernetes documentation.
Using nodeSelector
Using nodeSelector is a simple way to constrain how workloads are scheduled. The nodeSelector field is part of a pod's configuration, which follows the syntax described by the Kubernetes PodSpec. The nodeSelector is a set of key-value pairs that specifies the node on which the pod can run. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well).
To use nodeSelector, you first attach a label to the node, and then you add matching nodeSelector values to the configuration for each pod that you want to run on that node.
For example, let's say you want to add the label region=americas to a node called kubernetes-mesh-node-1.test. You can assign the label at creation time via the configuration file, as shown below:
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubernetes-mesh-node-1.test",
    "labels": {
      "region": "americas"
    }
  }
}
Alternatively, you can run the following command to add the label to the existing node:
kubectl label nodes kubernetes-mesh-node-1.test region=americas
Now you can add a corresponding nodeSelector to the configuration (PodSpec) for your pod so that it runs on the labeled node. For example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    region: americas
When Kubernetes applies the configuration, the pod is scheduled on the node that you attached the label to.
When you deploy PubSub+ Cloud, instead of creating these configuration files directly, Solace adds the required labels, nodeSelectors, and so on, to the values.yaml file that is used to generate the Helm chart for the Mission Control Agent. The Mission Control Agent then uses these settings to label the various resources when it creates them.
Use Cases
The following are some examples of situations where you can use scheduling constraints to control how event broker service nodes are scheduled in your cluster.
Multiple Environments in a Cluster
If you want to deploy multiple environments in the same cluster, you need to make sure that each environment gets its own set of nodes. For example, you might have a test environment and a production environment. In this case, you would assign a label such as environment=test to one set of nodes and environment=prod to another set of nodes. You can then configure the Mission Control Agent to assign the corresponding nodeSelector to the pods it creates so that those pods are scheduled in the correct environment.
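For example, a minimal sketch of how you might label the nodes (the node names are placeholders):

kubectl label nodes test-node-1 environment=test
kubectl label nodes prod-node-1 environment=prod

Pods created for the test environment would then carry a matching selector in their specification:

nodeSelector:
  environment: test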
Schedule Workload to Appropriately-Scaled Node Pool
Suppose you are deploying the Mission Control Agent in the cloud and you want to automatically scale your nodes to match the size of the event broker service you select.
To do this, you can create one autoscaling node pool for each instance type, as required by the PubSub+ Cloud Service Classes. You label each node pool with a ServiceClass label, and configure the Mission Control Agent to assign the correct nodeSelector to the workload (the event broker pod). This allows Kubernetes to schedule the workload to the node pool that best matches the workload's requirements, resulting in better resource utilization.
The node pool requirements are the same for Enterprise service classes whether they are configured as standalone or High-Availability services.
You need the following node pools:
- One node pool (prod1k) for Developer, Enterprise 250, and Enterprise 1K service classes
- One node pool (prod10k) for Enterprise 5K and Enterprise 10K service classes
- One node pool (prod100k) for Enterprise 50K and Enterprise 100K service classes
- One node pool (monitoring) for monitoring pods
These node pools require labels and taints as described in the following table:
Node Pool | Labels | Taints |
---|---|---|
prod1k | | |
prod10k | | |
prod100k | | |
monitoring | | |
To enable these service class selectors, you would add the following parameter to the values.yaml configuration file:
k8s:
  useServiceClassSelectors: true
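As an illustration only, the label and taint for a node in one of these pools could be applied with commands like the following; the serviceClass label key, the taint effect, and the node name are assumptions for this sketch, not the exact values used in your deployment:

kubectl label nodes prod1k-node-1 serviceClass=prod1k
kubectl taint nodes prod1k-node-1 serviceClass=prod1k:NoExecute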
Multiple Storage Zones
If you are deploying the Mission Control Agent in a cluster that has multiple storage zones, you must ensure that the workloads (pods) are deployed into the zone where they can be attached to the correct storage instance.
As in the previous examples, you can configure the Mission Control Agent to schedule its Primary and Backup messaging pods to specific zones with the use of an appropriate nodeSelector (for example, StorageZone).
You could also use a nodeSelector to assign a storage class to the Primary and Backup pods so that each uses a specific storage class. Using this combination, you could have the Primary and Backup pods using different storage zones, and also different storage classes.
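For example, a sketch of the selectors the two pods might carry (the StorageZone label key comes from the example above; the zone values are placeholders):

# Primary messaging pod
nodeSelector:
  StorageZone: zone-1

# Backup messaging pod
nodeSelector:
  StorageZone: zone-2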
Multiple Failure Domains
Suppose you are deploying the Mission Control Agent in a cluster that has multiple availability zones or failure domains. You want to ensure that each HA event broker service distributes its three pods (Primary, Backup, and Monitor) into different availability zones or failure domains to reduce the vulnerability of the event broker service as a whole.
To do this, you can configure the Mission Control Agent to schedule its pods to the correct failure domains with an appropriate nodeSelector (for example, AvailabilityZone). This allows Kubernetes to correctly schedule the pods into the different zones.
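For example, the three pods of an HA group might carry selectors like the following (the AvailabilityZone label key comes from the example above; the zone values are placeholders):

# Primary pod
nodeSelector:
  AvailabilityZone: zone-a

# Backup pod
nodeSelector:
  AvailabilityZone: zone-b

# Monitor pod
nodeSelector:
  AvailabilityZone: zone-c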