Resource Requirements for Kubernetes

This section lists the various resources required to run PubSub+ Cloud in a Kubernetes environment. The resources consumed by event broker services are provided by worker nodes, which are part of the Kubernetes cluster.

The resources for each pod must be sufficient for the type of service it is running:

  • messaging—the pod runs a software event broker in messaging mode. In a high-availability (HA) configuration, the event broker takes either the primary or backup role.
  • monitoring—the pod runs a software event broker in monitoring mode (applies only to HA configurations).
  • Mission Control Agent—the pod runs the Mission Control Agent, which communicates with the Solace Home Cloud to configure and manage event broker services. For more information, see Mission Control Agent.

When sizing worker nodes, it is important to provision more RAM than listed in the table. The table below lists the RAM required by pods and does not take into account the overhead consumed by Kubernetes itself. Typically, this overhead is less than 2 GB, but it can reach 4 GB.

The Developer service class is a standalone messaging node (requires one pod). The Enterprise 250 and all larger service classes are either HA groups, which require two messaging pods (Primary and Backup) and one monitoring pod, or standalone services, which require one messaging pod and one monitoring pod. Standalone event broker services are available after they have been added as a service class to your Service Limits. To use standalone event broker services, contact Solace to request a limit change.

Running Multiple Pods per Worker Node

Typically, Solace runs only one pod per worker node. This helps with cost management on elastic cloud platforms, where Solace can add VMs as needed.

It is also possible to run multiple pods on a worker node. This configuration can be helpful in customer environments with less dynamic IaaS virtualization. In this case, ensure that the requirements listed below are multiplied by the number of pods that will be running on the worker node. The worker node must provide that much, plus more (for example, an extra 2 GB of RAM) for the overhead taken by Kubernetes system pods.

For example, a customer could create two worker nodes with 32 GB of RAM and 8 cores each, plus one worker node with 4 cores and 10 GB of RAM (8 GB plus an extra 2 GB for overhead). The customer could then deploy four Enterprise 1K services on those three worker nodes: one 32 GB node would run the four primary messaging pods, the other 32 GB node would run the four backup messaging pods, and the 10 GB node would run the four monitoring pods.
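The sizing arithmetic above can be sketched as a quick capacity check. This is an illustrative sketch, not a Solace tool; the per-pod memory figures are the Enterprise 1K values from the memory tables below, and the 2 GiB overhead is the typical Kubernetes overhead cited earlier:

```python
MIB_PER_GIB = 1024

def node_ram_needed_mib(pod_ram_mib, pods_per_node, overhead_mib=2 * MIB_PER_GIB):
    """RAM a worker node must provide: per-pod RAM times pod count, plus overhead."""
    return pod_ram_mib * pods_per_node + overhead_mib

# Four Enterprise 1K messaging pods (6,912 MiB each) on one node,
# four monitoring pods (2,304 MiB each) on another:
messaging_node = node_ram_needed_mib(6912, 4)
monitoring_node = node_ram_needed_mib(2304, 4)
print(messaging_node, monitoring_node)  # 29696 11264 (MiB)
```

The 29,696 MiB (29 GiB) messaging-node figure fits comfortably within a 32 GB node, which is why the example above pairs four messaging pods per 32 GB worker node.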

Multiple Mission Control Agents per Kubernetes Cluster

It is also possible to run more than one Mission Control Agent in a single Kubernetes cluster. A customer may wish to have one cluster serve multiple environments, such as development, QA, and production. Installing multiple Mission Control Agents in the same Kubernetes cluster allows these customer environments to reside together in the same cluster.

Each Mission Control Agent requires a namespace that is fully dedicated to it. To run multiple Mission Control Agents, you therefore need multiple namespaces.

Each Mission Control Agent represents one data center in PubSub+ Cloud, which means that a Kubernetes cluster with multiple Mission Control Agents is hosting multiple data centers from the PubSub+ Cloud Console point of view. The worker nodes are shared among these Mission Control Agents, so it is important that the worker nodes provide enough resources to schedule all the services created by the different Mission Control Agents. When worker nodes are sized to run multiple software event broker pods, pods from different Mission Control Agents can also be scheduled on the same node.

Dynamic Volume Provisioning

PubSub+ Cloud requires dynamic volume provisioning, which is requested through Persistent Volume Claims managed by Kubernetes StatefulSets. This requires the infrastructure to provide a storage backend supported by one of the Kubernetes storage plugins. In addition, PubSub+ Cloud requires a block storage backend for dynamic volume provisioning (file-system-based backends are not supported).

To achieve data encryption at rest for software event broker messages and configuration, the storage backend must provide encrypted volumes. The broker itself does not provide data encryption at rest.

Supported Storage Solutions

PubSub+ Cloud has been tested to work with the following storage solutions:

  • Portworx
  • Ceph
  • Cinder (OpenStack)
  • vSphere storage for Kubernetes

Do not use the Network File System (NFS) protocol as part of your storage solution with PubSub+ Cloud.

Volume Performance Requirements

PubSub+ Cloud requires a storage backend that is capable of providing the following performance:

|             | Latency | IOPS |
|-------------|---------|------|
| Minimum     | 3 ms    | 5 K  |
| Recommended | 1 ms    | 20 K |

Latency and IO operations per second (IOPS) are the most important dimensions. Performance below the minimum threshold severely impairs guaranteed messaging performance.

Compute Resource Requirements

The term Allocatable Core refers to the CPU capacity that the Kubernetes/OpenShift cluster must be able to provide to the pods used for PubSub+ Cloud. For example, an m5.xlarge instance in AWS has 4000 mCore of CPU capacity, but only 3500 mCore of that can be requested by pods. It is important to account for this when you are planning the capacity of a Kubernetes cluster.
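The allocatable-capacity check can be sketched as follows. This is an illustrative sketch using the m5.xlarge figure quoted above (3500 mCore allocatable) and the Enterprise 1K messaging request of 1,550 mCore from the table below:

```python
def fits(pod_requests_mcore, allocatable_mcore):
    """True if the total of the pod CPU requests fits within the node's
    allocatable (not raw) CPU capacity, in mCores."""
    return sum(pod_requests_mcore) <= allocatable_mcore

# Two Enterprise 1K messaging pods (1,550 mCore each) on an m5.xlarge:
print(fits([1550, 1550], 3500))        # True  (3,100 mCore requested)
# A third messaging pod no longer fits:
print(fits([1550, 1550, 1550], 3500))  # False (4,650 mCore requested)
```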

The first of the following tables provides the compute resource requirements for high availability event broker services. The second table provides the compute resource requirements for standalone event broker services. Note that standalone event broker services are not available by default. Contact Solace for more information.

Compute Resources for High Availability Event Broker Services

| Service Class | Pod Type (HA Role) | CPU Request (mCores) | Total CPU Request (mCores) | CPU Limit (mCores) | Total CPU Limit (mCores) |
|---|---|---|---|---|---|
| Developer | Messaging | 1,550 | 1,550 | 2,500 | 2,500 |
| Enterprise 250 | Primary Messaging | 1,550 | 3,500 | 2,500 | 6,300 |
| | Backup Messaging | 1,550 | | 2,500 | |
| | Monitoring | 400 | | 1,300 | |
| Enterprise 1K | Primary Messaging | 1,550 | 3,500 | 2,500 | 6,300 |
| | Backup Messaging | 1,550 | | 2,500 | |
| | Monitoring | 400 | | 1,300 | |
| Enterprise 5K | Primary Messaging | 3,500 | 7,400 | 4,500 | 10,300 |
| | Backup Messaging | 3,500 | | 4,500 | |
| | Monitoring | 400 | | 1,300 | |
| Enterprise 10K | Primary Messaging | 3,500 | 7,400 | 4,500 | 10,300 |
| | Backup Messaging | 3,500 | | 4,500 | |
| | Monitoring | 400 | | 1,300 | |
| Enterprise 50K | Primary Messaging | 7,500 | 15,400 | 8,500 | 18,300 |
| | Backup Messaging | 7,500 | | 8,500 | |
| | Monitoring | 400 | | 1,300 | |
| Enterprise 100K | Primary Messaging | 7,500 | 15,400 | 8,500 | 18,300 |
| | Backup Messaging | 7,500 | | 8,500 | |
| | Monitoring | 400 | | 1,300 | |

Compute Resources for Standalone Event Broker Services

| Service Class | Pod Type | CPU Request (mCores) | Total CPU Request (mCores) | CPU Limit (mCores) | Total CPU Limit (mCores) |
|---|---|---|---|---|---|
| Developer | Messaging | 1,550 | 1,550 | 2,500 | 2,500 |
| Enterprise 250 Standalone | Messaging | 1,550 | 1,950 | 2,500 | 3,800 |
| | Monitoring | 400 | | 1,300 | |
| Enterprise 1K Standalone | Messaging | 1,550 | 1,950 | 2,500 | 3,800 |
| | Monitoring | 400 | | 1,300 | |
| Enterprise 5K Standalone | Messaging | 3,500 | 3,900 | 4,500 | 5,800 |
| | Monitoring | 400 | | 1,300 | |
| Enterprise 10K Standalone | Messaging | 3,500 | 3,900 | 4,500 | 5,800 |
| | Monitoring | 400 | | 1,300 | |
| Enterprise 50K Standalone | Messaging | 7,500 | 7,900 | 8,500 | 9,800 |
| | Monitoring | 400 | | 1,300 | |
| Enterprise 100K Standalone | Messaging | 7,500 | 7,900 | 8,500 | 9,800 |
| | Monitoring | 400 | | 1,300 | |

Memory Resource Requirements

The term Allocatable RAM refers to the amount of memory that the Kubernetes/OpenShift cluster must be able to provide to the pods used for PubSub+ Cloud. For example, an m5.xlarge instance in AWS has 16220452 KiB of RAM capacity, but only 15069476 KiB of that can be allocated to pods. It is important to account for this when you are planning the capacity of a Kubernetes cluster.

Enabling the Message Retain feature with a 2 GB memory buffer increases the memory requirement of each service class by 2048 MiB.

The first of the following tables provides the memory resource requirements for high-availability event broker services. The second table provides the memory resource requirements for standalone event broker services. Note that standalone event broker services are not available by default. Contact Solace for more information.
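The Message Retain adjustment can be sketched as follows. This is an illustrative sketch, and it assumes the 2,048 MiB buffer is added to the messaging pod's figure from the tables below (the Enterprise 5K value of 14,489.6 MiB is used as the example):

```python
RETAIN_BUFFER_MIB = 2048  # the 2 GB Message Retain memory buffer noted above

def pod_memory_mib(base_mib, retain=False):
    """Memory requirement for a messaging pod, with or without Message Retain."""
    return base_mib + (RETAIN_BUFFER_MIB if retain else 0)

print(pod_memory_mib(14489.6))               # 14489.6 MiB without Retain
print(pod_memory_mib(14489.6, retain=True))  # 16537.6 MiB with Retain
```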

Memory Resources for High-Availability Event Broker Services

| Service Class | Pod Type (HA Role) | Memory Request and Limit, without Retain (MiB) | Total Memory Request and Limit (MiB) | Ephemeral Storage Request and Limit (GiB) | Total Ephemeral Storage Request and Limit (GiB) |
|---|---|---|---|---|---|
| Developer | Messaging | 6,912.0 | 6,912.0 | 2.25 | 2.25 |
| Enterprise 250 | Primary Messaging | 6,912.0 | 16,128 | 2.25 | 6.75 |
| | Backup Messaging | 6,912.0 | | | |
| | Monitoring | 2,304 | | | |
| Enterprise 1K | Primary Messaging | 6,912.0 | 16,128 | 2.25 | 6.75 |
| | Backup Messaging | 6,912.0 | | | |
| | Monitoring | 2,304 | | | |
| Enterprise 5K | Primary Messaging | 14,489.6 | 31,283.2 | 2.25 | 6.75 |
| | Backup Messaging | 14,489.6 | | | |
| | Monitoring | 2,304 | | | |
| Enterprise 10K | Primary Messaging | 14,489.6 | 31,283.2 | 2.25 | 6.75 |
| | Backup Messaging | 14,489.6 | | | |
| | Monitoring | 2,304 | | | |
| Enterprise 50K | Primary Messaging | 31,283.2 | 64,870.4 | 2.25 | 6.75 |
| | Backup Messaging | 31,283.2 | | | |
| | Monitoring | 2,304 | | | |
| Enterprise 100K | Primary Messaging | 31,283.2 | 64,870.4 | 2.25 | 6.75 |
| | Backup Messaging | 31,283.2 | | | |
| | Monitoring | 2,304 | | | |

Memory Resources for Standalone Event Broker Services

| Service Class | Pod Type | Memory Request and Limit, without Retain (MiB) | Total Memory Request and Limit (MiB) | Ephemeral Storage Request and Limit (GiB) | Total Ephemeral Storage Request and Limit (GiB) |
|---|---|---|---|---|---|
| Developer | Messaging | 6,912.0 | 6,912.0 | 2.25 | 2.25 |
| Enterprise 250 Standalone | Messaging | 6,912.0 | 9,216 | 2.25 | 6.75 |
| | Monitoring | 2,304 | | | |
| Enterprise 1K Standalone | Messaging | 6,912.0 | 9,216 | 2.25 | 6.75 |
| | Monitoring | 2,304 | | | |
| Enterprise 5K Standalone | Messaging | 14,489.6 | 16,793.6 | 2.25 | 6.75 |
| | Monitoring | 2,304 | | | |
| Enterprise 10K Standalone | Messaging | 14,489.6 | 16,793.6 | 2.25 | 6.75 |
| | Monitoring | 2,304 | | | |
| Enterprise 50K Standalone | Messaging | 31,283.2 | 33,587.2 | 2.25 | 6.75 |
| | Monitoring | 2,304 | | | |
| Enterprise 100K Standalone | Messaging | 31,283.2 | 33,587.2 | 2.25 | 6.75 |
| | Monitoring | 2,304 | | | |

Message Spool Size Requirements

The following table lists default Message Spool size for each service class and the resulting persistent disk space required:

The Developer service class is a standalone messaging node (requires one pod). The Enterprise 250 High Availability and all larger High Availability service classes are HA groups, which require two messaging pods (Primary and Backup) and one monitoring pod. Standalone service classes require only one messaging pod and one monitoring pod.

The first of the following tables provides the message spool size requirements for high availability event broker services. The second table provides the message spool size requirements for standalone event broker services. Note that standalone event broker services are not available by default. Contact Solace for more information.

Volume Size for High Availability Event Broker Services

| Service Class | HA Redundant | Message Spool Size | Persistent Disk Space Requirement |
|---|---|---|---|
| Developer | No | 10 GB | 40 GB |
| Enterprise 250 | Yes | 25 GB | 70 GB x 2 |
| Enterprise 1K High Availability | Yes | 50 GB | 120 GB x 2 |
| Enterprise 5K High Availability | Yes | 200 GB | 420 GB x 2 |
| Enterprise 10K High Availability | Yes | 300 GB | 620 GB x 2 |
| Enterprise 50K High Availability | Yes | 500 GB | 1,020 GB x 2 |
| Enterprise 100K High Availability | Yes | 500 GB | 1,020 GB x 2 |

Volume Size for Standalone Event Broker Services

| Service Class | HA Redundant | Message Spool Size | Persistent Disk Space Requirement |
|---|---|---|---|
| Developer | No | 10 GB | 40 GB |
| Enterprise 250 Standalone | No | 25 GB | 70 GB |
| Enterprise 1K Standalone | No | 50 GB | 120 GB |
| Enterprise 5K Standalone | No | 200 GB | 420 GB |
| Enterprise 10K Standalone | No | 300 GB | 620 GB |
| Enterprise 50K Standalone | No | 500 GB | 1,020 GB |
| Enterprise 100K Standalone | No | 500 GB | 1,020 GB |

Mission Control Agent Pod

The following table lists the resources that the worker node running the Mission Control Agent must provide to the Mission Control Agent's pod. Requirements vary based on whether your environment is autoscaling.

| Environment | Cores Required | RAM Required |
|---|---|---|
| Autoscaling | 1 | 2 GB |
| Non-autoscaling | 2 | 4 GB |

Load Balancer Rules Per Service

If you choose to use NodePort, the port numbers may not map directly to the ports listed in the table. To see the actual ports, look at the service's connection information in the PubSub+ Cloud Console.

By default, nine rules are available per service when plain-text protocols are disabled. These nine rules cover all non-plain-text messaging protocols, plus SEMP, SEMP-TLS, and SSH. With all plain-text protocols enabled, up to sixteen rules are required per service.
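The rule counts can be tallied directly from the port table below. This is an illustrative sketch; the port numbers are taken from that table, and the plain-text SEMP port (8080) is counted among the defaults per the table's note about services created before December 2020:

```python
# Ports enabled by default (non-plain-text protocols plus SEMP, SEMP-TLS, SSH):
default_ports = {22, 443, 943, 5671, 8080, 8443, 8883, 9443, 55443}
# Plain-text variants that can additionally be enabled:
plaintext_ports = {80, 1883, 5672, 8000, 9000, 55003, 55555}

print(len(default_ports))                    # 9 rules by default
print(len(default_ports | plaintext_ports))  # 16 rules with all plain-text enabled
```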

After the deployment of PubSub+ Cloud, the customer can modify the rules when they create an event broker service (either via Cluster Manager in the PubSub+ Cloud Console or via the REST API). The customer can disable protocols/ports, change the port numbers that are used, or enable additional protocols (such as the plain-text variants). For example, the customer can choose to enable plain-text REST (9000), and only that service has that port enabled. For more information, see Configuring Client Port Connections.

The following table lists the protocol ports, whether the port is enabled by default when an event broker service is created, the protocol, and the type of traffic passed with the protocol.

| Port | Enabled by Default for an Event Broker Service | Protocol and Description | Type of Traffic |
|---|---|---|---|
| 8080 | Yes/No (see note) | SEMP (plain-text). This port is disabled by default on event broker services created after December 2020. | Management |
| 22 | Yes | Secured Shell (SSH) | Management |
| 443 | Yes | Secured Web Transport TLS/SSL | Data |
| 943 | Yes | SEMP over TLS | Management |
| 5671 | Yes | Secured AMQP | Data |
| 8443 | Yes | WebSocket Secured MQTT | Data |
| 8883 | Yes | Secured MQTT TLS/SSL | Data |
| 9443 | Yes | Secured REST TLS/SSL | Data |
| 55443 | Yes | Secured SMF TLS/SSL (without compression) | Data |
| 80 | No | Web Transport | Data |
| 1883 | No | MQTT (plain-text) | Data |
| 5672 | No | AMQP (plain-text) | Data |
| 8000 | No | MQTT / WebSockets (plain-text) | Data |
| 9000 | No | REST (plain-text) | Data |
| 55003 | No | SMF-compressed | Data |
| 55555 | No | Solace Message Format (SMF) - plain-text | Data |