Resource Requirements for Kubernetes and Default Port Configuration

This section lists the various resources required to run PubSub+ Cloud in a Kubernetes environment. The resources consumed by event broker services are provided by worker nodes, which are part of the Kubernetes cluster.

The resources for each pod must be sufficient for the type of service it is running:

  • messaging—the pod runs a software event broker in messaging mode. In a high-availability (HA) configuration, the event broker takes either the primary or backup role.
  • monitoring—the pod runs a software event broker in monitoring mode (applies only to HA configurations).
  • Mission Control Agent—the pod runs the Mission Control Agent, which communicates with the PubSub+ Home Cloud to configure and manage event broker services. For more information, see Mission Control Agent.

When sizing worker nodes, provision more RAM than the table below lists: the table accounts only for the RAM required by pods, not for the overhead consumed by Kubernetes itself. This overhead is typically less than 2 GB but can reach 4 GB.

The Developer and Standalone service classes use a single messaging node (one messaging pod). Enterprise 250 and larger service classes are high-availability groups that require two messaging pods (primary and backup) and one monitoring pod. Standalone event broker services are available only after they have been added as a service class to your Service Limits; to use them, contact Solace or request a limit change.

Running Multiple Pods per Worker Node

Typically, Solace runs only one pod per worker node. This helps with cost management on elastic cloud platforms, where Solace can add VMs as needed.

It is also possible to run multiple pods on a worker node. This configuration can be helpful in customer environments with less dynamic IaaS virtualization. In this case, ensure that the requirements listed below are multiplied by the number of pods that will run on the worker node. The worker node must provide that much capacity, plus extra (for example, an additional 2 GB of RAM) for the overhead of Kubernetes system pods.

For example, a customer could create two worker nodes with 32 GB of RAM and 8 cores each, plus one worker node with 4 cores and 10 GB of RAM (8 GB plus an extra 2 GB). The customer could then deploy four Enterprise 1K services on those three worker nodes: each 32 GB node would run four messaging pods (the primary pods on one node, the backup pods on the other), and the 10 GB node would run the four monitoring pods.
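The sizing rule above can be sketched in a few lines: a node must provide the per-pod RAM multiplied by the number of pods, plus roughly 2 GiB of Kubernetes overhead. The function names and per-pod figures here are illustrative; take real per-pod values from the memory tables later in this section.

```python
# Worker-node sizing sketch: (pods per node x per-pod RAM) + ~2 GiB overhead
# for Kubernetes system pods. Figures are illustrative, not authoritative.

def node_ram_needed_mib(per_pod_ram_mib: int, pods_per_node: int,
                        k8s_overhead_mib: int = 2048) -> int:
    """Return the minimum RAM (MiB) a worker node must provide."""
    return per_pod_ram_mib * pods_per_node + k8s_overhead_mib

def fits(node_ram_gib: int, per_pod_ram_mib: int, pods_per_node: int) -> bool:
    """Check whether a node with node_ram_gib GiB of RAM can host the pods."""
    return node_ram_gib * 1024 >= node_ram_needed_mib(per_pod_ram_mib, pods_per_node)

# Four messaging pods at ~6,912 MiB each on a 32 GiB node:
print(fits(32, 6912, 4))  # 4 * 6912 + 2048 = 29,696 MiB <= 32,768 MiB -> True
```

The same check applies per worker node, so in a mixed layout (messaging nodes plus a monitoring node) each node is validated against its own pod mix.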

Multiple Mission Control Agents per Kubernetes Cluster

It is also possible to run more than one Mission Control Agent in one Kubernetes cluster. A customer may wish to have a single cluster serve multiple environments such as development, QA, production, and so on. Installing multiple Mission Control Agents in the same Kubernetes cluster allows these customer environments to reside together in the same cluster.

The Mission Control Agent requires a namespace that is fully dedicated to it; therefore, running multiple Mission Control Agents requires multiple namespaces.

Each Mission Control Agent represents one data center in PubSub+ Cloud, which means that a Kubernetes cluster with multiple Mission Control Agents hosts multiple data centers from the PubSub+ Cloud Console point of view. The worker nodes are shared amongst these Mission Control Agents; it is therefore important that the worker nodes provide enough resources to schedule all the services created by the different Mission Control Agents. When worker nodes are sized to run multiple software event broker pods, pods from different Mission Control Agents can also be scheduled on the same node.

Dynamic Volume Provisioning

PubSub+ Cloud requires dynamic volume provisioning, which is requested through Persistent Volume Claims managed by Kubernetes StatefulSets. The infrastructure must therefore provide a storage backend supported by one of the Kubernetes storage plugins. In addition, PubSub+ Cloud requires a block storage backend for dynamic volume provisioning (filesystem-based backends are not supported).

To achieve data encryption at rest for software event broker messages and configuration, the storage backend must provide encrypted volumes; the event broker service itself does not provide data encryption at rest.
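To make the dynamic-provisioning requirement concrete, the sketch below shows the shape of the PersistentVolumeClaim a StatefulSet's volume claim template generates, expressed as a Python dict. The claim name, storage class name ("encrypted-block"), and size are assumptions for illustration only; actual names come from your deployment, and sizes come from the volume-size tables later in this section.

```python
import json

# Illustrative PersistentVolumeClaim shape for dynamic block-volume
# provisioning. All names and the size are hypothetical examples.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "data-pubsubplus-0"},      # hypothetical claim name
    "spec": {
        "accessModes": ["ReadWriteOnce"],           # block storage, single node
        "storageClassName": "encrypted-block",      # backend must encrypt at rest
        "resources": {"requests": {"storage": "260Gi"}},
    },
}

print(json.dumps(pvc, indent=2))
```

The storage class is where the encryption-at-rest requirement is satisfied: it must point at a backend that provisions encrypted block volumes, because the broker does not encrypt its own data.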

Supported Storage Solutions

PubSub+ Cloud has been tested to work with the following storage solutions:

  • Portworx
  • Ceph
  • Cinder (OpenStack)
  • vSphere storage for Kubernetes

Do not use the Network File System (NFS) protocol as part of your storage solution with PubSub+ Cloud.

Volume Performance Requirements

PubSub+ Cloud requires a storage backend that is capable of providing the following performance:

Requirement    Latency   IOPS
Minimum        3 ms      5 K
Recommended    1 ms      20 K

Latency and I/O operations per second (IOPS) are the most important dimensions. Performance below the minimum threshold severely impairs guaranteed messaging performance.

Compute Resource Requirements

The term allocatable cores refers to the CPU capacity that the Kubernetes/OpenShift cluster must be able to provide to the pods used for PubSub+ Cloud. For example, an m5.xlarge instance in AWS has 4,000 mCores of CPU capacity, but only 3,500 mCores can be requested by pods. It is important to account for this difference when planning the capacity of a Kubernetes cluster.
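The allocatable-core idea above reduces to a simple check: pod CPU requests must fit within the node's allocatable mCores, not its raw mCores. The m5.xlarge figures are from the text; the 2,800 mCore figure is the Enterprise 250 total CPU request from the table below.

```python
# Capacity check against allocatable (not raw) CPU. Figures from the text:
# an m5.xlarge has 4,000 raw mCores but only 3,500 allocatable mCores.

RAW_MCORES = 4000          # m5.xlarge raw capacity
ALLOCATABLE_MCORES = 3500  # what pods can actually request

def fits_on_node(total_request_mcores: int) -> bool:
    """True if the total pod CPU request fits the node's allocatable mCores."""
    return total_request_mcores <= ALLOCATABLE_MCORES

print(fits_on_node(2800))  # Enterprise 250 total CPU request -> True
print(fits_on_node(3800))  # under raw capacity, but over allocatable -> False
```

Note the second case: a request that fits the instance's raw capacity can still fail to schedule, which is exactly the planning trap this section warns about.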

The first of the following tables provides the compute resource requirements for high-availability event broker services; the second provides them for developer and standalone event broker services. Note that standalone event broker services are not available by default; contact Solace for more information.

Compute Resources for High Availability Event Broker Services

The CPU Request and CPU Limit numbers in the table below include a 200 mCore requirement per monitoring agent. Each high-availability event broker service has three monitoring agents, one for each of the brokers that compose the service (primary, backup, and monitoring).

Service Class     Pod Type               CPU Request   Total CPU Request   CPU Limit   Total CPU Limit
                  (HA Role)              (mCores)      (mCores)            (mCores)    (mCores)
Enterprise 250    Primary Messaging      1,250         2,800               2,200       5,600
                  Backup Messaging       1,250                             2,200
                  Monitoring             300                               1,200
Enterprise 1K     Primary Messaging      1,250         2,800               2,200       5,600
                  Backup Messaging       1,250                             2,200
                  Monitoring             300                               1,200
Enterprise 5K     Primary Messaging      3,200         6,700               4,200       9,600
                  Backup Messaging       3,200                             4,200
                  Monitoring             300                               1,200
Enterprise 10K    Primary Messaging      3,200         6,700               4,200       9,600
                  Backup Messaging       3,200                             4,200
                  Monitoring             300                               1,200
Enterprise 50K    Primary Messaging      7,200         14,700              8,200       17,600
                  Backup Messaging       7,200                             8,200
                  Monitoring             300                               1,200
Enterprise 100K   Primary Messaging      7,200         14,700              8,200       17,600
                  Backup Messaging       7,200                             8,200
                  Monitoring             300                               1,200

Compute Resources for Developer and Standalone Event Broker Services

The CPU Request and CPU Limit numbers in the table below include a 200 mCore requirement for the monitoring agent that is included with the standalone event broker service.

Service Class CPU Request
(mCores)
CPU Limit
(mCores)

Developer

1,250

2,200

Enterprise 250 Standalone

1,250

2,200

Enterprise 1K Standalone

1,250

2,200

Enterprise 5K Standalone

3,200

4,200

Enterprise 10K Standalone

3,200

4,200

Enterprise 50K Standalone

7,200

8,200

Enterprise 100K Standalone

7,200

8,200

Memory Resource Requirements

The term allocatable RAM refers to the amount of memory that the Kubernetes/OpenShift cluster must be able to provide to the pods used for PubSub+ Cloud. For example, an m5.xlarge instance in AWS has 16,220,452 KiB of RAM, but only 15,069,476 KiB can be allocated to pods. It is important to account for this difference when planning the capacity of a Kubernetes cluster.

Enabling the Message Retain feature with a 2 GB memory buffer increases the memory requirement of each service class by 2048 MiB.
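The Message Retain adjustment above is a flat addition, easy to fold into any sizing calculation. The 7,471.0 MiB base figure used in the example is the Enterprise 1K primary messaging memory request (version 10.6 and later) from the table below; the function name is illustrative.

```python
# Enabling Message Retain with a 2 GB buffer adds 2,048 MiB to the memory
# requirement of each service class (rule stated in the text above).

RETAIN_BUFFER_MIB = 2048

def memory_request_mib(base_mib: float, retain_enabled: bool) -> float:
    """Per-pod memory request, optionally including the Message Retain buffer."""
    return base_mib + RETAIN_BUFFER_MIB if retain_enabled else base_mib

print(memory_request_mib(7471.0, retain_enabled=True))   # 9519.0
print(memory_request_mib(7471.0, retain_enabled=False))  # 7471.0
```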

The first of the following tables provides the memory resource requirements for high-availability event broker services; the second provides them for developer and standalone event broker services. Note that standalone event broker services are not available by default; contact Solace for more information.

Memory Resources for High-Availability Event Broker Services

The memory request and memory limit numbers in the following table include memory requirements for the monitoring agent. High-availability event broker services have three monitoring agents, one for each of the brokers that compose the service (primary, backup, and monitoring). The monitoring agent requirements are:

  • Memory request for all versions: 256 MiB

  • Memory limit for versions 10.5 and earlier: 256 MiB

  • Memory limit for versions 10.6 and later: 512 MiB

All values below are for the instance type without Retain. Memory figures are per pod; the Total row gives the per-service totals. Ephemeral storage request and limit are per pod, with the per-service total on the Total row.

Service Class     Pod Type             Memory Request (MiB)                   Memory Limit (MiB)                     Ephemeral Storage
                  (HA Role)            10.4 and                 10.6 and      10.4 and                 10.6 and      Request and Limit
                                       earlier      10.5        later         earlier      10.5        later         (GiB)
Enterprise 250    Primary Messaging    6,912.0      6,912.0     7,471.0       6,912.0      6,912.0     7,727.0       2.25
                  Backup Messaging     6,912.0      6,912.0     7,471.0       6,912.0      6,912.0     7,727.0       2.25
                  Monitoring           2,304.0      2,304.0     2,304.0       2,304.0      2,304.0     2,560.0       2.25
                  Total                16,128.0     16,128.0    17,246.0      16,128.0     16,128.0    18,014.0      6.75
Enterprise 1K     Primary Messaging    6,912.0      6,912.0     7,471.0       6,912.0      6,912.0     7,727.0       2.25
                  Backup Messaging     6,912.0      6,912.0     7,471.0       6,912.0      6,912.0     7,727.0       2.25
                  Monitoring           2,304.0      2,304.0     2,304.0       2,304.0      2,304.0     2,560.0       2.25
                  Total                16,128.0     16,128.0    17,246.0      16,128.0     16,128.0    18,014.0      6.75
Enterprise 5K     Primary Messaging    14,489.6     21,555.2    24,985.0      14,489.6     21,555.2    25,241.0      2.25
                  Backup Messaging     14,489.6     21,555.2    24,985.0      14,489.6     21,555.2    25,241.0      2.25
                  Monitoring           2,304.0      2,304.0     2,304.0       2,304.0      2,304.0     2,560.0       2.25
                  Total                31,283.2     47,871.24   52,274.0      31,283.2     47,871.24   53,042.0      6.75
Enterprise 10K    Primary Messaging    14,489.6     21,555.2    24,985.0      14,489.6     21,555.2    25,241.0      2.25
                  Backup Messaging     14,489.6     21,555.2    24,985.0      14,489.6     21,555.2    25,241.0      2.25
                  Monitoring           2,304.0      2,304.0     2,304.0       2,304.0      2,304.0     2,560.0       2.25
                  Total                31,283.2     47,871.24   52,274.0      31,283.2     47,871.24   53,042.0      6.75
Enterprise 50K    Primary Messaging    31,283.2     40,396.8    43,475.0      31,283.2     40,396.8    43,731.0      2.25
                  Backup Messaging     31,283.2     40,396.8    43,475.0      31,283.2     40,396.8    43,731.0      2.25
                  Monitoring           2,304.0      2,304.0     2,304.0       2,304.0      2,304.0     2,560.0       2.25
                  Total                64,870.4     81,458.4    89,254.0      64,870.4     81,458.4    90,022.0      6.75
Enterprise 100K   Primary Messaging    31,283.2     40,396.8    43,475.0      31,283.2     40,396.8    43,731.0      2.25
                  Backup Messaging     31,283.2     40,396.8    43,475.0      31,283.2     40,396.8    43,731.0      2.25
                  Monitoring           2,304.0      2,304.0     2,304.0       2,304.0      2,304.0     2,560.0       2.25
                  Total                64,870.4     81,458.4    89,254.0      64,870.4     81,458.4    90,022.0      6.75

Memory Resources for Developer and Standalone Event Broker Services

The memory request and memory limit numbers in the following table include memory requirements for the monitoring agent that is included with the standalone event broker service. The monitoring agent requirements are:

  • Memory request for all versions: 256 MiB

  • Memory limit for versions 10.5 and earlier: 256 MiB

  • Memory limit for versions 10.6 and later: 512 MiB

All values below are for the instance type without Retain.

Service Class               Memory Request (MiB)                   Memory Limit (MiB)                     Ephemeral Storage
                            10.4 and                 10.6 and      10.4 and                 10.6 and      Request and Limit
                            earlier      10.5        later         earlier      10.5        later         (GiB)
Developer                   6,912.0      6,912.0     7,471.0       6,912.0      6,912.0     7,727.0       2.25
Enterprise 250 Standalone   6,912.0      6,912.0     7,471.0       6,912.0      6,912.0     7,727.0       2.25
Enterprise 1K Standalone    6,912.0      6,912.0     7,471.0       6,912.0      6,912.0     7,727.0       2.25
Enterprise 5K Standalone    14,489.6     21,555.2    24,985.0      14,489.6     21,555.2    25,241.0      2.25
Enterprise 10K Standalone   14,489.6     21,555.2    24,985.0      14,489.6     21,555.2    25,241.0      2.25
Enterprise 50K Standalone   31,283.2     40,396.8    43,475.0      31,283.2     40,396.8    43,731.0      2.25
Enterprise 100K Standalone  31,283.2     40,396.8    43,475.0      31,283.2     40,396.8    43,731.0      2.25

Message Spool Size Requirements

The following tables list the default message spool size for each service class and the resulting persistent disk space required.

The Developer service class is a standalone messaging node (one pod). The Enterprise 250 and larger high-availability service classes are HA groups, which require two messaging pods (primary and backup) and one monitoring pod.

The first table provides the message spool size requirements for high-availability event broker services; the second provides them for standalone event broker services. Note that standalone event broker services are not available by default; contact Solace for more information.

Volume Size for High-Availability Event Broker Services

Service Class     Message Spool Size                   Persistent Disk Space Requirement
                  10.6.1 and      10.7.1 and           10.6.1 and       10.7.1 and
                  earlier         later                earlier          later
Enterprise 250    25 GB           50 GB                70 GiB x 2       65 GiB x 2
Enterprise 1K     50 GB           200 GB               120 GiB x 2      260 GiB x 2
Enterprise 5K     200 GB          400 GB               420 GiB x 2      520 GiB x 2
Enterprise 10K    300 GB          600 GB               620 GiB x 2      780 GiB x 2
Enterprise 50K    500 GB          800 GB               1,020 GiB x 2    1,040 GiB x 2
Enterprise 100K   500 GB          1,000 GB             1,020 GiB x 2    1,300 GiB x 2
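The "x 2" in the table above reflects that an HA service provisions the per-broker volume twice, once for the primary and once for the backup. The sketch below encodes that rule; the per-broker figures are transcribed from the version 10.7.1-and-later column, and the helper name is illustrative.

```python
# Total persistent disk for an HA service = per-broker volume x 2 brokers
# (primary + backup). Per-broker GiB values transcribed from the table
# (version 10.7.1 and later column).

DISK_GIB_PER_BROKER = {
    "Enterprise 250": 65,
    "Enterprise 1K": 260,
    "Enterprise 5K": 520,
    "Enterprise 10K": 780,
    "Enterprise 50K": 1040,
    "Enterprise 100K": 1300,
}

def total_ha_disk_gib(service_class: str) -> int:
    """Total persistent disk for an HA service (two messaging brokers)."""
    return DISK_GIB_PER_BROKER[service_class] * 2

print(total_ha_disk_gib("Enterprise 1K"))  # 520
```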

Volume Size for Standalone Event Broker Services

Service Class               Message Spool Size                   Persistent Disk Space Requirement
                            10.6.1 and      10.7.1 and           10.6.1 and     10.7.1 and
                            earlier         later                earlier        later
Developer                   20 GB           25 GB                40 GiB         35 GiB
Enterprise 250 Standalone   25 GB           50 GB                70 GiB         65 GiB
Enterprise 1K Standalone    50 GB           200 GB               120 GiB        260 GiB
Enterprise 5K Standalone    200 GB          400 GB               420 GiB        520 GiB
Enterprise 10K Standalone   300 GB          600 GB               620 GiB        780 GiB
Enterprise 50K Standalone   500 GB          800 GB               1,020 GiB      1,040 GiB
Enterprise 100K Standalone  500 GB          1,000 GB             1,020 GiB      1,300 GiB

Mission Control Agent Pod

The Mission Control Agent has the following resource requirements:

Type      Request     Limit
CPU       750m        750m
Memory    1024 MiB    1024 MiB

Approximately once per week, Solace upgrades the Mission Control Agent using a rolling upgrade. The upgrade requires double the resources listed above to run successfully. In an auto-scaling environment, Kubernetes provides these resources as required; in non-auto-scaling environments, you must account for the additional resources required during the upgrade, including ensuring that any resource quotas applied to the namespace accommodate the rolling upgrade.
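The headroom rule above is simple doubling: during a rolling upgrade the old and new Mission Control Agent pods run side by side, so the namespace must allow twice the steady-state request. The sketch below illustrates the arithmetic; the dictionary shape and function name are illustrative, not any Solace API.

```python
# Rolling-upgrade headroom: two Mission Control Agent pods run concurrently,
# so the namespace quota must cover double the steady-state request
# (750 mCores CPU, 1024 MiB memory, from the table above).

STEADY_STATE = {"cpu_mcores": 750, "memory_mib": 1024}

def upgrade_quota(requirements: dict) -> dict:
    """Minimum namespace quota that leaves room for a rolling upgrade."""
    return {k: v * 2 for k, v in requirements.items()}

print(upgrade_quota(STEADY_STATE))  # {'cpu_mcores': 1500, 'memory_mib': 2048}
```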

Load Balancer Rules Per Service (Default Protocols and Port Configuration)

If you choose to use NodePort, the port numbers may not map directly to the ports listed in the tables below. To see the actual ports, look at the service's connection information in the PubSub+ Cloud Console.

By default, nine rules are available per service when plain-text protocols are disabled. These nine rules cover all non-plain-text messaging protocols and the management protocols (SEMP, SEMP-TLS, and SSH). With all plain-text protocols enabled, up to sixteen protocols can be enabled per service.

After PubSub+ Cloud is deployed, the customer can modify the rules when they create an event broker service (either via Cluster Manager in the PubSub+ Cloud Console or via the REST API). The customer can disable protocols/ports, change the port numbers used, or enable additional protocols (such as the plain-text variants). For example, a customer can choose to enable plain-text REST (port 9000), in which case only that service has that port enabled. For more information, see Configuring Client and Management Ports.

Management Connectivity Protocols and Ports

The following table lists the protocol ports, indicates whether each port is enabled by default when an event broker service is created, and describes the protocol used for management connectivity (management traffic).

Port   Enabled by Default    Protocol and Description
8080   Yes/No (see note)     SEMP (plain-text). This port is disabled by default on event broker services created after December 2020.
22     Yes                   Secure Shell (SSH)
943    Yes                   SEMP over TLS

Messaging Connectivity Protocols and Ports

The following table lists the protocol ports, indicates whether each port is enabled by default when an event broker service is created, and describes the protocol used for messaging connectivity (data traffic).

Port    Enabled by Default   Protocol and Description
443     Yes                  Secured Web Transport (TLS/SSL)
5671    Yes                  Secured AMQP
8443    Yes                  Secured MQTT over WebSocket
8883    Yes                  Secured MQTT (TLS/SSL)
9443    Yes                  Secured REST (TLS/SSL)
55443   Yes                  Secured SMF (TLS/SSL, without compression)
80      No                   Web Transport
1883    No                   MQTT (plain-text)
5672    No                   AMQP (plain-text)
8000    No                   MQTT over WebSocket (plain-text)
9000    No                   REST (plain-text)
55003   No                   SMF (compressed)
55555   No                   Solace Message Format (SMF) (plain-text)
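The messaging port defaults above can be captured as data for use in firewall or load-balancer automation. The dictionary and helper below are illustrative only (not part of any Solace API); values are transcribed from the table.

```python
# Default-enabled state of the messaging connectivity ports, transcribed from
# the table above. The structure and helper name are illustrative.

MESSAGING_PORTS = {
    443: ("Secured Web Transport", True),
    5671: ("Secured AMQP", True),
    8443: ("Secured MQTT over WebSocket", True),
    8883: ("Secured MQTT", True),
    9443: ("Secured REST", True),
    55443: ("Secured SMF", True),
    80: ("Web Transport", False),
    1883: ("MQTT plain-text", False),
    5672: ("AMQP plain-text", False),
    8000: ("MQTT over WebSocket plain-text", False),
    9000: ("REST plain-text", False),
    55003: ("SMF compressed", False),
    55555: ("SMF plain-text", False),
}

def default_enabled_ports() -> list:
    """Messaging ports open by default on a newly created event broker service."""
    return sorted(p for p, (_, enabled) in MESSAGING_PORTS.items() if enabled)

print(default_enabled_ports())  # [443, 5671, 8443, 8883, 9443, 55443]
```

Only the secured protocol ports are open by default; the plain-text variants must be enabled explicitly per service.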