Deploying PubSub+ Software Event Brokers in a Production Environment

The following sections are intended to help you successfully deploy highly available (HA) instances of PubSub+ software event brokers in a production environment.

Virtual Machine Deployment Concerns

PubSub+ software event brokers are commonly deployed into virtual machine (VM) environments. In these cases, you create a Linux VM according to your corporate standards using a standardized image based on one of the major distributions (Red Hat Enterprise Linux or Ubuntu are common). The VM is deployed using a hypervisor (VMware is most common, but the choice of hypervisor is not important as long as it can run Linux VMs). Corporate IT teams have experience managing and securing these types of systems.

PubSub+ Event Broker Scale

Before deploying the VMs, you must consider the resource requirements of the event broker instances that will be deployed into them. An HA deployment requires three instances (primary, backup, and monitor). The primary and backup instances are the message routing nodes that process the events; they should be identical because either could be active at any time. The monitor node's function is to provide a majority vote as to which of the message routing nodes should be active if they cannot communicate with each other (preventing a split-brain failure where both nodes become active). For more information, see High Availability for Software Event Brokers. The monitor node requires fewer resources than the message routing nodes.

To determine the resources that the VMs require, Solace provides the System Resource Calculator tool. Using this tool, you create an event broker configuration and the tool outputs the set of resource requirements that the VM must provide to the PubSub+ event broker instance. Once you have created an event broker configuration that matches the requirements of the deployment, record the following outputs of the tool for later use:

For each of the following System Resource Calculator outputs, record the value for the message routing node containers (Messaging) and the value for the monitor node container (Monitoring):

CPUs
The number of CPU cores that the PubSub+ event broker instance requires. These may be shared with the host operating system.

Host Virtual Memory
The amount of memory that must be available in the host system. This amount is in addition to the requirements of the host operating system (2 GB of this may be swap).

Memory CGroup Limit
The amount of host virtual memory allocated to the container.

Posix Shared Memory (/dev/shm)
The amount of memory that must be allocated to /dev/shm. This amount is accounted for in the previous two memory limits but must be specified when creating the container.

Container Runtime Backing Store
The storage that must be allocated to the container runtime for storing container images and the read-write layer of container instances.

Storage
The storage allocated to the storage-group of the event broker. It contains all of the state associated with the event broker instance and is often allocated on a storage volume that is not part of the VM's root filesystem.

Docker Compose Environment Variables
Environment variables used to provide a bootstrap configuration to a new instance on first boot. The output of the System Resource Calculator contains environment variables used to configure the size of storage elements and the system scaling parameters.

From the above, you can determine the characteristics of the VMs to be deployed. The critical resources that must be allocated at the time the VM is deployed are the number of CPU cores, amount of memory and additional storage volumes. The number of CPU cores allocated to the VM must be at least equal to the output from the System Resource Calculator. The amount of memory allocated to the VM must be at least the Host Virtual Memory output from the calculator plus the amount required by the host operating system. The best practice is to allocate the storage-group used by PubSub+ on a separate storage volume from the host’s root filesystem. The storage allocated to the storage-group must be at least equal to the size output from the calculator.
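
For example, if the calculator outputs 2 CPUs and 6 GB of Host Virtual Memory for the message routing nodes (hypothetical values), and the host operating system requires roughly 1 GB, each message routing VM should be provisioned with at least 2 CPU cores, at least 7 GB of memory, and a separate storage volume at least as large as the Storage output.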

The characteristics of the storage used to host the storage-group have a large effect on event broker performance. The storage-group must be hosted on a volume backed by solid state storage. Latency, throughput, and IOPS all affect the performance that PubSub+ can achieve.

The two message routing nodes have identical resource requirements for CPUs, memory, and storage. The monitor node has lower resource requirements. For details, see the Container (Monitoring) outputs from the calculator.

Configuring The Virtual Machine Host

Now that the VMs have been deployed, you must perform some configuration in the host operating system (OS) to prepare them for deploying PubSub+ event brokers.

Configuring the Container Runtime

A container runtime is a software component that uses features of the Linux kernel (cgroups and namespaces) to provide isolation between applications. The applications managed by the container runtime are packaged as containers. A container image contains a Linux user space environment packaged with the application binaries. The images are stored locally in the union filesystem (installed and configured as a part of the container runtime). The union filesystem is a copy-on-write filesystem (of which there are several options; each host distribution / container runtime has a preferred option). When a new container is created, the container runtime creates a new read-write layer on top of the container image, and that becomes the root filesystem for processes running inside the container. The container runtime configures cgroups and namespaces to provide isolation between containers.

All of the major Linux distributions include container runtime packages in their standard repositories. Installing these packages configures everything you require to run a container. Podman is a good choice of container runtime and is supported by most major Linux distributions. The container runtime should not need additional configuration; the package installers do a good job of configuring it for the host OS. The container runtimes provide configuration files for entering any non-default configuration that is required.
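
For example, the runtime can be installed from the standard repositories (package names shown are the common distribution defaults):

# Red Hat Enterprise Linux and derivatives
sudo dnf install podman

# Ubuntu
sudo apt-get install podman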

Source of Time

The host operating system must provide an accurate source of time to the PubSub+ software. There are multiple ways to configure time synchronization between hosts connected to a network; you can configure this according to corporate-wide standards. The host must be synchronized with the rest of the network for proper operation of PubSub+.
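
For example, chrony is a common choice; a minimal sketch (the service is named chronyd on Red Hat Enterprise Linux and chrony on Ubuntu):

sudo dnf install chrony
sudo systemctl enable --now chronyd

# verify that the host is synchronized
chronyc tracking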

Securing the Host

The use of a VM image that is designed to be compliant with your corporate security policy makes it much easier to configure the host in a way that conforms to corporate security policies. This means that it can be managed in the same way as any other host on your network. The VM is expected to be pre-hardened. You can deploy patches for security vulnerabilities using standard operating procedures (without any reliance on Solace to provide patches). You can also manage user accounts in the host according to corporate policies. Security features of the PubSub+ event broker container and details on how to secure its deployment are covered in subsequent sections.

Configuring Storage

The installation of the container runtime using the native package manager configures the union filesystem used to store container images and the read-write layer associated with any container instances. The state associated with a PubSub+ event broker instance should use a filesystem mount, both for performance and for ease of managing the instance in the future (as some of this data is long lived). A mount allows a file or directory in the host to be accessed at a location inside a container instance (bypassing the union filesystem). It is common to use a separate device from the one that hosts the operating system's root filesystem. If a dedicated device is attached to the VM for this purpose, it must have a filesystem and a mount location in the host's filesystem. PubSub+ requires the filesystem hosting the storage-group to be built with XFS. Note the filesystem and its mount location in the host so that it can be mounted into the PubSub+ event broker container in subsequent steps.
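
For example, a minimal sketch assuming a dedicated device /dev/sdb (hypothetical) and the mount location used in the sample configuration later in this section:

sudo mkfs.xfs /dev/sdb
sudo mkdir -p /mnt/solace/storage-group
sudo mount /dev/sdb /mnt/solace/storage-group

# persist the mount across reboots
echo '/dev/sdb /mnt/solace/storage-group xfs defaults 0 0' | sudo tee -a /etc/fstab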

Other Host Concerns

Core files are an important debugging tool in the event of a crash of one of the processes running inside the PubSub+ event broker container. Core files can be large, up to several gigabytes. The location to which the core files are written is configured in the host. When setting the ulimits for the container, it is recommended that --ulimit core=-1 be set to remove the limit on the size of core files that can be generated. You can configure the destination of the core files to be a location in the filesystem or a script. If configuring a script, the contents of the core files can be compressed to save space in the filesystem. For details on how to configure the host's core pattern, see Managing Core Files for Container Images.
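
For example, a minimal sketch that writes core files to a dedicated directory (the path /var/cores is illustrative):

sudo mkdir -p /var/cores
sudo sysctl -w kernel.core_pattern=/var/cores/core.%e.%p

# persist the setting across reboots
echo 'kernel.core_pattern=/var/cores/core.%e.%p' | sudo tee /etc/sysctl.d/90-core-pattern.conf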

Creating the PubSub+ Event Broker Container

This section outlines the steps for creating a PubSub+ event broker container. It walks through the process of defining the command line that can be used with a container runtime to create a production-ready instance of PubSub+. Podman and Docker are the two most popular container runtimes and their command-line syntax is largely compatible. The command lines created in this section are expected to work with either.

For example systemd unit files that contain sample command lines for Podman, see Sample Systemd Unit File. The following sections describe the considerations behind all of the command-line options used in these samples.

Podman (and recent versions of Docker) support what is called rootless mode. While the PubSub+ event broker container runs without the need for any root privileges, there are some advantages to using sudo access to create your container (in particular as it relates to networking). This section assumes the use of a rootful container (a rootful container is created using sudo podman). An important point to note is that rootless or rootful containers have nothing to do with the privileges of the processes running inside the container. These terms refer to the privileges of the process that created the container. It is possible to have a rootful container that is running without any root privileges. The PubSub+ event broker container is an example of a container that can run in this mode. For more information about the issues related to rootless containers, see Rootless Containers.

Allocating Resources

While deploying the VM, you examined the resource requirements of the PubSub+ event broker using the System Resource Calculator and recorded the outputs from the tool. If the PubSub+ event broker is the only container that is going to be running on the instance, then contention for resources is not a large concern. If there are multiple containers running on the VM, then limiting the resources allocated to each container is recommended. The goal is to prevent resource contention that could adversely affect the function of PubSub+.

You can use the following command-line options with the container runtime to limit the resources that PubSub+ has access to. If not specified, a container may use all of the resources available in the VM.

For each resource, the relevant command-line options and notes are described below.

Memory:

PubSub+ memory consumption is largely static. From the output of the System Resource Calculator, you can specify the Memory CGroup Limit on the command line used to create the container.

--memory=xxG and --memory-swap=xxG
  • Can be used to restrict the amount of memory the processes inside the container are allowed to use.

  • These two numbers must add up to at least the amount of memory specified in the Memory CGroup Limit output from the System Resource Calculator.

  • If PubSub+ is the only container to be running in the VM, there is no need to set these limits.

Shm:

PubSub+ requires some shared memory for communication between processes inside the container. The amount is specified as an output from the System Resource Calculator.

--shm-size=xxG
  • Must be specified; the default value is insufficient, resulting in an event broker that will not start (due to a POST failure).

  • The amount is indicated in the Posix Shared Memory output of the System Resource Calculator.

CPU:

PubSub+ requires access to a minimum number of CPU cores to start. The System Resource Calculator outputs specify the minimum number of CPU cores for a given event broker configuration. You can specify the CPUs or number of CPUs that a container has access to.

--cpus=x or --cpuset-cpus=w,x,y,z
  • Must specify at least as many CPUs as indicated by the CPUs output of the System Resource Calculator.

  • If PubSub+ is the only container to be running in the VM, there is no need to set these limits.

Ulimits:

Used to set other resource limits within the PubSub+ container. It is recommended to set non-standard limits on the number of files that PubSub+ is allowed to have open and to remove the limit on the size of core files. The System Resource Calculator recommends values for these limits in the outputs.

--ulimit nofile=xxx:yyy
--ulimit core=-1

User:

The PubSub+ container has a default UID:GID of 1000001:0. These IDs are used by processes inside the container unless otherwise specified on the command line you use to create the container. The PubSub+ container can run with any UID. Best practice is to use a UID that is not used by any user configured in the host. If using a rootless container (with user namespace remapping), there are other considerations around the choice of user.

--user UID:GID

The values for UID:GID that you specify for the container are also required when setting the permissions of the storage-group. If running a rootless container, there are special considerations for the choice of UID:GID, see Rootless Containers for details.

Storage

In previous steps, consideration was given to the amount of storage allocated to the storage-group and its location in the host's filesystem. This directory is mounted into the PubSub+ container at /var/lib/solace and must be writeable by the user specified in previous steps. For more information, see Managing Storage for Container Images.

The following command-line option can be used to mount a host location into the container at /var/lib/solace. This is the location for the storage-group where all of the event broker’s state is stored.

--mount type=bind,source=<directory-path-on-vm>,target=/var/lib/solace
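
For example, assuming the default UID:GID of 1000001:0 and the host location used in the sample configuration later in this section, the storage-group directory can be prepared as follows:

sudo mkdir -p /mnt/solace/storage-group
# the container user must be able to write to the storage-group
sudo chown -R 1000001:0 /mnt/solace/storage-group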

Networking

There are two common networking options when deploying a container: host and bridge networking.

With host networking, the container is deployed in the host’s network namespace; processes running inside the container see the network the same as any process running on the host. Host networking is a good option if the container is the only container to be deployed on the host. If multiple containers are to be deployed on the host, port collisions become a concern (and some form of bridge networking is more appropriate). There are some security concerns around host networking, but so long as the containers being deployed in the host’s network namespace are from a trusted source, this poses a low risk (the risk is that processes in the container could access system resources).

With bridge networking, the container is deployed in a separate network namespace from the host. The container runtime configures forwarding rules to bridge traffic between the host and container namespaces. When using bridge networking, the ports that processes inside the container are listening to must be configured so that the container runtime can create rules to direct traffic bound for these ports to the container. Bridge networking poses some problems for rootless containers because a non-privileged user is not able to create the forwarding rules. For more information, see Container Networking.

The following command-line option specifies the network that the container is connected to:

--net=host or --net=bridge (or --net=slirp4netns if using rootless containers)

If you are using bridge networking, you must specify which host ports map to which container ports:

--publish host_port:container_port

For a list of ports that a PubSub+ event broker listens to by default, see Default Port Numbers.

There are also ports that you can configure after the event broker is deployed. Consider these when creating the container, because ports cannot be published after the container is created (however, the container can be recreated later without loss of configuration).

Logging

There are multiple ways to configure logging in a PubSub+ container. Most users configure PubSub+ event brokers to forward logging data to a remote syslog server via the Solace CLI. PubSub+ can also direct logging data to stdout to be compatible with logging facilities that are native to the container runtime (see Configuring Container Logging).

Hostname and DNS

It is best practice to assign the PubSub+ instance a unique hostname (distinct from the host system's) even if it resolves to the IP address of the host (which in most cases it does). A unique hostname makes it easier to move the container to a new host VM in the future (by updating the DNS entry to point to the new host's IP address).

The following command-line option specifies the hostname available inside the container. To specify the hostname, the container must be in a UTS namespace (which is the default behavior).

--hostname=name

System Scaling Parameters

System scaling parameters are settings in PubSub+ that affect the scaling limits of the event broker. Increasing a scaling parameter requires more resources to be allocated to the container. The configuration of the scaling parameters is generally developed using the System Resource Calculator (as described in previous steps).

In a previous step, you recorded the environment variables from the docker-compose output of the System Resource Calculator. These environment variables are used to configure the system scaling parameters of the event broker instance. You can specify environment variables on the command line used to create the PubSub+ container. PubSub+ detects these variables on first boot and uses them to create a bootstrap configuration for the new PubSub+ instance. For each variable in the docker-compose output of the System Resource Calculator, add an item to the command line.

The following command-line option creates an environment variable inside the container:

--env variable-name=value
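
For example, scaling parameters from the calculator output appear on the command line as follows (the values shown are illustrative; both variables appear in the sample configuration file later in this section):

--env 'routername=primary'
--env 'system_scaling_maxconnectioncount=1000'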

Configuration Keys

Configuration keys work in a very similar way to the system scaling parameters. These keys are used to create a bootstrap configuration for the event broker instance on first boot. You can set them using environment variables as with the system scaling parameters. A complete list of configuration keys can be found here.

Creating a Default Account

The PubSub+ event broker requires a default admin account to be configured. You can configure this default account using configuration keys.

The following keys are used to configure a user account:

username/<name>/encryptedpassword

This configuration key creates user <name> and allows you to specify a SHA-512 salted hash of the user’s password.

username/<name>/globalaccesslevel

This configuration key specifies the global access level for user <name>; at least one admin-level user is required.
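
For example, a minimal sketch that creates an admin-level user named admin, using the environment-variable form of these keys shown in the sample configuration file later in this section. Assuming the broker accepts a crypt-style SHA-512 salted hash, one can be generated with OpenSSL:

# prompts for the password and prints the salted hash
openssl passwd -6

--env 'username_admin_globalaccesslevel=admin'
--env 'username_admin_encryptedpassword=<salted-hash>'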

Server Certificate

To establish a secure connection to the event broker you must configure a server certificate. Configuring the server certificate is an important part of the bootstrap configuration because it is required to access PubSub+ Broker Manager over HTTPS (to securely complete the configuration of the event broker) and to secure the communication between the nodes of an HA group.

You can configure the server certificate using configuration keys. The best practice is to securely transfer the certificate file (including the private key) to a tmpfs-backed file (RAM-backed, so that the certificate is never written to persistent storage) and mount that file inside the container.
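
For example, a minimal sketch (paths are illustrative; on most distributions /run is already tmpfs-backed). The mount target matches the filepath configuration key used in the sample environment file at the end of this section:

sudo mkdir -p /run/solace-secrets
sudo cp cert.pem /run/solace-secrets/cert.pem

--mount type=bind,source=/run/solace-secrets/cert.pem,target=/run/secrets/cert.pem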

There are two configuration keys related to setting the server certificate:

tls/servercertificate/filepath
tls/servercertificate/passphrasefilepath
  • These keys are designed to be compatible with common ways to inject sensitive configuration into containers (commonly referred to as secrets). A secret can be used to put the certificate into the container and then the configuration keys tell the PubSub+ software where to access it.

  • The second configuration key is only required if the certificate file is password protected.

Redundancy Configuration

You can use configuration keys to configure redundancy so that an HA group of PubSub+ event brokers is established as part of the bootstrap configuration. Once this is done, you can complete the configuration of the event broker securely using PubSub+ Broker Manager, with Config-Sync synchronizing the configuration between the two message routing nodes.

The following configuration keys are used to configure redundancy:

nodetype
redundancy/activestandbyrole
redundancy/group/node/<name>/nodetype
redundancy/group/node/<name>/connectvia
redundancy/authentication/presharedkey/keyfilepath
redundancy/matelink/tls/enable
configsync/enable
configsync/tls/enable
redundancy/enable
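
The sample environment file at the end of this section shows these keys (in their environment-variable form) for the primary node. A monitor node uses the same group definition but a monitoring nodetype; a minimal sketch, with illustrative hostnames:

nodetype=monitoring
redundancy_group_node_primary_connectvia=primary
redundancy_group_node_primary_nodetype=message_routing
redundancy_group_node_backup_connectvia=backup
redundancy_group_node_backup_nodetype=message_routing
redundancy_group_node_monitoring_connectvia=monitoring
redundancy_group_node_monitoring_nodetype=monitoring
redundancy_authentication_presharedkey_keyfilepath=/run/secrets/keyfile
redundancy_enable=yes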

Creating Your Command Line

You can assemble the command-line to create the container by gathering up all of the options from the previous sections. An annotated systemd unit file is provided in Sample Systemd Unit File with the options described above. In subsequent sections, this unit file will be deployed by systemd to create the PubSub+ event broker container instance.

Now that a command-line has been generated to create the container, you can use the container runtime to create the container. Note that if this instance of PubSub+ is being migrated from an instance originally created from a machine image package, there are additional steps to migrate the data from the old system and to add this system into a running HA group of PubSub+ event brokers. See Migrating Machine Image Deployments for details.

Configuring the Container to Start Automatically

It is important for a production instance of PubSub+ to restart automatically in the case of a crash or if the host reboots. There are two cases to consider:

  • When the host starts up.

  • When the container stops.

Systemd is the best tool to start the container when the host starts up. A systemd unit file must be created and installed for this (you can use the sample unit file from Sample Systemd Unit File).

When the container stops, it can be restarted by systemd or the container runtime. The selection generally depends on the choice of container runtime. Podman is designed to integrate with systemd, and a systemd service is most commonly used to start the container (when the host starts up) and to restart the container if it stops for any reason.

When using Docker, a systemd service is required to automatically start the container when the host starts, but if the container stops, the Docker daemon can restart the container via the restart policy set for the container (--restart). In this case, the systemd service might use docker compose to create and start the container.
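
For example, a minimal docker compose sketch (the service name and values are illustrative):

services:
  solace:
    image: solace/solace-pubsub-standard:production
    shm_size: 1g
    restart: unless-stopped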

Podman has built-in support for systemd. Recent versions of Podman (starting with version 4.4) include a tool called Quadlet. Quadlet defines a template format that you can use to generate systemd unit files. The container unit files are stored in /etc/containers/systemd (if using rootful containers) or $HOME/.config/containers/systemd (if using rootless containers). The [Container] section of the unit file can easily be generated from the command-line options that would otherwise be used to create the container (see podman-systemd.unit for details).
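
A minimal Quadlet sketch, assuming a rootful container and the host paths used elsewhere in this section (see podman-systemd.unit for the full set of supported keys):

# /etc/containers/systemd/solace.container
[Container]
ContainerName=solace
Image=solace/solace-pubsub-standard:production
Network=host
Volume=/mnt/solace/storage-group:/var/lib/solace
# options without a dedicated Quadlet key can be passed through
PodmanArgs=--shm-size=1g --ulimit core=-1

[Install]
WantedBy=multi-user.target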

Debugging the Container

There are two common debugging scenarios to consider:

  • There could be issues with the container failing to start up properly. These issues occur before the logging facilities of the container start.

  • There could be issues that arise after the container starts successfully. Debugging issues that come up after the event broker starts is done by examining the event broker’s logs or with the help of Solace Support using gather-diagnostics.

To debug an issue that is preventing the PubSub+ event broker container from starting, examine the logs collected by the container runtime. These logs are generated by the PubSub+ event broker container before its logging facilities have started up, while the event broker is unable to log to its normally configured locations. Common reasons for the event broker failing to start are resource issues: POSIX shared memory, memory available to the container, or the number of CPU cores (based on the values of the system scaling parameters, the event broker may need more resources allocated to it).

Problems that arise after the event broker has started are normally debugged by examining the event broker logs (stored by default inside the container at /var/lib/solace/jail/logs). Also included in the PubSub+ container is a script that gathers up all of the information needed to debug a PubSub+ instance. You can run the script from the Solace CLI or from the host. If run from the Solace CLI, the script is not able to gather as much data because processes running inside the container have limited access to the host. If you run gather-diagnostics from the host, the host needs Python installed. For more information, see Gathering Diagnostics from Software Event Broker Containers.

Upgrading the Container

At some point in the future, the PubSub+ event broker instance will need to be upgraded to use a new version of PubSub+ or to install a maintenance release. It is important to create the instance in a way that will be upgradable in the future.

When creating an instance that will be upgradeable, consider the host locations of the storage elements and ensure that the event broker shuts down cleanly before the upgrade. All of the state associated with the PubSub+ instance should be in the storage-group, which is mounted into the new container instance during the upgrade. The new container, based on the target PubSub+ container image, gets its configuration, message-spool, and message delivery state from the storage-group.

A lot of the work required to upgrade a PubSub+ event broker instance is done during shutdown. Depending on the complexity of the configuration, this may take several minutes. When stopping the container using podman or docker stop, it is a best practice to set the time after which the container runtime forcibly stops the container. You can specify this time using the --time option with the stop command; the recommended value is 1200 seconds, to be sure that the container has had enough time to shut down cleanly. For example:

podman stop solace --time 1200

For detailed instructions describing how to upgrade a PubSub+ event broker container instance, see Docker Upgrade.
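
With the sample unit file from this section, an upgrade can be sketched as follows (the stop step allows up to 1200 seconds for a clean shutdown):

sudo systemctl stop solace.service

# edit /etc/solace/solace.conf and update PSP_IMAGE to the target version

sudo systemctl start solace.service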

Sample Systemd Unit File

The following is an example of a systemd unit file that you can use to create a PubSub+ event broker container instance:

#Sample Systemd Unit file template for a PubSub+ Event Broker
#This template is an example of a production ready node of an HA Group.
#This template is for the primary node, a similar container is required for
#the backup node. The monitor does not require the same resources.
#Edit the configuration to configure the instance.

[Unit]
Description=PubSub+ Container
After=local-fs.target
SourcePath=/etc/systemd/system/solace.service
RequiresMountsFor=%t/containers
#This is the location of the storage-group
RequiresMountsFor=/opt/solace/storage-group

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target

[Service]
EnvironmentFile=/etc/solace/solace.conf
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=always
ExecStop=/usr/bin/podman stop solace --time 1200
ExecStopPost=-/usr/bin/podman rm -f -i --cidfile=%t/%N.cid
ExecStopPost=-rm -f %t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run \
--env-file=/etc/solace/solace.env \
${PSP_COMMON_ARGS} \
${PSP_NAME} \
${PSP_CPUS} \
${PSP_MEMORY} \
${PSP_SHM_SIZE} \
${PSP_ULIMITS} \
${PSP_HOSTNAME} \
${PSP_USER} \
${PSP_MOUNT} \
${PSP_ROUTER_NAME} \
${PSP_ADMIN_USER} \
${PSP_ADMIN_PSWD} \
${PSP_NETWORK} \
${PSP_SSH_PORT} \
${PSP_MANAGER_HTTP} \
${PSP_MANAGER_HTTPS} \
${PSP_WEB_TRANSPORT_PLAIN} \
${PSP_WEB_TRANSPORT_TLS} \
${PSP_AMQP_TLS} \
${PSP_AMQP_PLAIN} \
${PSP_MQTT_PLAIN} \
${PSP_MQTT_WSPLAIN} \
${PSP_MQTT_WSTLS} \
${PSP_MQTT_TLS} \
${PSP_RESTPRODUCER_PLAIN} \
${PSP_RESTPRODUCER_TLS} \
${PSP_SMF_PLAIN} \
${PSP_SMF_COMPRESSED} \
${PSP_SMF_TLS} \
${PSP_SMF_ROUTING} \
${PSP_HA_MATELINK} \
${PSP_HA_SYNC0_UDP} \
${PSP_HA_SYNC1_UDP} \
${PSP_HA_SYNC2_UDP} \
${PSP_HA_SYNC0_TCP} \
${PSP_HA_SYNC1_TCP} \
${PSP_HA_SYNC2_TCP} \
${PSP_IMAGE}

The following is an example configuration file for the above unit file:

#Solace PubSub+ Configuration
##############################################################
#
#. Required to be modified
#
##############################################################
PSP_COMMON_ARGS=--cidfile=%t/%N.cid --replace --rm --log-driver passthrough --runtime /usr/bin/crun --cgroups=split --sdnotify=conmon -d

#Name of the container instance
PSP_NAME=--name=solace

#Specify resource limits for the container;
#these are output from System Resource Calculator

PSP_CPUS=--cpus=2
#or cpusets

#PSP_MEMORY=--memory=6g
#not required if the only container running on the host

#--shm-size must be specified; output from System Resource Calculator
PSP_SHM_SIZE=--shm-size=1g

#ulimits set limits on other types of OS resources
#removes the limit on the size of core files
#nofile limits the number of files the container can have open
# --ulimit nofile=2448:10192
PSP_ULIMITS=--ulimit core=-1 --ulimit nofile=2448:10192

#Set the hostname of the container
PSP_HOSTNAME=--hostname=primary

#Set the UID of the user inside the container; default 1000001:0
#PSP_USER=--user UID:GID
#The user must be able to write to the /var/lib/solace mount

#The storage-group recommended size is output from the System Resource Calculator
#Note the host location of the storage-group
PSP_MOUNT=--mount type=bind,src=/mnt/solace/storage-group,target=/var/lib/solace:Z
#Environment variables inside the container
#PubSub+ receives its bootstrap config from here
PSP_ROUTER_NAME=--env 'routername=primary'

#Create the default admin account
#access-level for user "admin"
PSP_ADMIN_USER=--env 'username_admin_globalaccesslevel=admin'

#Give the user a password in the form of a SHA512 salted hash, <salted-hash>
PSP_ADMIN_PSWD=--env 'username_admin_encryptedpassword=<salted-hash>'

#Network Configuration
#Use host networking if only container on host
PSP_NETWORK=--network=host
#If using host networking then the port mappings are not required
#The default networking configuration (bridge) requires port mappings

#CLI access via SSH
#PSP_SSH_PORT=-p '2222:2222'

#HTTP access to PubSub+ Broker Manager
#PSP_MANAGER_HTTP=-p 8080:8080

#HTTPS access to PubSub+ Broker Manager
#PSP_MANAGER_HTTPS=-p '1943:1943'

#SMF web-transport plain text
#PSP_WEB_TRANSPORT_PLAIN=-p '8008:8008'

#SMF web-transport over TLS
#PSP_WEB_TRANSPORT_TLS=-p '1443:1443'

#AMQP over TLS (default message-vpn)
#PSP_AMQP_TLS=-p '5671:5671'

#AMQP plain text (default message-vpn)
#PSP_AMQP_PLAIN=-p '5672:5672'

#MQTT plain text (default message-vpn)
#PSP_MQTT_PLAIN=-p '1883:1883'

#MQTT over websockets plain text (default message-vpn)
#PSP_MQTT_WSPLAIN=-p '8000:8000'

#MQTT over websockets TLS (default message-vpn)
#PSP_MQTT_WSTLS=-p '8443:8443'

#MQTT over TLS (default message-vpn)
#PSP_MQTT_TLS=-p '8883:8883'

#REST producer plain text (default message-vpn)
#PSP_RESTPRODUCER_PLAIN=-p '9000:9000'

#REST producer over TLS (default message-vpn)
#PSP_RESTPRODUCER_TLS=-p '9443:9443'

#SMF plain text port
#PSP_SMF_PLAIN=-p '55555:55555'

#SMF compressed port
#PSP_SMF_COMPRESSED=-p '55003:55003'

#SMF over TLS with or without compression
#PSP_SMF_TLS=-p '55443:55443'

#SMF routing
#PSP_SMF_ROUTING=-p '55556:55556'

#High Availability Mate-Link
#PSP_HA_MATELINK=-p '8741:8741'

#HA synchronization
#PSP_HA_SYNC0_UDP=-p '8300:8300/udp'
#PSP_HA_SYNC1_UDP=-p '8301:8301/udp'
#PSP_HA_SYNC2_UDP=-p '8302:8302/udp'
#PSP_HA_SYNC0_TCP=-p '8300:8300/tcp'
#PSP_HA_SYNC1_TCP=-p '8301:8301/tcp'
#PSP_HA_SYNC2_TCP=-p '8302:8302/tcp'

#The location of the container image
PSP_IMAGE=solace/solace-pubsub-standard:production

The configuration keys used to configure the container are stored in an environment file. For example:

#Solace PubSub+ Config-Key Configuration
##############################################################
#
#. Required to be modified
#
##############################################################
#Path to file where server certificate is stored
#If using a secret, /run/secrets or /var/run/secrets are common defaults
tls_servercertificate_filepath=/run/secrets/cert.pem

#Path to the file containing the pass phrase protecting the server certificate
tls_servercertificate_passphrasefilepath=/run/secrets/pass-phrase

#Set the size of the event broker; check the license terms
#PubSub+ Standard supports 100, 1000
#PubSub+ Enterprise supports 100, 1000, 10000, 100000, 200000
system_scaling_maxconnectioncount=1000

#System Scaling Parameters (output from System Resource Calculator)
#Max Queue Messages: 100, 240 or 3000 (millions)
system_scaling_maxqueuemessagecount=240

#Message payload spool capacity based on the size of the spool storage element
messagespool_maxspoolusage=10000

#Redundancy Configuration
redundancy_enable=yes
configsync_enable=yes
configsync_tls_enable=yes

#DNS name of backup event broker
redundancy_matelink_connectvia=backup
redundancy_matelink_tls_enable=yes

#Primary or backup
redundancy_activestandbyrole=primary

#Key to be used to authenticate the mate-link, primary and backup
#must have the same key
redundancy_authentication_presharedkey_keyfilepath=/run/secrets/keyfile

#Primary event broker details
#Connect via the DNS name of primary
redundancy_group_node_primary_connectvia=primary
redundancy_group_node_primary_nodetype=message_routing

#Backup event broker details
#Connect via the DNS name of backup
redundancy_group_node_backup_connectvia=backup
redundancy_group_node_backup_nodetype=message_routing

#Monitor event broker details
#Connect via the DNS name of monitor
redundancy_group_node_monitoring_connectvia=monitoring
redundancy_group_node_monitoring_nodetype=monitoring