Connectivity Model for Kubernetes Deployments

The Kubernetes connectivity model for deploying PubSub+ Cloud includes two types of connectivity to consider:

  • Management connectivity (external connectivity of workloads from the Kubernetes cluster)
  • Messaging connectivity (outbound connections from the event broker and messaging between client applications)

For more information, see Understanding the Types of Connectivity to Configure.

Configuring external connectivity correctly is important; incorrect configuration causes issues later when you install the Mission Control Agent. Ensure that you review and implement the required configuration changes.

In environments that restrict access to the public Internet, you must configure external connectivity for the workloads in the Kubernetes cluster, and we recommend configuring the broker connectivity at the same time. The steps are:

  1. Configure the hosts, IP addresses, and ports to whitelist for the external connectivity required by workloads in the cluster. For a complete list, see Summary of Connection Requirements for PubSub+ Cloud Components.

    Networks that restrict access to the public Internet often use an HTTP proxy, but you can explicitly choose to use a whitelist. For more information, see Options when External Connectivity Is Limited.

  2. Ensure that the cluster has access to Solace's Container Registry (gcr.io), as described in Connectivity Model for Kubernetes Deployments.
    If your network connectivity doesn't permit access to gcr.io, you must create a mirror repository of Solace's Container Registry and point your deployment at it (a values.yaml sketch follows these steps). You'll require registry credentials; contact Solace to obtain them.
  3. Consider also configuring the ports required for messaging connectivity. For more information, see Messaging Connectivity for Outbound Connections and Client Applications.
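
If you mirror Solace's Container Registry (step 2), the deployment must pull its images from your mirror instead of gcr.io. The following values.yaml fragment is a minimal sketch of such an override; the key names, image path, and secret name are illustrative assumptions, not the documented schema of the Helm chart you install:

    # values.yaml -- illustrative sketch only; key names, image path, and secret
    # name are assumptions, not the Helm chart's documented schema.
    image:
      repository: registry.example.com/solace-mirror/pubsub-cloud-agent   # mirrored copy of the gcr.io image
      tag: "<version-provided-by-solace>"
    imagePullSecrets:
      - name: solace-mirror-credentials   # pull secret for your mirror registry

Use the registry credentials that Solace provides when you populate the mirror; the mirror itself can be any OCI-compatible registry reachable from the cluster.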

After you have completed the necessary connectivity configuration and installed your Kubernetes cluster, you are ready to move to the next step.

Understanding the Types of Connectivity to Configure


The following diagram shows the management traffic (for external connectivity of workloads from the cluster) and messaging connectivity (event broker outbound connections and client application connectivity to the event broker services) that must flow in your data center.

 

PubSub+ Cloud must be configured to support the following:

Management Connectivity
 This is the management traffic that carries commands between various PubSub+ Cloud components to configure the installation, operation, and monitoring of your services. Specifically, this is the external connectivity for workloads that run in the cluster.
You must configure this connectivity to permit components of PubSub+ Cloud to install, operate, and monitor event broker services. The installation of the Mission Control Agent requires this connectivity to communicate with the Solace Home Cloud.
For more information, see Management Connectivity for External Connections.
Messaging Connectivity for Outbound Traffic for Event Broker Services
This is the messaging traffic and refers to the events and messages sent to your event broker service; it also includes outbound connections (for example, REST Delivery Points or RDPs) initiated by the event broker services. Solace recommends that you configure the messaging traffic for your event broker services at the time you request the IP addresses and ports required for the workloads in your Kubernetes cluster.
You can specify the client ports used to connect to event broker services and any ports for outbound connections initiated by the event broker services to your VPC/VNet. For more information, see Messaging Connectivity for Outbound Connections and Client Applications.

For security details about the Mission Control Agent, see Mission Control Agent.

For details about the specific connections required by various components, see Summary of Connection Requirements for PubSub+ Cloud Components.

 

Management Connectivity for External Connections

You must configure the connectivity for the Kubernetes cluster to handle the required management and control traffic to deploy PubSub+ Cloud.

The connection details are required to install the Mission Control Agent (for the management of event broker services and connecting to the Solace Home Cloud) and to monitor the event broker services (with Datadog). For a complete summary of the hosts and ports, see Summary of Connection Requirements for PubSub+ Cloud Components. For more information about Datadog, see Connectivity for the Monitoring of Event Broker Services.

Options when External Connectivity Is Limited

If your networking policies for your VPC/VNet prevent connectivity to the public Internet, Solace recommends that you configure your Kubernetes cluster and network using one of the following options:

HTTP Proxy Server and Open Ports for Solace Home Cloud
Use an HTTP proxy server and explicitly whitelist the required IP addresses and ports of the Solace Home Cloud. In this case, you (the customer) must also provide the details (URL, username, and password) of the HTTP/HTTPS proxy server to Solace when you're ready to install the Mission Control Agent. You must explicitly whitelist the IP addresses and ports for the Solace Home Cloud because it uses Solace Message Format (SMF) as the protocol, not HTTP/HTTPS. For information, see Using HTTP/HTTPS Proxies.
Only Open Ports Required for PubSub+ Cloud
Explicitly whitelist only the IP addresses and ports described in Summary of Connection Requirements for PubSub+ Cloud Components. A sketch of such an allowlist follows.
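
Inside the cluster, part of the whitelist can be expressed as an egress NetworkPolicy; the corresponding firewall or security-group rules outside the cluster are environment-specific and not shown. The following manifest is a minimal sketch using the US regional Solace Home Cloud addresses and port from the summary of connection requirements; the policy name, namespace, and pod selector are illustrative assumptions:

    # egress-allowlist.yaml -- minimal sketch; the name, namespace, and selector are
    # illustrative assumptions. Corresponding rules for Datadog, gcr.io, S3, and DNS
    # are needed as well (see the summary of connection requirements).
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-solace-home-cloud-egress
      namespace: solace-mission-control        # assumed namespace of the Mission Control Agent
    spec:
      podSelector: {}                           # applies to all pods in the namespace
      policyTypes:
        - Egress
      egress:
        - to:                                   # Solace Home Cloud, US regional site
            - ipBlock:
                cidr: 34.233.110.233/32
            - ipBlock:
                cidr: 52.205.60.66/32
            - ipBlock:
                cidr: 54.204.227.82/32
          ports:
            - protocol: TCP
              port: 55443                       # TLS-encrypted SMF to the Home Cloud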

Summary of Connection Requirements for PubSub+ Cloud Components

You must configure the connectivity for the Kubernetes cluster to handle the management traffic required to install and operate PubSub+ Cloud. This management traffic is required to install the Mission Control Agent (for the management of event broker services and connecting to the Solace Home Cloud), to monitor the event broker services (with Datadog), and to retrieve container images from Solace's Container Registry (gcr.io).

The following connection details are required for Kubernetes deployments, such as Azure Kubernetes Service (AKS), Google Kubernetes Engine for Google Cloud (GKE), and Amazon Elastic Kubernetes Service (EKS). These connections are required when you deploy PubSub+ Cloud to Customer-Controlled regions. 

For more information about the security architecture for Customer-Controlled deployments, see Deployment Architecture for Kubernetes and Security Architecture Considerations for Customer-Controlled Regions.

For some connections, there are different regional sites, as indicated below.

Connection: Mission Control Agent to Solace Home Cloud
Host:
  • Regional Site for US: production-ivmr.messaging.solace.cloud
  • Regional Site for AUS: prod-aws-au-1-ivmr.messaging.solace.cloud
IP Addresses:
  • Regional Site for US: 34.233.110.233, 52.205.60.66, 54.204.227.82
  • Regional Site for AUS: 13.236.32.115, 3.106.10.188, 3.105.186.75
Port: 55443
Description: TLS-encrypted SMF traffic between the Mission Control Agent and the Home Cloud.

Connection: Datadog Agents to Datadog Servers
Host:
  • api.datadoghq.com
  • agent-http-intake.logs.datadoghq.com
  • *.agent.datadoghq.com
IP Addresses: Multiple IP addresses must be configured for both the Mission Control Agent and the event broker services.
  • For the Mission Control Agent, you must configure the Datadog addresses directly. See https://ip-ranges.datadoghq.com/ for the address ranges.
  • For the event broker services, the addresses are required for monitoring traffic to the central monitoring service (Datadog). You can connect directly or use a proxy to connect to the external IP addresses. For an overview of the differences, see Centralized Monitoring Service and Datadog Agents. For details about the external IP addresses, see Getting the Required IP addresses for Monitoring Traffic.
Port:
  • Direct connections and the Mission Control Agent: 443
  • Proxy connections: 3834, 3835, 3836, and 3837
Description: Required for monitoring traffic and metrics. This is TLS-encrypted traffic between each Datadog agent (one per Solace pod, including the Mission Control Agent) and the Datadog servers. Note that for the Mission Control Agent, you must configure the addresses directly.

Connection: Kubernetes to Google Container Registry
Host: gcr.io (storage.googleapis.com)
IP Addresses: There is no single fixed IP address; the connection can be proxied.
Port: 443
Description: Required to download Solace's container images. This is TLS-encrypted traffic between the Kubernetes cluster and gcr.io. Note: You do not need to whitelist this host and port combination if you have chosen to configure an image repository in your data center to mirror Solace's Container Registry (gcr.io).

Connection: Mission Control Agent to Solace Home Cloud
Host: maas-secure-prod.s3.amazonaws.com
IP Addresses: N/A
Port: 443
Description: Required to download the certificate files for the created event broker service.

Connection: Mission Control Agent to Solace Home Cloud
Host: ${bucket_name}.s3.amazonaws.com
IP Addresses: N/A
Port: 443
Description: Required for gathering diagnostic information. The bucket name is a unique value for each private data center; Solace provides the name of the AWS S3 bucket (bucket_name) when you deploy PubSub+ Cloud.

Connectivity for the Monitoring of Event Broker Services

For the monitoring of event broker services, connectivity is required for Datadog and, if you plan to use it, for PubSub+ Insights.

As shown in the diagram below, a proxy parameter (containing the proxy server to use) is added to the Helm chart (in the values.yaml file). When the Mission Control Agent is installed, its Datadog sidecar container is created and configured to use that proxy server. The Mission Control Agent also receives the proxy server parameters via its properties files. Because the Mission Control Agent has this parameter configured, it configures every event broker service it launches to use the proxy server as well (a values.yaml sketch follows the list below):

  • The Datadog sidecar containers in each software event broker pod are configured to use the proxy server.
  • The init.sh entry point script in each software event broker pod is configured to download certificates via the proxy server.
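
The following values.yaml fragment is a minimal sketch of such a proxy parameter; the key names and proxy address are illustrative assumptions rather than the Helm chart's documented schema:

    # values.yaml -- illustrative sketch only; key names and values are assumptions.
    proxy:
      url: "http://proxy.example.com:3128"   # HTTP/HTTPS proxy reachable from the cluster
      username: "solace-agent"               # omit if the proxy does not require authentication
      password: "<password>"

With a value like this in place, the same proxy settings flow through to the Datadog sidecar containers and the init.sh entry point script of each event broker service that the Mission Control Agent launches, as described above.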

The Mission Control Agent gets its properties files from a ConfigMap instead of from the Solution-Config-Server.

Messaging Connectivity for Outbound Connections and Client Applications

You can configure the connectivity for outbound connections initiated by event broker services and for client applications. Though this configuration is not required during deployment, it is recommended because:

  • Much of your network connectivity architecture is defined during this deployment.
  • You can define the necessary ports at one time with your security team.

Here are more details of the messaging connectivity:

Connectivity to permit messaging traffic between event broker services and client applications external to the Kubernetes cluster
The messaging traffic includes various messaging patterns (point-to-point, publish-subscribe, request-reply) for event messaging. The ports used depend on the protocol, and traffic from external clients typically goes through a public load balancer. For example, client applications that connect to event broker services to publish and subscribe to event data use this type of traffic. For more information, see Connectivity Between Event Broker Services and Client Applications.
Messaging client applications can connect from other VPCs/VNets or from the public Internet. Alternatively, you may decide not to permit clients at external IP addresses to connect, for example, because your messaging client applications reside in the same Kubernetes cluster. For details about these options, see Exposing Event Broker Services to External Traffic and Not Exposing Event Broker Services to External Traffic. A sketch of the load balancer exposure pattern follows.
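
The following Service manifest is a minimal sketch of the public load balancer pattern for external clients. It only illustrates the traffic pattern and is not something you normally author by hand in a PubSub+ Cloud deployment; the names, labels, ports, and source ranges are assumptions:

    # messaging-lb.yaml -- illustrative sketch of the exposure pattern only; names,
    # labels, ports, and source ranges are assumptions, and in a PubSub+ Cloud
    # deployment the equivalent service is created for you.
    apiVersion: v1
    kind: Service
    metadata:
      name: example-broker-messaging
      namespace: example-broker
    spec:
      type: LoadBalancer                       # public load balancer for external clients
      selector:
        app: example-event-broker              # assumed pod label for the software event broker
      ports:
        - name: smf-tls
          port: 55443                          # TLS-encrypted SMF from client applications
          targetPort: 55443
        - name: amqp-tls
          port: 5671                           # example of another protocol-specific port
          targetPort: 5671
      loadBalancerSourceRanges:                # restrict which external networks may connect
        - 203.0.113.0/24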

 

Connectivity for outbound traffic from event broker services to hosts outside the Kubernetes cluster
Outbound traffic to external hosts refers to connections initiated by the event broker services. For private networks that require outbound traffic to external hosts, customers must use a NAT configured with a public-facing, static IP address.
For more details on the networking architecture, see Using NAT with Static IP addresses for Outbound Connections.
For information about outbound connections, see Outbound Connections Initiated by Event Broker Services.