Using NodePort

The customer can expose event broker services within a Kubernetes cluster to external traffic using NodePort. Because traffic must still be routed into the cluster, a NodePort deployment must be fronted with a load balancer. This solution opens a TCP port (called a NodePort) on all worker nodes for each exposed port of an event broker service. Traffic arriving on that port is forwarded to the pod behind the NodePort service. With this option, the customer must implement the following:

  • The customer configures the Mission Control Agent to deploy event broker services with a Kubernetes Service type of NodePort.
  • Each event broker service that's created has its own NodePort service that can receive traffic from any of the worker nodes, as shown in the sketch after this list.
  • The external ports that are exposed are no longer the PubSub+ Cloud defaults but are instead ports chosen by Kubernetes from the NodePort range. PubSub+ Cloud reports the chosen ports back in the Cloud Console so that the customer can determine which ports are assigned to which event broker service.
  • Customers must manually provision a single load balancer in front of the Kubernetes cluster with a single address that receives all traffic for the cluster. This manual provisioning is typically done by a network administrator within the customer's private network. The load balancer forwards traffic over the default NodePort range of 30000-32767 (this range can be changed with the --service-node-port-range flag on the Kubernetes API server). The load balancer's target pool should be configured to contain the list of worker nodes within the Kubernetes cluster so that traffic is routed reliably. The load balancer must also be mapped to an IP address that clients outside the Kubernetes cluster can reach.
  • When using NodePort, the Mission Control Agent must be configured with k8s.serviceHostname set to a hostname that resolves to the load balancer. The Mission Control Agent uses this value as the hostname for the event broker services and does not generate a hostname via a DNS Agent when k8s.serviceHostname is set. The hostname is typically a CNAME created by Solace in the message.solace.cloud domain that resolves to the load balancer.
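
For illustration, the following is a minimal sketch of the kind of NodePort service that would front an event broker service. The service name, labels, and port list are hypothetical and not taken from this documentation; the objects the Mission Control Agent actually creates may differ.

    # Hypothetical sketch of a NodePort service for one event broker service.
    # Names, labels, and the port list are illustrative only.
    apiVersion: v1
    kind: Service
    metadata:
      name: my-event-broker          # hypothetical name
    spec:
      type: NodePort
      selector:
        app: my-event-broker         # hypothetical pod label
      ports:
        - name: smf
          port: 55555                # SMF messaging port on the broker
          targetPort: 55555
          # nodePort is omitted, so Kubernetes allocates a free port from
          # the NodePort range (30000-32767 by default, adjustable with
          # the API server's --service-node-port-range flag)
        - name: web-messaging
          port: 8008
          targetPort: 8008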

In this solution, the customer must use an external network load balancer that they create themselves (typically, a network administrator does this), and that load balancer must be configured with a single address that forwards all TCP traffic over the NodePort range to the worker nodes. By doing this, all event broker services can share the same hostname.
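
As a sketch, the corresponding Mission Control Agent configuration might look like the following. Only the k8s.serviceHostname key appears in this section; the surrounding layout and the hostname value are assumptions, so consult the deployment documentation for the exact format.

    # Hypothetical excerpt of the Mission Control Agent configuration;
    # only k8s.serviceHostname is documented above, the rest is assumed.
    k8s:
      # Hostname that resolves to the external network load balancer,
      # typically a CNAME created by Solace in the message.solace.cloud domain
      serviceHostname: my-cluster.message.solace.cloud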

Public access from outside the customer's private network is optional. If it is required, an Internet gateway must route a public IP address to the appropriate private network IP address. If the customer's network blocks external traffic from the Internet, they must whitelist the PubSub+ Home Cloud's IP address. In this case, the customer must provide the details (URL, username, and password) of the HTTP/HTTPS proxy server to the Mission Control Agent during deployment.
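
If a proxy is needed, the details passed to the Mission Control Agent might be expressed along the following lines. The key names below are hypothetical; only the three fields (URL, username, and password) come from this section.

    # Hypothetical proxy configuration for the Mission Control Agent;
    # the key names are illustrative, not documented values.
    proxy:
      url: https://proxy.example.internal:3128   # hypothetical proxy URL
      username: proxy-user                       # hypothetical credentials
      password: proxy-password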

If you must use NodePort with an external network load balancer, consider these advantages and disadvantages compared to using an integrated load balancer:

Advantages

  • All services are available over the same endpoint, so a single public IP address can be used to expose multiple event broker services publicly.
  • The setup is easier to implement in on-premises environments because tight load balancer integration with Kubernetes is not required.

Disadvantages

  • The TCP ports that clients connect to are chosen arbitrarily by Kubernetes from the NodePort range. The customer cannot specify custom TCP ports when creating an event broker service from the PubSub+ Cloud Console; specifying a custom port has no effect. Instead, the allocated NodePort numbers are shown on the Connect tab for the event broker service in Cluster Manager within the Cloud Console (see the sketch after this list).
  • The load balancer must be manually configured by the customer (usually by a network administrator), whereas an integrated load balancer is created dynamically.
  • You can have only one service per port.
  • When worker nodes are commissioned or decommissioned (including when VM IP addresses change), the load balancer must be updated manually so that its target pool matches the worker nodes that currently exist.
  • If you are upgrading from a cluster that was not configured for public connectivity, you will be unable to create public endpoints.
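
Besides the Connect tab, the allocated ports are also visible on the Service objects in the Kubernetes cluster. The following is an illustrative, trimmed sketch of what kubectl get service <service-name> -o yaml might return for an event broker service; the port names and numbers are hypothetical.

    # Illustrative, trimmed output of: kubectl get service <service-name> -o yaml
    # (port names and numbers are hypothetical)
    spec:
      type: NodePort
      ports:
        - name: smf
          port: 55555          # port the Service exposes inside the cluster
          targetPort: 55555    # port on the event broker pod
          nodePort: 31234      # externally reachable port allocated by Kubernetes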

For more information, see Type NodePort in the Kubernetes documentation.