Azure Kubernetes Service Deployment Details

The following diagrams and component descriptions cover a typical PubSub+ Cloud deployment in Azure Kubernetes Service (AKS).

AKS Deployment

Azure Network Connectivity

This diagram shows the network connections for the various components.

Component Description

Managed Identity

  • Name: <datacenter-name>-aks-managed-id
  • Details: The managed identity gives the AKS Cluster access to the subnet's route table so that it can update it with kubenet routes (a Terraform sketch follows this list).
  • Role Assignment:
    • Scope: The <datacenter-name>-infra-rg resource group.
    • Role: Contributor
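
A minimal Terraform sketch of this identity, its role assignment, and the resource group described below, assuming the azurerm provider; the resource block and variable names are illustrative, not Solace's actual modules:

  variable "datacenter_name" { type = string }
  variable "location"        { type = string }

  resource "azurerm_resource_group" "infra" {
    name     = "${var.datacenter_name}-infra-rg"
    location = var.location
  }

  resource "azurerm_user_assigned_identity" "aks" {
    name                = "${var.datacenter_name}-aks-managed-id"
    location            = azurerm_resource_group.infra.location
    resource_group_name = azurerm_resource_group.infra.name
  }

  # Contributor on the infra resource group lets the cluster update the
  # private subnet's route table with kubenet pod routes.
  resource "azurerm_role_assignment" "aks_contributor" {
    scope                = azurerm_resource_group.infra.id
    role_definition_name = "Contributor"
    principal_id         = azurerm_user_assigned_identity.aks.principal_id
  }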

Resource Group

Name: <datacenter-name>-infra-rg

Route Table

  • Name: <datacenter-name>-rt-private
  • Details: Used by the private subnet. Instances use the AKS public load balancer for public access. The route table is also under the control of the AKS kubenet plugin (sketched below).
  • Route: <VNET-CIDR> to VnetLocal
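
Continuing the sketch, the route table and its subnet association might look like this (the vnet_cidr variable and the lifecycle hint are assumptions; kubenet adds per-node routes at runtime):

  variable "vnet_cidr" {
    type    = string
    default = "10.0.0.0/16"
  }

  resource "azurerm_route_table" "private" {
    name                = "${var.datacenter_name}-rt-private"
    location            = azurerm_resource_group.infra.location
    resource_group_name = azurerm_resource_group.infra.name

    route {
      name           = "vnet-local"
      address_prefix = var.vnet_cidr
      next_hop_type  = "VnetLocal"
    }

    # kubenet appends per-node pod routes at runtime, so Terraform should
    # not fight it over the route list.
    lifecycle {
      ignore_changes = [route]
    }
  }

  resource "azurerm_subnet_route_table_association" "private" {
    subnet_id      = azurerm_subnet.private.id
    route_table_id = azurerm_route_table.private.id
  }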

VNET

  • Name: <datacenter-name>-vnet
  • Default CIDR: 10.0.0.0/16
  • Details: The VNET is the virtual network that encapsulates the entire AKS Datacenter deployment (a Terraform sketch follows this list).
  • Contains Private Subnet:
    • Name: <datacenter-name>-sn-private-0
    • Details: The subnet used by the AKS Cluster.
    • Default CIDR: 10.0.0.0/16
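
A sketch of the VNET and its private subnet with the default CIDRs (the subnet deliberately spans the whole VNET address space, as the defaults above indicate):

  resource "azurerm_virtual_network" "main" {
    name                = "${var.datacenter_name}-vnet"
    location            = azurerm_resource_group.infra.location
    resource_group_name = azurerm_resource_group.infra.name
    address_space       = ["10.0.0.0/16"]
  }

  resource "azurerm_subnet" "private" {
    name                 = "${var.datacenter_name}-sn-private-0"
    resource_group_name  = azurerm_resource_group.infra.name
    virtual_network_name = azurerm_virtual_network.main.name
    address_prefixes     = ["10.0.0.0/16"]
  }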

Bastion Host

  • Name: <datacenter-name>-bastion
  • OS Image: Ubuntu 18.04-LTS
  • VM Size: Standard_DS1_v2
  • OS Disk (storage_os_disk):
    • Name: <datacenter-name>-bastion
    • Caching: ReadWrite
    • Create Option: FromImage
    • Managed Disk Type: Standard_LRS
  • Authentication: SSH keypair generated by Terraform and stored in Terraform state (see the sketch after this list).
  • Network Security Rule:
    • Allows only SSH (TCP port 22) inbound.
    • Associated with the Bastion Host NIC.
  • Static Public IP Name: <datacenter-name>-bastion-ip
  • NIC
    • Name: <datacenter-name>-bastion-nic
    • Subnet association: <datacenter-name>-sn-private-0
    • Configuration Name: <datacenter-name>-bastion-nic-cfg
      • Dynamic Private IP
      • Public IP Assigned: <datacenter-name>-bastion-ip
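
A sketch of the bastion host using the newer azurerm_linux_virtual_machine resource (the storage_os_disk and create_option names in the list above come from the legacy azurerm_virtual_machine resource; here FromImage is implicit, and the admin username and IP/NSG SKU choices are assumptions):

  # SSH keypair generated by Terraform; the private key lands in state,
  # as noted above.
  resource "tls_private_key" "bastion" {
    algorithm = "RSA"
    rsa_bits  = 4096
  }

  resource "azurerm_public_ip" "bastion" {
    name                = "${var.datacenter_name}-bastion-ip"
    location            = azurerm_resource_group.infra.location
    resource_group_name = azurerm_resource_group.infra.name
    allocation_method   = "Static"
    sku                 = "Standard"   # assumed
  }

  resource "azurerm_network_security_group" "bastion" {
    name                = "${var.datacenter_name}-bastion-nsg"   # name assumed
    location            = azurerm_resource_group.infra.location
    resource_group_name = azurerm_resource_group.infra.name

    # Only SSH is allowed inbound.
    security_rule {
      name                       = "allow-ssh-inbound"
      priority                   = 100
      direction                  = "Inbound"
      access                     = "Allow"
      protocol                   = "Tcp"
      source_port_range          = "*"
      destination_port_range     = "22"
      source_address_prefix      = "*"
      destination_address_prefix = "*"
    }
  }

  resource "azurerm_network_interface" "bastion" {
    name                = "${var.datacenter_name}-bastion-nic"
    location            = azurerm_resource_group.infra.location
    resource_group_name = azurerm_resource_group.infra.name

    ip_configuration {
      name                          = "${var.datacenter_name}-bastion-nic-cfg"
      subnet_id                     = azurerm_subnet.private.id
      private_ip_address_allocation = "Dynamic"
      public_ip_address_id          = azurerm_public_ip.bastion.id
    }
  }

  resource "azurerm_network_interface_security_group_association" "bastion" {
    network_interface_id      = azurerm_network_interface.bastion.id
    network_security_group_id = azurerm_network_security_group.bastion.id
  }

  resource "azurerm_linux_virtual_machine" "bastion" {
    name                  = "${var.datacenter_name}-bastion"
    location              = azurerm_resource_group.infra.location
    resource_group_name   = azurerm_resource_group.infra.name
    size                  = "Standard_DS1_v2"
    admin_username        = "azureuser"   # assumed
    network_interface_ids = [azurerm_network_interface.bastion.id]

    admin_ssh_key {
      username   = "azureuser"
      public_key = tls_private_key.bastion.public_key_openssh
    }

    os_disk {
      name                 = "${var.datacenter_name}-bastion"
      caching              = "ReadWrite"
      storage_account_type = "Standard_LRS"
    }

    source_image_reference {
      publisher = "Canonical"
      offer     = "UbuntuServer"
      sku       = "18.04-LTS"
      version   = "latest"
    }
  }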

AKS Cluster

  • Name: <datacenter-name>-aks
  • K8S API Endpoint: Private
  • Network Profile
    • Plugin: Either kubenet or azure
    • Docker Bridge CIDR: 172.17.0.1/16
    • DNS Service IP: 10.2.0.10
    • Service CIDR: 10.2.0.0/16
    • Load Balancer SKU: Standard
      • 1 Outgoing Public IP
      • Outbound ports allocated: 1032 (allows up to 62 worker nodes in the cluster: each outbound public IP provides 64,000 SNAT ports, and 64,000 / 1,032 ≈ 62). A Terraform sketch of the cluster follows the Default Node Pool list.

Default Node Pool

  • Name: default
  • Node count: 2
  • VM Size: Standard_D2s_v3
  • OS Disk Size: 48 GB
  • OS Disk Type: Ephemeral
  • Subnet: <datacenter-name>-sn-private-0
  • Availability Zones: AZ 3
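
A sketch of the cluster resource covering the two lists above, assuming the 3.x azurerm provider (dns_prefix is an assumption; docker_bridge_cidr is commented out because newer provider releases dropped it):

  resource "azurerm_kubernetes_cluster" "aks" {
    name                    = "${var.datacenter_name}-aks"
    location                = azurerm_resource_group.infra.location
    resource_group_name     = azurerm_resource_group.infra.name
    dns_prefix              = "${var.datacenter_name}-aks"   # assumed
    private_cluster_enabled = true   # private K8S API endpoint

    default_node_pool {
      name            = "default"
      node_count      = 2
      vm_size         = "Standard_D2s_v3"
      os_disk_size_gb = 48
      os_disk_type    = "Ephemeral"
      vnet_subnet_id  = azurerm_subnet.private.id
      zones           = ["3"]
    }

    identity {
      type         = "UserAssigned"
      identity_ids = [azurerm_user_assigned_identity.aks.id]
    }

    network_profile {
      network_plugin    = "kubenet"            # or "azure"
      dns_service_ip    = "10.2.0.10"
      service_cidr      = "10.2.0.0/16"
      load_balancer_sku = "standard"
      # docker_bridge_cidr = "172.17.0.1/16"   # older provider versions only

      load_balancer_profile {
        managed_outbound_ip_count = 1
        outbound_ports_allocated  = 1032       # 64,000 / 1,032 ≈ 62 nodes
      }
    }
  }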

Prod1k Node Pool

  • Name: prod1k
  • Node count: 0
  • Max count: 50
  • VM Size: Standard_E2s_v3
  • OS Disk Size: 48 GB
  • OS Disk Type: Ephemeral
  • Subnet: <datacenter-name>-sn-private-0
  • Availability Zones: AZ 1 or AZ 2 or AZ 3 (see note)
  • Node Labels:
    • serviceClass = prod1k
    • nodeType = messaging
  • Node Taints:
    • serviceClass = prod1k:NoExecute
    • nodeType = messaging:NoExecute

Prod10k Node Pool

  • Name: prod10k
  • Node count: 0
  • Max count: 50
  • VM Size: Standard_E4s_v3
  • OS Disk Size: 48 GB
  • OS Disk Type: Ephemeral
  • Subnet: <datacenter-name>-sn-private-0
  • Availability Zones: AZ 1 or AZ 2 or AZ 3 (see note)
  • Node Labels:
    • serviceClass = prod10k
    • nodeType = messaging
  • Node Taints:
    • serviceClass = prod10k:NoExecute
    • nodeType = messaging:NoExecute

Prod100k Node Pool

  • Name: prod100k
  • Node count: 0
  • Max count: 50
  • VM Size: Standard_E8s_v3
  • OS Disk Size: 48 GB
  • OS Disk Type: Ephemeral
  • Subnet: <datacenter-name>-sn-private-0
  • Availability Zones: AZ 1 or AZ 2 or AZ 3 (see note)
  • Node Labels:
    • serviceClass = prod100k
    • nodeType = messaging
  • Node Taints:
    • serviceClass = prod100k:NoExecute
    • nodeType = messaging:NoExecute

Monitoring Node Pool

  • Name: monitoring
  • Node count: 0
  • Max count: 50
  • VM Size: Standard_E2s_v3
  • OS Disk Size: 48 GB
  • OS Disk Type: Ephemeral
  • Subnet: <datacenter-name>-sn-private-0
  • Availability Zones: AZ 1 or AZ 2 or AZ 3 (see note)
  • Node Labels:
    • nodeType = monitoring
  • Node Taints:
    • nodeType = monitoring:NoExecute
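
The four node pools above follow a single pattern; only the name, VM size, labels, and taints vary. A sketch of the prod1k pool (same provider assumptions as the cluster sketch; the other pools swap in their own values):

  resource "azurerm_kubernetes_cluster_node_pool" "prod1k" {
    name                  = "prod1k"
    kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
    vm_size               = "Standard_E2s_v3"
    os_disk_size_gb       = 48
    os_disk_type          = "Ephemeral"
    vnet_subnet_id        = azurerm_subnet.private.id
    zones                 = ["1"]   # locked to one AZ; see the note at the end

    # Scales from zero up to 50 nodes on demand.
    enable_auto_scaling = true
    node_count          = 0
    min_count           = 0
    max_count           = 50

    node_labels = {
      serviceClass = "prod1k"
      nodeType     = "messaging"
    }

    node_taints = [
      "serviceClass=prod1k:NoExecute",
      "nodeType=messaging:NoExecute",
    ]
  }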

Resource Group (Managed by AKS)

  • Name: MC_<datacenter-name>-infra-rg_<datacenter-name>-aks_eastus2
    The name is derived from the resource group name, the AKS cluster name, and the region: MC_<resource-group>_<cluster-name>_<region>.
  • Details: The resource group is generated by AKS automatically and contains all the resources created by AKS to host the AKS Cluster.

Public Load Balancer

  • Name: kubernetes
  • Details: Provides NAT for the Kubernetes worker nodes over a public IP and provides public endpoints, over public IPs, for public access brokers running in AKS. There is only one public load balancer for the cluster; rules are added to it as public access brokers are created (see the sketch below).
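
Each public access broker surfaces on this load balancer through a Kubernetes Service of type LoadBalancer, which AKS translates into frontend and load-balancing rules. A sketch using the Terraform kubernetes provider (the service name, selector, and the SMF port choice are illustrative):

  resource "kubernetes_service" "broker_public" {
    metadata {
      name = "example-broker-public"   # hypothetical service name
    }

    spec {
      type     = "LoadBalancer"        # AKS adds rules to the "kubernetes" LB
      selector = {
        app = "example-broker"         # hypothetical pod label
      }

      port {
        name        = "smf"
        port        = 55555            # Solace SMF
        target_port = 55555
      }
    }
  }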

Public IP Address for load balancer outgoing rules

  • Name: <GUID>
  • Details: Used by the public load balancer to provide NAT to its backend and Internet access to brokers running in AKS. There is at least one public IP. Additional public IPs can be added to the outbound rule pool to increase the available SNAT ports and, with them, the maximum number of worker nodes.

Public IP Address for broker public access and load balancer ingress rules

  • Name: kubernetes-<GUID>
  • Details: Used by the public load balancer to provide a public IP to ingress rules that get traffic to a public access broker. There is one public IP for each public access broker deployed in the AKS cluster.

Internal Load Balancer

  • Name: kubernetes-internal
  • Details: An internal load balancer is used only when private access brokers are deployed, and it is created once at least one exists. It provides a private endpoint in the AKS private subnet for each private access broker; private endpoints are dynamically assigned a private IP from the subnet on the load balancer's front end. There is only one internal load balancer for the cluster (sketched below).
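
The internal load balancer is selected with the standard Azure annotation on the broker's Service; otherwise the sketch matches the public one above (names remain hypothetical):

  resource "kubernetes_service" "broker_private" {
    metadata {
      name = "example-broker-private"  # hypothetical service name
      annotations = {
        # Standard AKS annotation: place this Service on the internal LB
        "service.beta.kubernetes.io/azure-load-balancer-internal" = "true"
      }
    }

    spec {
      type     = "LoadBalancer"
      selector = {
        app = "example-broker"         # hypothetical pod label
      }

      port {
        name        = "smf"
        port        = 55555
        target_port = 55555
      }
    }
  }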

Network Security Group

  • Name: aks-agentpool-<id>-nsg
  • Details: Secures the Virtual Machine Scale Sets managed by AKS; it covers all worker node VMs. Inbound access is allowed only from the load balancers and the VNET. Outbound access is allowed only to the VNET and the Internet.

Network Interface

  • Name: kube-apiserver.nic.7549c48b-e241-4e2f-8c21-426e534b2ba0
  • Details: This network interface gives the AKS API private endpoint its private IP on the subnet.

Private Endpoint

  • Name: kube-apiserver
  • Details: This private endpoint gives access to the AKS API REST endpoint. It is attached to a network interface connected to the private subnet. kubectl and other Kubernetes management tools use this endpoint to manage the cluster.

Private DNS Zone

  • Name: c5acc6ae-98f2-4a0d-ae75-80ae51f161ff.privatelink.eastus2.azmk8s.io
  • Details: The private DNS zone gives a privately resolvable hostname to the private endpoint’s NIC.

Virtual Machine Scale Set (Default Node Pool)

  • Name: aks-default-18672195-vmss
  • Details: Node pool for the cloud-agent and other system pods
  • Instance Size: Standard_D2s_v3

Virtual Machine Scale Set (Prod1k Node Pool)

  • Name: aks-prod1k-18672195-vmss
  • Details: Node pool for the messaging pods of service plans using the Prod1k tier:
    • Developer 100, Enterprise 250 and Enterprise 1K (Kilo)
  • Instance Size: Standard_E2s_v3

Virtual Machine Scale Set (Prod10k Node Pool)

  • Name: aks-prod10k-18672195-vmss
  • Details: Node pool for the messaging pods of service plans using the Prod10k tier:
    • Enterprise 5K (Mega) and Enterprise 10K (Giga)
  • Instance Size: Standard_E4s_v3

Virtual Machine Scale Set (Prod100k Node Pool)

  • Name: aks-prod100k-18672195-vmss
  • Details: Node pool for the messaging pods of service plans using the Prod100k tier:
    • Enterprise 50K (Tera 50k) and Enterprise 100K (Tera)
  • Instance Size: Standard_E8s_v3

Virtual Machine Scale Set (Monitoring Node Pool)

  • Name: aks-monitoring-18672195-vmss
  • Details: Node pool for the monitoring pod of HA service plans:
    • Enterprise 250, Enterprise 1K (Kilo), Enterprise 5K (Mega), Enterprise 10K (Giga), Enterprise 50K (Tera 50k), and Enterprise 100K (Tera)
  • Instance Size: Standard_E2s_v3

For high-availability event broker services, each node pool hosting an event broker service must be locked to a single availability zone. This allows the cluster autoscaler to function properly. Solace uses pod anti-affinity against the node pools' zone label to ensure that each pod in a high-availability event broker service is in a separate availability zone. See Node Pool Requirements for more information.
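
A sketch of that scheduling pattern with the Terraform kubernetes provider. The Deployment, its labels, and the image are hypothetical stand-ins (the actual broker workloads are managed by PubSub+ Cloud, not by this manifest), but the tolerations match the prod1k taints above and the anti-affinity uses the standard zone topology key:

  resource "kubernetes_deployment" "ha_broker_example" {
    metadata {
      name = "ha-broker-example"       # hypothetical workload
    }

    spec {
      replicas = 3                     # one pod per availability zone

      selector {
        match_labels = { app = "ha-broker" }
      }

      template {
        metadata {
          labels = { app = "ha-broker" }
        }

        spec {
          # Tolerate the prod1k messaging node pool's taints so the pods
          # can land there at all.
          toleration {
            key    = "serviceClass"
            value  = "prod1k"
            effect = "NoExecute"
          }
          toleration {
            key    = "nodeType"
            value  = "messaging"
            effect = "NoExecute"
          }

          # Anti-affinity on the zone label keeps each replica in a
          # different availability zone.
          affinity {
            pod_anti_affinity {
              required_during_scheduling_ignored_during_execution {
                topology_key = "topology.kubernetes.io/zone"
                label_selector {
                  match_labels = { app = "ha-broker" }
                }
              }
            }
          }

          container {
            name  = "broker"
            image = "example/broker:latest"   # placeholder image
          }
        }
      }
    }
  }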