Cloud Deployment Examples

This section provides examples of deploying Solace Schema Registry on cloud-based Kubernetes platforms. While the examples focus on Google Kubernetes Engine (GKE) Autopilot, the concepts and steps can be adapted to other cloud providers such as Amazon EKS, Azure AKS, or other Kubernetes distributions based on your specific requirements and infrastructure preferences.

Example: Cloud Deployment with GKE Autopilot

This section provides a complete walkthrough of deploying Solace Schema Registry using Google Kubernetes Engine (GKE) Autopilot. GKE Autopilot is a fully managed Kubernetes service that automatically provisions and manages the cluster infrastructure.

Cloud Prerequisites

Before beginning the deployment, ensure you have:

  • Google Cloud account with billing enabled
  • Google Cloud SDK (gcloud) installed
  • kubectl installed
  • Helm installed
  • Access to Solace Schema Registry container images

Set Up Google Cloud SDK & Project

Authenticate with Google Cloud and set your project:

gcloud auth login
gcloud config set project <YOUR_PROJECT_ID>

Replace <YOUR_PROJECT_ID> with your actual GCP project ID.

Create GKE Autopilot Cluster

Create a fully managed Kubernetes cluster in Autopilot mode:

gcloud container clusters create-auto autopilot-cluster-1 --region=us-central1

  • create-auto provisions an Autopilot cluster, which manages nodes automatically
  • --region=us-central1 sets the region (change as needed)

This command creates a multi-zone cluster with automatic node provisioning, scaling, and management.

Connect kubectl to GKE

Fetch cluster credentials so kubectl can manage your GKE cluster:

gcloud container clusters get-credentials autopilot-cluster-1 --region=us-central1

Verify the connection:

kubectl cluster-info
kubectl get nodes

Configure Ingress Controller

Configure an ingress controller to expose services outside the cluster. This example uses NGINX Ingress Controller, but you can use other options such as Network Load Balancer (NLB), Application Load Balancer (ALB), or other ingress controllers based on your cluster configuration and requirements:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --create-namespace --namespace ingress-nginx

Ingress controller selection is configurable at the cluster level. Choose the ingress solution that best fits your infrastructure requirements, such as cloud-native load balancers (NLB/ALB) or other Kubernetes ingress controllers.

Get External IP of Ingress Controller

Find the external IP address assigned to the ingress controller:

kubectl get svc -n ingress-nginx

Look for the EXTERNAL-IP under the ingress-nginx-controller service. You will use this IP for DNS and TLS certificate generation.

Example output:

NAME                                 TYPE           EXTERNAL-IP      PORT(S)
ingress-nginx-controller             LoadBalancer   34.123.45.67     80:31234/TCP,443:32345/TCP

Install Schema Registry

Follow the installation instructions in Installing Solace Schema Registry with Helm, ensuring you configure the following in your values.yaml:

  1. Set the storage class for GKE:
    database:
      storageClass: "standard"  # or "ssd" for better performance
  2. Configure ingress with the external IP (using nip.io for testing):
    ingress:
      enabled: true
      hostNameSuffix: ".34.123.45.67.nip.io"  # Replace with your external IP
      tls:
        enabled: true
        crt: ""  # Your certificate
        key: ""  # Your private key
  3. Configure image registry settings:
    dockerconfigjson: ""  # Base64-encoded registry credentials
    imagePullPolicy: IfNotPresent
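
Once your values.yaml is configured, installation follows the standard Helm workflow. The commands below are only a sketch: the repository URL, chart name, and release name are placeholders, so substitute the values given in Installing Solace Schema Registry with Helm. The solace namespace matches the one used in the verification steps that follow.

helm repo add <SOLACE_REPO_NAME> <SOLACE_REPO_URL>
helm repo update

helm install schema-registry <SOLACE_REPO_NAME>/<CHART_NAME> \
  --namespace solace --create-namespace \
  -f values.yaml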

Verify Deployment

After installation, verify all components are running:

kubectl get pods -n solace
kubectl get svc -n solace
kubectl get ingress -n solace

Access the UI at https://ui.34.123.45.67.nip.io (replace with your actual external IP).
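
You can also confirm the UI endpoint responds before logging in. This uses the same healthcheck path referenced in the troubleshooting section; the -k flag skips certificate verification and is only appropriate for test certificates.

curl -k https://ui.34.123.45.67.nip.io/ui/healthcheck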

Adapting to Other Cloud Providers

While this guide focuses on GKE Autopilot, the deployment process translates well to other cloud providers with some platform-specific adjustments.

Amazon EKS

To deploy on Amazon EKS, create your cluster using eksctl create cluster or the AWS Console. For database storage, EKS supports several storage classes including gp2, gp3, and io1—with gp3 generally offering the best balance of performance and cost for most deployments.

Configure kubectl access to your cluster with aws eks update-kubeconfig. For ingress, you can choose between the AWS Load Balancer Controller (which integrates natively with AWS Application Load Balancers) or NGINX Ingress depending on your requirements.
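
As a rough sketch only (cluster name, region, and node count are illustrative, and the gp3 storage class assumes the EBS CSI driver add-on is installed), an EKS setup might look like this:

# Create the cluster (names and sizes are examples)
eksctl create cluster --name schema-registry --region us-east-1 --nodes 3

# Point kubectl at the new cluster
aws eks update-kubeconfig --name schema-registry --region us-east-1

# Optional: define a gp3 storage class backed by the EBS CSI driver
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
EOF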

Azure AKS

Azure deployments begin with cluster creation using az aks create or the Azure Portal. AKS provides several storage options: the default storage class works for most scenarios, while managed-premium offers better performance for production workloads. The azurefile storage class is available when you need ReadWriteMany access patterns.

After creating your cluster, run az aks get-credentials to configure kubectl access. For ingress, the Application Gateway Ingress Controller provides deep Azure integration, though NGINX Ingress is also well-supported. If you're using Azure Entra ID as your identity provider, detailed integration steps are available in Authentication and Security.
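
A comparable AKS sketch, with the resource group, cluster name, and node count as illustrative placeholders:

# Create a resource group and a basic AKS cluster
az group create --name schema-registry-rg --location eastus
az aks create --resource-group schema-registry-rg --name schema-registry-aks \
  --node-count 3 --generate-ssh-keys

# Point kubectl at the new cluster
az aks get-credentials --resource-group schema-registry-rg --name schema-registry-aks
kubectl get nodes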

Troubleshooting Cloud Deployments

Cloud deployments can encounter platform-specific issues. This section covers common problems and their solutions.

Cluster and Infrastructure Issues

If cluster creation fails, start by verifying your cloud provider credentials are configured correctly and check your account's quota limits—many deployment failures occur when accounts hit resource limits for CPU, memory, or IP addresses. Ensure the selected region supports the requested resources, and review cloud provider logs for detailed error messages that can point to the specific constraint.
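
On GKE, for example, the regional quotas relevant to this walkthrough can be inspected directly; the output includes CPU, disk, and in-use address limits alongside current usage.

gcloud compute regions describe us-central1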

Storage problems typically manifest during database pod startup. Run kubectl get storageclass to list available storage classes and verify the one you specified supports dynamic provisioning. If you're planning to use volume snapshots for backups, confirm the storage class supports CSI snapshots. Each cloud provider has recommended storage classes documented in their Kubernetes service guides.
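
For example, the following commands show which storage classes exist and why a claim might be stuck; the namespace matches this example's install, and the claim name is a placeholder.

# List storage classes and their provisioners
kubectl get storageclass

# Inspect the database volume claim; the Events section usually names the provisioning error
kubectl get pvc -n solace
kubectl describe pvc -n solace <PVC_NAME>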

Networking and Connectivity

When the ingress controller isn't working, check that its pods are running with kubectl get pods -n ingress-nginx and examine the logs using kubectl logs -n ingress-nginx <controller-pod>. Verify an external IP has been assigned with kubectl get svc -n ingress-nginx—if it's stuck in pending state, review your cloud provider's load balancer configuration and any associated service quotas.

Network connectivity issues often stem from firewall rules or network policies. Verify firewall rules allow traffic on ports 80 and 443, and check that network policies aren't blocking pod-to-pod communication within the cluster. Confirm DNS resolution is working for your ingress hostnames, then test external connectivity with curl -k https://ui.<your-domain>/ui/healthcheck.
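
On GKE, for instance, firewall rules and DNS can be checked directly; adjust the commands and hostname for your provider and ingress configuration.

# List firewall rules that could affect ingress traffic (GKE example)
gcloud compute firewall-rules list

# Verify the ingress hostname resolves to the controller's external IP
nslookup ui.34.123.45.67.nip.io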

Platform-Specific Considerations

GKE Autopilot enforces resource limits and pod security policies that can affect deployments. Verify your pod resource requests and limits fit within Autopilot's constraints, and check that your pod security contexts comply with the enforced policies. Note that node auto-provisioning may take several minutes during initial deployment as Autopilot provisions the appropriate node pools.
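
To see whether Autopilot has adjusted or rejected your requested resources, inspect the pods and recent events in the deployment namespace; the pod name is a placeholder.

# Inspect a pod's effective resource requests/limits and security context
kubectl describe pod -n solace <POD_NAME>

# Recent events often explain scheduling delays or policy rejections
kubectl get events -n solace --sort-by=.lastTimestamp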

For performance issues, consider upgrading to premium or SSD-backed storage classes, particularly for the database. Monitor resource utilization to identify bottlenecks—if database pods are CPU or memory constrained, increase their resource allocations in your values.yaml. Enable your cloud provider's monitoring services and review their performance recommendations for Kubernetes workloads.
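
The exact keys depend on the chart version, but a resource and storage override in values.yaml typically looks something like the following; the key names here are illustrative rather than taken from the chart, so check the chart's documented values before applying.

    database:
      storageClass: "premium-rwo"  # SSD-backed class on GKE; provider-specific elsewhere
      resources:
        requests:
          cpu: "1"
          memory: 2Gi
        limits:
          cpu: "2"
          memory: 4Gi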

Cloud Deployment Best Practices

Production cloud deployments benefit from following cloud-native patterns and leveraging platform-specific capabilities.

  • Design for high availability by deploying across multiple availability zones. This ensures your Schema Registry remains operational even if an entire zone experiences an outage. Most cloud providers make multi-zone deployment straightforward through their Kubernetes services, with the platform automatically distributing pods across zones.
  • Implement comprehensive monitoring from the start. Enable your cloud provider's native monitoring services (CloudWatch for AWS, Cloud Monitoring for GCP, Azure Monitor for Azure) and consider integrating with Prometheus and Grafana for detailed metrics and customizable dashboards. This visibility helps you identify issues before they impact users and provides data for capacity planning.
  • Leverage your cloud provider's identity and access management systems for authentication. Rather than managing separate credentials, integrate with IAM (AWS), Cloud IAM (GCP), or Microsoft Entra ID (formerly Azure Active Directory). This centralized approach improves security and simplifies credential lifecycle management. For Schema Registry's internal authentication, the Azure Entra ID integration documented in Authentication and Security provides a good reference pattern.
  • Right-size your resources and enable auto-scaling to balance performance and cost. Start with the minimum recommended resources, monitor actual utilization, then adjust accordingly. Kubernetes Horizontal Pod Autoscaling can automatically adjust replica counts based on CPU or memory utilization, while cloud provider auto-scaling handles node provisioning.
  • For TLS certificates, use cert-manager with Let's Encrypt to automate certificate issuance and renewal. This eliminates manual certificate management and prevents outages from expired certificates. Most cloud environments support cert-manager well, and the initial setup investment pays dividends in reduced operational overhead. A minimal setup sketch follows this list.
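
A minimal cert-manager sketch, assuming the NGINX ingress class used earlier in this example, a publicly resolvable hostname, and a Let's Encrypt production ACME issuer; the release name, issuer name, and email are placeholders.

# Install cert-manager with its CRDs
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# Create a ClusterIssuer for Let's Encrypt using the HTTP-01 solver
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <YOUR_EMAIL>
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
EOF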