Deploying and Configuring Solace Schema Registry with Kubernetes

This section outlines how to deploy Solace Schema Registry on a Kubernetes cluster using Helm. It includes steps for installing the registry via the official Helm chart, customizing deployment parameters using a values.yaml file, and configuring integration with external Identity Providers (IdPs). Using Kubernetes with Helm allows Solace Schema Registry to run as a scalable, containerized service with declarative configuration and automated lifecycle management.

Prerequisites

Before deploying Solace Schema Registry, ensure you have:

  • A Kubernetes cluster running version 1.21 or later (1.19 is the minimum supported version).
  • Helm 3.8 or later installed (3.0 is the minimum supported version).
  • kubectl configured to communicate with your cluster.
  • Access to Solace Schema Registry container images from the Solace Products website. An account is required to access these images.
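To confirm the tooling prerequisites are met, you can run the following quick checks:

kubectl version --client
helm version
kubectl cluster-info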

Minimum System Requirements for Production

The following are the minimum CPU, memory, and storage resources required to run Solace Schema Registry in a production environment:

  • Per-Component Resource Recommendations
    • Database Pods
      • CPU: 500m request/limit per pod
      • Memory: 256Mi request/limit per pod
      • Storage: 1Gi minimum (10Gi recommended for production workloads)
      • Replicas: Minimum 3 for high availability (1 primary + 2 standby)
    • UI, IDP, and Backend Pods
      • CPU: 1000m request/limit per pod recommended
      • Memory: 256Mi request/limit per pod recommended
      • Replicas: Minimum 2 for high availability

These resource recommendations can be configured in your values.yaml file. For production deployments, ensure you allocate sufficient resources based on your expected workload and high availability requirements.
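For example, combining keys documented later in this guide (the top-level replicas setting and the database block), a minimal HA-oriented values.yaml fragment could look like this:

# HA-oriented resource settings (see Configuration Options for details)
replicas: 2   # UI, IdP, and backend pods
database:
  replicas: 3 # 1 primary + 2 standby
  resources:
    requests:
      cpu: 500m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 256Mi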

Multi-Node Cluster Requirements

For high availability deployments, Solace Schema Registry requires a minimum of 3 nodes in your Kubernetes cluster. This ensures proper distribution of services and database replicas across multiple nodes for fault tolerance.

The registry runs in active-active mode but depends on PostgreSQL in an active-standby configuration. A multi-node setup ensures proper failover capabilities.
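Before installing, you can confirm that your cluster meets the node-count requirement:

kubectl get nodes
# Expect at least 3 nodes in the Ready state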

Deployment Architecture

The Solace Schema Registry deployment on Kubernetes consists of the following components:

  • schema-registry-backend—Handles API requests and schema management.
  • schema-registry-ui—Provides the web interface for schema management.
  • schema-registry-db-cluster—Database that stores schema definitions and metadata.
  • schema-registry-idp—Identity provider that manages authentication and authorization.

The diagram below illustrates the Kubernetes deployment architecture of Solace Schema Registry:

Key aspects of this architecture include:

  • Backend System—Deployed as a Kubernetes Deployment with multiple replicas for high availability. It connects to both the database for schema storage and the IdP for authentication. The Backend Service is exposed externally through an Ingress controller with path-based routing (typically at /apis/registry/v3).
  • Database—PostgreSQL is deployed by the Helm chart using the CloudNative PostgreSQL Operator. This provides:
    • Persistent storage through Kubernetes PersistentVolumeClaims
    • High availability with one read-write primary and multiple read-only replicas
    • Automatic failover managed by the operator

    The database deployment requires a properly configured StorageClass in your Kubernetes cluster. After deployment, you are responsible for database management tasks (upgrades, backups, etc.) through the CloudNative PostgreSQL Operator.

  • External Identity Provider (IdP)—Connection to an existing external IdP, for example Microsoft Entra ID or Okta.

For HA deployments, each site runs a nearly identical stack, with the main difference being the PostgreSQL configuration. The Solace Schema Registry itself runs in an active-active configuration, while the database operates in active-standby mode with the Kubernetes operator handling failover in case of site failure.
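After deployment, you can inspect the PostgreSQL cluster managed by the operator. The cluster name below matches the component name listed above, but verify it in your namespace; the second command requires the optional CloudNativePG kubectl plugin:

kubectl get clusters.postgresql.cnpg.io -n solace
kubectl cnpg status schema-registry-db-cluster -n solace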

Helm Chart

The Helm chart is available as part of the Solace Schema Registry package on the Solace Products website. An account is required to access these files. The chart consists of the following components:

  • templates—Directory containing Kubernetes manifest templates.
  • Chart.yaml—Contains chart metadata such as version, name, and description.
  • values.yaml—Default configuration values for the chart.

The Helm chart creates all necessary Kubernetes resources including deployments, services, configmaps, namespace, secrets, and ingress rules to run Solace Schema Registry in your Kubernetes cluster.
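Before customizing your deployment, you can dump the chart's default values to review every available option:

helm show values ./solace-schema-registry > my-values.yaml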

Helm Chart Installation

You can deploy Solace Schema Registry using the provided Helm chart. This chart automates the deployment of all necessary components and configures them to work together. To install Solace Schema Registry using Helm, follow these steps:

  1. Retrieve Solace Schema Registry from the Solace Products website. Follow the instructions in the included README.md to load the Docker images into your container registry.
  2. Install the CloudNative PostgreSQL Operator for database management:
    kubectl apply --server-side -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.26/releases/cnpg-1.26.0.yaml
  3. Open the helm-charts directory and update the values.yaml file with your environment-specific settings, replacing the example values with your actual configuration (see the note after these steps for generating the dockerconfigjson value):
    # Example custom values for your deployment
    
    # Docker image registry credentials for pulling images
    dockerconfigjson: "" # Base64-encoded Docker config.json with registry credentials
    
    # Database configuration
    database:
      superuserSecret: ""    # Name of the secret containing the superuser password, if not provided a new secret will be created with a randomly generated password
      logLevel: info         # PostgreSQL log level: debug, info, warning, error, trace. Higher levels provide more detailed logging for troubleshooting
      storageClass: ""       # Kubernetes storage class for database persistent volumes. Use cloud-specific classes like 'gp2' (AWS), 'standard' (GCP), or 'nfs' for local development
      size: 1Gi              # Size of the main database storage. Minimum 10Gi recommended for production workloads to handle schema registry data growth
      replicas: 3            # Number of PostgreSQL instances in the cluster. Minimum 3 for high availability: 1 primary + 2 standby replicas for automatic failover
      enableMonitoring: true # Enable Prometheus monitoring for PostgreSQL metrics. Set to true in production for observability
      tolerations: []        # Optional tolerations for database pods. Example: [{"key": "database", "operator": "Equal", "value": "true", "effect": "NoSchedule"}]
      resources:
        requests:
          cpu: 500m     # CPU request for the database pods
          memory: 256Mi # Memory request for the database pods
        limits:
          cpu: 500m     # CPU limit for the database pods
          memory: 256Mi # Memory limit for the database pods
    
    # Identity Provider configuration
    idp:
      registryOidcClientSecret: "" # Secure client secret for OIDC authentication
      developerPassword: ""        # Password for developer role access
      readonlyPassword: ""         # Password for read-only role access
    
    # Ingress and TLS Configuration
    ingress:
      port: 443          # or your NodePort for HTTPS
      hostNameSuffix: "" # Your domain or use nip.io with cluster IP for testing
      tls:
        enabled: true
        crt: "" # Your certificate content
        key: "" # Your private key content
  4. Install Solace Schema Registry using Helm:
    helm upgrade --install schema-registry ./solace-schema-registry
  5. Verify the deployment:
    kubectl get pods -n solace

    You should see pods for the backend service, UI service, database, and identity provider all in the Running state.

  6. Access the deployed services. Replace <ingress.hostNameSuffix> with the actual hostname or IP address you configured for your ingress:
    • UI Service: https://ui.<ingress.hostNameSuffix>

    When using the internal Identity Provider, log in with one of the following credentials:

    • sr-developer:<devPassword>—For developer role access
    • sr-readonly:<roPassword>—For read-only access

    Replace <devPassword> and <roPassword> with the values you set in the idp.developerPassword and idp.readonlyPassword fields in your values file. For more information, see Internal Identity Provider (IdP) Configuration.
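For the dockerconfigjson value referenced in step 3, one approach (assuming you have already run docker login against the registry that hosts the images) is to Base64-encode your local Docker configuration file:

base64 -w 0 ~/.docker/config.json   # Linux
base64 -i ~/.docker/config.json     # macOS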

Configuration Options

The following are key configuration options in the values.yaml file that you can customize for your deployment. These settings control various aspects of Solace Schema Registry deployment, from basic scaling parameters to network configuration and security settings.

General Configuration

These settings control the basic deployment parameters such as the number of replicas for high availability and the namespace where you deploy Solace Schema Registry. For production environments, Solace recommends that you set at least 2 replicas to ensure high availability.

# Namespace for deployment
namespace: solace

# For high availability, set replicas to 2 or more
replicas: 2

Database Configuration

The Solace Schema Registry requires a PostgreSQL database to store schema definitions and metadata. These settings configure the database connection parameters, credentials, and replication for high availability. For production deployments, make sure you set secure credentials and configure the appropriate number of database replicas.

database:
  superuserSecret: ""    # Name of the secret containing the superuser password, if not provided a new secret will be created with a randomly generated password
  logLevel: info         # PostgreSQL log level: debug, info, warning, error, trace. Higher levels provide more detailed logging for troubleshooting
  storageClass: ""       # Kubernetes storage class for database persistent volumes. Use cloud-specific classes like 'gp2' (AWS), 'standard' (GCP), or 'nfs' for local development
  size: 1Gi              # Size of the main database storage. Minimum 10Gi recommended for production workloads to handle schema registry data growth
  replicas: 3            # Number of PostgreSQL instances in the cluster. Minimum 3 for high availability: 1 primary + 2 standby replicas for automatic failover
  enableMonitoring: true # Enable Prometheus monitoring for PostgreSQL metrics. Set to true in production for observability
  tolerations: []        # Optional tolerations for database pods. Example: [{"key": "database", "operator": "Equal", "value": "true", "effect": "NoSchedule"}]
  resources:
    requests:
      cpu: 500m     # CPU request for the database pods
      memory: 256Mi # Memory request for the database pods
    limits:
      cpu: 500m     # CPU limit for the database pods
      memory: 256Mi # Memory limit for the database pods
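If you prefer to supply your own superuser password rather than the generated one, you can create the secret yourself and reference its name in database.superuserSecret. This sketch assumes the chart passes the secret to CloudNativePG, which expects a kubernetes.io/basic-auth secret with username and password keys; confirm the expected format in the chart's README:

kubectl create secret generic my-superuser-secret -n solace \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=postgres \
  --from-literal=password='<strong password>'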

Storage Class Configuration

For cloud deployments, you need to specify the appropriate storage class based on your cloud provider. This ensures proper persistent volume provisioning for the database:

database:
  storageClass: "standard"  # Choose based on your cloud provider
  # Examples:
  # GKE: "standard" or "ssd"
  # EKS: "gp2" or "gp3"
  # AKS: "default" or "managed-premium"

For local development with Minikube, you can use the default storage class or specify "standard".
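To list the storage classes available in your cluster:

kubectl get storageclass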

Ingress and TLS Configuration

For production deployments, you'll need to configure ingress and TLS certificates in your values.yaml file as shown in step 3 of the Helm Chart Installation section.

For testing purposes, you can use nip.io domains with your cluster's external IP address, for example *.192.168.1.100.nip.io, to avoid DNS configuration.

Authentication

Solace Schema Registry supports two authentication methods: an internal identity provider for simple deployments and external identity providers for production environments. Choose the method that best fits your security requirements and infrastructure.

The internal and external identity provider configurations are mutually exclusive. You must choose one authentication method and configure only the variables for that method.

Internal Identity Provider (IdP) Configuration

Solace Schema Registry uses a lightweight internal IdP (node-oidc-provider) for basic authentication. This is suitable for development environments or smaller deployments where integration with an external IdP is not required. Configure the internal IdP using the following settings in values.yaml:

When using the embedded (internal) OIDC provider in your Kubernetes deployment, Solace Schema Registry requires HTTPS because the embedded provider relies on secure communication protocols to protect authentication data.

  • idp.registryOidcClientSecret—Secure client secret for OIDC authentication. Required.
  • idp.registryIdpKey—A Base64-encoded private key for signing JWTs. Required.
  • idp.developerPassword—Password for developer role access. Required.
  • idp.readonlyPassword—Password for read-only role access. Required.
  • idp.service.port—Port for the internal IdP service. Optional (default: 3000).

Example Internal Authentication Configuration

idp:
  registryOidcClientSecret: ""  # Set a secure client secret
  registryIdpKey: ""            # Set a secure IdP key
  developerPassword: ""         # Password for developer role
  readonlyPassword: ""          # Password for readonly role
  service:
    name: schema-registry-idp-service
    port: 3000
  image:
    name: solace-schema-registry-login
    tag: latest
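As an illustrative sketch for generating the registryIdpKey value, assuming the provider accepts an RSA private key in PEM format (confirm the expected key type and encoding in the chart's README):

openssl genrsa 2048 | base64 -w 0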

External Identity Provider (IdP) Configuration

For enterprise deployments, Solace Schema Registry supports integration with external OIDC providers. This section provides comprehensive configuration guidance including general OIDC setup and provider-specific instructions.

OIDC Configuration Variables

An external IdP allows you to leverage your existing identity management infrastructure and provides more advanced security features such as multi-factor authentication and single sign-on. To configure an external IdP, you need to set the following variables in the values.yaml file:

  • externalIdp.enabled—Set to true to enable external identity provider authentication.
  • externalIdp.uiIssuer—Token issuer URL for UI authentication.
  • externalIdp.apiIssuer—Token issuer URL for API authentication.
  • externalIdp.apiRolesClaimPath—Path to the roles claim in API tokens (for example, roles).
  • externalIdp.uiRolesClaimPath—Path to the roles claim in UI tokens (for example, groups).
  • externalIdp.authRolesDeveloper—Role or group ID that grants developer access.
  • externalIdp.authRolesReadonly—Role or group ID that grants read-only access.
  • externalIdp.oidcDiscoveryEnabled—Set to true to enable OIDC discovery.
  • externalIdp.oidcResolveTenantsWithIssuer—Set to true for multi-tenant environments (required for Azure).
  • externalIdp.oidcAuthServerUrl—Base URL of your OIDC provider (for example, https://your-keycloak.com/auth/realms/registry).
  • externalIdp.clientId—Client ID registered with the OIDC provider.
  • externalIdp.apiSecret—Client secret for API authentication.
  • externalIdp.authnBasicScope—Authentication scope for basic auth (for example, api://clientId/.default for Azure).

Example External Authentication Configuration

externalIdp:
  enabled: true                        # Enable external IdP integration
  uiIssuer: ""                         # Token issuer URL for UI authentication
  apiIssuer: ""                        # Token issuer URL for API authentication
  apiRolesClaimPath: "roles"           # Path to roles claim in API tokens
  uiRolesClaimPath: "groups"           # Path to roles claim in UI tokens
  authRolesDeveloper: ""               # Group/role ID that grants developer access
  authRolesReadonly: ""                # Group/role ID that grants read-only access
  oidcDiscoveryEnabled: "true"         # Enable OIDC discovery protocol
  oidcResolveTenantsWithIssuer: "true" # Required for multi-tenant environments like Azure
  oidcAuthServerUrl: ""                # Base URL of your OIDC provider
  clientId: ""                         # Client ID registered with the OIDC provider
  apiSecret: ""                        # Client secret for API authentication
  authnBasicScope: ""                  # Authentication scope for basic auth

When using an external Identity Provider (IdP), the deployment does not include or configure an internal IdP. The Solace Schema Registry expects role-based authorization information in the groups claim by default, but this can be customized using the registryOidcRoleClaimKey setting.
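Before enabling OIDC discovery, you can sanity-check the issuer URL you plan to configure by querying the standard OIDC discovery endpoint:

curl -s https://<your-oidc-provider>/.well-known/openid-configuration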

Supported Identity Providers

While the OIDC configuration variables above work with any compatible provider, some identity providers require additional configuration steps, have specific limitations, or need workarounds. This section provides detailed guidance for Microsoft Entra ID.

Getting Started with Microsoft Entra ID

When configuring Azure application registrations to interact with the Registry REST API, there are additional configuration requirements and key limitations around role support.

Limitations

  1. The REST API supports only one role type at a time, read-only or developer, not both simultaneously.
  2. Users and app registrations cannot share the same role configuration values.

Azure Entra ID Configuration

The following configuration variables must be set in your values.yaml file for Azure Entra ID integration:

externalIdp:
  enabled: true
  oidcResolveTenantsWithIssuer: "true"
  uiIssuer: "https://login.microsoftonline.com/<tenant ID>/v2.0"
  apiIssuer: "https://sts.windows.net/<tenant ID>/"
  uiRolesClaimPath: "groups"
  apiRolesClaimPath: "roles"
  authRolesDeveloper: "<group object ID>"
  authRolesReadonly: "<group object ID>"
  authnBasicScope: "api://<clientId of app registration>/.default"

Matching Group IDs and App Role Values

Azure Entra ID presents a unique challenge: Solace Schema Registry uses a single set of role configuration values (externalIdp.authRolesDeveloper and externalIdp.authRolesReadonly), but Azure handles authentication differently for users versus applications:

  • Interactive users—Azure returns group membership IDs in the groups claim. These are fixed Azure group object IDs that you cannot modify.
  • App registrations (service principals)—Azure includes custom app role values in the roles claim. These are values you define when creating app roles.

As a workaround, when you create app roles in your Azure app registration, set the app role Value field to match your existing Azure group object IDs. This ensures both users and applications can use the same role configuration values in Solace Schema Registry. For example, if your developer Azure group has object ID 12345678-1234-1234-1234-123456789abc, create an app role with Value = 12345678-1234-1234-1234-123456789abc.

App Role Selection

Because only one externalIdp.authnBasicScope is supported, you must choose which role your REST API clients will have:

  • Choose Developer role if REST clients need to create/modify artifacts (for example, creating schemas with references)
  • Choose Read-only role if REST clients should only have read access to the registry

This role selection limitation only affects REST API clients. Web UI users can still use different roles and access the registry without issue.

Example Configuration

Your existing Azure groups:

  • Company Developers group ID: bbdb4071-920e-4704-94da-e43f930a7f96 (should have read-only access)
  • Schema Administrator group ID: 44decace-9a22-4338-b929-e30e4fcf0479 (should have write access)

To configure Azure Entra ID with the required workarounds, follow these steps:

  1. Configure role mappings using your group IDs in your values.yaml file:
    externalIdp:
      authRolesDeveloper: "44decace-9a22-4338-b929-e30e4fcf0479"
      authRolesReadonly: "bbdb4071-920e-4704-94da-e43f930a7f96"
  2. Create Azure app registration for REST API access:
    1. Create app registration with identifier: api://8b2f9d9d-2ba4-486e-bcf5-5320c90ff0a4
    2. Important: Create an app role named sr-readonly with value bbdb4071-920e-4704-94da-e43f930a7f96 (this matches the group ID from Step 1)
    3. Grant this role to the app registration
  3. Configure the scope in your values.yaml file:
    externalIdp:
      authnBasicScope: "api://8b2f9d9d-2ba4-486e-bcf5-5320c90ff0a4/.default"
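To verify the app registration before pointing clients at the registry, you can request a token through the client credentials flow against the Microsoft identity platform v2.0 token endpoint (tenant ID, client ID, and client secret below are placeholders). The roles claim in the returned access token should contain the app role value created in step 2:

curl -s -X POST "https://login.microsoftonline.com/<tenant ID>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<clientId of app registration>" \
  -d "client_secret=<client secret>" \
  -d "scope=api://8b2f9d9d-2ba4-486e-bcf5-5320c90ff0a4/.default"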

Role-Based Access Control

The Solace Schema Registry supports role-based access control through the IdP's group claims. This allows you to control what actions different users can perform based on their assigned roles. The following roles are available:

  • sr-developer—Can create and manage schemas but cannot modify global settings. When using the internal IdP, access with username sr-developer and the password set in idp.developerPassword.
  • sr-readonly—Read-only access to schemas with no modification privileges. When using the internal IdP, access with username sr-readonly and the password set in idp.readonlyPassword.

These default roles and their permissions can be customized or mapped to your organization's own identity provider roles as needed. Ensure your IdP is configured to provide these roles in the claims specified by externalIdp.uiRolesClaimPath and externalIdp.apiRolesClaimPath. The default values are groups for UI and roles for API.

Example: Cloud Deployment with GKE Autopilot

This section provides an example of deploying Solace Schema Registry using Google Kubernetes Engine (GKE) Autopilot. This is one of many cloud deployment options available. You can adapt these steps to other cloud providers or Kubernetes distributions based on your specific requirements and infrastructure preferences.

Cloud Prerequisites

  • Google Cloud account with billing enabled
  • Google Cloud SDK (gcloud) installed
  • kubectl installed
  • Helm installed

Set Up Google Cloud SDK & Project

Authenticate with Google Cloud and set your project:

gcloud auth login
gcloud config set project <YOUR_PROJECT_ID>

Replace <YOUR_PROJECT_ID> with your actual GCP project ID.

Create GKE Autopilot Cluster

Create a fully managed Kubernetes cluster in Autopilot mode:

gcloud container clusters create-auto autopilot-cluster-1 --region=us-central1

  • create-auto provisions an Autopilot cluster, which manages nodes automatically
  • --region=us-central1 sets the region (change as needed)

Connect kubectl to GKE

Fetch cluster credentials so kubectl can manage your GKE cluster:

gcloud container clusters get-credentials autopilot-cluster-1 --region=us-central1

Install Helm Chart

Follow the instructions in Helm Chart Installation.

Configure Ingress Controller (Example: NGINX)

Configure an ingress controller to expose services outside the cluster. This example uses NGINX Ingress Controller, but you can use other options such as Network Load Balancer (NLB), Application Load Balancer (ALB), or other ingress controllers based on your cluster configuration and requirements:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --create-namespace --namespace ingress-nginx

Ingress controller selection is configurable at the cluster level. Choose the ingress solution that best fits your infrastructure requirements, such as cloud-native load balancers (NLB/ALB) or other Kubernetes ingress controllers.

Get External IP of Ingress Controller

Find the external IP address assigned to the ingress controller:

kubectl get svc -n ingress-nginx

Look for the EXTERNAL-IP under the ingress-nginx-controller service. You will use this IP for DNS and TLS certificate generation.
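To extract the address directly with jsonpath:

kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'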

Troubleshooting

If you encounter issues during deployment, follow these troubleshooting steps to identify and resolve common problems:

  • Pod startup failures:
    • Check pod logs with: kubectl logs -n solace <pod-name>
    • Check for any failure events with: kubectl describe pod <pod-name> -n solace
    • Verify environment variables are correctly set
    • Check for resource constraints (CPU, memory) that might prevent pods from starting
  • Authentication issues:
    • Verify IdP configuration
    • Check OIDC client credentials
    • Ensure redirect URIs are correctly configured in both Solace Schema Registry and IdP
    • Verify that the required roles are properly configured in your IdP
  • Database connection issues:
    • Verify database credentials
    • Check database service is running: kubectl get svc -n solace
    • Ensure network policies allow communication between Solace Schema Registry and database pods
  • Service verification:
    • UI Healthcheck: curl -k https://ui.<your-domain>/ui/healthcheck
    • Backend Healthcheck: curl -k https://apis.<your-domain>/apis/registry/v3
    • IDP OIDC Configuration: curl -k https://idp.<your-domain>/.well-known/openid-configuration

Enhanced Deployment Verification

Use the following command to check the status of all pods in the namespace; all pods should reach the Running state and report Ready:

kubectl get pods -n solace
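To block until all pods report Ready (useful in scripts or CI pipelines), you can use kubectl wait; the 300-second timeout is an example value:

kubectl wait --for=condition=Ready pods --all -n solace --timeout=300s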