Configuring the Helm Chart

This section describes the configuration options available in the values.yaml file for the Solace Schema Registry Helm chart. These settings control various aspects of the deployment, from basic scaling parameters to advanced backup and recovery configurations.

Helm Chart Structure

The Helm chart is available as part of the Solace Schema Registry distribution on the Solace Products website. An account is required to access these files. The chart consists of the following components:

  • templates—Directory containing Kubernetes manifest templates.
  • Chart.yaml—Contains chart metadata such as version, name, and description.
  • values.yaml—Default configuration values for the chart.

The Helm chart creates all the Kubernetes resources needed to run Solace Schema Registry in your cluster, including deployments, services, ConfigMaps, a namespace, secrets, and ingress rules.

General Configuration

These settings control the basic deployment parameters such as the number of replicas for high availability and the namespace where you deploy Solace Schema Registry. For production environments, Solace recommends that you set at least 2 replicas to ensure high availability.

# Namespace for deployment
namespace: solace

# For high availability, set replicas to 2 or more
replicas: 2

Image Configuration

The Solace Schema Registry consists of three main components, each with configurable Docker image settings. You can specify the image names, tags, and pull policy for each component.

Image Pull Configuration

Configure how Kubernetes pulls container images:

# Image pull policy - use IfNotPresent for local registries
imagePullPolicy: IfNotPresent

# Base64-encoded Docker config.json with registry credentials
dockerconfigjson: ""

The dockerconfigjson field must contain a base64-encoded Docker configuration file (config.json) that includes credentials for accessing your container registry. It is required when your images are stored in a private registry.
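As an illustration, you can produce this value with the base64 utility. The registry name and credentials below are placeholders, not values from the chart; in practice you would typically encode your existing ~/.docker/config.json instead:

```shell
# Build a minimal Docker config.json with placeholder credentials and
# base64-encode it for the dockerconfigjson value.
# Note: -w 0 (disable line wrapping) is a GNU coreutils option.
AUTH=$(printf 'myuser:mypassword' | base64)
DOCKERCONFIGJSON=$(printf '{"auths":{"your-registry.com":{"auth":"%s"}}}' "$AUTH" | base64 -w 0)
echo "$DOCKERCONFIGJSON"
```

Paste the resulting string into the dockerconfigjson field in values.yaml.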

Component Image Tags and Names

Each component has configurable image name and tag settings:

Component           Configuration Key    Default Value
Identity Provider   idp.image.name       solace-schema-registry-login
Identity Provider   idp.image.tag        latest
Backend API         backend.image.name   solace-registry
Backend API         backend.image.tag    latest
Web UI              ui.image.name        solace-registry-ui
Web UI              ui.image.tag         latest

Example Configuration:

idp:
  image:
    name: your-registry.com/project/solace-schema-registry-login
    tag: v1.0.0

backend:
  image:
    name: your-registry.com/project/solace-registry
    tag: v1.0.0

ui:
  image:
    name: your-registry.com/project/solace-registry-ui
    tag: v1.0.0

Database Configuration

The Solace Schema Registry requires a PostgreSQL database to store schema definitions and metadata. The Helm chart deploys PostgreSQL using the CloudNative PostgreSQL Operator. These settings configure the database connection parameters, credentials, and replication for high availability.

Core Database Options

database:
  superuserSecret: ""    # Name of the secret containing the superuser password
                         # If not provided, creates new secret with random password
  logLevel: info         # PostgreSQL log level: error, warning, info, debug, trace
                         # More verbose levels (debug, trace) help with troubleshooting
  storageClass: ""       # Kubernetes storage class for database persistent volumes
                         # Use cloud-specific classes: 'gp2' (AWS), 'standard' (GCP), 'nfs' (local)
  size: 1Gi              # Size of the main database storage
                         # Minimum 10Gi recommended for production workloads
  replicas: 3            # Number of PostgreSQL instances in the cluster
                         # Minimum 3 for HA: 1 primary + 2 standby replicas for automatic failover
  enableMonitoring: true # Enable Prometheus monitoring for PostgreSQL metrics
                         # Set to true in production for observability
  tolerations: []        # Optional tolerations for database pod scheduling on tainted nodes
  resources:
    requests:
      cpu: 500m          # CPU request for the database pods
      memory: 256Mi      # Memory request for the database pods
    limits:
      cpu: 500m          # CPU limit for the database pods
      memory: 256Mi      # Memory limit for the database pods

WAL (Write-Ahead Logging) storage uses the same disk as the database storage. This simplifies configuration while providing the necessary transaction logging for database recovery.
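Pulling together the production guidance from the comments above, a values override sketch might look like the following. The sizes and storage class are illustrative starting points, not Solace-validated figures:

```yaml
database:
  size: 10Gi              # at least 10Gi recommended for production workloads
  replicas: 3             # 1 primary + 2 standby replicas for automatic failover
  enableMonitoring: true  # expose Prometheus metrics for the PostgreSQL cluster
  storageClass: "gp3"     # example for EKS; substitute your provider's class
```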

Database Bootstrap Configuration

Configure the initial database setup:

bootstrap:
  initdb:
    database: ""  # Application database name
    user: ""      # Application database user
    secret: ""    # If not provided, creates new secret with random password
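A filled-in example might look like the following. The database name, user, and secret name are placeholders, and the referenced secret must already exist in the deployment namespace:

```yaml
bootstrap:
  initdb:
    database: "schema_registry"        # application database name (placeholder)
    user: "registry"                   # application database user (placeholder)
    secret: "registry-db-credentials"  # existing secret holding this user's password
```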

Database Tolerations

Tolerations allow database pods to be scheduled on nodes with specific taints. This is useful for isolating database workloads on high-performance or dedicated infrastructure.

Example Configuration:

# Taint nodes with:
# kubectl taint nodes <node-name> database-workload=true:NoSchedule

database:
  tolerations:
    - key: "database-workload"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"

Storage Class Configuration

For cloud deployments, you need to specify the appropriate storage class based on your cloud provider. This ensures proper persistent volume provisioning for the database:

database:
  storageClass: "standard"  # Choose based on your cloud provider
  # Examples:
  # GKE: "standard" or "ssd"
  # EKS: "gp2" or "gp3"
  # AKS: "default" or "managed-premium"

For local development with Minikube, you can use the default storage class or specify "standard".

Ingress and TLS Configuration

Configure how external traffic reaches your Solace Schema Registry deployment:

ingress:
  enabled: true                       # Enable or disable ingress resources
  port: ""                            # Port for HTTPS traffic (or your NodePort)
  hostNameSuffix: ""                  # Your domain or use nip.io with cluster IP for testing
                                      # Example: ".example.com" or ".192.168.1.100.nip.io"
  annotations: {}                     # Additional annotations for ingress resources
                                      # Example: kubernetes.io/ingress.class: "nginx"
  tls:
    enabled: true                     # Enable TLS for ingress
    crt: ""                           # Your certificate content (PEM format)
    key: ""                           # Your private key content (PEM format)

For testing purposes, you can use nip.io domains with your cluster's external IP address (for example, *.192.168.1.100.nip.io) to avoid DNS configuration.

Example with Annotations:

ingress:
  enabled: true
  hostNameSuffix: ".example.com"
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  tls:
    enabled: true
    crt: |
      -----BEGIN CERTIFICATE-----
      ...your certificate...
      -----END CERTIFICATE-----
    key: |
      -----BEGIN PRIVATE KEY-----
      ...your private key...
      -----END PRIVATE KEY-----

Authentication Configuration

Authentication is configured through values.yaml using either the internal identity provider or external OIDC providers. For detailed authentication setup procedures, see Authentication and Security.

Quick Reference:

  • Internal IdP: Configure idp.registryOidcClientSecret, idp.developerPassword, idp.readonlyPassword
  • External IdP: Set externalIdp.enabled: true and configure OIDC parameters
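For example, a minimal internal-IdP fragment built from the keys above might look like this. All values are placeholders; see Authentication and Security for the authoritative parameter descriptions:

```yaml
idp:
  registryOidcClientSecret: "replace-with-oidc-client-secret"
  developerPassword: "replace-with-developer-password"
  readonlyPassword: "replace-with-readonly-password"
```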

See Authentication and Security for complete configuration details and provider-specific setup instructions.

Monitoring and Metrics

In versions 1.1.0 and later, Solace Schema Registry provides built-in monitoring by exposing Prometheus metrics at the /metrics endpoint of the service. These metrics provide insights into registry performance, health, and operational status.

To access the metrics endpoint in your Kubernetes deployment:

  • Backend metrics: https://apis.<ingress.hostNameSuffix>/metrics

Replace <ingress.hostNameSuffix> with the actual hostname or IP address you configured for your ingress.

You are responsible for providing your own Prometheus collector or server; Solace Schema Registry only exposes the metrics endpoint.
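Because you supply the Prometheus server, you also define the scrape configuration yourself. A minimal sketch, assuming an ingress hostname of apis.example.com and HTTPS on the default port:

```yaml
scrape_configs:
  - job_name: "solace-schema-registry"
    scheme: https
    metrics_path: /metrics
    static_configs:
      - targets: ["apis.example.com"]  # your backend ingress hostname
```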

For information about audit logs and monitoring registry operations, see Monitoring Solace Schema Registry.

Troubleshooting Configuration Issues

If you encounter configuration-related issues, try these troubleshooting steps:

  • Database configuration issues:
    • Verify storage class exists: kubectl get storageclass
    • Check if storage class supports volume snapshots (if using backup.volumeSnapshot)
    • Verify database credentials are correctly set
    • Check database pod logs: kubectl logs -n solace <db-pod-name>
  • Image pull issues:
    • Verify dockerconfigjson is correctly base64-encoded
    • Check image names and tags match your registry
    • Verify imagePullPolicy is appropriate for your setup
    • Test registry access: docker pull <your-registry>/<image-name>:<tag>
  • Ingress configuration issues:
    • Verify ingress controller is installed and running
    • Check ingress resource: kubectl get ingress -n solace
    • Verify DNS or nip.io resolution
    • Check TLS certificate validity
  • Backup configuration issues:
    • For volume snapshots: Verify storage class supports CSI snapshots
    • For object store: Verify credentials and endpoint URL are correct
    • Check backup job logs: kubectl logs -n solace <backup-job-pod>
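For the "check TLS certificate validity" step in the ingress list above, openssl can inspect the certificate directly. The example below first generates a throwaway self-signed certificate so it is self-contained; against a real deployment you would point openssl at your actual tls.crt, and the CN here is a placeholder:

```shell
# Create a short-lived self-signed certificate for demonstration only;
# skip this step and use your real tls.crt in an actual deployment.
openssl req -x509 -newkey rsa:2048 -nodes -keyout tls.key -out tls.crt \
  -days 1 -subj "/CN=apis.example.com" 2>/dev/null

# Print the validity window and subject of the certificate.
openssl x509 -in tls.crt -noout -dates -subject
```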