Converting from Multiple Mount Points to a Single Mount Point

In versions of SolOS prior to 9.12, storage-elements were typically externalized to separate storage volumes and mounted to the container using separate mount points. In version 9.12 and later, these storage-elements are now collected under a single object called a storage-group, which uses a single mount point.

Before you use these procedures, calculate the storage requirements for your container. Make sure your external storage is sufficient to accommodate:

  • the minimum required storage, PLUS
  • additional storage for higher values of system scaling parameters, PLUS
  • space for spooled messages

For details, see:

If you have upgraded from SolOS version 9.11 or earlier to SolOS 9.12 or later, you can use the following procedures to convert your software event broker(s) from using multiple mount points to using a single mount point:

Standalone

To update a standalone container image to use a single mount point instead of multiple mount points, do the following:

These steps assume you are using Docker Engine. Similar commands are available for other container runtimes.
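
For example, if you are using Podman instead of Docker Engine, most of the docker commands in this procedure have direct podman equivalents (this assumes a Podman-managed container named solace):

    podman volume ls
    podman volume inspect adb
    podman stop --time 1200 solace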

  1. Make sure all the required data are in the volumes. To double-check which volumes you have in the environment, run the following command:
    docker volume ls

    The response should look like the following:

    DRIVER    VOLUME NAME
    local     adb
    local     adbBackup
    local     diagnostics
    local     internalSpool
    local     jail
    local     var
  2. Stop the Solace container:
    docker stop --time 1200 solace
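
    If you want to confirm that the broker has shut down before removing the container, you can check its status (optional):

    docker ps -a --filter name=solace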
  3. Remove the container: 
    docker rm solace
  4. On the host operating system, create new directories in the location where you want the new storage-group to reside.

    You can use any name you want for the top-level directory, but the storage-element directories must be named exactly as shown in the example (spool, spool-cache, var, spool-cache-backup, jail, and diagnostics).

    The following example uses /mnt/solace as the top-level directory:

    mkdir -p /mnt/solace/spool
    mkdir -p /mnt/solace/spool-cache
    mkdir -p /mnt/solace/var
    mkdir -p /mnt/solace/spool-cache-backup
    mkdir -p /mnt/solace/jail
    mkdir -p /mnt/solace/diagnostics
  5. Set the owner and group of the new directories to the container owner and container group by running the following command, replacing <container-user> and <container-group> with the actual values:

    chown -R <container-user>:<container-group> /mnt/solace/

    For example, if you use the default values for the container owner (1000001) and group (0), run the following command:

    chown -R 1000001:0 /mnt/solace/
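
    If you are not sure which user and group IDs the container runs as, one way to check is to inspect the numeric ownership of files in one of the existing volumes (this example assumes the default Docker volume location shown in the next step):

    ls -ln /var/lib/docker/volumes/var/_data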
  6. To determine where each of the old volumes was mounted on the host, run the following command (the adb volume is shown as an example):
    docker volume inspect adb

    The response should look like the following. The "Mountpoint" field shows where the adb volume was mounted:

    [
        {
            "CreatedAt": "0001-01-01T00:00:00Z",
            "Driver": "local",
            "Labels": {},
            "Mountpoint": "/var/lib/docker/volumes/adb/_data",
            "Name": "adb",
            "Options": {},
            "Scope": "local"
        }
    ]
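
    Optionally, you can list just the mount point of each old volume in a single command:

    docker volume inspect --format '{{ .Mountpoint }}' adb adbBackup diagnostics internalSpool jail var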
  7. Move the contents of each volume's folder to the new location (note the differences between the old volume names and the new storage-element names). The shopt -s dotglob command ensures that hidden files are also moved:
    shopt -s dotglob
    mv /var/lib/docker/volumes/adb/_data/* /mnt/solace/spool-cache
    mv /var/lib/docker/volumes/internalSpool/_data/* /mnt/solace/spool
    mv /var/lib/docker/volumes/diagnostics/_data/* /mnt/solace/diagnostics
    mv /var/lib/docker/volumes/var/_data/* /mnt/solace/var
    mv /var/lib/docker/volumes/adbBackup/_data/* /mnt/solace/spool-cache-backup
    mv /var/lib/docker/volumes/jail/_data/* /mnt/solace/jail
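
    After the move, you can verify that the data is in the new location and that the old volume directories are now empty, for example:

    ls -l /mnt/solace
    ls -A /var/lib/docker/volumes/adb/_data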
  8. Run the following command to create a new container that mounts the new storage-group directory (/mnt/solace) as a bind mount:
     docker create --network=host --uts=host --shm-size=1g --ulimit core=-1 \
    --ulimit memlock=-1 --ulimit nofile=2448:42192 --env 'username_admin_globalaccesslevel=admin' \
    --env 'username_admin_password=admin' --name=solace --mount type=bind,source=/mnt/solace,destination=/var/lib/solace,ro=false \
    solace-pubsub-enterprise:<version>
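
    The docker create command only defines the new container; when you are ready to bring the broker back up, start it and confirm that it is running:

    docker start solace
    docker ps --filter name=solace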
  9.  If you used the Docker volume API to create the old named volumes, you can now remove them by running the following command (they are no longer used):
    docker volume rm adb adbBackup diagnostics internalSpool jail var

High Availability

To update the software event brokers in a high-availability group to use a single mount point instead of multiple mount points, do the following:

In the steps that follow, the primary messaging node is referred to as solace-primary and the backup messaging node is referred to as solace-backup.

  1. Verify that the redundancy status is correct by running the show redundancy command on each messaging node:
    • On both messaging nodes, ensure Redundancy Configuration Status is Enabled and Redundancy Status is Up
    • On the primary node, the Message Spool Status should be AD-Active
    • On the backup node, the Message Spool Status should be AD-Standby
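
    For example, on the primary node the relevant fields of the show redundancy output should look similar to the following excerpt (the exact layout varies by release):

    solace-primary> show redundancy
    Configuration Status     : Enabled
    Redundancy Status        : Up
    Message Spool Status     : AD-Active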
  2. On the backup node, perform the steps listed above for a standalone image.
  3. On the backup node, run the show redundancy command to ensure that the Redundancy Status is Up:
    solace-backup> show redundancy
    Configuration Status     : Enabled
    Redundancy Status        : Up
  4. On the backup node, if config-sync was in use, verify that it is up again:
    solace-backup> show config-sync
    Admin Status            : Enabled
    Oper Status             : Up
  5. On the primary node, release activity to the backup node:
    solace-primary> enable
    solace-primary> configure
    solace-primary> redundancy release-activity
  6. On the backup node, run the show redundancy command to ensure that the Message Spool Status is now AD-Active.
  7. On the primary node, perform the steps listed above for a standalone image.
  8. On the primary node, re-claim activity:
    solace-primary> enable
    solace-primary> configure
    solace-primary> no redundancy release-activity
  9. On the primary node, run the show redundancy command to ensure that the Redundancy Status is Up:
    solace-primary> show redundancy
    Configuration Status     : Enabled
    Redundancy Status        : Up

    At this point, the backup node is still handling messaging traffic. If required, you can manually force the backup node to release activity to the primary node.
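
    If you choose to do this, one possible sequence, mirroring the release-activity commands used earlier in this procedure, is to temporarily release activity on the backup node and then re-enable it:

    solace-backup> enable
    solace-backup> configure
    solace-backup> redundancy release-activity
    solace-backup> no redundancy release-activity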

  10. On the primary node, if config-sync was in use, verify that it is up again:
    solace-primary> show config-sync
    Admin Status            : Enabled
    Oper Status             : Up
  11. On the monitoring node, perform the steps listed above for a standalone image.