Machine Upgrade

Here you'll find procedures to upgrade your Solace PubSub+ event broker machine images to version 9.9.0 or later (the current version is 10.7.1).

Upgrade a Standalone Machine Image from Version 9.8.1+

Distribution of PubSub+ packaged as a virtual machine broker image will cease as of release 10.8.1 (June 2024). For more details, see the Deprecated Features list.

Solace recommends that you transition to an alternate method of deploying PubSub+ prior to June 2024. Containers offer a flexible way to deploy PubSub+ in a number of environments, including virtual machines and container platforms.

An event broker upgrade to a new version preserves the configuration and the spooled messages. The upgrade is performed in place, within the existing virtual machine, rather than by creating a new virtual machine for the upgraded event broker. The same procedure applies with or without external block devices.

The following procedure describes an Enterprise to Enterprise upgrade.

On non-Enterprise event brokers, the following upgrades are also supported:

  • from Evaluation to Enterprise
  • from Standard to Standard
  • from Standard to Enterprise

For upgrades between other editions, some prompts include a different edition name, and you must take care to change the file name and the solacectl upgrade command (search for the word ‘enterprise’ and change as needed).
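As a sketch of that substitution, the edition name in the bundle file name can be swapped mechanically. The version number below is a placeholder, and the Standard-edition naming pattern is an assumption for illustration only:

```shell
# Hypothetical sketch: derive a Standard-edition bundle name from the
# Enterprise one (the version number is a placeholder).
enterprise_bundle="solace-pubsub-enterprise-10.7.1-upgrade.tar.gz"
standard_bundle=$(printf '%s' "$enterprise_bundle" | sed 's/enterprise/standard/')
echo "$standard_bundle"
```

The same substitution applies wherever the file name appears as an argument, such as in the solacectl upgrade command.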

Procedure

To upgrade a standalone machine image from version 9.8.1+ to version 9.9.0 or later:

  1. Log into the event broker as sysadmin.
  2. Copy the tar file to the event broker:
    [sysadmin@solace ~]$ scp [<username>@]<ip-addr>:solace-pubsub-enterprise-<version>-upgrade.tar.gz /tmp

    Where:

    <username>, <ip-addr>, and solace-pubsub-enterprise-<version>-upgrade.tar.gz correspond to the access information for the host where the new SolOS software is located.
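To make the placeholders concrete, the copy command can be assembled from its parts. The user name, host address, and version below are illustrative assumptions only, not real values:

```shell
# Placeholder values for illustration; substitute your own.
user="fileadmin"
host="192.0.2.10"   # RFC 5737 documentation address, not a real server
version="10.7.1"
bundle="solace-pubsub-enterprise-${version}-upgrade.tar.gz"
cmd="scp ${user}@${host}:${bundle} /tmp"
echo "$cmd"
```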

  3. Switch to root user:
    [sysadmin@solace ~]$ sudo su -

    If you are using MQTT retained messages in your deployment, the next step clears the contents of each retain cache. If this content is stored elsewhere in the network, for example on another DMR or MNR node, it will be retrieved when this node comes back online.

  4. Stop the solace service:
    [root@solace ~]# solacectl service stop
  5. Set the timezone:
    [root@solace ~]# timedatectl set-timezone <TIMEZONE>

    To see the list of available timezones, use the timedatectl list-timezones command.
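As an optional sanity check, a timezone name can be validated against the system's zoneinfo database before passing it to timedatectl. This sketch assumes a standard Linux layout under /usr/share/zoneinfo:

```shell
# Check that a timezone name exists in the zoneinfo database
# (assumes /usr/share/zoneinfo is present, as on most Linux systems).
tz="America/New_York"
if [ -e "/usr/share/zoneinfo/${tz}" ]; then
  result="valid"
else
  result="invalid"
fi
echo "$tz is $result"
```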

  6. Run the upgrade script:
    [root@solace ~]# solacectl upgrade /tmp/solace-pubsub-enterprise-<version>-upgrade.tar.gz
  7. The output of the upgrade command indicates whether further steps are required. Follow any additional steps described in that output.

  8. If you have Solace Geneos Agent installed, set the umask to 0022. The default umask is 0077, which prevents Solace Geneos Agent from reading the system log files. To set the umask to 0022, refer to Preparing Software Event Broker for Solace Geneos Agent Installation and follow steps 3 to 5.

  9. Restart the event broker:
    [root@solace ~]# reboot

    When the event broker restarts, it will be running the configuration and message-spool from the previous version.

  10. Log into solace and confirm that it is running the new version:
    solace> show version

You have completed this procedure.
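The steps above can be sketched as a single script. This is a dry run that only prints each command rather than executing it; the bundle path and timezone are assumptions, and the real commands must be run as root on the broker itself:

```shell
# Dry-run sketch of the standalone upgrade sequence: each command is
# printed, not executed. Drop the run() wrapper to execute for real.
run() { echo "+ $*"; }

bundle="/tmp/solace-pubsub-enterprise-10.7.1-upgrade.tar.gz"  # placeholder version

run solacectl service stop
run timedatectl set-timezone UTC   # substitute your own timezone
run solacectl upgrade "$bundle"
run reboot
```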

Upgrade a Redundant Machine Image Group from Version 9.8.1+

Distribution of PubSub+ packaged as a virtual machine broker image will cease as of release 10.8.1 (June 2024). For more details, see the Deprecated Features list.

Solace recommends that you transition to an alternate method of deploying PubSub+ prior to June 2024. Containers offer a flexible way to deploy PubSub+ in a number of environments, including virtual machines and container platforms.

A redundant event broker group can be upgraded in-service. The upgrade preserves the configuration, including redundancy, and the spooled messages, while the group continues to provide service. The upgrade is performed in place, within the existing virtual machine, rather than by creating a new virtual machine for the upgraded event broker. The same procedure applies with or without external block devices.

User accounts (usernames and passwords) for the support/root container users are not transferred during the upgrade procedure and must be manually applied to the new event broker instance.

The following procedure describes an Enterprise to Enterprise upgrade.

On non-Enterprise event brokers, the following upgrades are also supported:

  • from Evaluation to Enterprise
  • from Standard to Standard
  • from Standard to Enterprise

For upgrades between other editions, some prompts include a different edition name, and you must take care to change the file name and the docker create command (search for the word ‘enterprise’ and change as needed).

For the following procedure, we refer to ‘solace-primary’ as the Primary Node, ‘solace-backup’ as the Backup Node, and ‘solace-monitor’ as the Monitoring Node.

It is important to reboot the three software event brokers one at a time. If the Monitoring and Backup Nodes are offline at the same time, the Primary Node will automatically reboot.

 

To upgrade a redundant machine image group from version 9.8.1+ to version 9.9.0 or later, perform the following steps:

Step 1

Check the redundancy configuration on each node.

  1. Log into each node as an admin user.

  2. Ensure Redundancy Configuration Status is Enabled and Redundancy Status is Up on each node. On solace-primary the Message Spool Status should be AD-Active, and on solace-backup the Message Spool Status should be AD-Standby:

    solace-primary> show redundancy 
    Configuration Status     : Enabled
    Redundancy Status        : Up
    Operating Mode           : Message Routing Node
    Switchover Mechanism     : Hostlist
    Auto Revert              : No
    Redundancy Mode          : Active/Standby
    Active-Standby Role      : Primary
    Mate Router Name         : solace-backup
    ADB Link To Mate         : Up
    ADB Hello To Mate        : Up
    
                                   Primary Virtual    Backup Virtual
                                   Router             Router
                                   ---------------    ---------------
    Activity Status                Local Active       Shutdown
    Routing Interface              intf0:1            intf0:1
    Routing Interface Status       Up                 
    VRRP Status                    Initialize         
    VRRP Priority                  250                
    Message Spool Status           AD-Active          
    Priority Reported By Mate      Standby   
    
    solace-backup> show redundancy 
    Configuration Status     : Enabled
    Redundancy Status        : Up
    Operating Mode           : Message Routing Node
    Switchover Mechanism     : Hostlist
    Auto Revert              : No
    Redundancy Mode          : Active/Standby
    Active-Standby Role      : Backup
    Mate Router Name         : solace-primary
    ADB Link To Mate         : Up
    ADB Hello To Mate        : Up
    
                                   Primary Virtual    Backup Virtual
                                   Router             Router
                                   ---------------    ---------------
    Activity Status                Shutdown           Mate Active
    Routing Interface              intf0:1            intf0:1
    Routing Interface Status                          Up
    VRRP Status                                       Initialize
    VRRP Priority                                     100
    Message Spool Status                              AD-Standby
    Priority Reported By Mate                         Active
    
    solace-monitor> show redundancy
    Configuration Status     : Enabled
    Redundancy Status        : Up
    Operating Mode           : Monitoring Node
    Switchover Mechanism     : Hostlist
    Auto Revert              : No
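
When checking several nodes, the relevant fields can be extracted from captured CLI output instead of being read by eye. A minimal sketch, assuming the 'Field : Value' layout shown above:

```shell
# Extract "Redundancy Status" from captured 'show redundancy' output.
# The sample output below is abbreviated from the transcript above.
output='Configuration Status     : Enabled
Redundancy Status        : Up'

status=$(printf '%s\n' "$output" | awk -F': *' '/^Redundancy Status/ {print $2}')
echo "Redundancy Status: $status"
```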
    

Step 2

Perform the following steps on the monitoring node.

  1. Log out and log back into the monitoring node as sysadmin.

  2. Copy the tar file to solace-monitor:

    [sysadmin@solace-monitor ~]$ scp [<username>@]<ip-addr>:solace-pubsub-enterprise-<version>-upgrade.tar.gz /tmp

    Where:

    <username>, <ip-addr>, and solace-pubsub-enterprise-<version>-upgrade.tar.gz correspond to the access information for the host where the new SolOS software is located.

  3. Switch to root user:

    [sysadmin@solace-monitor ~]$ sudo su -
  4. Stop the solace service:

    [root@solace-monitor ~]# solacectl service stop
  5. Set the timezone:
    [root@solace-monitor ~]# timedatectl set-timezone <TIMEZONE>

    To see the list of available timezones, use the timedatectl list-timezones command.

  6. Run the upgrade script:

    [root@solace-monitor ~]# solacectl upgrade /tmp/solace-pubsub-enterprise-<version>-upgrade.tar.gz
  7. The output of the upgrade command indicates whether further steps are required. Follow any additional steps described in that output.

  8. If you have Solace Geneos Agent installed, set the umask to 0022. The default umask is 0077, which prevents Solace Geneos Agent from reading the system log files. To set the umask to 0022, refer to Preparing Software Event Broker for Solace Geneos Agent Installation and follow steps 3 to 5.

  9. Restart solace-monitor:

    [root@solace-monitor ~]# reboot

    When solace-monitor restarts, it will be running the configuration from the previous version.

  10. Log into solace-monitor and confirm that it is running the new version:

    solace-monitor> show version
  11. Ensure Redundancy Status is Up on the monitoring node:

    solace-monitor> show redundancy 
    Configuration Status     : Enabled
    Redundancy Status        : Up
    

Step 3

Perform the following steps on the backup node.

  1. Log into the backup node as an admin user.

  2. Ensure Redundancy Configuration Status is Enabled and Redundancy Status is Up on the backup node:

    solace-backup> show redundancy 
    Configuration Status     : Enabled
    Redundancy Status        : Up
    
  3. If using Config-sync, ensure that it is in sync.

    solace-backup> show config-sync
    Admin Status            : Enabled
    Oper Status             : Up
    
  4. Log out and log back into the backup node as sysadmin.

  5. Copy the tar file to solace-backup:

    [sysadmin@solace-backup ~]$ scp [<username>@]<ip-addr>:solace-pubsub-enterprise-<version>-upgrade.tar.gz /tmp

    Where:

    <username>, <ip-addr>, and solace-pubsub-enterprise-<version>-upgrade.tar.gz correspond to the access information for the host where the new SolOS software is located.

  6. Switch to root user:

    [sysadmin@solace-backup ~]$ sudo su -
  7. If you are using MQTT retained messages in your deployment, verify that your retain cache instances are synchronized. For more information, refer to Verifying Retain Cache Redundancy.

  8. Stop the solace service:

    [root@solace-backup ~]# solacectl service stop
  9. Set the timezone:
    [root@solace-backup ~]# timedatectl set-timezone <TIMEZONE>

    To see the list of available timezones, use the timedatectl list-timezones command.

  10. Run the upgrade script:

    [root@solace-backup ~]# solacectl upgrade /tmp/solace-pubsub-enterprise-<version>-upgrade.tar.gz
  11. The output of the upgrade command indicates whether further steps are required. Follow any additional steps described in that output.

  12. If you have Solace Geneos Agent installed, set the umask to 0022. The default umask is 0077, which prevents Solace Geneos Agent from reading the system log files. To set the umask to 0022, refer to Preparing Software Event Broker for Solace Geneos Agent Installation and follow steps 3 to 5.

  13. Restart solace-backup.

    [root@solace-backup ~]# reboot

    When solace-backup restarts, it will be running the configuration and message-spool from the previous version.

  14. Log into solace-backup and confirm that it is running the new version:

    solace-backup> show version
  15. Ensure Redundancy Status is Up on the backup node:

    solace-backup> show redundancy 
    Configuration Status     : Enabled
    Redundancy Status        : Up
  16. If using Config-sync, ensure the backup node is in sync:

    solace-backup> show config-sync
    Admin Status            : Enabled
    Oper Status             : Up
    
  17. If the backup node provides AD service, ensure the Message Spool Status is AD-Standby:

    solace-backup> show redundancy 
    Message Spool Status             AD-Standby
    
  18. Log into the primary node as an admin user.

  19. Release activity from the primary node to the backup.

    solace-primary> enable
    solace-primary# configure
    solace-primary(configure)# redundancy release-activity
    
  20. On the primary, ensure Redundancy Configuration Status is Enabled-Released and Redundancy Status is Down.

    solace-primary> show redundancy 
    Configuration Status     : Enabled-Released
    Redundancy Status        : Down
    
  21. On the backup, ensure Redundancy Configuration Status is Enabled, Redundancy Status is Down and the Activity Status on the Backup Virtual Router is Local Active.

    solace-backup> show redundancy 
    Configuration Status     : Enabled
    Redundancy Status        : Down
                                      Primary Virtual    Backup Virtual
                                      Router             Router
                                      ---------------    ---------------
    Activity Status                   Shutdown           Local Active
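
The activity check can likewise be scripted against captured output. A sketch, assuming the column layout shown above:

```shell
# Confirm from a captured 'show redundancy' line on the backup that the
# Backup Virtual Router is Local Active before upgrading the primary.
line='Activity Status                   Shutdown           Local Active'

case "$line" in
  *'Local Active') ready="yes" ;;
  *)               ready="no"  ;;
esac
echo "backup locally active: $ready"
```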
    

Step 4

Perform the following steps on the primary node.

  1. Log out and log back into the primary node as sysadmin.

  2. Copy the tar file to solace-primary:

    [sysadmin@solace-primary ~]$ scp [<username>@]<ip-addr>:solace-pubsub-enterprise-<version>-upgrade.tar.gz /tmp

    Where:

    <username>, <ip-addr>, and solace-pubsub-enterprise-<version>-upgrade.tar.gz correspond to the access information for the host where the new SolOS software is located.

  3. Switch to root user:

    [sysadmin@solace-primary ~]$ sudo su -
  4. If you are using MQTT retained messages in your deployment, verify that your retain cache instances are synchronized. For more information, refer to Verifying Retain Cache Redundancy.

  5. Stop the solace service:

    [root@solace-primary ~]# solacectl service stop
  6. Set the timezone:
    [root@solace-primary ~]# timedatectl set-timezone <TIMEZONE>

    To see the list of available timezones, use the timedatectl list-timezones command.

  7. Run the upgrade script:

    [root@solace-primary ~]# solacectl upgrade /tmp/solace-pubsub-enterprise-<version>-upgrade.tar.gz
  8. The output of the upgrade command indicates whether further steps are required. Follow any additional steps described in that output.

  9. If you have Solace Geneos Agent installed, set the umask to 0022. The default umask is 0077, which prevents Solace Geneos Agent from reading the system log files. To set the umask to 0022, refer to Preparing Software Event Broker for Solace Geneos Agent Installation and follow steps 3 to 5.

  10. Restart solace-primary.

    [root@solace-primary ~]# reboot

    When solace-primary restarts, it will be running the configuration and message-spool from the previous version.

  11. Log into solace-primary and confirm that it is running the new version:

    solace-primary> show version
  12. Revert the release of activity on the primary node:

    solace-primary> enable
    solace-primary# configure
    solace-primary(configure)# no redundancy release-activity
  13. Ensure Redundancy Status is Up on solace-primary:

    solace-primary> show redundancy 
    Configuration Status     : Enabled
    Redundancy Status        : Up
    
  14. If using Config-sync, ensure that it is in sync:

    solace-primary> show config-sync
    Admin Status            : Enabled
    Oper Status             : Up
    
  15. If the node provides AD service, ensure the Message Spool Status is AD-Standby:

    solace-primary> show redundancy 
    Message Spool Status             AD-Standby
    

You have completed this procedure.

The backup is now active and the primary is standby.