Replacing a LUN and Migrating the Disk Spool Files for a Standalone Appliance

The procedures discussed in this section are applicable only to PubSub+ appliances.

This section describes how to replace a logical unit number (LUN) for a standalone PubSub+ appliance, and then migrate the disk spool files from the old LUN to the new LUN without losing spooled Guaranteed messages. Replacing a LUN is often done to provide a larger LUN so that the message spool size can be increased.

If you require further assistance, or have any questions regarding this procedure, contact Solace.

Procedure

This procedure is for a standalone appliance. If you want to migrate the LUN for a redundant appliance pair, see Replacing a LUN and Migrating the Disk Spool Files for a Redundant Appliance Pair.

  • This procedure disables service on the appliance.
  • To prevent any loss of configuration, do not make any additional unrelated configuration changes to the standalone appliance while performing this procedure.
  • We recommend that the new LUN be the same size or larger than the old LUN. If the new LUN is smaller, ensure that there is sufficient space available for all files to be copied from the old LUN—the files must be copied successfully for this procedure to succeed.
  • To replace a LUN, first ensure that the current system still works with the old LUN, then confirm that the new LUN is provisioned. Both LUNs must be present in the system before migrating the disk spool files to the new LUN. For details, refer to Configuring an External Disk Array for Guaranteed Messaging.
  • The GDisk utility is used to create and modify partitions on a LUN. New partitions use an ext4 filesystem.
  • Operations below that are described as requiring root can also be performed by a sysadmin user:

    • To elevate to root, enter:
      [support@solace-primary ~]$ su -
      Password:

    • To elevate to a sysadmin username myAdmin, enter:
      [support@solace-primary ~]$ su - myAdmin
      Password:

To replace a LUN for a standalone appliance and migrate the disk spool files, perform the following steps:

  1. Ensure that the appliance is in the correct state.
    Run the show message-spool detail command and verify the following:
    • Config Status is Enabled (Primary).
    • Operational Status is AD-Active.

    For example:

    solace> show message-spool detail
    Config Status:                                    Enabled (Primary)
     
    . . .
    
    Operational Status:                               AD-Active
    
  2. Identify the WWN (World Wide Name) of the old LUN.

    Enter the following command.

    solace> show message-spool
    Config Status:                                    Enabled (Primary)
    Maximum Spool Usage:                              10000 MB
    Spool While Charging:                             No
    Spool Without Flash Card:                         No
    Using Internal Disk:                              No
    Disk Array WWN:                                   60:06:01:60:4d:30:1c:00:8e:29:1b:b6:a6:d6:e8:11
    
    . . .

    The examples in this procedure use 60:06:01:60:4d:30:1c:00:8e:29:1b:b6:a6:d6:e8:11 as the WWN of the old LUN.

  3. To detect the new LUN on the appliance, perform the following steps:
    1. Enter the following command to elevate to the support user, and then enter the support user's password when prompted.
      solace# shell standaloneLunMigration
      login: support
      Password:
    2. Enter the following command to elevate to the root user or a sysadmin user, and then enter the password for that user when prompted:
      [support@solace ~]$ su -
      Password:
    3. Check that the current LUN is visible.
      [root@solace ~]# multipath -ll
      3600601604d301c008e291bb6a6d6e811 dm-0 DGC     ,RAID 0 size=300G features='2 queue_if_no_path
      
      . . .
      
      [root@solace ~]#
    4. Rescan the SCSI bus to add the new LUN.
      [root@solace ~]# rescan-scsi-bus.sh --nosync -f -r -m
      [root@solace ~]# rescan-scsi-bus.sh -a
    5. If the previous step failed, enter these commands:
      [root@solace ~]# rescan-scsi-bus.sh -i -a
      [root@solace ~]# rescan-scsi-bus.sh --nosync -f -r -m
      [root@solace ~]# rescan-scsi-bus.sh -a
    6. Check that the new LUN has been added.
      [root@solace ~]# multipath -ll
      3600601604d301c008e291bb6a6d6e811 dm-0 DGC     ,RAID 0 size=300G features='2 queue_if_no_path
      
      . . .
      
      360014057d24f4b77681435faf684d587 dm-3 LIO-ORG ,sdc1-4 size=800G features='1 queue_if_no_path' hwhandler='0' wp=rw
      
      . . .
      
      [root@solace ~]#

      The examples in this procedure use 60:01:40:57:d2:4f:4b:77:68:14:35:fa:f6:84:d5:87 as the WWN of the new LUN.

      If the new LUN doesn't appear, confirm that the SAN is properly configured and the HBA port is registered for the new LUN, then re-run both the rescan-scsi-bus.sh --nosync -f -r -m and rescan-scsi-bus.sh -a scripts. If the new LUN still doesn't appear, you must reboot the appliance.

    7. Return to the CLI:
      [root@solace ~]# exit
      [support@solace ~]$ exit
  4. Partition and create the filesystem on the new LUN. For more information, see Configuring an External Disk Array for Guaranteed Messaging.
  5. Run the following command to confirm that the new external disk LUN is available and referred to by the correct WWN (obtain the WWN of the new LUN from your storage administrator):
    solace> show hardware details

    Don't proceed further until the new LUN is visible.

  6. If the Config-Sync feature is in use, enter the following command to confirm that Config-Sync is operationally up:
    solace> show config-sync
    Admin Status:                                    Enabled
    Oper Status:                                     Up

    To prevent loss of configuration, do not proceed further if Config-Sync is operationally down.

  7. To stop providing service to applications, enter the following commands:
    solace> enable
    solace# configure
    solace(configure)# service msg-backbone shutdown
    All clients will be disconnected.
    Do you want to continue (y/n)? y
  8. Run the show message-spool detail command to confirm that:
    • the appliance is still active, up, and synchronized (indicated by the Operational, Datapath, and Synchronization statuses)
    • the appliance has successfully cleaned up all flows (indicated by zeroes in the Currently Used column)

  9. Ensure that message-spool defragmentation is not active.

    Enter the following command.

    solace> show message-spool
    
    . . .
                
    Defragmentation Status:                   Idle
    
    . . .

    If the message spool defragmentation status is not Idle, wait for the defragmentation process to complete before proceeding.

  10. To stop Guaranteed Messaging, enter the following commands:
    solace(configure)# hardware message-spool shutdown
    All message spooling will be stopped.
    Do you want to continue (y/n)? y
    solace(configure)# end
  11. Perform the following steps to migrate the LUN data:

    If you do not perform this step, you must reset the message spool on the appliance after you edit the maximum spool usage later in this procedure. Resetting the message spool causes all Guaranteed Messaging data to be lost.

    1. Enter the following command to elevate to the support user, and then enter the support user's password when prompted:
      solace# shell standaloneLunMigration
      login: support
      Password:
    2. Enter the following command to elevate to the root user or a sysadmin user, and then enter the password for that user when prompted:
      [support@solace ~]$ su -
      Password:
    3. Migrate the AD keys within the partition (p) of the old LUN to the new LUN, using the adkey-tool script.

      This partition is located in /dev/mapper/ and is named <wwn><p#>.

      [root@solace ~]# adkey-tool migrate --src-device /dev/mapper/<old LUN wwn>p --dest-device /dev/mapper/<new LUN wwn>p

      For example:

      [root@solace ~]# adkey-tool migrate --src-device /dev/mapper/3600601604d301c008e291bb6a6d6e811p --dest-device /dev/mapper/360014057d24f4b77681435faf684d587p
      

      The LUN's WWN might be prefixed by a 3 in /dev/mapper.

      For example, from earlier in this procedure, the WWN of the new LUN is 60:01:40:57:d2:4f:4b:77:68:14:35:fa:f6:84:d5:87. This WWN appears as /dev/mapper/360014057d24f4b77681435faf684d587p in the example above.
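      The mapping from the colon-delimited WWN shown in the CLI to the /dev/mapper name can be sketched in shell. This is an illustrative helper, not a Solace tool; the leading 3 is the prefix multipath typically adds for SCSI NAA-type identifiers, as noted above, so confirm the actual name with multipath -ll on your system:

      ```shell
      # Sketch: derive the expected /dev/mapper name from a colon-delimited WWN.
      # The leading "3" reflects multipath's usual prefix for SCSI NAA identifiers;
      # verify the real device name with "multipath -ll".
      wwn="60:01:40:57:d2:4f:4b:77:68:14:35:fa:f6:84:d5:87"
      dm_name="3$(printf '%s' "$wwn" | tr -d ':')"
      echo "/dev/mapper/${dm_name}"
      # → /dev/mapper/360014057d24f4b77681435faf684d587
      ```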

    4. Create the following temporary directories:
      [root@solace ~]# mkdir -p /tmp/old_lun_p
      [root@solace ~]# mkdir -p /tmp/new_lun_p
    5. Mount the partitions (p) of both the old and new LUNs.
      [root@solace ~]# mount /dev/mapper/<old LUN wwn>p /tmp/old_lun_p
      [root@solace ~]# mount /dev/mapper/<new LUN wwn>p /tmp/new_lun_p

      For example:

      [root@solace ~]# mount /dev/mapper/3600601604d301c008e291bb6a6d6e811p /tmp/old_lun_p
      [root@solace ~]# mount /dev/mapper/360014057d24f4b77681435faf684d587p /tmp/new_lun_p
    6. Copy all directories and files within p of the old LUN to the new LUN:
      [root@solace ~]# cp -a /tmp/old_lun_p/* /tmp/new_lun_p/
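      Before unmounting, it's worth confirming that the copy completed successfully, since the migration depends on it. A minimal sketch using diff -r follows; verify_copy is a hypothetical helper, not a Solace tool, and on the appliance you would pass the real mount points /tmp/old_lun_p and /tmp/new_lun_p:

      ```shell
      # Sketch: recursively compare source and destination after a cp -a.
      # verify_copy is an illustrative helper; diff -r reports any file that
      # differs or is missing on either side.
      verify_copy() {
          if diff -r "$1" "$2" >/dev/null 2>&1; then
              echo "copy verified"
          else
              echo "copy mismatch between $1 and $2" >&2
              return 1
          fi
      }

      # Self-contained demonstration with throwaway directories:
      src=$(mktemp -d); dst=$(mktemp -d)
      echo "spool-data" > "$src/file1"
      cp -a "$src/." "$dst/"
      verify_copy "$src" "$dst"    # prints: copy verified
      rm -rf "$src" "$dst"
      ```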
      
    7. Unmount p of both the old and new LUNs:
      [root@solace ~]# umount /tmp/old_lun_p
      [root@solace ~]# umount /tmp/new_lun_p
    8. Return to the CLI:
      [root@solace ~]# exit
      [support@solace ~]$ exit
  12. Enter the following commands to configure the message spool to use the new external disk LUN:
    solace# configure
    solace(configure)# hardware message-spool disk-array wwn <new LUN wwn>

    Where <new LUN wwn> is the WWN of the new LUN, as shown earlier in this procedure.

  13. To start Guaranteed Messaging and message spooling, enter the following command:
    solace(configure)# no hardware message-spool shutdown primary
  14. To start providing service to applications, enter the following command:
    solace(configure)# no service msg-backbone shutdown
  15. If the Config-Sync feature is in use, enter the following command to confirm that Config-Sync is operationally up:
    solace> show config-sync
    Admin Status:                                    Enabled
    Oper Status:                                     Up

    If Config-Sync does not come up, either there were configuration changes performed beyond what is described in this procedure, or one or more steps did not complete as expected. To prevent the configuration from diverging further, immediately investigate and resolve this issue.

  16. Optional: To edit the maximum spool usage for the new LUN, enter the following command:
    solace(configure)# hardware message-spool max-spool-usage <size>

    Where <size> is the maximum spool usage in megabytes.

    • See Configuring Max Spool Usage to set the maximum spool usage.
    • If you used the defaults while creating the partitions and filesystems earlier in this procedure, the LUN will be split into two partitions. The second partition is still required but can be as small as 100 MB. If you plan to add a second appliance to create a redundant pair later, consider setting the partition sizes for the standalone appliance as if it's already in a redundant pair (1.1 times the max-spool usage).
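    The 1.1-times sizing guideline above can be worked through with shell arithmetic. This is a sketch; the 10000 MB figure is the example max-spool usage shown earlier in this procedure, not a recommendation:

    ```shell
    # Sketch: size the spool partition at 1.1x the planned max-spool-usage,
    # per the redundant-pair guideline above (integer arithmetic, in MB).
    max_spool_mb=10000
    partition_mb=$(( max_spool_mb * 11 / 10 ))
    echo "Provision at least ${partition_mb} MB for the spool partition"
    # → Provision at least 11000 MB for the spool partition
    ```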
  17. At this point, the LUN migration has succeeded. After the storage administrator has deprovisioned the original LUN, the show hardware details command shows that the old LUN has a state of Down.

    Also, the kernel log of the appliance might contain entries reflecting that the old LUN is no longer visible.

    Example Log Entries:

    2016-03-01T02:43:04+0000 <daemon.notice> solace multipathd: 3600601604d301c008e291bb6a6d6e811: sdd - emc_clariion_checker: Logical Unit is unbound or LUNZ
    2016-03-01T02:43:04+0000 <daemon.notice> solace multipathd: 3600601604d301c008e291bb6a6d6e811: sdf - emc_clariion_checker: Logical Unit is unbound or LUNZ
    

    There is no operational impact from these log entries and the Down state of the attached device.

  18. Remove the original LUN from the appliance.
    1. Enter the following command to elevate to the support user, and then enter the support user’s password when prompted.
      solace# shell standaloneLunMigration
      login: support
      Password:
    2. Enter the following command to elevate to the root user or a sysadmin user, and then enter the password for that user when prompted:
      [support@solace ~]$ su -
      Password:
    3. Check that both the original and new LUNs are visible.
      [root@solace ~]# multipath -ll
      3600601604d301c008e291bb6a6d6e811 dm-0 DGC     ,RAID 0 size=300G features='1 retain_attached_hw_handler' hwhandler='1
      
      . . .
      
      360014057d24f4b77681435faf684d587 dm-3 LIO-ORG ,sdc1-4 size=800G features='1 queue_if_no_path' hwhandler='0' wp=rw
      
      . . .
    4. Rescan the SCSI bus to remove the original LUN.
      [root@solace ~]# rescan-scsi-bus.sh -r
    5. Check that the original LUN is removed.
      [root@solace ~]# multipath -ll
      360014057d24f4b77681435faf684d587 dm-3 LIO-ORG ,sdc1-4 size=800G features='1 queue_if_no_path' hwhandler='0' wp=rw
    6. Return to the CLI:
      [root@solace ~]# exit
      [support@solace ~]$ exit