Configuring an External Disk Array for Guaranteed Messaging

This section describes how to configure a customer-provided external disk storage array for use with either a standalone Solace PubSub+ appliance using Guaranteed Messaging or a high-availability (HA) redundant Solace PubSub+ appliance pair using Guaranteed Messaging. For information on how to configure external block devices for use with Solace PubSub+ software event brokers, refer to Managing Software Event Broker Storage.

A customer-supplied external disk storage array is required to use Guaranteed Messaging with standalone or redundant pairs of Solace PubSub+ appliances. Each appliance must also have a physical Assured Delivery Blade (ADB) and a physical Host Bus Adapter (HBA) installed.

  • The configuration and provisioning of the external disk storage array itself is beyond the scope of this document because it depends on the disk array manufacturer chosen. Proper configuration of the external disk storage array is therefore the customer's responsibility and is not described in this section. If required, contact Solace for assistance.
  • If you are using Guaranteed Messaging with an HA pair of redundant Solace PubSub+ appliances, the configuration procedures in HA Configuration for Appliances must be successfully completed before performing the procedures in this section.

Array Requirements

The appliance HBA must be connected to the external disk storage array. Each HBA should have a Fibre Channel link path to two disk array controllers.

The two Fibre Channel ports on the HBA allow for redundant Fibre Channel cable links to the external disk storage array through standard multi‑mode optical fiber cable equipped with LC-type optical connectors.

When using a pair of PubSub+ appliances in Active/Standby redundancy, the ADB links must be directly connected together.

For an external disk storage array to be used with an ADB and its associated HBA, the external disk storage array must have:

  • RAID 1+0, with a minimum of four disks.
  • A LUN of up to 15 tebibytes (TiB).
  • Access allowed to the HBA ports. When using HA redundancy, provide access to the ports of both HBAs on the redundant appliance pair.
  • Fibre Channel connectivity.
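
Once the array is cabled to the HBA, you can spot-check the redundant path requirement from the appliance's Linux shell. The following is a minimal sketch, assuming root (or Sysadmin User) shell access and that the standard device-mapper multipath tools are present:

[root@solace ~]# multipath -ll

Each external LUN should appear once in the output, with two active paths: one per Fibre Channel link to the disk array controllers.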

Contact Solace for:

  • Acquiring root access to the Solace PubSub+ appliances
  • Information on use of other disk storage array models or products with the ADB and HBA

As of Solace PubSub+ appliance release 8.2.0, operations below that state they can be performed only by root can also be performed by a Sysadmin User. For information on configuring Sysadmin Users, refer to Configuring Multiple Linux Shell Users.

Step 1: Register the Appliance HBA with the External Array

The World Wide Names (WWNs) of the HBA used by a Solace PubSub+ appliance can be found on the Packing List provided upon delivery of the appliance. They can also be provided by Solace upon request.

You can use the show hardware detail User EXEC CLI command to display the HBA port and node names for an appliance. For example:

solace> show hardware detail

. . .    

Slot 1/3: Host Bus Adapter Blade    
  Product #: HBA-0204FC-02-A    
  Serial #: M54687    
  Model Name: QLE2462    
  Model Description: PCI-Express to 4Gb FC, Dual Channel    
  Driver Version: 8.01.07-k1    
            
  Fibre-Channel 1    
    State: Link Up - F_Port (fabric via point-to-point)    
    Speed: 2 Gbit    
    Port Id: 0x031f00    
    Port Name: 0x210000e08b931f25
    Port Type: NPort (fabric via point-to-point)
    Node Name: 0x200000e08b931f25
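
If you have shell access to the appliance, the same identifiers can typically also be read from sysfs. A minimal sketch, assuming a Linux root shell and the standard fc_host sysfs interface exposed by the HBA driver:

[root@solace ~]# cat /sys/class/fc_host/host*/port_name
0x210000e08b931f25
[root@solace ~]# cat /sys/class/fc_host/host*/node_name
0x200000e08b931f25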

Step 2: Provision a File System on the LUN

To configure a file system, it is recommended that you run the Solace-provided script, provision-lun-for-ad, to automatically configure and provision the LUN on a standalone appliance. If you are using a pair of HA redundant appliances, run the script on only one of the appliances; the Config-Sync facility will apply the configured file system to its mate appliance. (Refer to Provisioning a File System on the LUN with the Automated Script.)

Alternatively, if the script fails to successfully provision a file system, you can use the GDisk utility to manually configure and provision the partitions on the LUN. (Refer to Provisioning a File System on the LUN with GDisk.)

  • Both the procedure to configure a file system with the Solace-provided script and the procedure to configure it through the GDisk utility are only for use with a LUN that has no existing, provisioned partitions.
  • Solace PubSub+ appliances running software version 7.1 or greater support both ext3 and ext4 file systems. However, any new partitions that are created should use an ext4 file system.
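
Because both procedures require an unpartitioned LUN, it is worth confirming this before you begin. A minimal check using the same GDisk utility described below, where the device name is the LUN's WWN as it appears under /dev/mapper (illustrative value shown):

[root@solace ~]# gdisk -l /dev/mapper/3600140590e5103e56f3436d97032e61e

The partition listing at the end of the output should contain no entries; if it shows any partitions, do not run either provisioning procedure against that LUN.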

Provisioning a File System on the LUN with the Automated Script

  1. Enter the show hardware detail User EXEC CLI command on the appliance to confirm the new external disk LUN is available according to the new WWN.

    Example:

    solace> show hardware detail
    
    . . .
    
    Slot 1/2: Host Bus Adapter Blade
      Product #: HBA-0204FC-01-A
      Serial #: H64544
      Model Name: QLA2462
      Model Description: PCI-X 2.0 to 4Gb FC, Dual Channel
      Driver Version: 8.01.07-k1
      
    . . .
    
      Attached devices
         LUN 0
            State:          Ready
            Size:           80G
            WWN:            60:06:01:60:e8:60:1c:00:c6:3e:7b:a8:ad:53:e3:11
         LUN 1
            State:          Ready
            Size:           800G
            WWN:            60:06:01:60:e8:60:1c:00:2c:42:35:c1:ad:53:e3:11
  2. If an existing LUN configuration has changed, or a new LUN has been added, register it on the PubSub+ appliance by executing the following, in this order:
    • The rescan-scsi-bus.sh --nosync -f -r -m script
    • The rescan-scsi-bus.sh -a script
    • The rescan-scsi-bus.sh -i script, if the previous two scripts fail (the -i option causes the link to all external disks to go down, and will affect PubSub+ Cache if it is running on this node)

     

    If the new LUN doesn't appear:

    • Confirm that the SAN is properly configured.
    • Confirm that the HBA port is registered for the new LUN.
    • Re-issue the rescan-scsi-bus.sh --nosync -a, rescan-scsi-bus.sh --nosync -f -r -m, and rescan-scsi-bus.sh -a scripts.
    • If the new LUN still doesn't appear, a reload of the appliance is required.

    Restarting a Solace PubSub+ appliance will cause a disruption to service.
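
    The rescan-and-verify sequence can also be scripted. The following is a minimal shell sketch, assuming a root shell and the rescan-scsi-bus.sh script referenced above; the WWN value is illustrative:

    # Rescan the SCSI bus, then confirm that the new LUN is visible.
    WWN=3600140590e5103e56f3436d97032e61e   # WWN of the new LUN (example)
    rescan-scsi-bus.sh --nosync -f -r -m
    rescan-scsi-bus.sh -a
    if ls /dev/mapper | grep -q "$WWN"; then
        echo "LUN $WWN is visible"
    else
        echo "LUN $WWN not found; check SAN zoning and HBA registration" >&2
    fi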

  3. Run the Solace-provided script.

    If you do not have root access, you can use sudo from the Linux shell with the "support" user account to configure and provision the LUN with the Solace‑provided provision-lun-for-ad script.

    Example:

    [support@solace ~]# sudo provision-lun-for-ad --lun=3600140590e5103e56f3436d97032e61e
    Script  : /usr/sw/loads/soltr_7.1.0.1523/supported/provision-lun-for-ad
    User    : root@solace  Jan 20 15 15:21:06
    Logfile : /usr/sw/jail/diags/support.provision-lun-for-ad.2015-01-20,15.21.06
     
    List of visible LUNs:
    36001405f6bd73860d76412eab3a4a006
    3600140590e5103e56f3436d97032e61e
     
    mke4fs 1.41.12 (17-May-2010)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    10491008 inodes, 1953234688 blocks
    97661734 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=0
    59608 block groups
    32768 blocks per group, 32768 fragments per group
    176 inodes per group
    Superblock backups stored on blocks: 
              32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
              4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
              102400000, 214990848, 512000000, 550731776, 644972544, 1934917632
     
    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done
     
    This filesystem will be automatically checked every 28 mounts or 180 days, whichever comes first. Use tune4fs -c or -i to override.
    mke4fs 1.41.12 (17-May-2010)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    10491008 inodes, 1953234683 blocks
    97661734 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=0
    59608 block groups
    32768 blocks per group, 32768 fragments per group
    176 inodes per group
    Superblock backups stored on blocks: 
             32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
             4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
             102400000, 214990848, 512000000, 550731776, 644972544, 1934917632
     
    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done
     
    This filesystem will be automatically checked every 20 mounts or 180 days, whichever comes first. Use tune4fs -c or -i to override.
    LUN 3600140590e5103e56f3436d97032e61e has been provisioned for Assured Delivery.

    The --lun flag takes the WWN of the LUN you are provisioning.
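
    After the script reports success, you can confirm the result from the same shell; the provisioned LUN should now appear in /dev/mapper with two partition entries, suffixed p1 and p2:

    [support@solace ~]# sudo ls -l /dev/mapper/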

  4. If you are using a redundant pair of Solace PubSub+ appliances, restart the appliance on which step 3 was not performed.

Provisioning a File System on the LUN with GDisk

To configure and provision the LUN using the GDisk utility, root access to the Solace PubSub+ appliance is required.

To manually configure and provision a file system on a LUN, perform the following steps on a standalone appliance, or on one appliance in an HA redundant pair:

  1. Calculate 2048-sector partition boundaries. To calculate sector boundaries for the two partitions, first determine how many sectors are on the disk: run the gdisk command on the /dev/mapper entry for the LUN, and use the "p" command to print the details.

    In the example below, 3600140590e5103e56f3436d97032e61e is the WWN of the LUN.

    Example:

    [root@solace mapper]# gdisk /dev/mapper/3600140590e5103e56f3436d97032e61e
    GPT fdisk (gdisk) version 0.8.9
     
    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present
     
    Found valid GPT with protective MBR; using GPT.
     
    Command (? for help): p
    Disk 3600140590e5103e56f3436d97032e61e: 31251759104 sectors, 14.6 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): D96B0E77-9E54-41D2-88F3-7A8C89A7DDB6
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 31251759070
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 31251759037 sectors (14.6 TiB)
     
    . . .

    Once you have determined how many sectors are on the disk (the gdisk output above reports 31251759037 sectors of total free space), you can then determine the size of each partition as a multiple of 2048 sectors:

    total sectors minus 2048:        31251759037 - 2048 = 31251756989
    divide by 2:                     31251756989 / 2 = 15625878494.5
    round down to nearest integer:   15625878494
    divide by 2048:                  15625878494 / 2048 = 7629823.4833
    round down to nearest integer:   7629823
    multiply by 2048:                7629823 * 2048 = 15625877504

    When you know the size of each partition in sectors (15625877504), and that the first sector of the first partition is always 2048, you can then determine the start and end sectors for each partition:

    1st start sector: 2048
    1st end sector: partition size + 2047 = 15625879551
    2nd start sector: 15625879551 + 1 = 15625879552
    2nd end sector: 15625879551 + 15625877504 = 31251757055

    Using the above numbers, you can then create partitions using the GDisk utility.
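
    The same arithmetic can be scripted. A minimal shell sketch, using the free-sector count reported by gdisk in the example above:

    # Compute 2048-aligned start/end sectors for two equal partitions.
    TOTAL=31251759037                   # free sectors reported by gdisk
    SIZE=$(( (TOTAL - 2048) / 2 / 2048 * 2048 ))  # integer division rounds down
    P1_START=2048
    P1_END=$(( P1_START + SIZE - 1 ))   # 15625879551
    P2_START=$(( P1_END + 1 ))          # 15625879552
    P2_END=$(( P2_START + SIZE - 1 ))   # 31251757055
    echo "Partition 1: sectors $P1_START-$P1_END"
    echo "Partition 2: sectors $P2_START-$P2_END"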

  2. On the active and ready disk array, run the GDisk utility to create two partitions aligned to a 2048-sector partition boundary on the device.
    • For a standalone appliance, the second partition that is created can be smaller (as small as 100 MB).
    • For a redundant pair of appliances, two equally sized partitions are required. The partitions can be up to 7.5 TiB each (for a total of 15 TiB). However, the partition sizes required depend on the maximum disk space configured for the message spool using the max-spool-usage Message Spool VPN CONFIG command. When using a redundant pair of appliances, each partition should be 1.1 times the max-spool-usage value. For example, a max-spool-usage of 60000 MB calls for partitions of at least 66000 MB each.
    1. Enter the following at the prompt:

      gdisk /dev/mapper/<device>

      Example:

      [root@solace ~]# gdisk /dev/mapper/3600140590e5103e56f3436d97032e61e
    2. To create a new partition, enter the "n" command, and then "1" for the first partition (and "2" for the second, as shown in the example below).
    3. To align the partition tables on 2048-sector boundaries, do the following:
      • For the first partition, enter "2048" at the "First sector..." prompt; then, at the "Last sector..." prompt, enter the value you calculated in step 1 for the end of the first partition.
      • For the second partition, enter the value you calculated in step 1 for the start of the second partition; then, at the "Last sector..." prompt, enter the value you calculated in step 1 for the end of the second partition.
    5. Enter the "w" command to save the resulting partition table.

    In the example below, two equal-sized partitions are created for a redundant pair of appliances:

    [root@solace mapper]# gdisk /dev/mapper/3600140590e5103e56f3436d97032e61e
    GPT fdisk (gdisk) version 0.8.9
     
    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present
     
    Found valid GPT with protective MBR; using GPT.
     
    Command (? for help): p
    Disk 3600140590e5103e56f3436d97032e61e: 31251759104 sectors, 14.6 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): D96B0E77-9E54-41D2-88F3-7A8C89A7DDB6
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 31251759070
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 31251759037 sectors (14.6 TiB)
     
    Number  Start (sector)  End (sector)  Size  Code  Name
     
    Command (? for help): n
    Partition number (1-128, default 1): 1
    First sector (34-31251759070, default = 2048) or {+-}size{KMGTP}: 2048
    Last sector (2048-31251759070, default = 31251759070) or {+-}size{KMGTP}: 15625879551
    Current type is 'Linux filesystem'
    Hex code or GUID (L to show codes, Enter = 8300): 8300
    Changed type of partition to 'Linux filesystem'
     
    Command (? for help): n
    Partition number (2-128, default 2): 2
    First sector (34-31251759070, default = 15625879552) or {+-}size{KMGTP}: 15625879552
    Last sector (15625879552-31251759070, default = 31251759070) or {+-}size{KMGTP}: 31251757055
    Current type is 'Linux filesystem'
    Hex code or GUID (L to show codes, Enter = 8300): 8300
    Changed type of partition to 'Linux filesystem'
     
    Command (? for help): p
    Disk 3600140590e5103e56f3436d97032e61e: 31251759104 sectors, 14.6 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): D96B0E77-9E54-41D2-88F3-7A8C89A7DDB6
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 31251759070
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 4028 sectors (2.0 MiB)
     
    Number   Start (sector)     End (sector)     Size    Code  Name
         1             2048      15625879551  7.3 TiB    8300  Linux filesystem
         2      15625879552      31251757055  7.3 TiB    8300  Linux filesystem
     
    Command (? for help): w
     
    Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
    PARTITIONS!!
     
    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to 3600140590e5103e56f3436d97032e61e.
    Warning: The kernel is still using the old partition table.
    The new table will be used at the next reboot.
    The operation has completed successfully.
  3. After the partitions are created, add them to the device mapping. To do this, determine the device mapping for the external disk:

    [root@solace ~]# ls -l /dev/mapper
    total 0
    brw-------  1 root root 252,   0 Jan 20 14:07 3600140590e5103e56f3436d97032e61e
    crw-rw----  1 root root  10, 236 Jan 20 06:05 control
  4. Reread the partition table and create device maps for the newly created partitions:

    [root@solace ~]# kpartx -a -p p /dev/mapper/3600140590e5103e56f3436d97032e61e

    The two partitions are listed in /dev/mapper as p1 and p2:

    [root@solace ~]# ls -l /dev/mapper/
    total 0
    brw-------  1 root root 252,   0 Jan 20 14:07 3600140590e5103e56f3436d97032e61e
    brw-------  1 root root 252,   1 Jan 20 11:05 3600140590e5103e56f3436d97032e61ep1
    brw-------  1 root root 252,   2 Jan 20 11:05 3600140590e5103e56f3436d97032e61ep2
    crw-rw----  1 root root  10, 236 Jan 20 06:05 control
  5. Create an ext4 file system on each partition using the mkfs.ext4 command with each partition in the /dev/mapper listing. If the disk is greater than 200 GB, you should use the -N 10000000 option to limit the number of inodes and speed up formatting.

    [root@solace ~]# mkfs.ext4 -N 10000000 /dev/mapper/3600140590e5103e56f3436d97032e61ep1
    mke4fs 1.41.12 (17-May-2010)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    10491008 inodes, 1953234688 blocks
    97661734 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=0
    59608 block groups
    32768 blocks per group, 32768 fragments per group
    176 inodes per group
    Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
        102400000, 214990848, 512000000, 550731776, 644972544, 1934917632
     
    Writing inode tables: done                            
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done
     
    This filesystem will be automatically checked every 28 mounts or
    180 days, whichever comes first. Use tune4fs -c or -i to override.
    
    [root@solace ~]# mkfs.ext4 -N 10000000 /dev/mapper/3600140590e5103e56f3436d97032e61ep2
    mke4fs 1.41.12 (17-May-2010)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    10491008 inodes, 1953234688 blocks
    97661734 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=0
    59608 block groups
    32768 blocks per group, 32768 fragments per group
    176 inodes per group
    Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
        102400000, 214990848, 512000000, 550731776, 644972544, 1934917632
     
    Writing inode tables: done                            
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done
     
    This filesystem will be automatically checked every 35 mounts or 180 days, whichever comes first. Use tune4fs -c or -i to override.
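
    You can optionally confirm that both partitions now carry ext4 file systems before proceeding. A minimal check, assuming the standard blkid utility is available on the appliance shell; each device should report TYPE="ext4":

    [root@solace ~]# blkid /dev/mapper/3600140590e5103e56f3436d97032e61ep1
    [root@solace ~]# blkid /dev/mapper/3600140590e5103e56f3436d97032e61ep2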
  6. If you are using a redundant pair of appliances, restart the appliance on which the above procedure was not performed.

Step 3: Configure the Message Spool to Use the External Disk

Config-Sync will not automatically synchronize this object or property. Therefore, if the event broker is being used in a high-availability (HA) redundant configuration or in a replicated site, you must manually configure this object/property on each mate event broker or replicated Message VPN.

To determine whether an object/property is synchronized by Config-Sync, look up the command used to configure the object/property in the CLI Command Reference or type the command in the Solace CLI, ending the command with "?". The Help lists whether the object/property is synchronized.
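
For example, to check whether the disk-array setting used in this section is synchronized, you could enter the command with a trailing "?" in the Solace CLI:

solace1(configure/hardware/message-spool)# disk-array wwn ?

The Help output indicates, among other details, whether the object/property is synchronized by Config-Sync.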

To enable Guaranteed message spooling using the external disk storage array for a standalone appliance, or the primary appliance in a redundant pair, perform the following steps:

  1. Enter the show hardware detail User EXEC CLI command to display which external disks are available:

    solace1> show hardware detail

    . . .

    Slot 1/2: Host Bus Adapter Blade
      Product #: HBA-0204FC-01-A
      Serial #: H64544
      Model Name: QLA2462
      Model Description: PCI-X 2.0 to 4Gb FC, Dual Channel
      Driver Version: 8.01.07-k1

    . . .

      Attached devices
       LUN 0
          State: Ready
          Size: 80 GB 
          WWN: 60:06:01:60:e8:60:1c:00:ec:7b:6d:3f:5c:db:de:11
  2. Configure the Guaranteed message spool to use the external disk, according to its WWN:

    solace1(configure)# hardware message-spool
    solace1(configure/hardware/message-spool)# disk-array wwn <wwn #>

    Where:

    <wwn #> is the WWN displayed by the show hardware detail command output above.

  3. Enable the Guaranteed message spool on the primary appliance:

    solace1(configure/hardware/message-spool)# no shutdown primary
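
    For example, putting these steps together with the WWN from the show hardware detail output above:

    solace1(configure)# hardware message-spool
    solace1(configure/hardware/message-spool)# disk-array wwn 60:06:01:60:e8:60:1c:00:ec:7b:6d:3f:5c:db:de:11
    solace1(configure/hardware/message-spool)# no shutdown primary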

To enable Guaranteed message spooling using the external disk storage array for the backup appliance in a redundant pair, perform the following steps:

For information on configuring redundant appliances, refer to HA Configuration for Appliances.

  1. Enter the show hardware detail User EXEC CLI command to display which external disks are available:
    solace2> show hardware detail
    
    . . .
    
    Slot 1/2: Host Bus Adapter Blade
      Product #: HBA-0204FC-01-A
      Serial #: H64544
      Model Name: QLA2462
      Model Description: PCI-X 2.0 to 4Gb FC, Dual Channel
      Driver Version: 8.01.07-k1
      
      . . .

      Attached devices
       LUN 0
          State: Ready
          Size: 80 GB
          WWN: 60:06:01:60:e8:60:1c:00:ec:7b:6d:3f:5c:db:de:11
  2. Configure the Guaranteed message spool to use the external disk, according to its WWN:
    solace2(configure)# hardware message-spool
    solace2(configure/hardware/message-spool)# disk-array wwn <wwn #>

    Where:

    <wwn #> is the WWN displayed by the show hardware detail command output above.

  3. Enable the Guaranteed message spool on the backup appliance:
    solace2(configure/hardware/message-spool)# no shutdown backup

Step 4: Verify the Guaranteed Message Spool Configuration

For a standalone appliance, perform the following steps:

  1. Enter the show message-spool User EXEC CLI command to verify the message spool is correctly configured. The appliance should show a configuration status of "Enabled (Primary)", an operational status of "AD-Active", and a datapath status of "Up". For example:

    solace1> show message-spool
      
    Config Status:                            Enabled (Primary)
                
    Maximum Spool Usage:                      60000 MB
    Spool While Charging:                     No
    Spool Without Flash Card:                 No
    Using Internal Disk:                      No
    Disk Array WWN:             60:06:01:60:e8:60:1c:00:ec:7b:6d:3f:5c:db:de:11
     
    Operational Status:                       AD-Active
    Datapath Status:                          Up
                
    . . .
    				
                                              ADB       Disk      Total
    Current Persistent Store Usage (MB)    0.0000     0.0000     0.0000
    Number of Messages Currently Spooled        0          0          0
  2. If the output does not show the proper status, enter the show message-spool detail User EXEC CLI command to view system details as an aid in troubleshooting.

For a pair of redundant appliances, perform the following steps:

  1. Enter the show message-spool User EXEC CLI command on the primary appliance to verify its message spool is correctly configured. It should show a configuration status of "Enabled (Primary)", an operational status of "AD‑Active", and a datapath status of "Up". For example:

    solace1> show message-spool
      
    Config Status:                            Enabled (Primary)
                
    Maximum Spool Usage:                      60000 MB
    Spool While Charging:                     No
    Spool Without Flash Card:                 No
    Using Internal Disk:                      No
    Disk Array WWN: 60:06:01:60:e8:60:1c:00:ec:7b:6d:3f:5c:db:de:11
      
    Operational Status:                       AD-Active
    Datapath Status:                          Up
                
    . . .
    				
                                              ADB       Disk      Total
    Current Persistent Store Usage (MB)    0.0000     0.0000     0.0000
    Number of Messages Currently Spooled        0          0          0
  2. Enter the show message-spool User EXEC CLI command on the backup appliance to verify its message spool is correctly configured. It should show a configuration status of "Enabled (Backup)", an operational status of "AD‑Standby", and a datapath status of "Down". For example:

    solace2> show message-spool
      
    Config Status:                            Enabled (Backup)
                
    Maximum Spool Usage:                      60000 MB
    Spool While Charging:                     No
    Spool Without Flash Card:                 No
    Using Internal Disk:                      No
    Disk Array WWN: 60:06:01:60:e8:60:1c:00:ec:7b:6d:3f:5c:db:de:11
       
    Operational Status:                       AD-Standby
    Datapath Status:                          Down
                
    . . .
    
                                              ADB    Disk   Total
    Current Persistent Store Usage (MB)    0.0000  0.0000  0.0000
    Number of Messages Currently Spooled        0       0       0
  3. If either the primary or backup appliance does not show the proper status, enter the show message-spool detail User EXEC CLI command to view system details as an aid in troubleshooting.
  4. Once the message spools on the primary and backup appliances show the proper status, enter the release-activity Redundancy CONFIG CLI command on the primary appliance to surrender activity to the backup appliance, and verify that the backup appliance can take activity.
  5. Once it is confirmed that the backup appliance has taken activity, enter the no release-activity Redundancy CONFIG CLI command on the primary appliance to enable it to take activity again.
  6. If auto-revert is disabled, enter the revert-activity Redundancy ADMIN CLI command on the backup appliance to revert activity back to the primary appliance.
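
Throughout this activity exchange, you can monitor which appliance currently holds activity. A brief sketch, assuming the show redundancy User EXEC CLI command (described in HA Configuration for Appliances) is available:

solace1> show redundancy

Confirm that activity has moved to the expected appliance before and after each release-activity and revert-activity step.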