Configuring Cluster Links with Replication
When you create cluster links to both members of a replication group, those two links share their data channels. As a result, the settings of the two links can conflict with each other. This is explained in detail below.
In version 9.6, the internal naming of channels and queues changed to support replication with DMR. Brokers older than version 9.6 do not recognize the new naming, so cluster links between a version 9.6 broker and an older broker will fail.
Do not create any DMR cluster links (in either direction) between a version 9.6 broker and an older broker.
If you are upgrading brokers from version 9.5 or earlier that are part of a DMR network, do not add any new DMR cluster links anywhere in the network until all brokers in the network have been upgraded to version 9.6 or later. Links that existed before the upgrade will continue to function properly.
Shared Data Channels
As described in Configuring Cluster Links, a DMR cluster link is composed of:
- one control channel
- one client profile
- one data channel per Message VPN. A data channel is made up of a bridge and a queue.
If a cluster link goes to a node that is part of a replication group, then that link's data channels are shared with the link to the other node of the replication group. Because of this, the pair of links shares a single bridge per Message VPN and a single link queue per Message VPN.
Consider the example shown in the following diagram:
- Each region (depicted by a gray oval) is a separate DMR cluster.
- In each region there are two nodes, each of which consists of one High-Availability (HA) pair.
- Each node in the network is connected to every other node by a DMR link:
  - Nodes between clusters (regions, in this example) are connected by external DMR links.
  - Nodes within the same cluster (for example, Seattle and San Jose) are connected by internal DMR links.
- The cluster links from San Jose to Toronto (red) and Montreal (blue) share the same set of data channels (green; one per message VPN), but each link has its own control channel and client profile. This is illustrated in the green detail view.
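The relationship between the two links and their shared data channels can be summed up in a short sketch. The following Python snippet is purely illustrative (the class and field names are not product identifiers): it models San Jose's links to the Toronto/Montreal replication group, which keep separate control channels and client profiles but reference the same per-VPN bridge and link queue.

```python
# Illustrative model only (not Solace product code) of how two cluster links
# to the members of one replication group share their data channels.
from dataclasses import dataclass, field


@dataclass
class DataChannel:
    """One shared data channel per Message VPN: a bridge plus a link queue."""
    vpn: str
    bridge: str
    queue: str


@dataclass
class ClusterLink:
    """Each link keeps its own control channel and client profile."""
    remote_node: str
    control_channel: str
    client_profile: str
    # Shared with the link to the other member of the replication group.
    data_channels: dict[str, DataChannel] = field(default_factory=dict)


# San Jose's links to the Toronto/Montreal replication group reference the
# same DataChannel per Message VPN, but each has its own control channel.
shared = {"vpn1": DataChannel("vpn1", bridge="bridge-vpn1", queue="link-queue-vpn1")}
to_toronto = ClusterLink("toronto", "ctrl-toronto", "profile-toronto", shared)
to_montreal = ClusterLink("montreal", "ctrl-montreal", "profile-montreal", shared)

assert to_toronto.data_channels is to_montreal.data_channels  # same shared objects
```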
Configuring Links with Shared Data Channels
Because a channel is not directly configured, but rather is constructed from the settings of its parent link, a shared data channel must be set up based on the combined configurations of its two parent links. In many cases the two configurations can be combined in a compatible manner. However, this is not true for all settings, and conflicts are possible.
If there is a conflict between the links, the links will be operationally DOWN. The show cluster <cluster-name-pattern> link * command indicates the conflicting attribute in the Reason field.
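If you prefer to check link state programmatically rather than through the CLI, the SEMP v2 monitoring API exposes similar information. The following Python sketch is an assumption-laden example, not a verified recipe: the host, credentials, endpoint path, and the up and failureReason field names should all be checked against the SEMP v2 reference for your broker version.

```python
# Hypothetical sketch: query a broker's SEMP v2 monitoring API for DMR cluster
# link state. The endpoint path and the "up"/"failureReason" field names are
# assumptions; verify them against your broker's SEMP v2 reference before use.
import requests

SEMP_BASE = "https://broker.example.com:1943/SEMP/v2/monitor"  # placeholder host
AUTH = ("admin", "admin-password")                             # placeholder credentials


def report_down_links(cluster_name: str) -> None:
    """Print any DMR cluster links that are operationally down, and why."""
    url = f"{SEMP_BASE}/dmrClusters/{cluster_name}/links"
    resp = requests.get(url, auth=AUTH, verify=True)
    resp.raise_for_status()
    for link in resp.json().get("data", []):
        if not link.get("up", False):
            print(f"Link to {link.get('remoteNodeName')} is down: "
                  f"{link.get('failureReason', 'no reason reported')}")


report_down_links("my-cluster")
```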
The following table details the settings where conflicts can occur:
| Link Setting | Details |
|---|---|
| authenticationScheme | The bridge as a whole must have a single authentication scheme; that is, both links must use the same authentication scheme. |
| initiator | The initiator must be consistent between both links, so that initiation happens the same way regardless of which DR mate is active. |
| span | The topology relationship (internal or external) for both links between a node and a remote DR pair must be the same. |
| queueDeadMsgQueue, queueEventSpoolUsageThreshold, queueMaxDeliveredUnackedMsgsPerFlow, queueMaxMsgSpoolUsage, queueMaxRedeliveryCount, queueMaxTtl, queueRejectMsgToSenderOnDiscardBehavior, queueRespectTtlEnabled | These values are associated with the single shared queue, and therefore must be the same for both links. |
| egressFlowWindowSize | This value is associated with the single shared queue; this setting must be the same for both links. |
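Because every setting in the table above must match across the two links that share a data channel, a simple pre-check can catch conflicts before they take a link operationally DOWN. The following Python sketch is illustrative only; the link dictionaries and the find_conflicts helper are hypothetical, and the sample values (basic, external, local, 255, 1000) are placeholders rather than recommended settings.

```python
# Illustrative sketch: verify that two cluster links to the members of one
# replication group agree on every setting that governs their shared bridge
# and queue (the settings listed in the table above).

SHARED_SETTINGS = [
    "authenticationScheme",
    "initiator",
    "span",
    "queueDeadMsgQueue",
    "queueEventSpoolUsageThreshold",
    "queueMaxDeliveredUnackedMsgsPerFlow",
    "queueMaxMsgSpoolUsage",
    "queueMaxRedeliveryCount",
    "queueMaxTtl",
    "queueRejectMsgToSenderOnDiscardBehavior",
    "queueRespectTtlEnabled",
    "egressFlowWindowSize",
]


def find_conflicts(link_a: dict, link_b: dict) -> list[str]:
    """Return the shared settings whose values differ between the two links."""
    return [s for s in SHARED_SETTINGS if link_a.get(s) != link_b.get(s)]


# Example: these two link configurations conflict on egressFlowWindowSize.
link_to_toronto = {"authenticationScheme": "basic", "span": "external",
                   "initiator": "local", "egressFlowWindowSize": 255}
link_to_montreal = {"authenticationScheme": "basic", "span": "external",
                    "initiator": "local", "egressFlowWindowSize": 1000}

print(find_conflicts(link_to_toronto, link_to_montreal))  # ['egressFlowWindowSize']
```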