PubSub+ Cache Components
PubSub+ Cache uses a distributed structure so that it can be scaled as necessary: as message rates and the topic space grow, the topic space can be divided amongst multiple Cache Clusters.
As a distributed caching solution, PubSub+ Cache is composed of the following components:
A Designated Router is a specific event broker through which a Distributed Cache and all of its associated Cache Clusters and PubSub+ Cache Instances are configured and managed. An event broker can act as a Designated Router when a Message VPN configured on that router has distributed-cache-management services enabled (refer to Configuring Message VPNs).
The Designated Router is the central repository and management point for the Distributed Cache configuration. A network operator can perform PubSub+ Cache configuration, management, and monitoring tasks on the Designated Router through the event broker Command Line Interface (CLI).
The Designated Router uses an internal client, known as the Cache Manager, to automatically provide the operations, administration, and management (OA&M) functionality required to manage many Cache Clusters and their associated PubSub+ Cache Instances. For example, the Cache Manager is responsible for tasks such as:
- Propagating configuration changes to a Cache Cluster, and ensuring that each PubSub+ Cache Instance in the Cache Cluster has a consistent configuration
- Disseminating topic space information throughout the Distributed Cache, so that each Cache Cluster has an up-to-date list of topics for its PubSub+ Cache Instances to listen for and knows of the topics that PubSub+ Cache Instances in all other Cache Clusters are listening for
- Resynchronizing of PubSub+ Cache Instances that disconnect and subsequently reconnect to the network
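As a rough illustration of the first and last of these responsibilities, the sketch below models a manager that pushes one authoritative configuration version to every registered instance and re-syncs an instance that reconnects with a stale copy. The class and field names are illustrative assumptions; this is not the actual Cache Manager protocol.

```python
# Illustrative sketch only -- not the PubSub+ Cache Manager implementation.

class CacheManager:
    def __init__(self):
        self.config_version = 0
        self.config = {}
        self.instances = {}   # instance name -> {"version": int, "config": dict}

    def register(self, name):
        """An instance connects and receives the current configuration."""
        self.instances[name] = {"version": self.config_version,
                                "config": dict(self.config)}

    def update_config(self, new_config):
        """Propagate a configuration change to every instance in the cluster."""
        self.config_version += 1
        self.config = dict(new_config)
        for inst in self.instances.values():
            inst["version"] = self.config_version
            inst["config"] = dict(self.config)

    def resync(self, name):
        """Re-send configuration to an instance that reconnected with a stale copy."""
        inst = self.instances[name]
        if inst["version"] != self.config_version:
            inst["version"] = self.config_version
            inst["config"] = dict(self.config)
```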
A Distributed Cache is a collection of one or more Cache Clusters that belong to the same Message VPN.
Each Cache Cluster in a Distributed Cache is configured to subscribe to a different set of topics. This effectively divides up the configured topic space, to provide scaling to very large topic spaces or very high cached message throughput.
A Distributed Cache and all of its associated Cache Clusters are configured from the same Designated Router. The Cache Manager automatically ensures that each PubSub+ Cache Instance in a Cache Cluster is configured with:
- The list of topics that the Cache Cluster (and subsequently its PubSub+ Cache Instance) is responsible for
- The lists of topics that are served by other Cache Clusters in the Distributed Cache, and the names of the Cache Clusters serving up those topics
Allowing Cache Clusters to know of each other’s assigned topic sets ensures that when a cache request is made to either the Distributed Cache or a specific Cache Cluster in the Distributed Cache, any matching cached messages in the Distributed Cache are returned, regardless of what Cache Cluster they are cached in.
The following two figures show simple Distributed Cache examples. In these examples, Cache Cluster bob has been configured to handle “animals/cats/>”, Cache Cluster joe has been configured to handle “animals/bears/>”, and Cache Cluster fred has been configured to handle “animals/dogs/>”. Client applications have been set up to always send cache requests to bob, although they could just as easily send their requests to joe or fred.

The first example shows a scenario in which all three Cache Clusters contain topics that match the cache request.

Distributed Cache Example 1

The next example shows a scenario in which the requested topic space is fully contained in a single Cache Cluster.

Distributed Cache Example 2
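The examples above can be sketched in code. The following is an illustrative model (not a Solace API) of how matching cached messages are returned from every Cache Cluster whose topic set matches a request, regardless of which cluster received the request; the cached topics and payloads are invented for the example.

```python
# Illustrative model only (not a Solace API). Cluster names and cached
# messages follow the bob/joe/fred example above.

def matches(subscription, topic):
    """Solace-style match: a trailing '>' matches one or more remaining levels,
    '*' matches exactly one level."""
    sub_parts = subscription.split("/")
    top_parts = topic.split("/")
    for i, s in enumerate(sub_parts):
        if s == ">":
            return len(top_parts) > i
        if i >= len(top_parts) or (s != top_parts[i] and s != "*"):
            return False
    return len(top_parts) == len(sub_parts)

CLUSTERS = {
    "bob":  {"subs": ["animals/cats/>"],  "cache": {"animals/cats/tabby": ["msg1"]}},
    "joe":  {"subs": ["animals/bears/>"], "cache": {"animals/bears/polar": ["msg2"]}},
    "fred": {"subs": ["animals/dogs/>"],  "cache": {"animals/dogs/pug": ["msg3"]}},
}

def cache_request(request_topic):
    """Return matching cached messages from every cluster, regardless of
    which cluster the client addressed the request to."""
    results = []
    for cluster in CLUSTERS.values():
        for topic, msgs in cluster["cache"].items():
            if matches(request_topic, topic):
                results.extend(msgs)
    return results
```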
A Cache Cluster is a collection of one or more PubSub+ Cache Instances that subscribe to exactly the same topics.
PubSub+ Cache Instances are grouped together in a Cache Cluster for the purpose of fault tolerance and load balancing. As published messages are received, the event broker message bus sends these live data messages to the PubSub+ Cache Instances in the Cache Cluster. This enables client cache requests to be served by any of the PubSub+ Cache Instances in the Cache Cluster.
The message bus load-balances client cache requests amongst these PubSub+ Cache Instances as determined by the priorities assigned to the individual PubSub+ Cache Instances through the configuration file.
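As a sketch of this idea, the snippet below round-robins requests across the instances in the highest configured priority tier. The priority values and the weighting scheme are assumptions for illustration, not the actual message-bus algorithm.

```python
# Illustrative sketch: requests are served round-robin by the instances in
# the highest priority tier. The weighting scheme is an assumption, not the
# actual message-bus load-balancing algorithm.

import itertools

def build_rotation(instances):
    """instances: list of (name, priority); higher priority is preferred.
    Returns an endless round-robin over the top-priority instances."""
    top = max(priority for _, priority in instances)
    tier = [name for name, priority in instances if priority == top]
    return itertools.cycle(tier)

# Two priority-2 instances share the load; the priority-1 instance is a spare.
rotation = build_rotation([("cache-1", 2), ("cache-2", 2), ("cache-3", 1)])
```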
Each Cache Cluster in a Distributed Cache must use a different set of topic subscriptions; that is, the subscriptions assigned to each Cache Cluster must not overlap.
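This non-overlap rule can be illustrated with a simple validation helper. The overlap test below is a simplified approximation that handles only exact and '>'-terminated subscriptions; it is not the event broker's validation logic.

```python
# Simplified validation sketch: raises if any two Cache Clusters' topic
# subscriptions overlap. Not the event broker's actual validation logic.

def overlaps(sub_a, sub_b):
    a, b = sub_a.split("/"), sub_b.split("/")
    for x, y in zip(a, b):
        if x == ">" or y == ">":
            return True            # one subscription subsumes the other's prefix
        if x != y:
            return False
    return len(a) == len(b)        # identical exact subscriptions overlap

def validate_clusters(clusters):
    """clusters: dict of cluster name -> list of subscriptions."""
    flat = [(name, s) for name, subs in clusters.items() for s in subs]
    for i, (n1, s1) in enumerate(flat):
        for n2, s2 in flat[i + 1:]:
            if n1 != n2 and overlaps(s1, s2):
                raise ValueError(f"{n1}:{s1} overlaps {n2}:{s2}")
```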
Each Cache Cluster can be configured through the Designated Router’s CLI or SEMP interface (refer to Configuring Cache Clusters). The Designated Router ensures that the configuration is propagated to all PubSub+ Cache Instances in the Cache Cluster. Configuration parameters for the Cache Cluster are stored persistently in the Designated Router’s internal, non-volatile database, and are backed up and restored along with all the other configuration information for that router.
A PubSub+ Cache Instance is a single PubSub+ Cache process that belongs to a single Cache Cluster. At least one PubSub+ Cache Instance is required for a Cache Cluster, although up to 16 can be used.
PubSub+ Cache Instances are installed on standalone Linux systems through an installation package provided by Solace; a corresponding PubSub+ Cache Instance object is then created through the Solace CLI.
The initial configuration for a PubSub+ Cache Instance is provided by a configuration file that is used when the PubSub+ Cache Instance is started. The Designated Router that the PubSub+ Cache Instance establishes a connection to also disseminates configuration information to the PubSub+ Cache Instance.
PubSub+ Cache Instances listen for and cache live data messages that match the topic subscriptions configured for their parent Cache Cluster.
Each PubSub+ Cache Instance in a Cache Cluster caches a published live data message if all the following are true:
- its topic matches a topic subscription configured for the Cache Cluster (wildcard topics are supported)
- the PubSub+ Cache Instance is not administratively shut down
- it does not violate configured constraints such as maximum memory or maximum number of topics
- when an Ingress Message Plug-in is being used, the Plug-in function returns an operation code that directs the PubSub+ Cache Instance to cache the message. For information, refer to Using Ingress Message Plug-Ins.
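The conditions above can be combined into a single predicate, sketched below. The dictionary fields, the minimal topic matcher, and the plug-in operation codes are illustrative assumptions, not the PubSub+ Cache implementation.

```python
# Sketch of the caching decision as one predicate. All names here are
# illustrative assumptions, not the PubSub+ Cache implementation.

CACHE = "CACHE"     # hypothetical plug-in operation codes
SKIP = "SKIP"

def topic_matches(sub, topic):
    """Minimal matcher: 'a/b/>' matches any topic below 'a/b'; otherwise exact."""
    if sub.endswith("/>"):
        return topic.startswith(sub[:-1])
    return sub == topic

def should_cache(instance, message, plugin=None):
    if instance["admin_shutdown"]:
        return False                          # instance administratively shut down
    if not any(topic_matches(s, message["topic"]) for s in instance["subscriptions"]):
        return False                          # no configured subscription matches
    if instance["used_memory"] + message["size"] > instance["max_memory"]:
        return False                          # would exceed the memory limit
    if (message["topic"] not in instance["topics"]
            and len(instance["topics"]) >= instance["max_topics"]):
        return False                          # would exceed the topic limit
    if plugin is not None and plugin(message) != CACHE:
        return False                          # ingress plug-in vetoed caching
    return True
```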
Client cache requests are load-balanced amongst the PubSub+ Cache Instances in a Cache Cluster.
Each PubSub+ Cache Instance uses a configuration file that provides the parameters required to establish a connection with a host event broker (for example, username, password, and the event broker host to connect to). This configuration information is required on start-up for the PubSub+ Cache Instance to connect to and register with the Designated Router. (For more information on the content of the PubSub+ Cache configuration file, refer to the configuration file template (sampleConfig.txt) provided with the PubSub+ Cache installation package.)
If the PubSub+ Cache Instance successfully establishes a connection, the Designated Router’s Cache Manager downloads additional configuration information and topic information to the PubSub+ Cache Instance.
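For orientation only, the fragment below suggests the general shape of such a configuration file. Every key name shown is hypothetical; consult the sampleConfig.txt template for the actual parameters.

```text
# Hypothetical illustration only -- the real key names are defined in the
# sampleConfig.txt template shipped with PubSub+ Cache.
broker-host      192.168.1.50:55555
username         cacheuser
password         ********
message-vpn      cache_vpn
```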
Note: If the Designated Router for a PubSub+ Cache Instance goes offline or restarts, the PubSub+ Cache Instance continues to run with the configuration that it last received from the Designated Router. Once the Designated Router comes back online, it resends the configuration information to the PubSub+ Cache Instances that it is responsible for managing.
A PubSub+ Cache Instance has the following additional interactions with the Designated Router:
- CLI: Administrative and configuration changes made through the CLI on the Designated Router are sent between the Cache Manager and PubSub+ Cache Instances.
- Event: Events are generated on PubSub+ Cache Instances and sent back to the Designated Router for reporting through the message bus.
- Heartbeats: A PubSub+ Cache Instance periodically sends a heartbeat request to the Designated Router so that each can confirm the other's presence. If three or more heartbeat requests are lost, the PubSub+ Cache Instance must reconnect and resynchronize its configuration with the Designated Router. It does not delete any of its cached content during this process unless it learns that some topics no longer need to be cached. While it is trying to reconnect to the Designated Router, the PubSub+ Cache Instance stays “in service” (that is, it continues to service cache requests).
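The heartbeat rule in the last bullet can be sketched as a missed-heartbeat counter: three consecutive losses trigger a reconnect-and-resync flag, while the instance remains in service throughout. The class and attribute names are illustrative, not the PubSub+ Cache implementation.

```python
# Illustrative sketch of the heartbeat-loss rule described above.

class InstanceLink:
    LOSS_LIMIT = 3

    def __init__(self):
        self.missed = 0
        self.needs_resync = False
        self.in_service = True        # keeps serving cache requests throughout

    def heartbeat_ok(self):
        self.missed = 0               # a successful heartbeat resets the count

    def heartbeat_lost(self):
        self.missed += 1
        if self.missed >= self.LOSS_LIMIT:
            self.needs_resync = True  # reconnect and resynchronize configuration
```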
Messages are removed from the PubSub+ Cache Instances where they are cached when any of the following conditions arise:
- A scheduled delete operation occurs.
- An administrative delete message <topic> operation is issued from the Designated Router.
- Configured limits, such as the lifetime or maximum number of messages per topic, are exceeded.
- The topic set defined on the Cache Cluster changes, at which time messages cached for topics outside of the configured topic set are deleted.
- A PubSub+ Cache Instance or its parent Cache Cluster or Distributed Cache is deleted.
- Any termination of the PubSub+ Cache Instance process. That is, the cache contents are volatile and are lost if the PubSub+ Cache Instance process dies or is reset.
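Two of the removal conditions above, message lifetime expiry and a per-topic message limit, can be sketched as follows; the field names and eviction order are illustrative.

```python
# Illustrative sketch of two removal conditions: per-topic message limit
# and message lifetime expiry.

import time

class TopicCache:
    def __init__(self, max_per_topic=2, lifetime_s=60.0):
        self.max_per_topic = max_per_topic
        self.lifetime_s = lifetime_s
        self.messages = {}            # topic -> list of (timestamp, payload)

    def store(self, topic, payload, now=None):
        now = time.time() if now is None else now
        msgs = self.messages.setdefault(topic, [])
        msgs.append((now, payload))
        if len(msgs) > self.max_per_topic:
            del msgs[0]               # evict the oldest message over the limit
        # drop messages whose configured lifetime has expired
        self.messages[topic] = [(t, p) for t, p in msgs if now - t <= self.lifetime_s]

    def payloads(self, topic):
        return [p for _, p in self.messages.get(topic, [])]
```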
Cached messages are not removed from a PubSub+ Cache Instance when the PubSub+ Cache Instance or its parent Cache Cluster or Distributed Cache is shut down. However, when the PubSub+ Cache Instance is subsequently enabled, or if an administrative start is performed (refer to Starting PubSub+ Cache Instances), and there is another active PubSub+ Cache Instance in the Cache Cluster, the restarted PubSub+ Cache Instance's cached messages are removed and replaced with those of the PubSub+ Cache Instance with which it is synchronized.
Cache requests made using the Java, Java RTO, C, and .NET APIs can either be synchronous or asynchronous. If the request is synchronous, then the API call blocks until the response is received (or a time-out occurs).
If a cache request is made to a Distributed Cache or a Cache Cluster, the request is delivered to a single PubSub+ Cache Instance in the appropriate Cache Cluster (all PubSub+ Cache Instances in a Cache Cluster are configured to listen for the same topics), and that PubSub+ Cache Instance responds to the cache request.
Cache responses are addressed back to the originating client on an automatically generated peer-to-peer topic contained within the request.
The result of a cache request is indicated by return codes or events provided by the PubSub+ Cache Instance. Any messages that are returned for the request are handled through the message receive callback or delegate associated with the Session in which the cache session was created.
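The synchronous/asynchronous distinction can be sketched as follows. This is not a Solace API: send_cache_request is a stand-in that replies from a timer thread, and the queue/threading machinery is illustrative.

```python
# Illustrative sketch (not a Solace API) of synchronous vs. asynchronous
# cache requests: blocking until the response arrives vs. a callback.

import queue
import threading

def send_cache_request(topic, reply_queue):
    """Stand-in for the API call: 'responds' on the queue from another thread."""
    threading.Timer(0.01, reply_queue.put, args=[("OK", topic)]).start()

def request_sync(topic, timeout=1.0):
    """Synchronous: block until the response is received (or time out)."""
    replies = queue.Queue()
    send_cache_request(topic, replies)
    return replies.get(timeout=timeout)

def request_async(topic, on_response):
    """Asynchronous: return immediately; deliver the response via callback."""
    replies = queue.Queue()
    send_cache_request(topic, replies)
    threading.Thread(target=lambda: on_response(replies.get()), daemon=True).start()
```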
For information on how to design an application to make cache requests and handle cache responses, refer to the following sections:
- Working with the C API
- Working with the Java API
- Working with the Java RTO API
- Working with the .NET API
Solace PubSub+ allows customer applications to publish and receive messages using the following standardized, non-Solace-specific technologies:
- Open Middleware Agnostic Messaging API (OpenMAMA)
- Message Queuing Telemetry Transport (MQTT) protocol
- Representational State Transfer (REST) messaging service
Of these non-Solace technologies, only client applications using the OpenMAMA API may use PubSub+ Cache to make cache requests for topics—applications using REST or MQTT cannot (although MQTT publish messages may be cached).
The focus of this guide is on using PubSub+ Cache with applications that use Solace APIs. For information on how to implement the PubSub+ Cache facility with OpenMAMA applications, refer to Solace OpenMAMA.