C API Best Practices

The following are some of the best practices for the PubSub+ Messaging API for C. The practices are divided into the following categories:

General Best Practices

Tuning Guidelines for Guaranteed Messaging

Reductions in the rate at which clients receive messages can occur when a high volume of Guaranteed messages (particularly large messages) is received over many Flows. In this situation, the number of Flows used and the Guaranteed window size used for each Flow affect the buffer usage of the per‑client priority queues that the event broker uses for Guaranteed messages. These queues, called the G-1 queues, hold Guaranteed messages that are either waiting for delivery from the event broker or have been sent and are awaiting acknowledgment from the clients.

Each G-1 queue is allocated a maximum depth buffer. This maximum depth is measured in work units, whereby a work unit represents 2,048 bytes of a message. (By default, each G-1 queue is given a maximum depth of 20,000 work units.)

To address slow Guaranteed message delivery rates caused by high demands on the buffer allocated by G-1 queues, you should reduce the Guaranteed message window size used for each Flow and, when possible, reduce the number of Flows used.

If it's not possible to reduce the Guaranteed message window size or the number of Flows, you can also effectively increase the G-1 queue size by adjusting the min-msg-burst size used by the event broker.

Reapply Subscriptions

If enabled, the API maintains a local cache of subscriptions and reapplies them when the subscriber connection is reestablished. Reapply Subscriptions will only apply direct topic subscriptions upon a Session reconnect. It will not reapply topic subscriptions on durable and non-durable endpoints.

Number of Flows and Guaranteed Message Window Size

The number of buffers used by a client for receiving Guaranteed messages is primarily determined by the number of Flows used per Session multiplied by the Guaranteed Message window size of each Flow. To limit a client’s maximum buffer use, you can reduce the number of Flows used and/or reduce the Guaranteed Message window size of each Flow. (The Guaranteed Message window size for each Flow is set through the Flow properties; refer to Important Flow (Message Consumer) Properties.)

Consider, for example, a client using Flows with a window size of 255 to bind to 10 Queues, and the Guaranteed messages from those Queues have an average size of 20kB. In this scenario, the Flow configuration for the client is not appropriately sized, as the client’s maximum buffer usage (approximately 24,902 work units) exceeds that offered by the event broker (20,000 work units). However, if the Flows are reconfigured with a window size of 25, then the client’s maximum buffer usage will fall within an acceptable range (approximately 2,441 work units).

Work units are fixed size buffers on the event broker that are used to process messages according to set queue depths. A work unit represents 2,048 bytes of a message.

Minimum Message Burst Size

If you can't reduce the number of Flows, or the Guaranteed Message window size, you can adjust the size of the G-1 queue. The simplest way to increase the queue is to adjust the min-msg-burst size. The min-msg-burst size specifies the number of messages that are always allowed entry into the queue. The min‑msg‑burst size is set on a per-client basis through client profiles.

Under normal operating conditions it's not necessary to change the default min‑msg-burst value for the G-1 queue. However, in situations where a client is consuming messages from multiple endpoints, it's important that the min‑msg‑burst size for the G-1 queue is at least equal to the sum of all of the Guaranteed message window sizes used for the Flows that the client consumes messages from. For example, if the client connects to 1,000 endpoints, and the Flows have a window size of 50, then the min-msg-burst size should be set to 50,000 or more.

Tuning the min-msg-burst size in this manner ensures that the NAB holds enough messages to fill the client’s combined Guaranteed message window size when it comes online. If there aren't enough messages held, messages that aren't delivered to the client can be discarded, then another delivery attempt is required. This process of discarding, then resending messages results in a slow recovery for a slow subscriber (that is, a client that doesn't consume messages at a quick rate).

For information on how to set the min-msg-burst size, refer to Configuring Egress Per-Client Priority Queues.

Threading

Selecting a Threading Model

Recommendation

  • Use an API-provided Context thread whenever possible. If it cannot meet the required performance, consider providing the Context thread from the application instead.
  • Use the 'One Session, One Context' threading model whenever possible. The 'Multiple Sessions, One Context' and 'Multiple Sessions, Multiple Contexts' models can potentially increase message processing throughput, but at the expense of additional processing caveats.

The C API uses Contexts for organizing communications with Solace PubSub+ event brokers. Each client application that uses the C API must contain at least one Context, and each Context can contain one or more Sessions.

By default, the C API provides an internal Context thread for processing work that is suitable for the most common application models and architectures. This thread can handle application timers and file descriptors as registered by the application through the API.

If you want to automatically create the Context thread, instead of relying on the application to create and destroy the Context thread, enable the SOLCLIENT_CONTEXT_PROP_CREATE_THREAD Context property. The API-provided Context thread blocks in solClient_context_processEvents(...) and is the thread the application is called from for all received messages and received events. The API-provided Context thread automatically exits cleanly when a Context is destroyed.

Application-Provided Threads

Optionally, when the SOLCLIENT_CONTEXT_PROP_CREATE_THREAD Context property is disabled, an application can provide a Context thread and manage file descriptors itself. When this configuration is used, the API requires the application to provide thread processing time.

The ability for the application to use its own thread and event loop processing (with the requirement that part of that event loop includes a call to the solClient_context_processEvents() function) offers developers much flexibility.

When relying on application-provided threads, how to configure your Sessions in Contexts is an important consideration in application design, as it directly impacts factors such as CPU usage on application host machines, message latency, and throughput.

The table below describes the threading models that can be used for application provided threads and how they affect application design and performance.

Threading Model Considerations


One Session, One Context Thread

In this scenario, a single Session is used with a single Context on the application-provided thread.

This approach forces all message and event processing into this Context thread instead of forwarding some or all processing downstream to other application threads.

This straightforward model allows for easier design and debugging, and it is ideal for applications that function as either a publisher or a consumer.

For the majority of cases, the 'One Session, One Context' model is sufficient for publisher and consumer application design.

Multiple Sessions, One Context Thread

In this scenario, multiple Sessions are serviced using one Context on the application provided thread. You can process all messages received by the same message and event callback functions, or create additional callbacks.

This approach puts considerable processing stress on the Context thread, as all Sessions must wait in the select loop before being processed.

Depending on message volume, it could be necessary to forward messages to downstream Queues for processing by additional application threads.

An application designer may want to move to 'Multiple Sessions, One Context' when messages must be prioritized, with higher-value messages sent and received across different Sessions (for example, over different TCP connections). This approach can also potentially increase throughput. All received messages can be processed by the same message and event callback functions, or by Session-specific ones created as additional callbacks.

Multiple Sessions, Multiple Contexts Threads

In this scenario, the application provides a number of threads, each containing a single Context, and each Context can contain one or more Sessions.

This approach allows you to separate each Session connection into its own Context thread, which allows all processing to occur in each application-provided thread.

With 'Multiple Sessions, Multiple Contexts', a designer can reduce the Context thread processing burden of the 'Multiple Sessions, One Context' model, where all Sessions must wait in the select loop before being processed. In this model, each Session can be separated into its own Context thread, taking advantage of the processing gains that OS multithreading provides. However, the increased number of threads requires extensive context switching, which places more burden on the CPU and is more resource intensive.

  • The application must call solClient_context_timerTick() for each Context.
  • For a given Context, the messaging API’s file descriptor callbacks must be called from the same thread that calls solClient_context_timerTick() for that Context.

Context Thread Affinity

Recommendation

  • Use SOLCLIENT_CONTEXT_PROP_THREAD_AFFINITY_CPU_LIST to pin the API generated Context thread to a CPU.

When the context thread is automatically generated by the C API, the thread affinity can be set for the context thread through the SOLCLIENT_CONTEXT_PROP_THREAD_AFFINITY_CPU_LIST parameter during context creation. Setting the thread affinity dedicates specific CPUs to the context thread, which can improve processing and prevents the context thread from being interrupted by other processes. By default, the thread affinity for the auto-created context thread is not set, allowing your operating system to optimally schedule the context thread on available CPUs. The expected string value is a comma-separated list that can be:

  • numbers—base-10 non-negative integers between 0 and the number of CPUs in the system
    and/or

  • ranges—two numbers with a dash character between them

The following example shows how to set thread affinity for a list of CPUs using numbers and ranges:

const char* contextProps_simple[3] = {SOLCLIENT_CONTEXT_PROP_THREAD_AFFINITY_CPU_LIST, "0,1,2,4,8-10,13-15", NULL};

The default value of SOLCLIENT_CONTEXT_PROP_THREAD_AFFINITY_CPU_LIST is an empty string, which results in no thread affinity setting.

This property has no effect if the application creates the context thread itself.

For more information and details about usage, see the C API Reference and the solclient.h header file.

File Descriptor Management

The following table lists the management modes that can be used in C API to handle file descriptors.

File Descriptor Management Modes


API Management of API File Descriptors

The default management mode is for the C API to internally manage its own file descriptor events. When an application’s processing Context calls solClient_context_processEvents(...), the API waits for internal file descriptor events. Control returns to the calling application processing thread when at least one event occurs or a time-out occurs.

Application Management of API File Descriptors

The application manages the event generation logic that is normally managed by the C API. Within a processing Context, the application provides file descriptor event register and unregister functions that the API uses to ask for events for its file descriptors. The application is then responsible for polling the file descriptor for events. When events occur on file descriptors owned by the API, the application event generation logic must call the routines that the API has registered for its file descriptors.

API Management of Application File Descriptors

The application registers its own file descriptors for events, such as read or write events, within a C API processing Context. When an application file descriptor is registered, the application provides a callback routine and a pointer to application data. The application then calls solClient_context_processEvents(...), or relies on the internal API Context thread, as in the default management mode, which can cause event generation on the registered application file descriptors and on API file descriptors.

File Descriptor Limits

File descriptor limits in Linux and Solaris platforms restrict the number of files that can be managed per process to 1,024. Because the C API uses select(2) to poll devices, these file descriptor limits prevent the API from managing any single file descriptor that has a numerical value greater than 1,023.

An application should not create more than 1,023 Sessions per process. The possible number of Sessions can be further reduced by any other files not managed by the API that the application has open.

Similarly, on Windows platforms, a single Context cannot manage more than 63 Sessions that are connected to an event broker. However, unlike an application on a Linux platform, an application on a Windows platform can work around the OS limitations by creating many Contexts within a process.

An application that registers its own file descriptor handlers (by providing non-null function pointers in solClient_context_createRegisterFdFuncInfo_t) is not limited in the API, but it might have its own limitations.

An application that provides its file descriptors to the API to manage (by calling solClient_context_registerForFdEvents()) further reduces the number of Sessions that can be handled in a single Context (Windows platform) or process wide (Linux and Solaris platforms).

Support for Solaris/SunOS is now deprecated and the last release was v7.23.0 (September 2022). For more details, see the Deprecated Features list on the Product Lifecycle Policy page.

Initializing Data Structures with Provided Macros

The C API provides static initializing macros for callback function data structures. Using the provided macros with accompanying explicit data structure initialization code is a good programming practice because it initializes the entire data structure and provides appropriate values for new fields. This ensures that your application will continue to compile without unexpected errors even if changes are made to the API callback function data structures.

Static initializing macros are provided for the following callback function data structures:

solClient_context_createFuncInfo_t – initialized with SOLCLIENT_CONTEXT_CREATEFUNC_INITIALIZER

solClient_session_createFuncInfo_t – initialized with SOLCLIENT_SESSION_CREATEFUNC_INITIALIZER

solClient_flow_createFuncInfo_t – initialized with SOLCLIENT_FLOW_CREATEFUNC_INITIALIZER

For example, you could initialize solClient_session_createFuncInfo_t as follows:

solClient_session_createFuncInfo_t sessionFuncInfo = SOLCLIENT_SESSION_CREATEFUNC_INITIALIZER;
sessionFuncInfo.rxInfo.callback_p     = rxLogCallbackFunc;
sessionFuncInfo.rxInfo.user_p         = userPtr;
sessionFuncInfo.eventInfo.callback_p  = eventLogCallbackFunc;
sessionFuncInfo.eventInfo.user_p      = userPtr;

The code snippet above is equivalent to and preferable to the immediate initialization in a declaration, as shown in the following example:

solClient_session_createFuncInfo_t sessionFuncInfo = {
            {rxLogCallbackFunc, userPtr},
            {eventLogCallbackFunc, userPtr},
            {NULL, NULL} };

Initializing in this manner without using the provided macros could require code changes to your program if future enhancements and changes are made to the C API.

Memory Management

The following sections discuss how to manage memory in the C API and provide some guidelines for optimizing the performance of the API.

Message Abstraction

Applications using the C API use the solClientMsg interface, an abstract data structure stored in internal memory buffers with accessors and modifiers for each of the message parts.

The solClientMsg interface provides the following functionality:

  • An internal memory pool to avoid heap allocation and fragmentation.
  • Add and get functions for structured data in the binary data payload (that is, the binary attachment) of the message.
  • Add and get functions for unstructured data in the XML data and user-data payloads of the message.
  • Add and get functions for Solace-defined and user-defined message headers.

Modifying Global Pool Buffer Sizes

The solClientMsg interface is a message buffer API, which utilizes heap memory allocations. The C API allocates specific sized buffers from its own pools and maintains them internally. Buffers are allocated from heap storage and used for saving messages in the application space until they are released by the application.

When the C API is initialized, you can optionally modify the default global data buffer sizes for the five pools that are used.

When you call the solClient_initialize function to initialize the C API, you can use the SOLCLIENT_GLOBAL_PROP_DBQUANTASIZE_<0-4> properties to specify the size (in bytes) of the data buffers for each of the five pools.

For more information, see the PubSub+ Messaging API C reference.

Configuring Message Buffer Sizes

When creating a Session, an application can configure the following memory and resource allocation-related Session property parameters:

  • SOLCLIENT_SESSION_PROP_BUFFER_SIZE

    The Session buffer size used for transmitting messages for the TCP Session. This parameter specifies the maximum amount of message data to buffer (in bytes). For maximum performance when sending small messages, set the Session buffer size to several times the typical message size.

    The C API always accepts at least one message for transmission. Even if the size of a single message exceeds the configured buffer size, it is accepted and transmitted as long as no data is currently buffered. However, no further messages are accepted until the amount of buffered data falls below the configured buffer size.

  • SOLCLIENT_SESSION_PROP_SOCKET_RCV_BUF_SIZE

    The receive buffer size (in bytes) for the subscriber data socket. A default value of 150,000 is used. If this property is set to 0, the receive buffer size uses the operating system default.

    On Windows platforms the receive socket buffer size must be much larger than the send socket buffer sizes to prevent data loss when sending and receiving messages. For example, the default send socket and internal buffer sizes are set to 90,000, and the default receive socket buffer size is set to 150,000. If you change the default sizes, it is recommended that you maintain a similar sizing ratio.

  • SOLCLIENT_SESSION_PROP_SOCKET_SEND_BUF_SIZE

    This parameter allows the send buffer size (in bytes) for the publisher data socket to be set by the application. A default value of 90,000 is used. If this property is set to 0, the send buffer size uses the operating system default.

Managing Memory When Publishing Messages

To ensure a high level of operational performance when publishing messages, avoid unnecessary memory moving and copying. To reduce the processing cycles used when moving and copying memory, consider the following guidelines:

  • When the payload or message already exists in the application, use solClient_msg_setXmlPtr or solClient_msg_setBinaryAttachmentPtr functions to set the payload pointers directly to the message. The companion functions solClient_msg_setXml and solClient_msg_setBinaryAttachment copy the message into internally-allocated memory and should only be used when the message is being built and saved for some reason.
  • If the Topic or Queue name destination already exists, use solClient_msg_setTopicPtr or solClient_msg_setQueueNamePtr.

Using structured data type (SDT) message containers always involves memory copies. Therefore, to conserve memory when using containers, consider the following guidelines:

  • If you are sending a few headers that describe large content, consider setting the headers in the USER_PROPERTY map (set through solClient_msg_createUserPropertyMap(...)) and add the content using solClient_msg_setBinaryAttachmentPtr().
  • When building a container, always try to accurately estimate the required size. The container could be a user property map (created through createUserPropertyMap()), a map (created in a message through solClient_msg_createBinaryAttachmentMap()) or a stream (created in a message through solClient_msg_createBinaryAttachmentStream(...)).
  • When building a complex container that uses a submap or substream, write the submap or substream completely and call solClient_container_createStream(...) to finish the submap or substream before adding more to the main container.

    When a binary attachment already exists in the application, you can use solClient_msg_setBinaryAttachmentContainerPtr(...) to avoid a memory copy. When the message is sent, the binary attachment contents are copied directly from the application memory to the transmit socket or buffer. Note that when this function is used, modifying the container or releasing the memory it references before the message is sent can corrupt the contents.

Managing Memory When Receiving Messages

Message buffers received by the callback are owned by the C API, and they must not be released. However, to take ownership of these message buffers, the application can return SOLCLIENT_CALLBACK_TAKE_MSG to the API for each message. In this case, the application must call solClient_msg_free(...) when it is finished with the messages to release the memory.

TCP Send and Receive Buffer Size

Recommendation

  • Adjust the TCP send and receive buffer sizes to optimize TCP performance, particularly when publishing large messages or for WAN performance optimization.

For TCP, the bandwidth-delay product refers to the product of a data link’s capacity and its round-trip delay time. The result expresses the maximum amount of data that can be on the network at any given time. A large bandwidth-delay product is expected for a WAN environment due to the intrinsic long round-trip delay, and as such TCP can only achieve optimum throughput if a sender sends a sufficiently large quantity of data to fill the maximum amount of data that the network can accept. This means that the TCP send and receive buffer size needs to be adjusted.

Specific to Windows platform, the receive socket buffer size must be much larger than the send socket buffer size to prevent data loss when sending and receiving messages. The recommended ratio is 3 parts send buffer to 5 parts receive buffer.

TCP’s socket send and receive buffer sizes can be configured through the API’s session properties setting. The session property parameters and default values are shown below. If the value of zero is used for setting these properties, the operating system’s default size is used.

  • SOLCLIENT_SESSION_PROP_SOCKET_RCV_BUF_SIZE; 150,000 bytes
  • SOLCLIENT_SESSION_PROP_SOCKET_SEND_BUF_SIZE; 90,000 bytes

Session Establishment

Blocking Connect

Recommendation

  • Blocking connect calls serialize each Session connect, which increases session-establishment delay. If serialization is not necessary, disable blocking connect to improve overall session connect speed.

Enabling blocking connect serializes each individual session connect when many sessions are configured within a Context, which increases the total connection time. If serialization is not important, consider disabling blocking connect.

The blocking connect property is SOLCLIENT_SESSION_PROP_CONNECT_BLOCKING.

Host Lists

Recommendation

As a best practice, use host lists (see note below). Host lists are applicable when you use replication for failover support and the software event broker's hostlist High Availability (HA) support.

Host Lists should not be used in Active/Active Replication deployments.

For replication failover support, client applications must be configured with a host list of two addresses, one for each of the Guaranteed Messaging enabled virtual routers at each site. If a connection fails for one host, the client should then try to connect to the to-be-active replication host before retrying the same host. For that reason, it's recommended to set the connect retries per host parameter to 1.

Host lists must not be used in an active/active replication deployment where client applications are consuming messages from endpoints on the replication active message VPN on both sites.

Similarly, for software event broker HA failover support, if the switchover-mechanism is set to hostlist instead of IP address-takeover, the client application must provide a host list of two addresses.

For more details on hostlist configuration, see HA Group Configuration.

Client API Keep-alive

Recommendation

  • The Client Keep-alive interval should be set to the same order of magnitude as the TCP Keep-alive setting on the client profile.

There are two types of keep-alive mechanisms between the client application and the event broker.

There is the TCP Keep-alive that operates at the TCP level and is sent by the event broker to the client application. This is the TCP Keep-alive mechanism described in RFC 1122. The client application’s TCP stack responds to the event broker’s TCP Keep-alive probe. By default, the event broker sends a keep-alive probe after it detects that a connection has been idle for 3 seconds. It then sends 5 probes at an interval of 1 probe per second. Hence, the event broker flags a client as having failed TCP keep-alive if it receives no response after 8 seconds.

There is also the Client API Keep-alive that occurs concurrently to the TCP Keep-alive. This is the API’s built-in keep-alive mechanism, and operates on top of TCP at the API level. This is sent from the API to the event broker. By default, a Client Keep-alive is sent at the interval of once every 3 seconds, and up to 3 keep-alive responses can be missed before the API declares that the event broker is unreachable; that is, after 9 seconds.

These keep-alive mechanisms exist so that the application or the event broker can learn that its peer has died before the peer is able to notify it. The keep-alive mechanism is also used to prevent disconnection due to network inactivity. However, if either mechanism is set much more aggressively than the other (that is, with a shorter detection time), the connection can be prematurely disconnected. For example, if the Client API Keep-alive is set to a 500 ms interval with 3 allowed missed responses while the TCP Keep-alive remains at the default, the Client API Keep-alive will trigger aggressive disconnection.

High Availability Failover and Reconnect Retries

Recommendation

  • The reconnect duration should be set to last for at least 300 seconds when designing applications for High Availability (HA) support.

When using a High Availability (HA) appliance setup, a failover from one appliance to its mate will typically occur within 30 seconds. However, applications should attempt to reconnect for at least 5 minutes. Below is an example of setting the reconnect duration to 5 minutes using the following session property values:

  • connect retries: 1
  • reconnect retries: 20
  • reconnect retry wait: 3,000 ms
  • connect retries per host: 5

Refer to Configuring Connection Time-Outs and Retries for instructions on setting the connect retries, reconnect retries, reconnect retry wait, and connect retries per host parameters.

Replication Failover and Reconnect Retries

Recommendation

  • The number of reconnect retries should be set to -1 so that the API will retry indefinitely during a replication failover.

In general, the duration of a replication failover is non-deterministic, as it may require operational intervention, which can take tens of minutes or even hours. Hence, it's recommended to set the number of reconnect retries to -1 so that the API reconnects indefinitely for a replication-aware client application.

Refer to Reconnect Retries for instructions on how to set the reconnect retries parameter.

Replication Failover and Session Re-Establishment

Recommendation

  • API versions 7.1.2 and later are replication aware, and automatically handle session re-establishment when a replication failover occurs. Client applications running lower API versions must re-establish a session upon reconnect.

Prior to version 7.1.2, sessions needed to be re-established after a replication failover when a client was publishing Guaranteed messages on a session that had been disconnected. Although the reconnect succeeds, the publisher flow must be re-established because the newly connected event broker at the replication site has no flow state information, unlike an HA failover, where this information is synchronized. The recommendation is to catch the unknown flow name session event and establish a new session to get the flow re-created. From version 7.1.2 onwards, the API is replication aware and transparently handles session re-establishment.

Blocking Call in Transacted Session Callback

Recommendation

  • A blocking call in the message receive callback is allowed because messages are delivered from a message dispatcher thread, not the context thread. The API implicitly creates a message dispatcher thread for transacted sessions.

One common usage pattern for a transacted session is to consume a message, publish the result, and commit both steps as one atomic operation. Generally, blocking calls (for example, send()) cannot be made from within a message callback on the context thread. However, for a transacted session, the API implicitly creates a message dispatcher thread for message delivery. Hence, the blocking call is made on the message dispatcher thread, not the context thread.

When creating a transacted session, the client application can decide to have its own dispatcher thread, or to share a dispatcher thread with other transacted sessions using the same context.

Refer to “Transacted Session” in the C API Reference for further details.

File Descriptor Limitation

Recommendation

  • The number of Solace sessions created by an application shouldn't exceed the number of file descriptors supported per process by the underlying operating system. For Unix variants, this number is 1024, and for Windows it's 63.

File descriptor limits on Unix platforms restrict the number of files that can be managed per process; by default, this is 1,024. Hence, an application shouldn't create more than 1,023 Sessions per process. A Session represents a TCP/IP connection, and each such connection occupies one file descriptor. A file descriptor is an element, usually a number, that identifies, in this case, a stream of data from the socket. Opening a file to read information from disk also occupies one file descriptor.

File descriptors are so called because they initially identified only files; today they can identify files on disk, sockets, pipes, and so on.

Similarly, on Windows platforms, a single Context can only manage at most 63 Sessions.
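On Unix platforms, an application can query its own per-process descriptor limit at startup and size its Session count accordingly. A minimal sketch using POSIX getrlimit (operating-system functionality, not part of the C API; the function name is illustrative):

```c
#include <sys/resource.h>

/* Returns the soft per-process file-descriptor limit, or -1 on error.
 * Each Session is a TCP/IP connection and occupies one descriptor, so
 * an application can use this value to bound the number of Sessions
 * it creates per Context. */
long fd_soft_limit(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        return -1;
    }
    return (long)rl.rlim_cur;
}
```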

Selecting Blocking Modes

Blocking and non-blocking modes are configurable Session property parameters. When creating a Session, an application can configure whether a blocking or non-blocking mode is used when a connection is established, for send, subscribe, and unsubscribe operations. See the table below for a list of the available blocking mode Session properties.

Even if a blocking mode is set for a Session, blocking mode is ignored when a call is made within a Context message receive, event, or timer callback function. In this situation, a blocking call succeeds if it can be processed immediately in a Context thread, otherwise it returns SOLCLIENT_WOULD_BLOCK as if it were a non-blocking call.

Blocking applications must have separate threads to process events. Blocking threads are unblocked by events detected in the solClient_context_processEvent function.

Blocking Mode Session Property Parameters

Parameter Description

SOLCLIENT_SESSION_PROP_CONNECT_BLOCKING

Sets whether to connect the Session in a blocking or non-blocking mode.

Use SOLCLIENT_PROP_ENABLE_VAL to connect in a blocking mode (the default). Use SOLCLIENT_PROP_DISABLE_VAL to connect in a non-blocking mode.

Avoid setting blocking connect when many Sessions are configured in a Context thread. Setting this mode serializes every step of the connection process for all the Sessions, and, as a result, increases the connection time.

SOLCLIENT_SESSION_PROP_SEND_BLOCKING

Sets whether a blocking or non-blocking send operation is used.

Use SOLCLIENT_PROP_ENABLE_VAL to send in a blocking mode (the default). Use SOLCLIENT_PROP_DISABLE_VAL to send in a non-blocking mode.

SOLCLIENT_SESSION_PROP_SUBSCRIBE_BLOCKING

Sets whether subscribe/unsubscribe operations occur in a blocking or non-blocking mode.

Use SOLCLIENT_PROP_ENABLE_VAL to subscribe in a blocking mode (the default). Use SOLCLIENT_PROP_DISABLE_VAL to subscribe in a non-blocking mode.

SOLCLIENT_SESSION_PROP_BLOCKING_WRITE_TIMEOUT_MS

The time-out (in milliseconds) when sending messages or subscribing/unsubscribing in a blocking mode.

If a time-out occurs, SOLCLIENT_FAIL is returned.

How Blocking Modes Affect Publishing

How a solClient_session_sendMsg function call is handled depends on the blocking mode chosen for the Session.

  • Blocking Mode

    In this mode, the calling thread of each solClient_session_sendMsg(...) call is blocked until the C API accepts the message. As a result, solClient_session_sendMsg calls are automatically limited to the rate at which the event broker can accept them. The call remains blocked until either the message is accepted by the C API or the associated timer expires.

  • Non-blocking Mode

    In this mode, solClient_session_sendMsg function calls that cannot be accepted by the C API immediately return a SOLCLIENT_WOULD_BLOCK error code to the application. The application subsequently receives a SOLCLIENT_SESSION_EVENT_CAN_SEND event when the message can be accepted, and it can then retry the send. In the interim, it can continue to process other actions.
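The non-blocking retry pattern can be sketched with a mocked send function (the mock_* names are illustrative stand-ins, not real C API calls; a real application retries from its SOLCLIENT_SESSION_EVENT_CAN_SEND event handler rather than spinning):

```c
/* Mocked return codes standing in for SOLCLIENT_OK / SOLCLIENT_WOULD_BLOCK. */
enum { MOCK_OK = 0, MOCK_WOULD_BLOCK = 1 };

/* Mock of a send call: pretends the transport buffer is full on every
 * third attempt, so the caller sees an occasional "would block". */
static int mock_attempts = 0;
static int mock_send(int msg_id)
{
    (void)msg_id;
    return (++mock_attempts % 3 == 0) ? MOCK_WOULD_BLOCK : MOCK_OK;
}

/* Publish n messages, retrying any message that would block.  In a real
 * application the retry is driven by the CAN_SEND event rather than by
 * spinning; the loop here only illustrates the control flow of
 * "keep the message and retry it later". */
int publish_all(int n)
{
    int sent = 0;
    while (sent < n) {
        if (mock_send(sent) == MOCK_OK) {
            sent++;          /* accepted by the API */
        }
        /* else: message is kept; retry on the next CAN_SEND event */
    }
    return sent;
}
```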

Subscription Management

The following best practices can be used for managing subscriptions:

  • If you are adding or removing a large number of subscriptions, set the Wait for Confirm flag (SOLCLIENT_SUBSCRIBE_FLAGS_WAITFORCONFIRM) on the final subscription to ensure that all subscriptions have been processed by the event broker. On all other subscriptions, to increase performance, it is recommended that the application not set Wait for Confirm.
  • In the event of a Session disconnect, you can have the API reapply subscriptions that were initially added by the application when the Session is reconnected. To reapply subscriptions on reconnect, enable the Reapply Subscriptions Session property (SOLCLIENT_SESSION_PROP_REAPPLY_SUBSCRIPTIONS). Using this setting is recommended.

Working with iOS Applications

The iOS distribution of the C API allows you to create new or integrate existing iOS applications for use with Solace PubSub+. This section provides information on special considerations that apply to iOS applications.

Responding to State and Connectivity Changes

For best performance, an application should respond appropriately when its state changes or the device’s network connectivity changes. This ensures that the application does not attempt to maintain connections to the event broker when it has been moved to the background by the OS.

Responding to state and network connectivity changes properly will extend device battery life and minimize data connection usage.

State Changes

Applications should close session connections when they are moved to the background, and reopen any closed connections when they return to the foreground.

As such, Solace recommends the following actions when state transitions occur:

Responding to State Changes

Event Recommended Action

applicationDidBecomeActive

Application is about to move to the foreground. Establish session connections by calling solClient_session_connect().

applicationWillResignActive

Application is transitioning out of the foreground. Close session connections by calling solClient_session_disconnect().

Network Connectivity Changes

For best performance, applications should communicate over wi-fi connections whenever they are available instead of wireless wide area network (WWAN) connections to lower data usage and increase battery life.

To prioritize wi-fi connections, Solace recommends the following actions when network connectivity changes occur:

Responding to Network Connectivity Changes

WWAN Status / Wi-Fi Status / Event: Recommended Action

Connected / Not connected / Wi-fi connects: Force sessions to reconnect by calling solClient_session_disconnect() followed by solClient_session_connect().

Connected / Not connected / WWAN disconnects: Close session connections by calling solClient_session_disconnect().

Not connected / Connected / Wi-fi disconnects: Close session connections by calling solClient_session_disconnect().

Not connected / Not connected / WWAN connects: Establish session connections by calling solClient_session_connect().

Not connected / Not connected / Wi-fi connects: Establish session connections by calling solClient_session_connect().

Related Samples

For an example of how to respond to changes in application state, refer to the AppTransitionsExample sample included with the iOS API.

For an example of how to respond to changes in network state, refer to the AppReachabilityExample sample included with the iOS API.

Providing a Trust Store for Creating Secure Connections

To create a secure connection to an event broker, the iOS distribution of the C API uses OpenSSL, which is included in the API. The process of creating secure connections through the iOS distribution of the C API is the same as with other distributions. However, a few special considerations are required for the iOS distribution to provide a trust store to the C API to validate the event broker’s server certificate.

When creating a secure connection, the C API attempts to establish trust with the remote event broker. To do so, the application must provide the path to a trust store that contains CA or self-signed certificates. A path to the trust store can be provided in one of the following ways:

  • Include a static trust store in the application's bundle as a resource, and then provide a path to that location.

    For example, assuming that the application bundles a trust store folder named trustStore, which contains .crt files, the following code could be used to initialize the session with this trust store:

    // The path to the trustStore will be stored in trustStorePath
    NSString* trustStorePath = [[[NSBundle mainBundle] resourcePath] stringByAppendingString:@"/trustStore"];
    // This path must be passed to the session we are about to create and connect
    sessionProps[propIndex++] = SOLCLIENT_SESSION_PROP_SSL_TRUST_STORE_DIR;
    sessionProps[propIndex++] = [trustStorePath cStringUsingEncoding:NSASCIIStringEncoding];
  • Use <application_Home>/Documents/ as the trust store, and then make that directory available to the user through file sharing. As this directory is initially empty, the application can copy a default trust store from the app bundle (as shown in the preceding bullet) into that directory. Then, the user can configure the trust store from iTunes through the application’s file sharing feature.

Which approach is best depends on how the event broker’s certificate is signed. It is recommended that a CA be created to sign the event broker’s certificate, and then to bundle this CA certificate into the application. An application designer can then use this CA to create a certificate for event brokers that the application will use. This approach ensures that the application designer has control over which event broker the application trusts.

Sending Messages

Blocking Send

Recommendation

  • Send-blocking calls automatically limit the publishing rate to the rate at which the event broker can accept messages. Use non-blocking send to increase application performance.

In send-blocking mode, the calling thread for each send function call is blocked until the API accepts the message, and hence, the sending rate is automatically limited to the rate at which the event broker can accept the message. The send call remains blocked until either:

  • it's accepted by the API, or
  • the associated blocking write timeout expires.

In non-blocking mode, by contrast, send calls that cannot be accepted by the API immediately return a would_block error code to the client application, which can continue to process other actions in the meantime. The application subsequently receives a can_send event, which signifies that it can retry the send() function call.

The send blocking parameter is SOLCLIENT_SESSION_PROP_SEND_BLOCKING.

Batch Send

Recommendation

  • Use the batch sending facility to optimize send performance. This is particularly useful for performance benchmarking a client application.


A group of up to 50 messages can be sent through a single API call, which allows messages to be sent in a batch. The messages can be either Direct or Guaranteed. When batch-sending messages through the send-multiple API, the same delivery mode (that is, Direct or Persistent) should be set for all messages in the batch. Messages in a batch can be sent to different destinations.

In addition to using the batch-sending API, messages should be pre-allocated and reused for batch-sending whenever possible. Specifically, don't reallocate new messages for each call to the batch-sending API.

The batch-sending API call is solClient_session_sendMultipleMsg().
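Because each batch-send call carries at most 50 messages, an application publishing larger batches must chunk them. A small helper sketch (the 50-message limit is taken from the text above; batch_calls_needed is an illustrative name):

```c
/* SEND_MULTIPLE_MAX mirrors the documented limit of 50 messages per
 * batch-send call. */
#define SEND_MULTIPLE_MAX 50

/* Number of batch-send calls needed to publish n pre-allocated messages. */
int batch_calls_needed(int n)
{
    return (n + SEND_MULTIPLE_MAX - 1) / SEND_MULTIPLE_MAX;
}
```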

Time-to-Live Messages

Recommendation

  • Set the TTL attribute on published guaranteed messages to reduce the risk of unconsumed messages unintentionally piling up in the queue if the use-case allows for discarding old or stale messages.

Publishing applications should consider utilizing the TTL feature available for Guaranteed Messaging. Publishers can set the TTL attribute on each message prior to sending to the event broker. Once the message has been spooled, the message will be automatically discarded (or moved to the queue’s configured Dead Message Queue, if available) should the message not be consumed within the specified TTL. This common practice reduces the risk of unconsumed messages unintentionally piling up.

Alternatively, queues have a max-ttl setting, and this can be used instead of publishers setting the TTL on each message sent. See Configuring Max Message TTLs for instructions on setting max-ttl for a queue.
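The interaction between a message's TTL and a queue's max-ttl can be sketched as a simplified model, assuming max-ttl caps the effective TTL when respect-ttl is enabled (function names and the capping rule are illustrative assumptions, not C API behavior):

```c
#include <stdbool.h>

/* Simplified model: the queue's max-ttl caps the message's own TTL.
 * A message TTL of 0 means "no expiry" and is replaced by max-ttl
 * when one is configured. */
long effective_ttl_ms(long msg_ttl_ms, long queue_max_ttl_ms)
{
    if (queue_max_ttl_ms > 0 &&
        (msg_ttl_ms == 0 || msg_ttl_ms > queue_max_ttl_ms)) {
        return queue_max_ttl_ms;
    }
    return msg_ttl_ms;
}

/* A spooled message is discarded (or moved to the DMQ) once its age
 * exceeds the effective TTL. */
bool is_expired(long spooled_at_ms, long ttl_ms, long now_ms)
{
    return ttl_ms > 0 && (now_ms - spooled_at_ms) > ttl_ms;
}
```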

Configuring respect-ttl

Queues should be configured to respect-ttl as, by default, this feature is disabled on all queues. Refer to Enforcing Whether to Respect TTLs for instructions on how to set up respect-ttl.

Receiving Messages

Consume Messages As Soon As Possible

Recommendation

  • A client application should not block in the message receive callback, and should consume received messages as soon as possible to avoid degrading reception performance.

To optimize messaging throughput, received messages should be consumed as soon as possible after receipt.

Client applications can expect a fail return code when making API function calls from a callback routine if the function call would block when made from other threads.

Because this API operates in asynchronous mode only, the application should ensure that the callback returns promptly. Waiting in callback routines can deadlock the application or, at a minimum, severely degrade receive performance.

Handling Duplicate Message Publication

Recommendation

  • Publishing duplicate messages can be avoided if the client application uses the Last Value Queue (LVQ) to determine the last message successfully spooled by the event broker upon restarting.

When a client application is unexpectedly restarted, it's possible for it to become out-of-sync with respect to the message publishing sequence. There should be a mechanism by which it can determine the last message that was successfully published to, and received by, the event broker in order to correctly resume publishing without injecting duplicate messages.

One approach is for the publishing application to maintain a database that correlates between the published message identifier and the acknowledgment it receives from the event broker. This approach is completely self-contained on the client application side, but can introduce processing latencies if not well managed.

Another approach is to make use of the Last Value Queue (LVQ) feature, where the LVQ stores the last message spooled on the queue. A publishing client application can then browse the LVQ to determine the last message spooled by the event broker. This allows the publisher to resume publishing without introducing duplicate messages.

Refer to Configuring Max Spool Usage Values for instructions on setting up LVQ.

Handling Redelivered Messages

Recommendation

  • When consuming from endpoints, a client application should appropriately handle redelivered messages.

When a client application restarts, unexpectedly or not, and rebinds to a queue, it may receive messages that it had already processed and acknowledged. This can happen because an acknowledgment can be lost en route to the event broker due to network issues. Redelivered messages are marked with the redelivered flag.

A client application that binds to a non-exclusive queue may also receive messages with the redelivered flag set, even though it is receiving those messages for the first time. This can happen when another client bound to the same non-exclusive queue disconnects without acknowledging the messages delivered to it; those messages are then redelivered to the other client applications bound to the queue.

The consuming application should include a message processing mechanism that handles both of these scenarios.
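One possible processing mechanism is to remember recently processed message IDs and skip, but still acknowledge, redelivered duplicates. A minimal in-memory sketch (a production application would bound and persist this state; all names are illustrative):

```c
#include <stdbool.h>

/* Minimal duplicate-detection sketch: remember the IDs of recently
 * processed messages and skip any redelivered message already seen. */
#define SEEN_CAPACITY 128

static long seen_ids[SEEN_CAPACITY];
static int  seen_count = 0;

static bool already_seen(long msg_id)
{
    for (int i = 0; i < seen_count; i++) {
        if (seen_ids[i] == msg_id) return true;
    }
    return false;
}

/* Returns true if the message should be processed, false if it is a
 * duplicate that only needs to be re-acknowledged. */
bool should_process(long msg_id, bool redelivered_flag)
{
    if (redelivered_flag && already_seen(msg_id)) {
        return false; /* duplicate: acknowledge, but don't reprocess */
    }
    if (seen_count < SEEN_CAPACITY) {
        seen_ids[seen_count++] = msg_id;
    }
    return true;
}
```

Note that a redelivered message that was never seen before (the non-exclusive queue case above) still returns true and is processed normally.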

Dealing with Unexpected Message Formats

Recommendation

  • Client applications should be able to handle unexpected message formats. In the case of consuming from endpoints, a client application should acknowledge received messages even if those messages are unexpectedly formatted.

Client applications should be able to contend with unexpected message formats. Don't make assumptions about a message's payload; for example, a payload may contain an empty attachment. Applications should be coded so that they avoid crashing, log the message contents, and, if Guaranteed Messaging is used, send an acknowledgment back to the event broker. If a client application crashes without sending acknowledgments, the same messages are redelivered when it reconnects, causing it to fail again.

Client Acknowledgment

Recommendation

  • Client Applications should acknowledge received messages as soon as they have completed processing those messages when client acknowledgment mode is used.

Once an application has completed processing a message, it should acknowledge the receipt of the message to the event broker. Only when the event broker receives an acknowledgment for a Guaranteed Message will the message be permanently removed from its message spool. If the client disconnects without sending acknowledgments for some received messages, then those messages will be redelivered. For the case of an exclusive queue, those messages will be delivered to the next connecting client. For the case of a non-exclusive queue, those messages will be redelivered to the other clients that are bound to the queue.

There are two kinds of acknowledgments:

  • API (also known as Transport) Acknowledgment. This is an internal acknowledgment between the API and the event broker and isn't exposed to the application. The Assured Delivery (AD) window size, acknowledgment timer, and the acknowledgment threshold settings control API Acknowledgment. A message that isn't transport acknowledged will be automatically redelivered by the event broker.
  • Application Acknowledgment. This acknowledgment mechanism is on top of the API Acknowledgment. Its primary purpose is to confirm that message processing has been completed, and that the corresponding messages can be permanently removed from the event broker. There are two application acknowledgment modes: auto-acknowledgment and client acknowledgment. When auto-acknowledgment mode is used, the API automatically generates application-level acknowledgments on behalf of the application. When client acknowledgment mode is used, the client application must explicitly send the acknowledgment for the message ID of each message received.

Refer to the Receiving Guaranteed Messages for a more detailed discussion on the different acknowledgment modes.

Do Not Block in Callbacks

Applications must not block in and should return as quickly as possible from message receive, event and timer callbacks so that the calling thread can process the next message, event or timer and perform internal API housekeeping. The one exception is for transacted sessions. Applications can call API-provided blocking functions such as commit, rollback and send from within the message receive callback of a transacted session.

Queues and Flows

Receiving One Message at a Time

Recommendation

  • Set max-delivered-unacked-msgs-per-flow to 1 and the AD window size to 1 to ensure that messages are delivered from the event broker to the client application one message at a time and in a time-consistent manner.

An API only sends transport acknowledgments when either:

  1. it has received the configured acknowledgment threshold's worth of the configured Assured Delivery (AD) window of messages (typically 60%), or
  2. a message has been received and the configured AD acknowledgment time has passed since the last acknowledgment was sent (typically 1 second),

whichever comes first.

The application acknowledgment piggybacks on the transport acknowledgment sent from the client application to the event broker, and the event broker only releases further messages once it receives that acknowledgment.

Therefore, while setting max-delivered-unacked-msgs-per-flow to 1 ensures that messages are delivered to the client application one at a time, if the AD window size is not 1, condition 1 is not immediately fulfilled. The API then only sends the acknowledgment after condition 2 is fulfilled, which introduces a reception delay variation that is inconsistent with the expected end-to-end delivery delay. To avoid this, the event broker informs the API of the endpoint's max-delivered-unacked-msgs-per-flow setting, and the API uses this information to automatically adjust its acknowledgment threshold, preventing the delay variation.
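The two acknowledgment conditions can be sketched as a predicate, assuming the typical defaults of a 60% threshold and a 1,000 ms acknowledgment timer (transport_ack_due is an illustrative name, not a C API function):

```c
#include <stdbool.h>

/* The two triggering conditions for a transport acknowledgment,
 * whichever is met first. */
bool transport_ack_due(int unacked_msgs, int ad_window_size,
                       int threshold_pct, long ms_since_last_ack,
                       long ack_timer_ms)
{
    /* Condition 1: the threshold's worth of the AD window was received. */
    if (unacked_msgs * 100 >= ad_window_size * threshold_pct) {
        return true;
    }
    /* Condition 2: a message is pending and the ack timer has expired. */
    if (unacked_msgs > 0 && ms_since_last_ack >= ack_timer_ms) {
        return true;
    }
    return false;
}
```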

Refer to Configuring Max Permitted Number of Delivered Unacked Messages for instructions on how to configure max-delivered-unacked-msgs-per-flow on queues.

Setting Temporary Endpoint Spool Size

Recommendation

  • Exercise caution if a client application frequently creates temporary endpoints to ensure that the sum of all temporary endpoint spool sizes does not exceed the total spool size provisioned for the Message VPN.

By default, the message spool quotas of a Message VPN and its endpoints are based on an over-subscription model. For instance, it's possible to set the message spool quota of multiple endpoints to the same quota as that of the entire Message VPN. Temporary endpoints created by a client application default to a quota of 4000 MB on an appliance and 1500 MB on a software event broker. When temporary endpoints are used extensively by a client application, the over-subscription model can quickly get out of control as temporary endpoints are created on demand. Therefore, it's recommended that a client application override an endpoint's default message spool size with a value that is in line with expected usage, especially if temporary endpoints are heavily used.

AD Window Size and max-delivered-unacked-msgs-per-flow

Recommendation

  • The AD window size configured on the API should not be greater than the max-delivered-unacked-msgs-per-flow value that is set for a queue on the event broker.

max-delivered-unacked-msgs-per-flow controls how many messages the event broker can deliver to the client application without receiving an acknowledgment back. The Assured Delivery (AD) window size controls how many messages can be in transit between the event broker and the client application. If the AD window size is greater than max-delivered-unacked-msgs-per-flow, the API may not be able to acknowledge received messages in a timely manner; effectively, the AD window size is bounded by the value set for max-delivered-unacked-msgs-per-flow. For instance, if the AD window size is set to 10 and max-delivered-unacked-msgs-per-flow is set to 5, the event broker is effectively limited to sending 5 messages at a time, regardless of the client application's AD window size setting of 10.
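The effective in-flight limit is simply the smaller of the two settings; a one-line sketch (effective_window is an illustrative name):

```c
/* The broker never has more unacknowledged messages in flight than the
 * smaller of the AD window size and max-delivered-unacked-msgs-per-flow. */
int effective_window(int ad_window_size, int max_delivered_unacked)
{
    return (ad_window_size < max_delivered_unacked)
               ? ad_window_size
               : max_delivered_unacked;
}
```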

Refer to Configuring Max Permitted Number of Delivered Unacked Messages for instructions on how to set up max-delivered-unacked-msgs-per-flow on queues.

Number of Flows and AD Window Size

Recommendation

  • Size the expected number of flows per session, and its associated AD window size, to within the available memory limit of the client application host, and within the default work units allocated per client egress queue on the event broker.

The API buffers received Guaranteed messages and, in general, also owns the messages and is responsible for freeing them. The amount of buffer used by a client is primarily determined by multiplying the Assured Delivery (AD) window size by the number of Flows used per Session. For example, if a receiving client application uses Flows with an AD window size of 255 to bind to 10 different queues on an event broker, then the maximum buffer usage, given an average message size of 1 MB, is 2,550 MB. If there are 10 such clients running on the same host, 25.5 GB of memory is required.

Similarly, the event broker dedicates a per-client egress queue to buffer the messages to be transmitted to the client application. By default, this queue is 20,000 work units, or the equivalent of about 40 MB of buffer, as each work unit is 2,048 bytes. For a per-client egress queue to support 2,550 MB worth of buffering, the number of work units for this particular client would need to be increased to 1,305,600. Hence, depending on application usage, it's recommended that you dimension the AD window size in relation to the number of expected Flows per Session such that the total stays within the default 20,000 work units of buffer per client connection.
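The sizing arithmetic (255 messages per Flow, 10 Flows, 1 MB average message size, one work unit = 2,048 bytes) can be checked with a pair of helper functions (names are illustrative):

```c
/* One work unit on the event broker represents 2,048 bytes. */
#define WORK_UNIT_BYTES 2048LL

/* Maximum consumer-side buffer for a session, in bytes:
 * flows * AD window size * average message size. */
long long max_buffer_bytes(long long flows, long long ad_window_size,
                           long long avg_msg_bytes)
{
    return flows * ad_window_size * avg_msg_bytes;
}

/* Work units needed on the per-client egress queue to hold that much
 * data, rounded up. */
long long work_units_needed(long long bytes)
{
    return (bytes + WORK_UNIT_BYTES - 1) / WORK_UNIT_BYTES;
}
```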

Error Handling and Logging

When Sessions are terminated unexpectedly, error information can be collected and sent to the application. Error information is handled separately for each individual thread.

Logging and Log Level

Recommendation

  • Client Application Debug level logging should not be enabled in production environments.

Client Application Event logging can have a significant impact on performance, and so, in a production environment, it's not recommended to enable debug level logging.

Handling Session Events / Errors

Recommendation

  • Client Applications should register an implementation of the Session Event handler interface / delegate / callback when creating a Session to receive Session events.

Client applications should register an implementation of the Session Event Handler interface / delegate / callback when creating a Session to receive Session events. A complete list of Session events is provided in the table below. Session events should then be handled appropriately based on client application usage.

Session Events

C (solclient.h) Description

SOLCLIENT_SESSION_EVENT_ACKNOWLEDGEMENT

The oldest transmitted Persistent / Non-Persistent message has been acknowledged.

SOLCLIENT_SESSION_EVENT_ASSURED_DELIVERY_DOWN

Guaranteed Delivery Publishing is not available.

SOLCLIENT_SESSION_EVENT_CAN_SEND

The send is no longer blocked.

SOLCLIENT_SESSION_EVENT_CONNECT_FAILED_ERROR

The Session attempted to connect but was unsuccessful.

SOLCLIENT_SESSION_EVENT_DOWN_ERROR

The Session was established and then went down.

SOLCLIENT_SESSION_EVENT_MODIFYPROP_FAIL

The session property modification failed.

SOLCLIENT_SESSION_EVENT_MODIFYPROP_OK

The session property modification completed.

SOLCLIENT_SESSION_EVENT_PROVISION_ERROR

The endpoint create/delete command failed.

SOLCLIENT_SESSION_EVENT_PROVISION_OK

The endpoint create/delete command completed.

SOLCLIENT_SESSION_EVENT_RECONNECTED_NOTICE

The automatic reconnect of the Session was successful, and the Session was established again.

SOLCLIENT_SESSION_EVENT_RECONNECTING_NOTICE

The Session has gone down, and an automatic reconnect attempt is in progress.

SOLCLIENT_SESSION_EVENT_REJECTED_MSG_ERROR

The appliance rejected a published message.

SOLCLIENT_SESSION_EVENT_REPUBLISH_UNACKED_MESSAGES

After successfully reconnecting a disconnected session, the API received an unknown publisher flow name response when reconnecting the Guaranteed Delivery publisher flow.

SOLCLIENT_SESSION_EVENT_RX_MSG_TOO_BIG_ERROR

The API discarded a received message that exceeded the Session buffer size.

SOLCLIENT_SESSION_EVENT_SUBSCRIPTION_ERROR

The appliance rejected a subscription (add or remove).

SOLCLIENT_SESSION_EVENT_SUBSCRIPTION_OK

The subscribe or unsubscribe operation has succeeded.

SOLCLIENT_SESSION_EVENT_TE_UNSUBSCRIBE_ERROR

The Topic Endpoint unsubscribe command failed.

SOLCLIENT_SESSION_EVENT_TE_UNSUBSCRIBE_OK

The Topic Endpoint unsubscribe completed.

SOLCLIENT_SESSION_EVENT_UP_NOTICE

The Session is established.

SOLCLIENT_SESSION_EVENT_VIRTUAL_ROUTER_NAME_CHANGED

The appliance’s Virtual Router Name changed during a reconnect operation.

Handling Flow Events / Errors

Recommendation

  • Client applications should register an implementation of the Flow Event handler interface / delegate / callback when creating a Flow to receive Flow events.

Client applications should register an implementation of the Flow Event Handler interface / delegate / callback when creating a Flow to receive Flow events. Flow error / events should be handled appropriately based on client application usage.

Flow Events

C (solclient.h)

Description

SOLCLIENT_FLOW_EVENT_UP_NOTICE

The Flow is established.

SOLCLIENT_FLOW_EVENT_DOWN_ERROR

The Flow was established and then disconnected by the appliance, likely due to operator intervention.

SOLCLIENT_FLOW_EVENT_BIND_FAILED_ERROR

The Flow attempted to connect but was unsuccessful.

SOLCLIENT_FLOW_EVENT_SESSION_DOWN

The Session for the Flow was disconnected.

SOLCLIENT_FLOW_EVENT_ACTIVE

The Flow has become active.

SOLCLIENT_FLOW_EVENT_INACTIVE

The Flow has become inactive.

SOLCLIENT_FLOW_EVENT_RECONNECTING

When Flow Reconnect is enabled, instead of a DOWN_ERROR event, the API generates this event and attempts to rebind the Flow.

If the Flow rebind fails, the API monitors the bind failure and terminates the reconnecting attempts with a DOWN_ERROR unless the failure reason is one of the following:

  • Queue Shutdown
  • Topic Endpoint Shutdown
  • Service Unavailable

For more information about Flow Reconnect, refer to Flow Reconnect.

SOLCLIENT_FLOW_EVENT_RECONNECTED

The Flow has been successfully reconnected.

Error Handling Functions

To complete error handling, include calls to the functions listed below in your event handling code:

Error Handling Functions

Function Description

solClient_getLastErrorInfo()

Returns a pointer to a solClient_errorInfo_t structure. This data structure contains the last captured error information for the calling thread.

solClient_resetLastErrorInfo()

Clears the last error information. Error information is recorded on a per-thread basis.
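The per-thread recording of error information can be modeled with C11 thread-local storage. The sketch below mirrors the get/reset pattern with illustrative mock_* names and a simplified structure, not the real solClient_errorInfo_t layout:

```c
#include <string.h>

/* Simplified stand-in for the per-thread error record. */
typedef struct {
    int  subCode;
    char errorStr[256];
} mock_errorInfo_t;

/* One record per thread, mirroring how the API captures the last error
 * for the calling thread. */
static _Thread_local mock_errorInfo_t lastError;

void mock_setLastError(int subCode, const char *msg)
{
    lastError.subCode = subCode;
    strncpy(lastError.errorStr, msg, sizeof(lastError.errorStr) - 1);
    lastError.errorStr[sizeof(lastError.errorStr) - 1] = '\0';
}

/* Analogous to solClient_getLastErrorInfo(): returns a pointer to the
 * calling thread's record. */
const mock_errorInfo_t *mock_getLastErrorInfo(void)
{
    return &lastError;
}

/* Analogous to solClient_resetLastErrorInfo(): clears the record. */
void mock_resetLastErrorInfo(void)
{
    lastError.subCode = 0;
    lastError.errorStr[0] = '\0';
}
```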

Subcodes

Subcodes provide more detailed error information. The basic subcodes that can result from any API call are listed in the table below.

Some API calls can also generate more specific error subcodes. For more information on these subcodes, refer to PubSub+ Messaging API C reference.

The last generated subcode is stored on a per-thread basis and can be retrieved by an application thread. An application can call solClient_subCodeToString() to convert a subcode to a string.

Generic Subcodes

Subcode Description

SOLCLIENT_SUBCODE_INIT_NOT_CALLED

An API call failed because solClient_initialize(...) was not called first.

This subcode cannot occur for functions that are allowed to be called before solClient_initialize.

SOLCLIENT_SUBCODE_PARAM_OUT_OF_RANGE

An API call was made with an out-of-range parameter.

SOLCLIENT_SUBCODE_PARAM_NULL_PTR

An API call was made with a null or invalid pointer parameter.

This subcode only applies to functions that accept pointer parameters.

SOLCLIENT_SUBCODE_PARAM_CONFLICT

An API call was made with an invalid parameter combination.

This subcode only applies to functions that have interdependent parameters.

SOLCLIENT_SUBCODE_INTERNAL_ERROR

An API call had an internal error (not an application fault).

SOLCLIENT_SUBCODE_OS_ERROR

An API call failed because of a failed operating system call.

SOLCLIENT_SUBCODE_OUT_OF_MEMORY

An API call failed because memory could not be allocated.

Event Broker Configuration that Influences Client Application Behavior

Max Redelivery

Recommendation

  • By default, messages are redelivered indefinitely from endpoints to clients. Set the maximum redelivery option on endpoints at the event broker, when appropriate, to limit the maximum number of redeliveries per message.

The maximum redelivery option can be set on an endpoint to control the number of deliveries per message on that endpoint. After the maximum number of redeliveries by the endpoint is exceeded, messages are either discarded or moved to the Dead Message Queue (DMQ), if it's configured and the messages are set to DMQ eligible.

There are benefits for client applications when the number of redeliveries on an endpoint isn't infinite (by default, the redelivery mode is set to redeliver forever). For instance, if a client application is unable to handle an unexpected poison message, the message is eventually discarded or moved to the DMQ, where further examination can take place.

Reject Message to Sender on Discard

Recommendation

  • reject-msg-to-sender-on-discard on an endpoint should be enabled unless there are good reasons not to.

When publishing guaranteed messages to an event broker, messages can be discarded for reasons such as message-spool full, maximum message size exceeded, endpoint shutdown, and so on. If the message discard option on the endpoint (that is, reject-msg-to-sender-on-discard) is enabled, the client application can detect that discarding is happening and take corrective action, such as pausing publication. There is no explicit support in the API for pausing publication; this must be implemented in the client application logic.

One reason to consider disabling reject-msg-to-sender-on-discard is when multiple queues subscribe to the same topic that the Guaranteed messages are published to, and the intent is for the other queues to continue receiving messages even if one of the queues becomes unable to accept them.