.NET API Best Practices

General Best Practices

Tuning Guidelines for Guaranteed Messaging

Reductions in the rate at which clients receive messages can occur when a high volume of Guaranteed messages (particularly large messages) is received over many flows. In this situation, the number of flows used and the Guaranteed window size used for each flow affects the buffer usage of the per‑client priority queues that the event broker uses for Guaranteed messages. These queues, called the G-1 queues, hold Guaranteed messages that are waiting for delivery out of the event broker, or that have been sent but are awaiting acknowledgment from the clients.

Each G-1 queue is allocated a maximum depth buffer. This maximum depth is measured in work units, whereby a work unit represents 2,048 bytes of a message. (By default, each G-1 queue is given a maximum depth of 20,000 work units.)

To address slow Guaranteed message delivery rates caused by high demands on the buffer allocated by G-1 queues, you should reduce the Guaranteed message window size used for each flow and, when possible, reduce the number of flows used.

If it's not possible to reduce the Guaranteed message window size, or the number of flows, you can also effectively increase the G-1 queue size by adjusting the min-msg-burst size used by the event broker.

Reapply Subscriptions

If enabled, the API maintains a local cache of subscriptions and reapplies them when the subscriber connection is reestablished. Reapply Subscriptions only reapplies direct topic subscriptions upon session reconnect; it does not reapply topic subscriptions on durable and non-durable endpoints.

Number of Flows and Guaranteed Message Window Size

The number of buffers used by a client for receiving Guaranteed messages is primarily determined by the number of flows used per session multiplied by the Guaranteed message window size of each flow. To limit a client's maximum buffer use, you can reduce the number of flows used and/or reduce the Guaranteed message window size of each flow. (The Guaranteed message window size for each flow is set through the flow properties. For more information, see Important Flow (Message Consumer) Properties.)

Consider, for example, a client using flows with a window size of 255 to bind to 10 queues, and the Guaranteed messages from those queues have an average size of 20kB. In this scenario, the flow configuration for the client is not appropriately sized, as the client’s maximum buffer usage (approximately 24,902 work units) exceeds that offered by the event broker (20,000 work units). However, if the flows are reconfigured with a window size of 25, then the client’s maximum buffer usage will fall within an acceptable range (approximately 2,441 work units).

Work units are fixed size buffers on the event broker that are used to process messages according to set queue depths. A work unit represents 2,048 bytes of a message.
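The sizing arithmetic in the example above can be sketched as follows. This is an illustrative calculation only; the helper names are made up, and the 20 kB average size is taken as 20,000 bytes, matching the figures in the example:

```csharp
using System;

class WorkUnitSizing
{
    const double BytesPerWorkUnit = 2048.0;     // one work unit = 2,048 bytes
    const int DefaultG1QueueDepth = 20000;      // default G-1 queue depth in work units

    // Approximate maximum buffer usage, in work units, for a client binding
    // flowCount queues with the given window size and average message size.
    static double MaxWorkUnits(int windowSize, int flowCount, int avgMsgBytes) =>
        windowSize * flowCount * (avgMsgBytes / BytesPerWorkUnit);

    static void Main()
    {
        // 10 flows, window size 255, 20 kB average messages:
        Console.WriteLine(MaxWorkUnits(255, 10, 20000)); // ~24,902: exceeds 20,000
        // Same flows with the window size reduced to 25:
        Console.WriteLine(MaxWorkUnits(25, 10, 20000));  // ~2,441: within range
    }
}
```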

Minimum Message Burst Size

If you can't reduce the number of flows, or the Guaranteed message window size, you can adjust the size of the G-1 queue. The simplest way to increase the queue's size is to adjust the min-msg-burst size. The min-msg-burst size specifies the number of messages that are always allowed entry into the queue. The min‑msg‑burst size is set on a per-client basis through client profiles.

Under normal operating conditions it's not necessary to change the default min‑msg-burst value for the G-1 queue. However, in situations where a client is consuming messages from multiple endpoints, it's important that the min‑msg‑burst size for the G-1 queue is at least equal to the sum of all of the Guaranteed message window sizes used for the flows that the client consumes messages from. For example, if the client connects to 1,000 endpoints, and the flows have a window size of 50, then the min-msg-burst size should be set to 50,000 or more.

Tuning the min-msg-burst size in this manner ensures that the NAB holds enough messages to fill the client’s combined Guaranteed message window size when it comes online. If there aren't enough messages held, messages that aren't delivered to the client can be discarded, then another delivery attempt is required. This process of discarding, then resending messages results in a slow recovery for a slow subscriber (that is, a client that doesn't consume messages at a quick rate).

For information on how to set the min-msg-burst size, see Configuring Egress Per-Client Priority Queues.

Basic Rules

When programming using the .NET API, it's useful to remember the following basic rules:

  • durable and non-temporary objects (such as durable endpoints) are created at the Factory level
  • non‑durable and temporary objects are created at the session level
  • flows are created at the session level
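A short sketch of these rules, assuming the SolaceSystems.Solclient.Messaging namespace and an already-connected ISession (context and session setup, as well as the queue and topic names, are illustrative assumptions):

```csharp
using SolaceSystems.Solclient.Messaging;

static class CreationRules
{
    static void SketchObjectCreation(ISession session)
    {
        // Durable, non-temporary objects are created at the factory level:
        IQueue durableQueue = ContextFactory.Instance.CreateQueue("sample/durable/queue");
        ITopic topic = ContextFactory.Instance.CreateTopic("sample/topic");

        // Non-durable and temporary objects are created at the session level:
        IQueue tempQueue = session.CreateTemporaryQueue();

        // Flows are also created at the session level:
        IFlow flow = session.CreateFlow(new FlowProperties(), durableQueue, null,
            (sender, msgArgs) => { /* process received message */ },
            (sender, flowArgs) => { /* handle flow event */ });
    }
}
```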

Threading

Selecting a Threading Model

The .NET API uses contexts that create and manage their own worker threads for organizing communications with event brokers. Each client application that uses the .NET API must contain a minimum of one context, and each context can contain one or more sessions.

Application developers using the .NET API can choose to create one or more sessions within a context. The decision on how to configure your sessions in contexts is an important consideration in application design, as it directly impacts factors such as CPU usage on application host machines, message latency, and throughput.

Context and Session Threading Model Considerations

Recommendation

  • Use the 'one session, one context' threading model whenever possible. The 'multiple session, one context' and 'multiple sessions, multiple contexts' models can potentially increase message processing throughput, but at the expense of additional processing.

There are three different threading models to consider when designing an application:

  1. One session, one context. A single session is used with a single context.
  2. Multiple sessions, one context. Multiple sessions are serviced by one context.
  3. Multiple sessions, multiple contexts. The application provides or uses a number of threads, each containing a single context, and each context contains one or more sessions.

For the majority of cases, the 'one session, one context' model is sufficient for publisher and consumer application design.

An application designer may want to move to 'multiple sessions, one context' if there is a need to prioritize messages, where higher-value messages may be sent or received across different sessions, for example, through different TCP connections. This approach can potentially increase throughput as well. It may then be necessary to forward received messages to downstream application-internal queues so that messages are processed by additional application message-processing threads. All received messages can be processed by the same message and event callback functions, or by session-specific ones created as additional callbacks.

With 'multiple sessions, multiple contexts', a designer can reduce the context thread processing burden of the 'multiple sessions, one context' model, where all sessions must wait in the select loop before being processed. In this model, each session can be separated into its own context thread, enhancing the processing performance that OS multi-threading provides. However, due to the increased number of threads, this approach requires expensive thread context switching, and therefore places more burden on the CPU and is more resource intensive.

Memory Management

The following sections discuss how to manage memory in the .NET API and provide some guidelines for optimizing the performance of the API.

Modifying the Global Pool Buffer Sizes

The .NET API allocates specific sized buffers from its own pools and maintains them internally. Buffers are allocated from heap storage and used for saving messages in the application space until the message buffers are released by the application or later garbage collection.

You can optionally modify the default global data buffer sizes for the five pools that are used when .NET API is initialized.

When you call the ContextFactory.Init(...) method to initialize the .NET API, you can modify the following ContextFactoryProperties:

  • MaxPoolMemory

    Specifies the maximum amount of memory the .NET API can retain in its data and message pools.

  • DBQuantaSize_<0-4>

    Specifies the size of the data buffers for each of the five pools. Data blocks that exceed these sizes are released back to the heap rather than kept in an API pool.

For more information, see the C#/.NET API Reference.
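A minimal initialization sketch follows. The MaxPoolMemory and DBQuantaSize_<0-4> settings are shown as commented assumptions because their exact member names and units vary by API version; confirm them in the C#/.NET API Reference before use:

```csharp
using SolaceSystems.Solclient.Messaging;

static class ApiInit
{
    static void InitWithPoolSettings()
    {
        ContextFactoryProperties cfp = new ContextFactoryProperties()
        {
            SolClientLogLevel = SolLogLevel.Warning
        };

        // Hypothetical pool adjustments (names/units are assumptions; see the
        // API reference for the exact ContextFactoryProperties members):
        // cfp.MaxPoolMemory = 256;         // cap pooled memory
        // cfp.DBQuantaSize_0 = 10 * 1024;  // resize the first data-buffer pool

        ContextFactory.Instance.Init(cfp);
    }
}
```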

Configuring Message Buffer Sizes

When creating a session, an application can configure the following memory and resource allocation-related session properties:

  • SessionProperties.SdkBufferSize

    The session buffer size used for transmitting messages over the TCP session. This property specifies the maximum amount of message data to buffer, measured in bytes. For maximum performance when sending small messages, set the session buffer size to several times the typical message size.

    The .NET API always accepts at least one message to transmit. So even if the size of a single message exceeds the set buffer size, it is accepted and transmitted as long as the currently buffered data is zero. However, no further messages are accepted until the amount of buffered data falls below the set buffer size.

  • SessionProperties.SocketReceiveBufferSizeInBytes

    The receive buffer size for the subscriber data socket. A default value of 150,000 is used. If this property is set to 0, then the receive buffer size uses the default operating system size.

    On Windows platforms the receive socket buffer size must be much larger than the send socket buffer sizes to prevent data loss when sending and receiving messages. For example, the default send socket and internal buffer sizes are set at 90,000, and the default receive socket buffer size is set at 150,000. If you change the default sizes, it is recommended that you maintain a similar sizing ratio.

  • SessionProperties.SocketSendBufferSizeInBytes

    Allows the send buffer size for the publisher data socket to be set by the application. A default value of 90,000 is used. If this property is set to 0, then the send buffer size uses the operating system default.

    You can also modify the global data buffer sizes for the five pools of buffers used by the .NET API (DBQuantaSize_<0-4>). Modifying the global data buffer sizes can be done when you initialize the API. Refer to Modifying the Global Pool Buffer Sizes.
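The buffer-related session properties above can be set when the session is created. The sketch below uses the documented default values and a hypothetical broker address; it preserves the recommended send:receive sizing ratio:

```csharp
using SolaceSystems.Solclient.Messaging;

static class BufferSizing
{
    static SessionProperties BuildProperties()
    {
        return new SessionProperties()
        {
            Host = "tcp://broker.example.com:55555", // hypothetical address
            VPNName = "default",
            UserName = "client",

            // Several times the typical message size for small-message senders:
            SdkBufferSize = 4 * 90000,

            // Documented defaults; keep a similar send:receive ratio if tuned:
            SocketSendBufferSizeInBytes = 90000,
            SocketReceiveBufferSizeInBytes = 150000
        };
    }
}
```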

Managing Memory When Publishing Messages

To ensure a high level of operational performance when publishing messages, you should avoid unnecessary memory moving and copying. Therefore it is recommended to reuse the message instances whenever possible.

Managing Memory When Receiving Messages

Received message buffers are owned by the application. To ensure that allocated memory is freed, it is recommended that you explicitly call the Dispose() method for each message received.

Session Establishment

Blocking Connect

Recommendation

  • Blocking connect calls serialize each and every session connect, which increases session establishment delay. If serialization is not necessary, disable blocking connect to improve overall session connect speed.

Enabling blocking connect serializes each and every individual session connect when there are many sessions configured within a context, which increases the total connection time. If serialization is not important, consider disabling blocking connect.

The blocking connect property is SessionProperties.ConnectBlocking.

Host Lists

Recommendation

As a best practice, use host lists (see note below). Host lists are applicable when you use replication for failover support and the software event broker's hostlist high availability (HA) support.

Host Lists should not be used in active/active replication deployments.

For replication failover support, client applications must be configured with a host list of two addresses, one for the Guaranteed messaging-enabled virtual router at each site. If the connection to one host fails, the client then tries to connect to the to-be-active replication host before retrying the same host. For that reason, it's recommended to set the reconnect retries per host to 1.

Host lists must not be used in an active/active replication deployment where client applications are consuming messages from endpoints on the replication active message VPN on both sites.

Similarly, for software event broker HA failover support, if the switchover-mechanism is set to hostlist instead of IP address-takeover, the client application must provide a host list of two addresses.

For more details on hostlist configuration, see HA Group Configuration.

Client API Keep-alive

Recommendation

  • The client keep-alive interval should be set to the same order of magnitude as the TCP keep-alive setting on the client profile.

There are two types of keep-alive mechanisms between the client application and the event broker.

There is the TCP keep-alive that operates at the TCP level and is sent by the event broker to the client application. This is the TCP keep-alive mechanism described in RFC 1122. The client application's TCP stack responds to the event broker's TCP keep-alive probe. By default, the event broker sends out a keep-alive probe after it detects that a connection has been idle for 3 seconds. It then sends 5 probes at an interval of 1 probe per second. Hence, the event broker flags a client as having failed TCP keep-alive if it receives no response after 8 seconds.

There is also the Client API keep-alive that occurs concurrently to the TCP keep-alive. This is the API’s built-in keep-alive mechanism, and operates on top of TCP at the API level. This is sent from the API to the event broker. By default, a client keep-alive is sent at the interval of once every 3 seconds, and up to 3 keep-alive responses can be missed before the API declares that the event broker is unreachable; that is, after 9 seconds.

These keep-alive mechanisms exist to advise the application or the event broker that its peer has died when the peer could not notify the other party itself. The keep-alive mechanism also prevents disconnection due to network inactivity. However, if either mechanism is set much more aggressively than the other, that is, with a shorter detection time, the connection can be prematurely disconnected. For example, if the Client API keep-alive is set to a 500 ms interval with 3 keep-alive responses while the TCP keep-alive remains at the default, then the client API keep-alive will trigger aggressive disconnection.
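A sketch of the Client API keep-alive settings, using the documented defaults (a 3,000 ms interval with up to 3 missed responses, giving 9 s detection). The property names follow the .NET API's session properties; confirm them in the C#/.NET API Reference:

```csharp
using SolaceSystems.Solclient.Messaging;

static class KeepAliveTuning
{
    static SessionProperties BuildProperties()
    {
        return new SessionProperties()
        {
            KeepAliveIntervalInMsecs = 3000, // one keep-alive every 3 s (default)
            KeepAliveIntervalsLimit = 3      // broker declared unreachable after 3 misses
        };
    }
}
```

Keep these values on the same order of magnitude as the broker-side TCP keep-alive to avoid one mechanism disconnecting the session prematurely.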

High Availability Failover and Reconnect Retries

Recommendation

  • The reconnect duration should be set to last for at least 300 seconds when designing applications for high availability (HA) support.

When using a high availability (HA) appliance setup, a failover from one appliance to its mate will typically occur within 30 seconds. However, applications should attempt to reconnect for at least 5 minutes. Below is an example of setting the reconnect duration to 5 minutes using the following session property values:

  • connect retries: 1
  • reconnect retries: 20
  • reconnect retry wait: 3,000 ms
  • connect retries per host: 5
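With these values, the worst-case reconnect window is roughly 20 reconnect retries × 5 attempts per host × 3 s wait = 300 s. A sketch of the corresponding session properties (names follow the .NET API's session properties; verify them in the C#/.NET API Reference):

```csharp
using SolaceSystems.Solclient.Messaging;

static class HaReconnectTuning
{
    static SessionProperties BuildProperties()
    {
        return new SessionProperties()
        {
            ConnectRetries = 1,
            ReconnectRetries = 20,
            ReconnectRetriesWaitInMsecs = 3000,
            ConnectRetriesPerHost = 5
            // Approximate reconnect window: 20 x 5 x 3 s = 300 s
        };
    }
}
```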

Refer to Configuring Connection Time-Outs and Retries for instructions on setting the connect retries, reconnect retries, reconnect retry wait, and connect retries per host parameters.

Replication Failover and Reconnect Retries

Recommendation

  • The number of reconnect retries should be set to -1 so that the API will retry indefinitely during a replication failover.

In general, the duration of a replication failover is non-deterministic, as it may require operational intervention for the switch, which can take tens of minutes or even hours. Hence, it's recommended to set the number of reconnect retries to -1 so that a replication-aware client application's API will retry reconnecting indefinitely.

Refer to Reconnect Retries for instructions on how to set the reconnect retries parameter.

Replication Failover and Session Re-Establishment

Recommendation

  • API versions higher than 7.1.2 are replication aware, and automatically handle session re-establishment when a replication failover occurs. Client applications running lower API versions must re-establish a session upon reconnect.

Prior to 7.1.2, sessions need to be re-established after a replication failover when a client is publishing Guaranteed messages in a session that has been disconnected. While the reconnect succeeds, the flow needs to be re-established because the newly connected event broker at the replication site doesn't have any flow state information, unlike the HA failover case where this information is synchronized. The recommendation is to catch the unknown flow name event and establish a new session to get the flow recreated. From version 7.1.2 onwards, the API is replication aware and transparently handles session re-establishment.

File Descriptor Limitation

Recommendation

  • The number of Solace sessions created by an application shouldn't exceed the number of file descriptors supported per process by the underlying operating system. For Unix variants, this number is 1024, and for Windows it's 63.

File descriptor limits on Unix platforms restrict the number of files that can be managed per process; the default is 1024. Hence, an application shouldn't create more than 1023 sessions per context. A session represents a TCP/IP connection, and each such connection occupies one file descriptor. A file descriptor is an element - usually a number - that identifies, in this case, a stream of data from the socket. Opening a file to read information from disk also occupies one file descriptor.

File descriptors are called 'file' descriptors because initially they identified only files; more recently, they can identify files on disk, sockets, pipes, and so on.

Similarly, on Windows platforms, a single context can only manage at most 63 sessions.

Subscription Management

The following best practices can be used for managing subscriptions:

  • If you are adding or removing a large number of subscriptions, set the Wait for Confirm flag (SubscribeFlag.WaitForConfirm) on the final subscription to ensure that all subscriptions have been processed by the event broker. To increase performance, it is recommended that the application not set Wait for Confirm on any of the other subscriptions.
  • In the event of a session disconnect, you can have the messaging API reapply subscriptions that were initially added by the application when the session is reconnected. To reapply subscriptions on reconnect, enable the Reapply Subscriptions session property (SessionProperties.ReapplySubscriptions). Using this setting is recommended.
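Both practices can be sketched as follows. The Subscribe(ITopic, bool waitForConfirm) overload shown is an assumption about the API's shape; check your API version's reference for the exact signature:

```csharp
using SolaceSystems.Solclient.Messaging;

static class SubscriptionManagement
{
    // Enable ReapplySubscriptions at session creation so direct topic
    // subscriptions are restored automatically on reconnect.
    static SessionProperties BuildProperties() =>
        new SessionProperties() { ReapplySubscriptions = true };

    static void AddSubscriptions(ISession session, ITopic[] topics)
    {
        // Wait for confirmation only on the final subscription; earlier ones
        // are pipelined for performance.
        for (int i = 0; i < topics.Length; i++)
        {
            bool isLast = (i == topics.Length - 1);
            session.Subscribe(topics[i], isLast);
        }
    }
}
```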

Sending Messages

Blocking Send

Recommendation

  • Blocking send calls automatically limit the publishing rate to the rate at which the event broker can accept messages. Use non-blocking send to increase application performance.

In send-blocking mode, the calling thread of each send function call is blocked until the API accepts the message; hence, the sending rate is automatically limited to the rate at which the event broker can accept messages. The send call remains blocked until either:

  • it's accepted by the API, or
  • the associated blocking write timeout expires.

In non-blocking mode, by contrast, a send call that cannot be accepted by the API immediately returns a would_block error code to the client application. In the meantime, the client application can continue to process other actions. The API subsequently raises a can_send event, which signifies that the client application can retry the send() function call.

The send blocking parameter is SessionProperties.SendBlocking.
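A non-blocking send sketch. The ReturnCode value name is an assumption based on the API's general naming; verify it, and the CanSend session event, in the C#/.NET API Reference:

```csharp
using SolaceSystems.Solclient.Messaging;

static class NonBlockingSend
{
    static void TrySend(ISession session, IMessage message)
    {
        // With SendBlocking disabled, Send() returns immediately when the API
        // cannot accept the message. (Enum value name is an assumption.)
        ReturnCode rc = session.Send(message);
        if (rc == ReturnCode.SOLCLIENT_WOULD_BLOCK)
        {
            // Hold the message locally and retry after the CanSend
            // session event fires.
        }
    }
}
```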

Batch Send

Recommendation

  • Use the batch sending facility to optimize send performance. This is particularly useful for performance benchmarking a client application.

A group of up to 50 messages can be sent through a single API call, allowing messages to be sent as a batch. The messages can be either Direct or Guaranteed. When batch-sending messages through the send-multiple API, the same delivery mode, that is, Direct or Persistent, should be set for all messages in the batch. Messages in a batch can be sent to different destinations.

In addition to using the batch-sending API, messages should be pre-allocated and reused for batch-sending whenever possible. Specifically, don't reallocate new messages for each call to the batch-sending API.

The batch-sending API call is ISession.Send().
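A batch-send sketch that pre-allocates and reuses the messages, as recommended above. The exact Send() overload shown (array, offset, count, out sent) is an assumption; see ISession.Send() in the C#/.NET API Reference for the actual signature:

```csharp
using SolaceSystems.Solclient.Messaging;

static class BatchSender
{
    static void BatchSend(ISession session, ITopic destination)
    {
        // Pre-allocate the batch once; reuse it across calls rather than
        // reallocating messages for every send.
        IMessage[] batch = new IMessage[50];
        for (int i = 0; i < batch.Length; i++)
        {
            batch[i] = ContextFactory.Instance.CreateMessage();
            batch[i].DeliveryMode = MessageDeliveryMode.Persistent; // same mode for all
            batch[i].Destination = destination; // destinations may differ per message
        }

        // Send up to 50 messages in one API call.
        int sent;
        session.Send(batch, 0, batch.Length, out sent);
    }
}
```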

Time-to-Live Messages

Recommendation

  • Set the TTL attribute on published Guaranteed messages to reduce the risk of unconsumed messages unintentionally piling up in a queue, if the use case allows old or stale messages to be discarded.

Publishing applications should consider utilizing the TTL feature available for Guaranteed messaging. Publishers can set the TTL attribute on each message prior to sending to the event broker. Once the message has been spooled, the message will be automatically discarded (or moved to the queue’s configured dead message queue, if available) should the message not be consumed within the specified TTL. This common practice reduces the risk of unconsumed messages unintentionally piling up.

Alternatively, queues have a max-ttl setting, and this can be used instead of publishers setting the TTL on each message sent. See Configuring Max Message TTLs for instructions on setting max-ttl for a queue.
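A per-message TTL sketch. The TimeToLive and DMQEligible property names follow the .NET API's message interface; the five-minute value is purely illustrative:

```csharp
using SolaceSystems.Solclient.Messaging;

static class ExpiringMessages
{
    static IMessage CreateExpiringMessage()
    {
        // Discard (or move to the dead message queue, if configured) any copy
        // not consumed within five minutes of being spooled.
        IMessage msg = ContextFactory.Instance.CreateMessage();
        msg.DeliveryMode = MessageDeliveryMode.Persistent;
        msg.TimeToLive = 5 * 60 * 1000; // TTL in milliseconds (assumed unit)
        msg.DMQEligible = true;         // allow the move to a dead message queue
        return msg;
    }
}
```

Remember that the TTL only takes effect on queues configured to respect TTLs.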

Configuring respect-ttl

Queues should be configured to respect TTLs because, by default, this feature is disabled on all queues. Refer to Enforcing Whether to Respect TTLs for instructions on how to set up respect-ttl.

Receiving Messages

Handling Duplicate Message Publication

Recommendation

  • Publishing duplicate messages can be avoided if the client application uses the last value queue (LVQ) to determine the last message successfully spooled by the event broker upon restarting.

When a client application is unexpectedly restarted, it's possible for it to become out-of-sync with respect to the message publishing sequence. There should be a mechanism by which it can determine the last message that was successfully published to, and received by, the event broker in order to correctly resume publishing without injecting duplicate messages.

One approach is for the publishing application to maintain a database that correlates between the published message identifier and the acknowledgment it receives from the event broker. This approach is completely self-contained on the client application side, but can introduce processing latencies if not well managed.

Another approach is to make use of the last value queue (LVQ) feature, where the LVQ stores the last message spooled on the queue. A publishing client application can then browse the LVQ to determine the last message spooled by the event broker. This allows the publisher to resume publishing without introducing duplicate messages.

Refer to Configuring Max Spool Usage Values for instructions on setting up LVQ.
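A sketch of browsing an LVQ on startup to find the last spooled message. The queue name is hypothetical, and the CreateBrowser(IQueue, BrowserProperties) call is an assumption about the API's shape; verify it in the C#/.NET API Reference:

```csharp
using SolaceSystems.Solclient.Messaging;

static class LastValueQueueCheck
{
    static IMessage BrowseLastSpooled(ISession session, string lvqName)
    {
        // Browse (without consuming) the single message an LVQ retains.
        IQueue lvq = ContextFactory.Instance.CreateQueue(lvqName);
        using (IBrowser browser = session.CreateBrowser(lvq, new BrowserProperties()))
        {
            // Returns null when the LVQ is empty; otherwise, inspect the
            // message's application message ID to resume publishing.
            return browser.GetNext();
        }
    }
}
```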

Handling Redelivered Messages

When a client application restarts, unexpectedly or not, and rebinds to a queue, it may receive messages that it has already processed and acknowledged. This can happen because an acknowledgment can be lost en route to the event broker due to network issues. Redelivered messages are marked with the redelivered flag.

A client application that binds to a non-exclusive queue may also receive messages with the redelivered flag set, even though it is receiving those messages for the first time. This occurs when another client connected to the same non-exclusive queue disconnects without acknowledging its received messages. Those messages are then redelivered to the other client applications bound to the same non-exclusive queue.

The consuming application should include a message-processing mechanism that handles the scenarios mentioned above.

Dealing with Unexpected Message Formats

Recommendation

  • Client applications should be able to handle unexpected message formats. In the case of consuming from endpoints, a client application should acknowledge received messages even if those messages are unexpectedly formatted.

Client applications should be able to contend with unexpected message formats. No assumptions should be made about a message's payload; for example, a payload may contain an empty attachment. Applications should be coded to avoid crashing, to log the message contents, and, if Guaranteed messaging is used, to send an acknowledgment back to the event broker. If client applications crash without sending acknowledgments, then the same messages are redelivered when they reconnect, causing the applications to fail again.

Client Acknowledgment

Recommendation

  • When client acknowledgment mode is used, client applications should acknowledge received messages as soon as they have finished processing them.

Once an application has completed processing a message, it should acknowledge the receipt of the message to the event broker. Only when the event broker receives an acknowledgment for a Guaranteed Message will the message be permanently removed from its message spool. If the client disconnects without sending acknowledgments for some received messages, then those messages will be redelivered. For the case of an exclusive queue, those messages will be delivered to the next connecting client. For the case of a non-exclusive queue, those messages will be redelivered to the other clients that are bound to the queue.

There are two kinds of acknowledgments:

  • API (also known as Transport) Acknowledgment. This is an internal acknowledgment between the API and the event broker and isn't exposed to the application. The Assured Delivery (AD) window size, acknowledgment timer, and the acknowledgment threshold settings control API Acknowledgment. A message that isn't transport acknowledged will be automatically redelivered by the event broker.
  • Application Acknowledgment. This acknowledgment mechanism is on top of the API Acknowledgment. Its primary purpose is to confirm that message processing has been completed, and that the corresponding messages can be permanently removed from the event broker. There are two application acknowledgment modes: auto-acknowledgment and client acknowledgment. When auto-acknowledgment mode is used, the API automatically generates application-level acknowledgments on behalf of the application. When client acknowledgment mode is used, the client application must explicitly send the acknowledgment for the message ID of each message received.

Refer to the Receiving Guaranteed Messages for a more detailed discussion on the different acknowledgment modes.
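A client-acknowledgment sketch. The MessageAckMode, FlowStartState, and IFlow.Ack() names follow the .NET API's general shape but should be verified against the C#/.NET API Reference; the processing step is a placeholder:

```csharp
using SolaceSystems.Solclient.Messaging;

static class ClientAckConsumer
{
    static IFlow BindWithClientAck(ISession session, IQueue queue)
    {
        // Client acknowledgment mode: the application acks each message only
        // after it has finished processing it. Start the flow manually so the
        // callback's captured 'flow' variable is assigned first.
        FlowProperties fp = new FlowProperties()
        {
            AckMode = MessageAckMode.ClientAck,
            FlowStartState = false
        };

        IFlow flow = null;
        flow = session.CreateFlow(fp, queue, null,
            (sender, msgArgs) =>
            {
                // ... process msgArgs.Message here, then acknowledge it:
                flow.Ack(msgArgs.Message.ADMessageId);
            },
            (sender, flowArgs) => { /* handle flow events */ });

        flow.Start();
        return flow;
    }
}
```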

Do Not Block in Callbacks

Applications must not block in and should return as quickly as possible from message receive, event and timer callbacks so that the calling thread can process the next message, event or timer and perform internal API housekeeping. The one exception is for transacted sessions. Applications can call API-provided blocking functions such as commit, rollback and send from within the message receive callback of a transacted session.

Queues and Flows

Receiving One Message at a Time

Recommendation

  • Set max-delivered-unacked-msgs-per-flow to 1 and the assured delivery window size to 1 to ensure messages are delivered from the event broker to the client application one message at a time and in a time-consistent manner.

An API only sends a transport acknowledgment when one of the following occurs, whichever comes first:

  1. it has received the configured acknowledgment threshold's worth of the configured assured delivery window of messages (for example, 60%), or
  2. a message has been received and the configured assured delivery acknowledgment time (for example, 1 second) has passed since the last acknowledgment was sent.

The application acknowledgment piggybacks on the transport acknowledgment for delivery from the client application to the event broker, and the event broker only releases further messages once it receives the acknowledgment.

Therefore, while setting max-delivered-unacked-msgs-per-flow to 1 will ensure that messages are delivered to the client application one at a time, if the assured delivery window size is not 1, then condition 1 will not be immediately fulfilled. This can result in a reception delay variation because the API only sends the acknowledgment after condition 2 is fulfilled. This is inconsistent with the expected end-to-end message receipt delivery delay. To avoid this, the event broker informs the API of the endpoint's max-delivered-unacked-msgs-per-flow setting. The API then uses this information to automatically adjust its ACK threshold, preventing reception delay variation.

Refer to Configuring Max Permitted Number of Delivered Unacked Messages for instructions on how to configure max-delivered-unacked-msgs-per-flow on queues.

Setting Temporary Endpoint Spool Size

Recommendation

  • Exercise caution if a client application frequently creates temporary endpoints to ensure that the sum of all temporary endpoint spool sizes does not exceed the total spool size provisioned for the message VPN.

By default, the message spool quota of a message VPN and its endpoints is based on an over-subscription model. For instance, it's possible to set the message spool quota of multiple endpoints to the same quota as that of an entire message VPN. Temporary endpoints created by a client application default to 4,000 MB on Solace appliances and 1,500 MB on software event brokers. When temporary endpoints are used extensively by a client application, the message spool over-subscription model can quickly get out of control as temporary endpoints are created on demand. Therefore, it's recommended that a client application override an endpoint's default message spool size with a value that is in line with expected usage, especially if temporary endpoints are heavily used.

AD Window Size and max-delivered-unacked-msgs-per-flow

Recommendation

  • The assured delivery window size configured on the API should not be greater than the max-delivered-unacked-msgs-per-flow value that is set for a queue on the event broker.

max-delivered-unacked-msgs-per-flow controls how many messages the event broker can deliver to the client application without receiving back an acknowledgment. The assured delivery window size controls how many messages can be in transit between the event broker and the client application. So, if the assured delivery window size is greater than max-delivered-unacked-msgs-per-flow, then the API may not be able to acknowledge the messages it receives in a timely manner. Effectively, the assured delivery window size is bounded by the value set for max-delivered-unacked-msgs-per-flow. For instance, if the assured delivery window size is set to 10, and max-delivered-unacked-msgs-per-flow is set to 5, then the event broker will effectively be limited to send out 5 messages at a time regardless of the client application’s assured delivery window size setting of 10.

Refer to Configuring Max Permitted Number of Delivered Unacked Messages for instructions on how to set up max-delivered-unacked-msgs-per-flow on queues.

Number of Flows and AD Window Size

Recommendation

  • Size the expected number of flows per session, and its associated assured delivery window size, to within the available memory limit of the client application host, and within the default work units allocated per client egress queue on the event broker.

The API buffers received Guaranteed messages and, in general, also owns the messages and is responsible for freeing them. The amount of buffering used by a client is primarily determined by multiplying the assured delivery window size by the number of flows used per session. For example, if a receiving client application uses flows with an assured delivery window size of 255 to bind to 10 different queues on an event broker, then the maximum buffer usage, given an average message size of 1 MB, is 255 × 10 × 1 MB = 2,550 MB. If there are 10 such clients running on the same host, then 25.5 GB of memory is required.

Similarly, the event broker dedicates a per-client egress queue to buffer messages awaiting transmission to the client application. By default, this queue is 20,000 work units, or the equivalent of about 40 MB of buffer, as each work unit is 2,048 bytes. For a per-client egress queue to support 2,550 MB worth of buffering, the number of work units for this particular client would need to be increased to 1,305,600. Hence, depending on application usage, it's recommended that you dimension the assured delivery window size in relation to the number of expected flows per session such that the total stays within the default 20,000 work units of buffer per client connection.
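The sizing arithmetic above can be checked with a few lines of plain C# (no Solace types involved; the flow count, window size, and message size are the example's assumptions):

```csharp
using System;

// Worked example: 10 flows, assured delivery window size 255, 1 MB messages.
const int flows = 10;
const int adWindowSize = 255;
const long avgMsgBytes = 1L * 1024 * 1024;  // 1 MB average message size
const int workUnitBytes = 2048;             // one work unit = 2,048 bytes

long maxBufferBytes = flows * adWindowSize * avgMsgBytes;  // 2,550 MB
long workUnitsNeeded = maxBufferBytes / workUnitBytes;     // 1,305,600
Console.WriteLine($"{maxBufferBytes / (1024 * 1024)} MB -> {workUnitsNeeded} work units");
// Far above the default 20,000 work units per client egress queue.
```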

Error Handling and Logging

Logging and Log Level

Recommendation

  • Client application debug level logging should not be enabled in production environments.

Client application event logging can have a significant impact on performance, and so, in a production environment, it's not recommended to enable debug level logging.
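A minimal sketch of configuring a production-appropriate log level at API initialization follows; the choice of `Notice` is an illustrative assumption.

```csharp
using SolaceSystems.Solclient.Messaging;

// Illustrative sketch: initialize the API with a production-appropriate
// log level rather than Debug.
ContextFactoryProperties cfp = new ContextFactoryProperties
{
    SolClientLogLevel = SolLogLevel.Notice  // avoid SolLogLevel.Debug in production
};
ContextFactory.Instance.Init(cfp);
```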

Error Handling

When .NET API sessions are terminated unexpectedly, error information can be collected and sent to the application. The following session event enumerations carry error information:

  • DownError—The session was established and then went down.
  • ConnectFailedError—The session attempted to connect but was unsuccessful.
  • RejectedMessageError—The event broker rejected a published message.
  • SubscriptionError—The event broker rejected a subscription add or remove.
  • MessageTooBigError—The API discarded a received message that exceeded the set session buffer size.
  • TEUnsubscribeError—The Topic Endpoint unsubscribe request failed.

Error information is handled separately for each individual thread.

To configure error handling, include calls to the following method in your event handling code:

  • GetLastSDKErrorInfo() on ContextFactory singleton

    Returns a SDKErrorInfo, which contains the last captured error information for the calling thread.
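As a sketch of retrieving this per-thread error information from inside a session event handler (the handler name and the events checked are illustrative assumptions):

```csharp
using System;
using SolaceSystems.Solclient.Messaging;

// Illustrative sketch: inspect the last captured error for the calling
// thread when an error-type session event is received.
void HandleSessionEvent(object sender, SessionEventArgs e)
{
    if (e.Event == SessionEvent.ConnectFailedError || e.Event == SessionEvent.DownError)
    {
        SDKErrorInfo lastError = ContextFactory.Instance.GetLastSDKErrorInfo();
        // SDKErrorInfo carries the last error details for this thread,
        // including the subcode and a description
        Console.WriteLine($"Session event {e.Event}: {lastError}");
    }
}
```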

Subcodes

Subcodes provide more detailed error information. The basic subcodes that can result from any API call are listed in the Generic Subcodes table below.

Some API calls can also generate more specific error subcodes. For more information on these subcodes, refer to C#/.NET API Reference.

The last generated subcode is stored on a per-thread basis and can be retrieved by an application thread.

Generic Subcodes

Subcode Description

FactoryInitNotCalled

An API call failed because ContextFactory.Init() was not first called.

ParamOutOfRange

An API call was made with an out-of-range parameter.

ParamConflict

An API call was made with an invalid parameter combination.

This subcode only applies to methods that have interdependent parameters.

InternalError

An API call had an internal error (not an application fault).

OperatingSystemError

An API call failed because of a failed operating system call.

OutOfMemory

An API call failed because memory could not be allocated.

Handling Session Events / Errors

Recommendation

  • Client applications should register an implementation of the session event handler interface / delegate / callback when creating a session to receive session events.

Client applications should register an implementation of the session event handler interface / delegate / callback when creating a session to receive session events. A complete list of session events is provided in the table below. These events should then be handled appropriately based on client application usage.
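A minimal sketch of registering a session event handler at session creation follows; the host, VPN, username, property values, and the pre-existing `context` (an `IContext`) are illustrative assumptions.

```csharp
using SolaceSystems.Solclient.Messaging;

// Illustrative sketch: register a session event handler delegate when
// creating the session, then branch on the SessionEvent enum.
SessionProperties sessionProps = new SessionProperties
{
    Host = "tcp://localhost:55555",
    VPNName = "default",
    UserName = "client-username",
    ReconnectRetries = 3
};
// 'context' is assumed to be an existing IContext
ISession session = context.CreateSession(sessionProps,
    (sender, msgArgs) => { /* handle received messages */ },
    (sender, eventArgs) =>
    {
        switch (eventArgs.Event)
        {
            case SessionEvent.UpNotice:
            case SessionEvent.Reconnected:
                // connection (re)established; resume normal operation
                break;
            case SessionEvent.RejectedMessageError:
                // the event broker rejected a published message
                break;
            case SessionEvent.DownError:
                // session terminated; trigger recovery logic
                break;
        }
    });
session.Connect();
```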

Session Events

.NET (SessionEvent Enum) Description

Acknowledgment

The oldest transmitted persistent/non-persistent message that has been acknowledged.

AssuredDeliveryDown

Guaranteed Delivery Publishing is not available.

CanSend

The send is no longer blocked.

ConnectFailedError

The session attempted to connect but was unsuccessful.

DownError

The session was established and then went down.

DTEUnsubscribeError

Deprecated name. Same as TEUnsubscribeError.

DTEUnsubscribeOK

Deprecated name. Same as TEUnsubscribeOK.

ModifyPropertyFail

The session property modification failed.

ModifyPropertyOK

The session property modification completed.

ProvisionError

The endpoint create/delete command failed.

ProvisionOK

The endpoint create/delete command completed.

Reconnected

The automatic reconnect of the session was successful, and the session was established again.

Reconnecting

The session has gone down, and an automatic reconnect attempt is in progress.

RejectedMessageError

The appliance rejected a published message.

RepublishUnackedMessages

After successfully reconnecting a disconnected session, the API received an unknown publisher flow name response when reconnecting the Guaranteed Delivery publisher flow.

MessageTooBigError

The API discarded a received message that exceeded the session buffer size.

SubscriptionError

The event broker rejected a subscription (add or remove).

SubscriptionOK

The subscribe or unsubscribe operation has succeeded.

TEUnsubscribeError

The Topic Endpoint unsubscribe command failed.

TEUnsubscribeOK

The Topic Endpoint unsubscribe completed.

UpNotice

The session is established.

VirtualRouterNameChanged

The appliance’s Virtual Router Name changed during a reconnect operation.

Handling Flow Events / Errors

Recommendation

  • Client applications should register an implementation of the flow event handler interface / delegate / callback when creating a flow to receive flow events.

Client applications should register an implementation of the flow event handler interface / delegate / callback when creating a flow to receive flow events. Flow events and errors should then be handled appropriately based on client application usage.
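A minimal sketch of registering a flow event handler when binding to a queue follows; the queue name and the pre-existing connected `session` are illustrative assumptions.

```csharp
using SolaceSystems.Solclient.Messaging;

// Illustrative sketch: register a flow event handler delegate when
// creating the flow, then branch on the FlowEvent enum.
FlowProperties flowProps = new FlowProperties { AckMode = MessageAckMode.ClientAck };
IQueue queue = ContextFactory.Instance.CreateQueue("sample/queue");
// 'session' is assumed to be an already-connected ISession
IFlow flow = session.CreateFlow(flowProps, queue, null,
    (sender, msgArgs) => { /* process msgArgs.Message and acknowledge it */ },
    (sender, flowArgs) =>
    {
        switch (flowArgs.Event)
        {
            case FlowEvent.UpNotice:
            case FlowEvent.FlowActive:
                // flow ready / active: start or resume consuming
                break;
            case FlowEvent.DownError:
                // flow disconnected by the event broker; rebind or alert
                break;
        }
    });
```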

Flow Events

.NET (FlowEvent Enum) Description

UpNotice

The flow is established.

DownError

The flow was established and then disconnected by the appliance, likely due to operator intervention.

BindFailedError

The flow attempted to connect but was unsuccessful.

ParentSessionDown

The session for the flow was disconnected.

FlowActive

The flow has become active.

FlowInactive

The flow has become inactive.

Reconnecting

When flow Reconnect is enabled, instead of a DownError event, the API generates this event and attempts to rebind the flow.

If the flow rebind fails, the API monitors the bind failure and terminates the reconnecting attempts with a DownError unless the failure reason is one of the following:

  • Queue Shutdown
  • Topic Endpoint Shutdown
  • Service Unavailable

For more information about flow Reconnect, refer to Flow Reconnect.

Reconnected

The flow has been successfully reconnected.

Event Broker Configuration that Influences Client Application Behavior

Max Redelivery

Recommendation

  • By default, messages are redelivered indefinitely from endpoints to clients. When appropriate, set the maximum redelivery option on endpoints at the event broker to limit the maximum number of redeliveries per message.

The maximum redelivery option can be set on an endpoint to control the number of deliveries per message on that endpoint. After the maximum number of redeliveries by the endpoint is exceeded, messages are either discarded or moved to the dead message queue (DMQ), if it's configured and the messages are set to DMQ eligible.

There are benefits for client applications when the number of redeliveries on an endpoint is not infinite (by default, the redelivery mode is set to redeliver forever). For instance, if a client application is unable to handle an unexpected poison message, the message will eventually be discarded or moved to the DMQ, where further examination can take place.

Reject Message to Sender on Discard

Recommendation

  • reject-msg-to-sender-on-discard on an endpoint should be enabled unless there are good reasons not to.

When publishing Guaranteed messages to an event broker, messages can be discarded for reasons such as message-spool full, maximum message size exceeded, endpoint shutdown, and so on. If the message discard option on the endpoint, that is, reject-msg-to-sender-on-discard, is enabled, then the client application is able to detect that discarding is happening and take corrective action, such as pausing publication. There is no explicit support in the API for pausing publication; this should be carried out by the client application logic.

One reason to consider disabling reject-msg-to-sender-on-discard is the situation where multiple queues subscribe to the same topic that the Guaranteed messages are published to, and the intent is for the other queues to continue receiving messages even if one of the queues is unable to accept them.