Configuring Connection Details
This section provides instructions for configuring the connection details required to establish communication between the Micro-Integration and your third-party system.
For information about configuring the connection to the event broker, see Connecting to Your Event Broker.
This Micro-Integration supports workflows in the following direction only:
- Amazon S3 to Solace
For more information about workflows, see Enabling Workflows and Managing Workflows.
The name of the binder for Amazon S3 is `awss3`.
Solace Connection Details
The Spring Cloud Stream Binder for Solace uses Spring Boot Auto-Configuration for the Solace Java API to configure its session. In `application.yml`, this is typically configured as follows:

```yaml
solace:
  java:
    host: tcp://localhost:55555
    msg-vpn: default
    client-username: default
    client-password: default
```
For more information and options to configure the Solace session, see Spring Boot Auto-Configuration for the Solace Java API.
Preventing Message Loss when Publishing to Topic-to-Queue Mappings
If the Micro-Integration publishes to a topic that a queue subscribes to, messages that the queue rejects (for example, because queue ingress is shut down) may be lost.
To prevent message loss, configure the queue's `reject-msg-to-sender-on-discard` option with the `including-when-shutdown` flag.
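As a hedged sketch, the equivalent setting in the SEMP v2 config API is the queue property `rejectMsgToSenderOnDiscardBehavior` set to `always` (reject to sender even while the queue is shut down). The host, credentials, Message VPN, and queue name below are placeholders:

```shell
# Placeholder broker host, admin credentials, VPN "default", queue "myQueue"
curl -X PATCH "http://localhost:8080/SEMP/v2/config/msgVpns/default/queues/myQueue" \
  -u admin:admin \
  -H "Content-Type: application/json" \
  -d '{"rejectMsgToSenderOnDiscardBehavior": "always"}'
```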
Amazon S3 Connection Details
To configure the Amazon S3 connection details, set the following values in the `application.yml` file:

```yaml
spring:
  cloud:
    stream:
      bindings:
        input-1:
          destination: "aws2-s3://bucketNameOrARN"
          binder: awss3
```
The Spring Cloud Stream standard properties for Amazon S3 are as follows.

| Configuration Option | Type | Valid Values | Description |
|---|---|---|---|
| `destination` | String | A valid Amazon S3 bucket name or Amazon Resource Name (ARN) | The Amazon S3 destination to read from, in the format: `aws2-s3://bucketNameOrARN` |
| `binder` | String | `awss3` | This property must be set to `awss3`. |
Checkpoint Store Configuration
The Micro-Integration for Amazon S3 stores the current progress of file processing in a checkpoint store backed by a Last Value Queue (LVQ) on a Solace event broker.
The following table lists the configuration options for the checkpoint store.
| Configuration Option | Type | Valid Values | Default Value | Description |
|---|---|---|---|---|
|  | String | Any valid queue name | None | Required. The name of the LVQ (spool size 0) to be used for checkpointing. The queue must exist on the same event broker and Message VPN as the target queue. If the LVQ is deleted (administratively), or a message from the LVQ is deleted or consumed by another consumer, the Micro-Integration cannot resume from the last checkpoint. In addition, the LVQ must not be shared by multiple instances of the Micro-Integration. |
|  |  |  |  | Optional. |
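A Last Value Queue can be provisioned through the SEMP v2 config API by creating a queue with a maximum message spool usage of 0, as the section above notes. The sketch below is a hedged example; the host, credentials, Message VPN, and queue name are placeholders:

```shell
# Placeholder broker host, admin credentials, VPN "default";
# maxMsgSpoolUsage: 0 makes the queue a Last Value Queue.
curl -X POST "http://localhost:8080/SEMP/v2/config/msgVpns/default/queues" \
  -u admin:admin \
  -H "Content-Type: application/json" \
  -d '{
        "queueName": "s3-checkpoint-lvq",
        "maxMsgSpoolUsage": 0,
        "permission": "consume",
        "ingressEnabled": true,
        "egressEnabled": true
      }'
```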
Amazon S3 Consumer Binding Configuration Options
The following configuration options are available for the Amazon S3 consumer binding. These properties are configured under `spring.cloud.stream.camel.bindings.<inputname>.consumer.endpoint.query-parameters`. For example:

```yaml
spring:
  cloud:
    stream:
      camel:
        bindings:
          <inputname>:
            consumer:
              endpoint:
                query-parameters:
                  accessKey: ${AWS_ACCESS_KEY:}
                  secretKey: ${AWS_SECRET_KEY:}
                  region: ${AWS_REGION:us-east-1}
```
| Config Option | Type | Valid Values | Default Value | Description |
|---|---|---|---|---|
| `destination` | String | A valid AWS S3 bucket name or ARN | - | (Required) The Amazon S3 consumer destination. Specifies the bucket name or ARN. For example, `aws2-s3://bucketNameOrARN`. |
| `accessKey` | String | A valid AWS access key ID |  | (Required) The AWS access key ID. |
| `secretKey` | String | A valid AWS secret access key |  | (Required) The AWS secret access key. |
| `region` | String | A valid AWS region (for example, `us-east-1`) |  | (Required) The AWS region. |
| `prefix` | String | A valid S3 object key prefix | - | (Optional) A prefix to filter objects in the bucket. Only objects with keys starting with this prefix are consumed. If not set, all objects in the bucket are processed. |
| `fileType` | String | `json`, `xml`, `delimited` |  | (Optional) The file type that determines how the connector parses the content of the consumed files. For details about configuring these file types, see JSON File Type Configuration Options, XML File Type Configuration Options, or Delimited File Type Configuration Options. |
JSON File Type Configuration Options
When `fileType` is set to `json`, the following configuration options are available.

| Config Option | Type | Valid Values | Default Value | Description |
|---|---|---|---|---|
|  | String | A valid JSONPath expression |  | A JSONPath expression used to split the JSON file into multiple messages. |
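To illustrate what a JSONPath split does, the sketch below (plain Python, not the Micro-Integration's own implementation) applies an expression that selects each top-level array element, such as `$[*]`, which is an illustrative assumption rather than a documented default:

```python
import json

# Hypothetical input file content (a three-element JSON array).
file_content = '[{"id": 1}, {"id": 2}, {"id": 3}]'

# Splitting produces one message payload per matched element.
messages = [json.dumps(element) for element in json.loads(file_content)]
print(messages)
```

Each element of `messages` is then published as a separate message.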
XML File Type Configuration Options
When `fileType` is set to `xml`, the following configuration options are available:

| Config Option | Type | Valid Values | Default Value | Description |
|---|---|---|---|---|
|  | String | A valid XPath expression |  | An XPath expression that splits the XML file into multiple messages. |
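For illustration, the sketch below (plain Python with the standard library, not the Micro-Integration's own implementation) splits an XML document on an expression matching each `order` element; the file content and the expression are illustrative assumptions:

```python
import xml.etree.ElementTree as ET

# Hypothetical input file: a document containing two <order> elements.
file_content = "<orders><order id='1'/><order id='2'/></orders>"
root = ET.fromstring(file_content)

# Each element matched by the XPath expression becomes its own message payload.
messages = [ET.tostring(el, encoding="unicode") for el in root.findall("./order")]
print(messages)
```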
Delimited File Type Configuration Options
When `fileType` is set to `delimited`, the following configuration options are available:

| Config Option | Type | Valid Values | Default Value | Description |
|---|---|---|---|---|
|  | String | Any character or string |  | The character or string that separates records in a delimited file. |
|  | String | Any character or string |  | The character or string that separates fields within each record. |
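The two separators work together as sketched below (plain Python, not the Micro-Integration's own implementation); a newline record separator and a comma field separator are illustrative assumptions, not documented defaults:

```python
# Hypothetical file content with two records of two fields each.
file_content = "alice,30\nbob,25\n"

record_separator = "\n"  # assumed record separator for this sketch
field_separator = ","    # assumed field separator for this sketch

records = [r for r in file_content.split(record_separator) if r]
fields = [r.split(field_separator) for r in records]
print(fields)
```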
Advanced Configuration Options
The following additional options are available to control the behavior of the Micro-Integration for Amazon S3. These properties are configured under `spring.cloud.stream.camel.bindings.<inputname>.consumer.endpoint.query-parameters`.
| Option | Type | Default | Description |
|---|---|---|---|
| `autoCreateBucket` | boolean | `false` | When true, enables the autocreation of the S3 bucket. You can also use this parameter when the `moveAfterRead` option is enabled, to create the `destinationBucket` if it does not already exist. |
| `delimiter` | String | None | The delimiter to use to consume only objects you are interested in. |
| `forcePathStyle` | boolean | `false` | When true, the S3 client uses a path-style URL instead of virtual-hosted-style. |
| `overrideEndpoint` | boolean | `false` | When true, enables overriding the endpoint. This option must be used in combination with the `uriEndpointOverride` option. |
| `policy` | String | None | The policy for this queue to set in the `com.amazonaws.services.s3.AmazonS3#setBucketPolicy()` method. |
| `uriEndpointOverride` | String | None | The overriding URI endpoint. This option must be used in combination with the `overrideEndpoint` option. |
| `customerAlgorithm` | String | None | The customer algorithm to use when CustomerKey is enabled. |
| `customerKeyId` | String | None | The ID of the customer key to use when CustomerKey is enabled. |
| `customerKeyMD5` | String | None | The MD5 of the customer key to use when CustomerKey is enabled. |
| `destinationBucket` | String | None | The destination bucket where an object must be moved when `moveAfterRead` is set to true. |
| `destinationBucketPrefix` | String | None | The destination bucket prefix to use when an object must be moved, and `moveAfterRead` is set to true. |
| `destinationBucketSuffix` | String | None | The destination bucket suffix to use when an object must be moved, and `moveAfterRead` is set to true. |
| `doneFileName` | String | None | If provided, Camel consumes files only if a done file exists. |
| `fileName` | String | None | The file name of the object to get from the bucket. |
| `maxConnections` | int | `60` | The `maxConnections` parameter in the S3 client configuration. |
| `maxMessagesPerPoll` | int | `10` | The maximum number of messages to poll in each polling cycle. The default value is 10. Use 0 or a negative number to set it as unlimited. |
| `moveAfterRead` | boolean | `false` | When true, moves objects from the S3 bucket to a different bucket after they have been retrieved. To accomplish the operation, the `destinationBucket` option must also be set. |
| `removePrefixOnMove` | boolean | `false` | When true, removes the prefix from the object key when moving it to the destination bucket. |
| `batchMessageNumber` | int | `10` | The number of messages composing a batch in streaming upload mode. |
| `batchSize` | int | `1000000` | The batch size (in bytes) in streaming upload mode. |
| `bufferSize` | int | `1000000` | The buffer size (in bytes) in streaming upload mode. |
| `deleteAfterWrite` | boolean | `false` | When true, deletes the file object after the S3 file has been uploaded. |
| `keyName` | String | None | The key name of an element in the bucket, set through the endpoint parameter. |
| `streamingUploadMode` | boolean | `false` | When true, the upload to the bucket is done in streaming mode. |
| `streamingUploadTimeout` | long | None | When streaming upload mode is enabled, this option sets the timeout to complete the upload. |
| `awsKMSKeyId` | String | None | The ID of the KMS key to use when KMS is enabled. |
| `conditionalWritesEnabled` | boolean | `false` | When true, uploads the object only if the object key name does not already exist in the specified bucket. |
| `useAwsKMS` | boolean | `false` | When true, KMS is used. |
| `useCustomerKey` | boolean | `false` | When true, a customer key is used. |
| `useSSES3` | boolean | `false` | When true, SSE-S3 is used. |
| `proxyHost` | String | None | The proxy host to use when instantiating the S3 client. |
| `proxyPort` | Integer | None | The proxy port to use inside the client definition. |
| `proxyProtocol` | Protocol | `HTTPS` | The proxy protocol to use when instantiating the S3 client. Set to one of: `HTTP`, `HTTPS`. |
| `backoffErrorThreshold` | int | None | The number of subsequent error polls that must occur before the `backoffMultiplier` kicks in. |
| `backoffIdleThreshold` | int | None | The number of subsequent idle polls that must occur before the `backoffMultiplier` kicks in. |
| `backoffMultiplier` | int | None | A multiplier that allows the scheduled polling consumer to back off if there has been a number of subsequent idle polls or errors in a row. The multiplier represents the number of polls that are skipped before the next actual attempt occurs. When you use this option, you must also configure `backoffIdleThreshold` and/or `backoffErrorThreshold`. |
| `delay` | long | `500` | The number of milliseconds before the next poll. |
| `greedy` | boolean | `false` | When true, the ScheduledPollConsumer runs immediately again if the previous run polled one or more messages. |
| `initialDelay` | long | `1000` | The number of milliseconds before the first poll starts. |
| `repeatCount` | long | `0` | The maximum number of times the scheduler fires. For example, if you set this parameter to 1, the scheduler fires only once. If you set it to 5, the scheduler fires five times. A value of zero or a negative value means the scheduler fires indefinitely. |
| `startScheduler` | boolean | `true` | When true, the scheduler is auto-started. |
| `useFixedDelay` | boolean | `true` | When true, fixed delay is used. When false, fixed rate is used. See ScheduledExecutorService in the JDK for details. |
| `profileCredentialsName` | String | None | The profile name to use when a profile credentials provider is used. |
| `sessionToken` | String | None | The Amazon AWS session token, used when the user must assume an IAM role. |
| `trustAllCertificates` | boolean | `false` | When true, all certificates are trusted when overriding the endpoint. |
| `useDefaultCredentialsProvider` | boolean | `false` | When true, the S3 client loads credentials through a default credentials provider. |
| `useProfileCredentialsProvider` | boolean | `false` | When true, the S3 client loads credentials through a profile credentials provider. |
| `useSessionCredentials` | boolean | `false` | When true, the S3 client uses session credentials (useful when the user must assume an IAM role to perform operations in S3). |
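For example, several of the polling and move-after-read options above might be combined as follows. The binding name `input-1` and the bucket name `processed-bucket` are placeholders, and the values shown are illustrative:

```yaml
spring:
  cloud:
    stream:
      camel:
        bindings:
          input-1:
            consumer:
              endpoint:
                query-parameters:
                  maxMessagesPerPoll: 5                # poll at most 5 objects per cycle
                  delay: 10000                         # wait 10 seconds between polls
                  moveAfterRead: true                  # move objects after retrieval...
                  destinationBucket: processed-bucket  # ...into this bucket
```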
Connecting to Multiple Systems
To connect to multiple systems of the same type, use the multiple binder syntax.
For example:
```yaml
spring:
  cloud:
    stream:
      binders:
        # 1st solace binder in this example
        solace1:
          type: solace
          environment:
            solace:
              java:
                host: tcp://localhost:55555
        # 2nd solace binder in this example
        solace2:
          type: solace
          environment:
            solace:
              java:
                host: tcp://other-host:55555
        # The only awss3 binder
        awss31:
          type: awss3
          # Add an `environment` property map here if you need to customize
          # this binder. For this example, we assume that defaults are used.
        # Required for internal use
        undefined:
          type: undefined
      bindings:
        input-0:
          destination: <input-destination>
          binder: awss31
        output-0:
          destination: <output-destination>
          binder: solace1 # Reference 1st solace binder
        input-1:
          destination: <input-destination>
          binder: awss31
        output-1:
          destination: <output-destination>
          binder: solace2 # Reference 2nd solace binder
```
The configuration above defines two binders of type `solace` and one binder of type `awss3`, which are then referenced within the bindings.
Each binder above is configured independently under `spring.cloud.stream.binders.<binder-name>.environment`.
- When connecting to multiple systems, all binder configuration must be specified using the multiple binder syntax for all binders (for example, under `spring.cloud.stream.binders.<binder-name>.environment`).
- Do not use single-binder configuration (for example, `solace.java.*` at the root of your `application.yml`) while using the multiple binder syntax.