Discovery

Discovery is a tool in PubSub+ Cloud that discovers event-driven architecture (EDA) data from your event brokers. You can use it to discover, import, and then visualize your EDA, including all associated applications, events, and schemas (if you have a Confluent Schema Registry) and their relationships, from event brokers such as PubSub+ event broker services, Apache Kafka, Amazon MSK, or Confluent. You can run the Runtime Discovery Agent multiple times; on each run, it discovers the data and allows you to add it to Event Portal's data model. Once imported, the data is available in Catalog and Designer to visualize and understand your event-driven architecture.

Diagram showing the steps in performing a Discovery.

Understanding the Discovery User Interface

On the Discovery homepage you can:

  • learn how to set up the Discovery Agent and run a scan
  • access all your uploaded Discovery files
  • use filters to view only the discoveries that are assigned to a specific Logical Event Mesh (LEM)
  • use a sample to get started
  • add new discoveries from your local system
  • start the process of importing a Discovery file to Designer

Your ability to import a Discovery file to Designer depends on your role and access level.

Screenshot showing the settings described in the surrounding text.

Prerequisites

  • Ensure that you have the correct user role and permission in Event Portal. At a minimum, you need the Event Portal Manager permission. For more information, refer to Managing Users, Groups, Roles, and Permissions.
  • The following event broker versions are currently supported:
    • Apache Kafka versions 2.2, 2.3, 2.4, 2.5
    • Confluent versions 5.3, 5.4, 5.5
    • Amazon MSK version 2.2
    • PubSub+ Cloud event broker services
    • A supported Solace software event broker or appliance.

Installing the Discovery Agent

You can install the Discovery Agent as a Docker container on Linux, Mac, and Windows operating systems. Alternatively, you can download an executable file and install it locally on Windows, Mac, or Linux. After the agent is installed, you can use it offline; no Internet connection is required.

The Discovery Agent requires JDK/JRE 11 to run; OpenJDK 11 is bundled with the agent installation package.
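If you plan to run the agent with your own Java runtime rather than the bundled one, you can confirm that a suitable version is on your path with the standard version check (the output shown is illustrative):

    java -version
    openjdk version "11.0.19" 2023-04-18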

Install the Discovery Agent as a Docker Container

Follow the instructions below to install the Discovery Agent in a separate Docker container.

  1. Log in to your PubSub+ Cloud account and select Discovery.
  2. Click How to get started?.
  3. On the dialog that appears, navigate to Step 1. Run Discovery Agent and click Show Detailed Instructions.
  4. Select the Docker Instructions tab.
  5. Click the dropdown menu and select Mac, Linux, or Windows.
  6. Copy and paste the commands into a terminal window (a representative sketch of these commands follows these steps). The Discovery Agent may take a few minutes to initialize.
  7. Go to http://localhost:8120 to start running a scan. Refer to Running a Discovery Scan for more information.
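The exact commands are generated for you in the Cloud Console. As a rough sketch of what a Docker-based start looks like (the image name below is a placeholder, not the real one; port 8120 is the agent UI port from the steps above):

    docker pull <discovery-agent-image>
    docker run -d -p 8120:8120 --name discovery-agent <discovery-agent-image>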

Install the Discovery Agent via a Binary Executable

You can install the Discovery Agent locally on Mac, Windows, and Linux. For local installation, download the Discovery runtime archive from the console, then execute bin/event-discovery-agent for Linux and Mac or bin/run.bat for Windows.

To download and install the Discovery Agent, do the following:

  1. Log in to your PubSub+ Cloud account and select Discovery.
  2. Click How to get started?.
  3. On the dialog that appears, navigate to Step 1. Run Discovery Agent and click Show Detailed Instructions.
  4. Select Mac Download, Linux Download, or Windows Download.
  5. When the download is complete, extract the archive and execute bin/event-discovery-agent for Mac or Linux, or the bin/run.bat script for Windows (see the sketch after these steps).
  6. Go to http://localhost:8120 to start running a scan. Refer to Running a Discovery Scan for more information.
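For example, on Linux or Mac, assuming the archive was downloaded as a .tar.gz (the file name is illustrative):

    tar -xzf event-discovery-agent-<version>.tar.gz
    cd event-discovery-agent-<version>
    ./bin/event-discovery-agent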

Running a Discovery Scan

Once the Discovery Agent is installed, you can configure and run a scan from your browser. You can run a Discovery scan on Kafka clusters, PubSub+ event broker services, or PubSub+ event brokers. For more information, see the following sections:

Kafka Runtime Discovery

To configure and run a Discovery scan on a Kafka Cluster or Kafka broker, do the following:

  1. Go to http://localhost:8120.
  2. Select Apache Kafka, Confluent, or Amazon MSK to perform a runtime discovery.
    Screenshot showing the settings described in the surrounding text.
  3. Complete the input fields as shown in the example below:
    Screenshot showing the settings described in the surrounding text.
    • Discovery Name: The name that will be displayed in the list of discovered files in Event Portal.
    • Host: Location where the broker is hosted.
    • Port: Port used by the given broker.
    • Authentication: The following authentication methods are supported:
      • SASL Plain
      • SASL GSSAPI (Kerberos)
      • SASL SCRAM 256
      • SASL SCRAM 512
      • SSL
    • Connector Authentication: Optional information, if you have a connector to authenticate.
    • Topic Subscriptions: Options to scan specific topics or multiple topics.
  4. Click Start Scan. Once the scan is completed, you can upload the Discovery file to PubSub+ Cloud.
  5. To upload the file directly, click Upload to Solace Cloud. To manually upload the Discovery file, see Uploading Discovery to PubSub+ Cloud.

Confluent Cloud Kafka Cluster

Before running a Discovery scan on a Confluent Cloud Kafka Cluster, you will need the following cluster credentials (a quick way to verify them outside the agent is sketched after this list):

  • API keys (key and secret) to connect to the cluster
  • Bootstrap server URL and port
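Before starting the scan, you can optionally sanity-check these credentials using the standard CLI that ships with Apache Kafka (the bootstrap address, API key, and secret below are placeholders; security.protocol, sasl.mechanism, and sasl.jaas.config are standard Kafka client properties):

    cat > client.properties <<EOF
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<API_KEY>" password="<API_SECRET>";
    EOF
    kafka-topics.sh --bootstrap-server <cluster>.confluent.cloud:9092 --command-config client.properties --list

If the cluster's topics are listed, the same host, port, key, and secret should work in the agent.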

To run a Discovery scan on a Confluent Cloud Kafka Cluster, perform the following steps:

  1. Go to http://localhost:8120.
  2. Select Apache Kafka.
  3. In the Host and Port fields, enter the Bootstrap server information.
    Screenshot showing the settings described in the surrounding text.
  4. In the Authentication field, select SASL Plain and enter the cluster API key and secret.
  5. Select the Add TLS Authentication checkbox.
  6. Add the Trust Store Location and Trust Store Password. The Trust Store Location value depends on whether the Discovery Agent is running from a Docker container or from a downloaded archive (a quick way to verify the trust store is sketched after these steps).
    • If the agent is running from the Docker container, set the following:
      • Trust Store Location: /opt/java/openjdk/lib/security/cacerts
      • Trust Store Password: changeit
    • If the agent was started from a script using a downloaded bundle file, do the following:
      1. Locate the cacerts file for the JRE. This can typically be found at $JAVA_HOME/lib/security/cacerts or $JAVA_HOME/jre/lib/security/cacerts.
      2. Enter the full path to the cacerts file in the Trust Store Location and changeit for the Trust Store Password.
      Screenshot showing the settings described in the surrounding text.
  7. Click Start Scan.
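To verify the trust store path and password before entering them, you can list the store with keytool, which is bundled with the JDK (paths follow the guidance above; changeit is the default JDK trust store password):

    find "$JAVA_HOME" -name cacerts
    keytool -list -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit | head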

Confluent Cloud Schema Registry

Before running a Discovery scan on a Confluent Cloud Schema Registry, you will need the following schema registry information (a quick way to verify these credentials is sketched at the end of this section):

  • Confluent Cloud's Schema Registry API endpoint
  • API credentials (key and secret)

Additionally, you will need the connection details of the Confluent Cloud Kafka Cluster in your environment, which is discussed in detail in Confluent Cloud Kafka Cluster above.

To run a Discovery scan on a Confluent Cloud Schema Registry, perform the following steps:

  1. Go to http://localhost:8120.
  2. Select Confluent.
  3. Enter the Target Cluster Authentication details. For step-by-step instructions, refer to Confluent Cloud Kafka Cluster above.

    Screenshot showing the settings described in the surrounding text.
  4. Enter the Schema Registry Authentication information as shown in the example below.

    1. In the Host field, enter the API endpoint URL.
      Screenshot showing the settings described in the surrounding text.
    2. In the Authentication field, select SASL Plain and enter the Schema Registry API key and secret. Note that the API keys for the cluster and the schema registry are different.
  5. Select the Add TLS Authentication checkbox.
  6. Add the Trust Store Location and Trust Store Password. The Trust Store Location value depends on whether the Discovery Agent is running from a Docker container or from a downloaded archive.
    • If the agent is running from the Docker container, set the following:
      • Trust Store Location: /opt/java/openjdk/lib/security/cacerts
      • Trust Store Password: changeit
    • If the agent was started from a script using a downloaded bundle file, do the following:
      1. Locate the cacerts file for the JRE. This can typically be found at $JAVA_HOME/lib/security/cacerts or $JAVA_HOME/jre/lib/security/cacerts.
      2. Enter the full path to the cacerts file in the Trust Store Location and changeit for the Trust Store Password.
  7. Click Start Scan.
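As noted above, you can verify the Schema Registry endpoint and API credentials independently with a plain HTTP request before scanning (the endpoint is a placeholder; /subjects is a standard Confluent Schema Registry resource that lists registered subjects):

    curl -u <SR_API_KEY>:<SR_API_SECRET> https://<schema-registry-endpoint>/subjects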

Runtime Discovery

You can execute a runtime Discovery scan on event broker services, software event brokers, and appliances. The following event-stream-related data is discovered: topics, clients, queues, Durable Topic Endpoints (DTEs), and subscriptions. Once the scan is complete, a file in JSON format is generated and available for download.

To configure and execute a runtime Discovery scan on a specific PubSub+ event broker service, do the following:

  1. Go to http://localhost:8120.
  2. Select Solace Broker.
    Screenshot showing the settings described in the surrounding text.
  3. Select Import to Event Portal.

  4. Complete the input fields as shown in the example below (a quick SEMP connectivity check is sketched after these steps).
    Screenshot showing the settings described in the surrounding text.
    • Discovery Name: The name that will be displayed in the list of discovered files in Event Portal.
    • Client Username: The client username assigned to the Message VPN. To find this information in PubSub+ Cloud, select Cluster Manager on the navigation bar, click the card for the service, select the Connect tab, expand Solace Messaging, and see the value for Username.
    • Client Password: Password for the given client username. To find this information in PubSub+ Cloud, select Cluster Manager on the navigation bar, click the card for the service, select the Connect tab, expand Solace Messaging, and see the value for Password.
    • Client Protocol: Use the TCP or TCPS protocol.
    • Client Host: IP address or hostname of the event broker to connect to. To find this information in PubSub+ Cloud, select Cluster Manager on the navigation bar, click the card for the service, and on the Status tab, see the value for Hostname.
    • Messaging Port: The port number the Discovery Agent uses when connecting to the event broker. To find this information in PubSub+ Cloud, select Cluster Manager on the navigation bar, click the card for the service, select the Connect tab, expand Solace Messaging, and see the port number used for Secured SMF Host.
    • Message VPN: Message VPN name. To find this information in PubSub+ Cloud, select Cluster Manager on the navigation bar, click the card for the service, and on the Status tab, see the value for Message VPN.
    • SEMP Username: SEMP username. To find this information in PubSub+ Cloud, select Cluster Manager on the navigation bar, click the card for the service, and on the Status tab, see the value for Management Username.
    • SEMP Password: SEMP password. To find this information in PubSub+ Cloud, go to Cluster Manager, click the card for the service, and on the Status tab, see the value for Management Password.
    • SEMP Host: IP address or hostname of the event broker to connect to.
    • SEMP Port: Port number to access the event broker. The following SEMP ports are supported:
      • Event broker service: 943
      • Software event broker: 8080 and 1943. For software event broker versions before the 9.4.0 release, use port 943.
      • Appliance: 80
    • SEMP URI Scheme: SEMP URI scheme. The two valid values are http and https.
    • Topic Subscriptions: Retrieves all topics from messages sent by the event broker during the scan interval and retrieves all available subscriptions. You can enter one or more topic subscriptions (for example, /topicA/topicB) or use a wildcard (for example, >) to scan for everything. The subscription entered in the field filters published events for the topics that match that subscription; it does not filter the retrieved subscriptions that are configured on the event broker. So limiting the scope of topics retrieved during the scan by entering fine-grained subscriptions in the field does not affect the number of subscriptions discovered.
    • Scan Duration: The duration in seconds for the Discovery Agent to receive event instances through the client interface to collect the associated data. The actual scan duration depends on the data discovered from the management (SEMP) interface.
  5. Click Start Scan.
  6. Once the scan is completed, you can upload the Discovery file directly to PubSub+ Cloud, or click Download Results to download it and upload it manually. See Uploading Discovery to PubSub+ Cloud.
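To confirm the SEMP host, port, credentials, and URI scheme before scanning, you can query the broker's SEMP v2 API directly; for example, against an event broker service (host and credentials are placeholders; /SEMP/v2/config/about/api is a standard SEMP v2 resource that reports the SEMP version):

    curl -u <semp-username>:<semp-password> https://<semp-host>:943/SEMP/v2/config/about/api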

PubSub+ Topic Metrics Discovery (Preview)

Topic scans analyze your runtime data. You can run a topic analysis scan on event broker services, software event brokers, and appliances. Once the scan is complete, a file in JSON format will be generated and available for download.

The following event broker versions are currently supported: 8.11.0.1033, 9.1.1.12, 9.3.1.17, 9.5.0.25, and 9.6.0.34.

To configure and run a topic scan on an event broker, do the following:

  1. Go to http://localhost:8120.
  2. Select Solace Broker to perform a runtime discovery.
    Screenshot showing the settings described in the surrounding text.
  3. Select Topic Metrics Discovery.

  4. Complete the input fields as shown in the example below.
    Screenshot showing the settings described in the surrounding text.
    • Client Username: The client username assigned to the Message VPN.
    • Client Password: Password for the given client username.
    • Message VPN: Message VPN name.
    • Secure SMF Host: Secure SMF hostname.
    • Topic Subscriptions: Retrieves all topics from messages sent by the event broker during the scan interval and retrieves all available subscriptions. You can enter one or more topic subscriptions (for example, /topicA/topicB) or use a wildcard (for example, >) to scan for everything. The subscription entered in the field filters published events for the topics that match that subscription; it does not filter the retrieved subscriptions that are configured on the event broker. So limiting the scope of topics retrieved during the scan by entering fine-grained subscriptions in the field does not affect the number of subscriptions discovered.
    • Scan Duration: The duration in seconds for the Discovery Agent to receive event instances through the client interface to collect the associated data.
  5. Click Start Scan, and then click Continue on the dialog that appears.
  6. Once the scan is completed, you can download the Discovery file as well as explore and visualize the topic hierarchy.

Uploading Discovery to PubSub+ Cloud

After the Discovery scan is completed, you can upload the Discovery file to PubSub+ Cloud directly through the Discovery Agent, or you can download the file and add it manually.

Upload Discovery to PubSub+ Cloud

To upload a Discovery directly from the agent's user interface, do the following:

  1. Once the scan is complete, click Upload to PubSub+ Cloud.
  2. Add your login credentials and click Login and Send. If you have single sign-on (SSO) enabled, you must provide an authorization token.
    Screenshot showing the settings described in the surrounding text.
  3. If your account has multiple organizations, you will need to select the organization where you want the Discovery uploaded.
    Screenshot showing the settings described in the surrounding text.
  4. Click Send Results.

The uploaded Discovery will be available in the Runtime Discoveries list in Event Portal.

Manually Add a Discovery File to Event Portal

To manually add a Discovery file to PubSub+ Cloud, perform the following steps:

  1. Log in to the PubSub+ Cloud Console if you have not done so yet. The URL to access the Cloud Console differs based on your authentication scheme. For more information, see Logging In to the PubSub+ Cloud Console.

  2. On the left navigation bar, select Discovery.
  3. On the top-right part of the page, click Add Discovery.
  4. Select the Discovery file and click Open. The new Discovery file is added to the list of discoveries.

After adding your Discovery, you can import it to Designer, as described in the following section.

Importing a Discovery to Designer

After you upload a Discovery file into PubSub+ Cloud, you can import its contents into Designer to model your event-driven architecture (EDA).

As part of importing your Discovery file to Designer, you must assign the Discovery to a Logical Event Mesh (LEM). A LEM represents the event mesh over which associated events flow within an EDA. If you don't have a LEM created, you are prompted to create one when you import your Discovery to Designer. For more information about LEMs, see Logical Event Mesh. After you create a LEM, you must associate the discovered data with applications and application domains.

To import the Discovery file to Designer, follow these steps:

  1. Log in to the PubSub+ Cloud Console if you have not done so yet. The URL to access the Cloud Console differs based on your authentication scheme. For more information, see Logging In to the PubSub+ Cloud Console.

  2. Select the Discovery card or icon on the navigation bar.
  3. On the Runtime Discoveries page, select the Discovery that you want to import to Designer from the list. A panel will expand on the right.
  4. Select Import to Designer.
  5. On the dialog that appears, you can import the Discovery to an existing LEM or create a new LEM.
    Screenshot showing the settings described in the surrounding text.

    To use an existing LEM:

    1. Select Choose Existing.
    2. Expand the drop-down menu and select a LEM.
    3. Click Continue Import.

    To create a new LEM:

    1. Select Create New.
    2. In the Name field, enter a name for the LEM; the name must be unique within the account.
    3. (Optional) In the Description field, add information to help you understand the background. The Description text box supports native Markdown. Click the preview toggle to see the raw Markdown text rendered as rich text.
    4. Under Topic Formatting, click the Level Delimiter drop-down and select a delimiter based on the broker type for your Discovery.
      • For Solace, it's a forward slash ("/"), which is automatically selected for you.
      • For Kafka, it can be either a dot ("."), dash ("-"), or underscore ("_").
  6. Click Continue to Import.
  7. After you select an existing LEM or create a new one, you can perform the tasks described in the following sections: creating event types, creating variables for topic levels, mapping Client Delivery Endpoints, and adding connectors and schemas.
  8. Once you've finished adding the events, Client Delivery Endpoints, and connectors, click Return to Designer to go to Designer's Topology view. There you can further modify your EDA and visualize what you've imported. For example, if you double-click the application domain you've imported to, you'll see the event information that you've associated with your applications. For more information about using Designer to further define your EDA, see Designer.

Additionally, you can re-run Discovery scans and merge any newly discovered data into the existing EDA. See Re-scanning and Merging Discovery Data to learn more.

Creating an Event Type

You can add events using the topics (or event instances) that you've imported from your Discovery scan. The scanned Discovery contains many topic addresses that are organized as a topic tree.

Topic addresses consist of one or more topic levels. Each topic level can be a literal value (an instance) of data found during a Discovery, but in a large system you may have multiple instances of the same data at a topic level. For that reason, it's more useful to use variables at a topic level to build a dynamic topic address for events. You can compress the topic-level event instances down into dynamic topic addresses or types where each topic level can be represented using one or more variables. For more information about creating variables at a topic level, see Creating Variables for a Topic Level.
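For example, a Discovery might return these hypothetical topic addresses, which differ only in the driver identifier at the fourth level:

    acme/taxi/driver/0001/update
    acme/taxi/driver/0002/update
    acme/taxi/driver/0003/update

Replacing that level with a variable compresses the three instances into a single dynamic topic address for one event type:

    acme/taxi/driver/{driverId}/update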

The purpose of traversing your topic tree is to identify events from the raw data so that you can represent and better understand them in your EDA, and to create types for reuse.

The following steps show how to traverse a topic tree and create an event:

  1. On the Add discovered items to Event Portal page, select the Topics tab and expand each topic to drill down into the hierarchy of the topic tree. At any topic level, you can optionally create a variable to make that level dynamic.

    Screenshot showing the settings described in the surrounding text.

  2. Click Add Events beside the topic for which you want to create an event.
  3. In the Create Event dialog, type a name for the event in the Name field.
  4. Choose an application domain for the event. You can do one of the following actions:
    • Select Existing Application Domain and then choose an existing domain from the drop-down list.
    • Select New Application Domain and type a name to create an application domain.
  5. Click Create Event.

On the right-most panel, you'll notice that a new event has been added to the given application domain and that the yellow status indicator disappears.

Screenshot showing the settings described in the surrounding text.

Creating Variables for a Topic Level

Topic addresses created in Event Portal contain individual topics from multiple event instances discovered from the event broker. These event instances are organized as a topic tree where you can see their hierarchical topic structure. Each topic level consists of either a literal value or a variable. A variable topic level is a placeholder that is substituted with a concrete value by publishing applications at runtime. A variable topic level can be constrained to an enumeration set that contains a list of pre-defined possible values, or it can be unbounded. Since Event Portal models event types as opposed to event instances, using variables to represent some of the topic address levels usually makes sense for modeling an event-driven architecture, and is therefore appropriate to do before importing the events to Designer.
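For example, in the hypothetical address acme/taxi/{vehicleType}/{driverId}/update, vehicleType might be constrained to an enumeration set such as {sedan, suv, van} because its values are few and known, while driverId is left unbounded because its possible values are effectively unlimited.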

To create a variable, follow these steps:

  1. After you select Import to Designer and create your Logical Event Mesh (LEM) if one doesn't already exist, you can view the discovered event instances on the Topics tab.
  2. In the topic tree, select the topic level for which to create a variable. As you expand the topic tree, you usually see a number of similar instances at the same topic level. For example, the sample below contains a number of driver-related updates. As there could potentially be thousands of these instances, it's best to create unbounded variables in such situations.

  3. Select the check boxes beside the names of the event instances at the topic level for which you want to create a variable.

    Screenshot showing the settings described in the surrounding text.

  4. (Optional) If you want to create a variable to represent all the event instances at that topic level, click Select All beside the entry you selected.

    Screenshot showing the settings described in the surrounding text.

  5. Click Create Variable.

  6. In the Create Variable dialog, enter a unique name for the variable in the Name field and click Create.
    Screenshot showing the settings described in the surrounding text.
  7. Alternatively, you can create an Enum set to constrain the variable topic level to a list of pre-defined values. To do so, complete these steps:
    1. On the Create Variable dialog, click Create beside the Enum field.
    2. In the Create Enum dialog, enter a name for the enumerator in the Name field.
    3. (Optional) Enter a description for the Enum in the Description field.
    4. Add or delete values, and optionally enter a string in the Display Name field for each enumerated value, which is shown instead of the value when you visualize the event.
    5. Click Create to return to the Create Variable dialog.

      Screenshot showing the settings described in the surrounding text.

  8. Click Create.

    Screenshot showing the settings described in the surrounding text.

In the example below, you can see how the event instances for that topic level have been compressed.

Screenshot showing the settings described in the surrounding text.

You can select more topic levels and create variables as required until you reach a leaf node, at which point you can create an event as described in Creating an Event Type.

Mapping Client Delivery Endpoints

The Client Delivery Endpoint (CDE) specifies the location on an event mesh that an application uses to consume events. CDEs can be added to an application according to these rules:

  • Applications can have multiple CDEs. An application can have multiple Direct Client Endpoints, but only one per client username.
  • A CDE can only be assigned to one application at any given time.

Each CDE that is found is marked with a yellow status indicator to the left of the CDE's name. The yellow status indicator shows that the CDE has not been associated with an application. When a CDE has been associated with an application, the yellow status indicator is removed.

You can add the following types of Client Delivery Endpoints (CDEs) if endpoints are found as part of your Discovery, which is indicated by a number in parentheses beside Client Delivery Endpoints.

The following CDEs are supported:

  • Direct Client Endpoint (PubSub+ event broker services only)
  • Durable Topic Endpoint (PubSub+ event broker services only)
  • Event Queue (PubSub+ event broker services only)
  • Consumer Group (Kafka only). For more information, refer to Mapping Consumer Groups to Applications.

If you don't have an application, you can also add one as part of these steps:

  1. On the Add discovered items to Event Portal page, click on the Queues & Direct Clients tab.
    Screenshot showing the settings described in the surrounding text.
  2. If you have created an application domain with an application within it, proceed to the next step; otherwise, complete the following steps as required.
    • If you don't have an application domain, click Add Domain to create one.
    • If you don't have an application, select the application domain that you want to use in the Import Tool panel, then click Add Application and complete the form.
  3. Select at least one CDE using the check box beside it. You can select multiple CDEs to add to an application at the same time.
  4. Click Add to Application.
  5. In the Place in Application dialog, select the application domain from the first list, select the application from the second list, and then click Add Application.

After a CDE has been added, the yellow status disappears, and the name of the application to which the endpoint was added appears in the Application column.

Screenshot showing the settings described in the surrounding text.

Any remaining Client Delivery Endpoints with a yellow status are not associated with an application. You can choose to use only the parts of the Discovery that align with your EDA.

When one or more CDEs are selected, you can drag the icon to the application listed in the Import Tool panel. This makes it easier to quickly associate a CDE with an application.

Mapping Consumer Groups to Applications

For Kafka discoveries, you must map the consumer groups to applications through consumer group type Client Delivery Endpoints (CDEs). CDEs are Event Portal data model extensions that model how subscribing applications attract the appropriate event types; CDEs model Kafka's consumer groups in Event Portal.

Each discovered consumer group is marked with a yellow status indicator to the left of the object's name. The yellow status indicator means that the consumer group is not mapped to an application's client delivery endpoint (CDE). Once the consumer groups are mapped to their respective applications, the yellow status indicator is removed.
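To cross-check the discovered consumer groups against the cluster, you can list them with the standard Kafka CLI (the bootstrap address is a placeholder; add --command-config with your client properties if the cluster requires authentication):

    kafka-consumer-groups.sh --bootstrap-server <broker>:9092 --list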


To map a consumer group to a CDE in an application, do the following:

  1. On the Add Architecture from Discovery File page, click the Consumer Groups tab.
  2. (Optional) Filter on the Name of the consumer group.
  3. (Optional) If you have created an application domain with an application within it, proceed to the next step; otherwise, complete the following steps as required.
    • If you don't have a domain, click Add Domain in the Import Tool panel.
      • In the dialog that appears, provide the Name, Topic Domain and Description for the application domain, and click Save.
    • If you don't have an Application, you can expand an Application Domain in the Import Tool panel and then click Add Application.
      • In the dialog that appears, type a name for the application in the Name field; optionally type a description in the Description field.
  4. Select at least one consumer group using the check box beside it. You can associate multiple consumer groups with an application at the same time.
  5. Click Add to Application.
  6. In the dialog that appears:
    1. Select Convert to individual applications or Merge into one application. If you have ten consumer groups and you want to create ten applications, select the first option. If you want all the consumer groups to be part of one application, select the second option. Sometimes many discovered consumer groups belong to the same application; in that case, you can use the merge option to combine them into one application. How you map depends on what is discovered and how you want to model the discovered data.
    2. Select the Application Domain.
    3. (Optional) Use Consumer Group Name as Application Name is selected by default to ensure the application names are the same as the consumer group names. To give an application a new name, deselect the checkbox and enter your preferred name.
    4. Click Import to Designer.

The consumer groups are imported as applications inside the associated application domain. After the consumer groups are added to the application domain, notice that the yellow status is removed.

Adding Connectors to an Application Domain

If you have a Kafka Discovery, you can map connectors and their associated topics to application domains so that you can view all the relevant EDA objects in Designer and Catalog.

A connector is used in Kafka to connect Kafka event brokers with external systems and stream data into or out of Apache Kafka. In Event Portal, a Kafka connector is an application class you select to configure associated published and/or subscribed events and a set of Kafka-native attributes. When importing a Kafka-based Discovery, the connectors must be imported as applications. When importing connectors, only those published topics that were previously converted to events are imported.
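For context, you can see which connectors a Kafka Connect worker is currently running, and therefore what a Discovery scan can find, by querying the worker's standard REST API (the host is a placeholder, and 8083 is the default Connect REST port; the output shown is illustrative):

    curl http://<connect-worker>:8083/connectors
    ["orders-source-connector","audit-sink-connector"]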

To add connectors to an application domain, follow these steps:

  1. On the Add discovered items to Event Portal page, click the Connectors tab. Notice the yellow bar beside the connectors, which indicates that the connectors are not associated with an application domain.
  2. Select at least one connector using the check box beside it. You can select multiple connectors to add to an application domain at the same time.
  3. Click Import as Application.
  4. On the Import Connectors as Application dialog, select an application domain and then click Import.
  5. On the Connector Topics dialog, review the connectors you are importing and click Import Connector.

After a connector has been added, the yellow status beside the connector disappears, and on the Import Tool panel, the connector will be visible under the associated application domain.

Adding Schemas to an Application Domain

For a Kafka Discovery, you need to add schemas to an application domain so that you can view each schema and its relationships in Designer and Catalog. Note that you cannot import primitive type schemas from Discovery, but you can create them in Designer for an event type.
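For context, a schema imported from a Kafka Discovery with a Confluent Schema Registry is typically an Avro (or JSON) schema; a minimal illustrative Avro record, with hypothetical names, looks like this:

    {
      "type": "record",
      "name": "DriverUpdate",
      "namespace": "com.example.taxi",
      "fields": [
        { "name": "driverId", "type": "string" },
        { "name": "latitude", "type": "double" },
        { "name": "longitude", "type": "double" }
      ]
    }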

To add schemas to an application domain, do the following:

  1. On the Add Architecture from Discovery File page, click the Schemas tab.
  2. (Optional) Filter on the name or type of schema using a combination of the first and second drop-down lists.
  3. (Optional) If you have an existing application domain, proceed to the next step; otherwise, complete the following steps as required.
    1. Click Add Domain in the Import Tool panel.
    2. In the dialog that appears, provide the Name, Topic Domain and Description for the application domain.
    3. Click Save. Your new application domain will be visible in the Import Tool panel.
  4. Select at least one schema using the check box beside it. You can select multiple schemas to add to an application domain.
  5. Click Add to Application Domain.

  6. In the dialog that appears do one of the following:

    • Select Existing Application Domain, choose a domain from the drop-down list, and click Add.
    • Select New Application Domain, give a Name to the domain, and click Add.

After a schema has been added, the yellow status beside the schema disappears, and on the Import Tool panel, the schema number will increment under the associated application domain.

Re-scanning and Merging Discovery Data

One of the key features of Discovery is the ability to re-run Discovery scans on different event brokers, assign the Discovery files to an existing LEM, and merge the discovered data. The newly discovered data is flagged and can be merged with the existing EDA in Designer. You can run scans on all the PubSub+ event brokers that are connected together in a Dynamic Message Routing (DMR) cluster to form an event mesh, and get a full view of the EDA components and the events flowing through that mesh. This enables you to keep your modeled event-driven architecture up to date by comparing existing data with new discoveries. You can also re-run a scan on one event broker at different times and associate the newly discovered data with a previously created LEM.

To re-run a Discovery scan on an event broker and merge the discovered data, do the following:

  1. Re-run the scan on the target event broker. For instructions, see Running a Discovery Scan.
  2. Upload the newly scanned data to Event Portal. For instructions, see Uploading Discovery to PubSub+ Cloud.
  3. Import the discovered data to Designer. All the new data is flagged and can then be merged with the existing data in the LEM. For instructions, see Importing a Discovery to Designer.

Archiving a Discovery File

You have the option to archive a Discovery file or delete the file permanently. You may want to use the archive option if you have already imported the file into Event Portal and want to reduce clutter.

To archive a Discovery file, follow the steps below:

  1. Click the icon on the Discovery file you want to archive.
    Screenshot showing the settings described in the surrounding text.
  2. Select Archive Discovery. The state of the file changes to archived.

After the file is archived, you have the following options:

Activate Discovery: If you activate the Discovery, the audit runs automatically.

Delete Discovery: The Discovery file is permanently deleted from the Runtime Discoveries list in Event Portal.