Discovery

Discovery is a tool that discovers the events, schemas, and application interactions running over your event brokers. You can use it to discover, import, and then visualize your event-driven architecture (EDA), including all associated applications, events, and schemas and their relationships, from event brokers such as PubSub+, Apache Kafka, Amazon MSK, or Confluent. Once imported, the data is available in Catalog and Designer so that you can visualize and understand your event-driven architecture.

The following diagram illustrates the steps in performing a Discovery:

 

On this page, we will dive into the detailed features and functionality of Discovery. If you are getting started, take a look at Discover and Import Kafka Runtime Data Flows or Discover and Import PubSub+ Data Flows (Beta). Additionally, guided Discovery codelabs are available on our Solace Developer Codelabs portal.

The current release of PubSub+ Discovery is in the Beta stage.

Prerequisites

  • Ensure that you have the correct user role and permission in Event Portal. At minimum, you need the Event Portal Manager permission. For more information, refer to Managing Users, Roles, and Permissions.
  • The following event broker versions are currently supported:
    • Apache Kafka versions 2.2, 2.3, 2.4, 2.5
    • Confluent versions 5.3, 5.4, 5.5
    • Amazon MSK version 2.2
    • PubSub+ event broker and event broker services versions 8.11.0.1033, 9.1.1.12, 9.3.1.17, 9.5.0.25, and 9.6.0.34.

Installing the Runtime Discovery Agent

You can install the Runtime Discovery Agent as a Docker container on Linux, Mac, and Windows operating systems. Alternatively, you can download an executable file and install it locally on Windows or Linux operating systems. After installing the Runtime Discovery Agent, you can use it offline; no internet connection is required.

The Discovery Agent requires JDK/JRE 11 to run; OpenJDK 11 is bundled with the agent installation package.

Install the Runtime Discovery Agent as a Docker Container

Follow the instructions below to create the Runtime Discovery Agent as a Docker container in your local environment.

Mac OS and Linux

  1. Copy and paste the following commands into a terminal window. The Discovery Agent may take a few minutes to initialize.
    echo 'cMopz4m+GV60hBb8DysZna8uMP4tM84P' | docker login --username discovery-preview --password cMopz4m+GV60hBb8DysZna8uMP4tM84P solaceclouddev.azurecr.io
        docker pull solaceclouddev.azurecr.io/maas-event-discovery-agent:latest
        docker run \
        --env SPRING_APPLICATION_NAME=maas-event-discovery-agent-offline \
        --env EVENT_DISCOVERY_OFFLINE=true \
        --env MAAS_VMR_ENABLED=false \
        --env MAAS_HEARTBEATS_ENABLED=false \
        --env MAAS_RESTPROXY_GATEWAY=false \
        --name discovery_agent -p 8120:8120 \
        -d solaceclouddev.azurecr.io/maas-event-discovery-agent:latest
  2. Go to http://localhost:8120 to start running a scan. Refer to Running a Discovery Scan for more information.
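The container can take a few minutes to become reachable. A minimal readiness-check sketch, assuming the agent serves its UI on the port mapped in the docker run command above (8120 by default):

```shell
# Poll the agent UI until it responds, or give up after a number of retries.
# The port and retry count are parameters; 8120 matches the -p mapping above.
wait_for_agent() {
  port="${1:-8120}"
  retries="${2:-30}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    if curl -fsS "http://localhost:${port}/" >/dev/null 2>&1; then
      echo "agent ready on port ${port}"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "agent not reachable on port ${port}" >&2
  return 1
}
```

For example, `wait_for_agent 8120 60` polls for up to a minute before giving up.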

Windows

  1. Copy and paste the following commands into a terminal window. The Discovery Agent may take a few minutes to initialize.
    echo 'cMopz4m+GV60hBb8DysZna8uMP4tM84P' | docker login --username discovery-preview --password cMopz4m+GV60hBb8DysZna8uMP4tM84P solaceclouddev.azurecr.io
        docker pull solaceclouddev.azurecr.io/maas-event-discovery-agent:latest
        docker run `
        --env SPRING_APPLICATION_NAME=maas-event-discovery-agent-offline `
        --env EVENT_DISCOVERY_OFFLINE=true `
        --env MAAS_VMR_ENABLED=false `
        --env MAAS_HEARTBEATS_ENABLED=false `
        --env MAAS_RESTPROXY_GATEWAY=false `
        --name discovery_agent -p 8120:8120 `
        -d solaceclouddev.azurecr.io/maas-event-discovery-agent:latest
  2. Go to http://localhost:8120 to start running a scan. Refer to Running a Discovery Scan for more information.

Install the Runtime Discovery Agent via a Binary Executable

You can install the Runtime Discovery Agent locally on Windows and Linux. For a local installation, download the Discovery runtime archive from the console, and execute bin/event-discovery-agent for Linux or bin/run.bat for Windows.

To download and install the Runtime Discovery Agent, do the following:

  1. Log in to your PubSub+ Cloud account and select Discovery.
  2. Click How do I run a Discovery scan?
  3. On the pop-up dialog, click download.
  4. When the download is complete, extract the archive and execute bin/event-discovery-agent for Linux or bin/run.bat script for Windows.
  5. Go to http://localhost:8120 to start running a scan. Refer to Running a Discovery Scan for more information.
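The OS-specific launcher in step 4 can be sketched as a small helper. This is illustrative only: launch_agent prints the launcher path for a given OS rather than executing it, and the extraction directory name is whatever your downloaded archive unpacks to.

```shell
# Print the launcher script to use for the agent, given the directory the
# archive was extracted into and (optionally) an OS name as reported by
# "uname -s". The path is printed rather than executed so you can inspect it.
launch_agent() {
  dir="$1"
  os="${2:-$(uname -s)}"
  case "$os" in
    Linux)
      echo "${dir}/bin/event-discovery-agent"
      ;;
    MINGW* | MSYS* | CYGWIN*)
      echo "${dir}/bin/run.bat"
      ;;
    *)
      echo "the binary install supports Windows and Linux only" >&2
      return 1
      ;;
  esac
}
```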

Running a Discovery Scan

Once the Runtime Discovery Agent is installed, you can configure and run a scan from your browser. You can run a Discovery scan on Kafka clusters or PubSub+ event brokers. For more information, see the following sections:

Kafka Runtime Discovery

To configure and run a Discovery scan on a Kafka Cluster, do the following:

  1. Go to http://localhost:8120.
  2. Select Apache Kafka, Confluent, or Amazon MSK to perform a runtime discovery. In this example, we have selected Confluent.
  3. Complete the input fields as shown in the example below:

    Discovery Name: The name that will be displayed on the list of discovered files in the Event Portal

    Host: Location where your event broker is hosted.

    Authentication: Authentication type. The following authentication methods are supported:

    • SASL Plain
    • SASL GSSAPI (Kerberos)
    • SASL SCRAM 256
    • SASL SCRAM 512
    • SSL

    Connector Authentication: Optional information, if you have a connector to authenticate.

    Topics Subscription: Options to scan specific topics or multiple topics.

    Schema Registry Authentication: For Confluent Run-time Discovery, you have the option to configure the Schema Registry Authentication.

  4. Click Start Scan. Once the scan is complete, you can upload the Discovery file to PubSub+ Cloud.
  5. To upload the file directly, click Upload to Solace Cloud. To manually upload the Discovery file, see Uploading Discovery to PubSub+ Cloud.
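The scan form fields above map onto a small set of connection values. A hypothetical example for a Confluent cluster with SASL Plain authentication; every value below is a placeholder, not a real host, topic, or credential:

```shell
# Placeholder scan settings for a Confluent cluster (illustrative only).
DISCOVERY_NAME="orders-cluster-scan"                # shown in the Event Portal file list
KAFKA_HOST="kafka.example.com:9092"                 # where the event broker is hosted
AUTH_METHOD="SASL Plain"                            # one of the methods listed above
SASL_USERNAME="scan-user"                           # placeholder credentials
SASL_PASSWORD="changeme"
TOPIC_SUBSCRIPTION="orders-topic"                   # scan a specific topic
SCHEMA_REGISTRY_URL="https://sr.example.com:8081"   # Confluent runtime Discovery only
```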

PubSub+ Runtime Discovery

You can execute a runtime Discovery scan on event broker services, software event brokers, and appliances. The following event-stream-related data is discovered: Topics, Clients, Queues, Durable Topic Endpoints (DTEs), and Subscriptions. Once the scan is complete, a JSON file is generated and available for download.

Runtime Discovery has been tested against event broker versions 8.11.0.1033, 9.1.1.12, 9.3.1.17, 9.5.0.25, and 9.6.0.34 so far, but is supported for all supported PubSub+ event broker versions.

To configure and execute a runtime Discovery scan on a PubSub+ event broker, do the following:

  1. Go to http://localhost:8120.
  2. Select Solace PubSub+ Runtime Discovery.
  3. Complete the input fields as shown in the example below.

    Discovery Name: The name that will be displayed on the list of discovered files in the Event Portal.

    Client Username: The client username assigned to the Message VPN. For example, to find this on PubSub+ Cloud, select Cluster Manager on the navigation bar, click the card for the service, select the Connect tab, expand Solace Messaging, and see the value for Username.

    Client Password: Password for the given client username. For example, to find this on PubSub+ Cloud, select Cluster Manager on the navigation bar, click the card for the service, on the Connect tab, expand Solace Messaging, and see the value for Password.

    SEMP username: SEMP username. For example, to find this on PubSub+ Cloud, select Cluster Manager on the navigation bar, click the card for the service, and on the Status tab, see the value for Management Username.

    SEMP password: SEMP password. For example, to find this on PubSub+ Cloud, go to Cluster Manager, click the card for the service, and on the Status tab, see the value for Management Password.

    Client Protocol: Use TCP or TCPS protocol.

    Client Host: IP address or hostname of the event broker to connect to. For example, to find this on PubSub+ Cloud, select Cluster Manager on the navigation bar, click the card for the service, and on the Status tab, see the value for Hostname.

    SEMP Host: IP address or hostname of the event broker to connect to.

    Messaging Port: The port number the Discovery agent will use when connecting to the event broker. For example, to find this on PubSub+ Cloud, select Cluster Manager on the navigation bar, click the card for the service, select the Connect tab, expand Solace Messaging, and see the port number used for Secured SMF Host.

    Message VPN: Message VPN name. For example, to find this on PubSub+ Cloud, select Cluster Manager on the navigation bar, click the card for the service, and on the Status tab, see the value for Message VPN.

    SEMP Port: Port number to access the event broker. The following SEMP ports are supported:

    • Event broker service: 943.
    • Software event broker: 8080 and 1943. For software event broker versions before the 9.4.0 release, use port 943.
    • Appliance: 80.

    SEMP Scheme: SEMP URI scheme. The two valid values are http and https.

    Topics Subscriptions: Topics and subscriptions that you want to scan and discover. You can specify one or more topic subscriptions (e.g., /topicA/topicB), or specify > to scan everything.

    Scan Duration: The duration in seconds for the discovery agent to receive event instances through the client interface to collect the associated data. The actual scan duration will depend on the data discovered from the management (SEMP) interface.

  4. Click Start Scan.
  5. Once the scan is complete, you can upload the Discovery file to PubSub+ Cloud or download it.

PubSub+ Topic Metrics Discovery (Preview)

You can run a Topic Metrics Discovery scan on event broker services, software event brokers, and appliances. Once the scan is complete, a JSON file is generated and available for download.

The following event broker versions are currently supported: 8.11.0.1033, 9.1.1.12, 9.3.1.17, 9.5.0.25, and 9.6.0.34.

At this Preview stage, the PubSub+ Topic Metrics Discovery displays the topic structure and associated metrics in graphical formats. This is to provide our users with a preview of future event metrics capabilities.

To configure and run a Topic Metrics Discovery scan on a PubSub+ event broker, do the following:

  1. Go to http://localhost:8120.
  2. Select Solace PubSub+ Topic Metrics Discovery to perform a runtime discovery.
  3. Complete the input fields as shown in the example below.

    Client Username: The client username assigned to the Message VPN.

    Client Password: Password for the given client username.

    Message VPN: Message VPN name.

    Secure SMF Host: Secure SMF hostname.

    Topics Subscriptions: Topics and subscriptions that you want to scan and discover.

    Scan Duration: The duration in seconds for the discovery agent to receive event instances through the client interface to collect the associated data.

  4. Click Start Scan, and then click Continue on the dialog that appears.
  5. Once the scan is completed, you can download the Discovery file as well as visualize the discovered topic hierarchy.

Uploading Discovery to PubSub+ Cloud

Once the Discovery scan is complete, you can upload the Discovery file to PubSub+ Cloud directly through the offline Discovery Agent, or you can download the file and add it manually.

Upload Discovery to PubSub+ Cloud

To upload a Discovery directly from the agent's user interface, do the following:

  1. Once the scan is complete, click Upload to PubSub+ Cloud.
  2. Log in to your PubSub+ Cloud account. If you have single sign-on (SSO) enabled, you must provide an authorization token.
  3. If your account has multiple organizations, you will need to select the organization where you want the Discovery uploaded.
  4. Click Send Results.

The uploaded Discovery file will be available in the list of Discoveries.

Manually Add a Discovery File to PubSub+ Cloud

To manually add a Discovery file to PubSub+ Cloud, perform the following steps:

  1. Log in to your PubSub+ Cloud account.
  2. On the left navigation bar, select Discovery.
  3. On the top-right part of the page, click Add Discovery.

  4. In the Open dialog, navigate to your Discovery file and then click Open.
  5. The new Discovery file is added to the list of discoveries. You can now import the file into Designer, or archive the Discovery if you change your mind about using it.

After adding your Discovery, you can perform the actions described in the following sections.

Importing a Discovery to Designer

After you upload the Discovery file to PubSub+ Cloud, you can import the Discovery contents into Designer to model your event-driven architecture (EDA).

If you haven't run a Discovery scan yet, or haven't uploaded the Discovery file that you downloaded, see Running a Discovery Scan and Uploading Discovery to PubSub+ Cloud, respectively.

As part of importing your Discovery to Designer, you must create a Logical Event Mesh (LEM). A LEM is a representation, or model view, of a broker-instantiated event mesh. If you don't have a LEM created, you are prompted to create one when you import your Discovery to Designer. For more information about LEMs, see Logical Event Mesh.

After you create a LEM, you can:

  • add events that you create from the topics found during the Discovery to a new or existing Application Domain
  • associate any discovered objects with an Application Domain

To import the Discovery to Designer, follow these steps:

  1. If you haven't done so yet, log in to your PubSub+ Cloud account.
  2. On the navigation bar, select Discovery.
  3. On the Runtime Discoveries page, click Actions beside the name of the Discovery that you want to import to Designer, and then select Import to Designer.
  4. If a LEM has not been created, the Add Architecture from Discovery File page appears, where you must create a Logical Event Mesh (LEM) using these steps:

    1. In the Name field, give the LEM a name; the name must be unique in the account.
    2. (Optional) In the Description field, add information to help you understand the background. The Description text box supports native Markdown. Click toggle preview to preview the Markdown text in HTML format.
    3. In the Level Delimiter drop-down, select the delimiter based on Broker Type for your Discovery.
      • For Solace, it's a forward ("/") which is automatically provided for you.
      • For Kafka, it can be either a dot ("."), dash ("-"), or underscore ("_"). You must select a delimiter.

  5. Click Add to Designer.
  6. From here, you can perform the tasks described in the following sections.
  7. After you've finished adding the events, Client Delivery Endpoints, and Connectors, click Return to Designer at the bottom right of the page. This returns you to the Designer: Topology tab, where you can further modify your EDA and visualize what you've imported. For example, if you double-click the Application Domain that you imported to, you'll see the event information that you've associated with your Applications. For more information about using Designer to further define your EDA, see Designer.

Creating an Event Type

You can add events using the topics (or event instances) that you've imported from your Discovery scan. From your scan, you have many topic addresses that are organized as a Topic Tree.

Topic addresses consist of zero or more topic levels. Each topic level can be a literal value (an instance) of data found during a Discovery, but in a large system you may have many instances of the same kind of data at a topic level. For that reason, it's more useful to use variables at a topic level to build a dynamic topic address for events. You can compress the topic-level event instances into dynamic topic addresses (or types), where each topic level can be represented using one or more variables. For more information about creating variables at a topic level, see Creating Variables for a Topic Level.
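The compression described above can be sketched as a tiny filter: given discovered topic instances on stdin, replace one level with a variable placeholder and de-duplicate. The topic names and the {flightId} variable here are made-up examples, not values from a real Discovery:

```shell
# Replace topic level $1 (0-based) with a {$2} variable placeholder in each
# topic read from stdin, then de-duplicate -- mimicking how the import tool
# compresses many event instances into one dynamic topic address.
compress_level() {
  awk -F'/' -v OFS='/' -v lvl="$1" -v var="$2" '
    { $(lvl + 1) = "{" var "}"; print }
  ' | sort -u
}
```

For example, piping flight/position/AC1 and flight/position/AC2 through `compress_level 2 flightId` yields the single dynamic address flight/position/{flightId}.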

The purpose of traversing your topic tree is to identify, from the raw data, the events to represent in your EDA, and to create event types for reuse across your EDA.

The following steps show how to traverse a topic tree and create an event:

  1. On the Add Architecture from Discovery File page, select the Topics tab, and expand each topic to drill down into the hierarchy of the topic tree. At any topic level, you can optionally create a variable to make that level dynamic.

  2. Click Add Events beside the topic for which you want to create an event.
  3. In the Create Event dialog, type a name for the event in Name field.
  4. Choose an Application Domain for the event. You can do one of the following actions:
    • Select Existing Application Domain and then choose an existing domain from the drop-down list.
    • Select New Application Domain and type a name to create an Application Domain.
  5. Click Create Event.

In the right-most panel, you'll see the event that you added to the Application Domain.

Creating Variables for a Topic Level

A Discovery file usually contains the individual topics from multiple event instances discovered from the event broker. These event instance topics are organized as a topic tree, where you can see their hierarchical topic structure. These topics have levels that consist of literal values, which are often substituted for topic-address-level variables by applications at runtime. Since Event Portal models event types rather than event instances, using variables to represent some of the topic address levels usually makes sense for modeling an event-driven architecture, and it is therefore appropriate to do this before importing the events to Designer. To do this, you can use the following variable types:

  • unbounded—Any value is valid at the topic level; the values are not restricted to an enumerated set. Ideally, create this type of variable when the values are numerous or not a finite, specific set.
  • bounded—Only the specific values specified in the enumeration are valid. As part of defining a bounded variable, you choose the valid values. Optionally, you can specify different display values for the enumerated values. Ideally, create this type of variable when the values are a finite, specific set.

To create a variable, follow these steps:

  1. After you select Import to Designer and create your Logical Event Mesh (LEM) if one hasn't been created, you can view the discovered event instances in the Topics tab.
  2. In the Topic Tree, select the Topic Levels for which to create a variable. As you expand the Topic Tree, you usually see a number of similar instances at the same Topic Level. For example, in a Flight Data Processing System (FDPS), you might have a number of positions, as shown here. Because there are thousands of them, it's ideal to create an unbounded variable.

  3. Select the check box at the Topic Level beside the name of the event instance for which you want to create a variable.

  4. (Optional) If you want to create a variable to represent all the event instances at that Topic Level, click Select All beside the entry you selected.

  5. Click Create Variable.

  6. In the Create Variable dialog, enter a unique name for the variable in the Name field.
  7. To create an unbounded variable, skip this step. To create a bounded variable, complete these steps:
    1. Click Add New.
    2. In the Create Enum dialog, enter a name for the enumerator in the Name field.
    3. (Optional) Enter a description for the Enum in the Description field.
    4. Add or delete values, and optionally enter a string in the Display Name field for each enumerated value; the display name is shown instead of the value when you visualize the event.
    5. Click Create to return to the Create Variable dialog.

  8. Click Create.

In this example, we've created a bounded variable. You can see how the event instances for that topic level under position have been compressed.

 

You can select more topic levels and create bounded or unbounded variables as required until you reach a leaf node, at which point you can create an event as described in Creating an Event Type.

Mapping Client Delivery Endpoints

The Client Delivery Endpoint (CDE) specifies the location on an event mesh that is used by an application to consume events. CDEs can be added to an Application using these rules:

  • Applications can have multiple CDEs. An application can have multiple Direct Client Endpoints, but only one per client username.
  • A CDE can only be assigned to one Application at any given time.

Each CDE that is found is marked with a yellow status indicator on the left side of the CDE's name. The yellow status indicator shows that the CDE has not been associated with an application. When a CDE has been associated with an application, the yellow status indicator is removed.

You can add the following types of Client Delivery Endpoints (CDEs) if endpoints were found as part of your Discovery, which is indicated by a number in parentheses beside Client Delivery Endpoints.

The following CDEs are supported:

  • Direct Client Endpoint (PubSub+ only)
  • Durable Topic Endpoint (PubSub+ only)
  • Event Queue (PubSub+ only)
  • Consumer Group (Kafka only). For more information refer to Mapping Consumer Groups to Applications.

If you don't have an Application, you can also add one as part of these steps:

  1. On the Add Architecture from Discovery File page, click the Client Delivery Endpoints tab.
  2. (Optional) Filter on the Name, type of Client Delivery Endpoint, or Client Username using a combination of the first and second drop-down lists.
  3. If you have created an Application Domain with an Application within it, proceed to the next step; otherwise, complete the following steps as required.
    • If you don't have a Domain, click Create Domain to create the Application Domain. For more information, see Design and Model Your Event-Driven Architecture.
    • If you don't have an Application, you can expand the Application Domain that you want in the Import Tool pane and then click Add Application.
      • In the Create Application dialog, type a name for the application in the Name field; optionally, type a description in the Description field. For more information, see Create an Application.
  4. Select at least one CDE using the check box beside it. You can select multiple CDEs to add to an Application at the same time.
  5. Click Add to Application.
  6. In the Place in Application dialog, select the Application Domain from the first list, select the Application from the second list, and then click Add Application.

After a CDE has been added, the yellow status beside the Client Delivery Endpoint disappears, and the name of the Application that the endpoint was added to appears in the Application column.

Any remaining Client Delivery Endpoints have a yellow status to indicate that action needs to be taken based on the Discovery. However, you can choose to use only the parts of the Discovery that align with your EDA.

When one or more CDEs are selected, you can drag them to an application listed in the Import Tool panel. This makes it easier for you to quickly associate a CDE with an Application.

Mapping Consumer Groups to Applications

For Kafka discoveries, you must map the consumer groups to applications through consumer-group-type Client Delivery Endpoints (CDEs). CDEs are Event Portal data model extensions that model how subscribing applications attract the appropriate event types; in Event Portal, CDEs model Kafka's consumer groups.

Each discovered consumer group is marked with a yellow status indicator on the left side of the object's name. The yellow status indicator means that the consumer group is not mapped to an application's client delivery endpoint (CDE). Once the consumer groups are mapped to their respective applications, the yellow status indicator is removed.


To map a consumer group to a CDE in an application, do the following:

  1. On the Add Architecture from Discovery File page, click the Consumer Groups tab.
  2. (Optional) Filter on the Name of the consumer group.
  3. (Optional) If you have created an Application Domain with an application within it, proceed to the next step; otherwise, complete the following steps as required.
    • If you don't have a domain, click Add Domain in the Import Tool panel.
      • In the dialog that appears, provide the Name, Topic Domain and Description for the application domain, and click Save.
    • If you don't have an Application, you can expand an Application Domain in the Import Tool panel and then click Add Application.
      • In the dialog that appears, type a name for the application in the Name field; optionally type a description in the Description field.
  4. Select at least one consumer group using the check box beside it. You can associate multiple consumer groups with an Application at the same time.
  5. Click Add to Application.
  6. In the dialog that appears:
    1. Select Convert to individual applications or Merge into one application. If you have ten consumer groups and you want to create ten applications, select the first option. If you want all the consumer groups to be part of one application, select the second option. Sometimes many consumer groups are discovered that belong to the same application; in that case, you can use the option to merge those consumer groups into one application. However, this will depend on what is discovered and how you want to map the discovered data.
    2. Select the Application Domain.
    3. (Optional) Use Consumer Group Name as Application Name is selected by default to ensure the application names will be the same as the consumer group names. To give a new name to the application, clear the checkbox and enter your preferred name.
    4. Click Import to Designer.

The consumer groups will be imported as applications inside the associated application domain. After the consumer groups are added to the application domain, notice that the yellow status is removed.

Adding Connectors to an Application Domain

If you have a Kafka Discovery, you can map connectors and their associated topics to application domains, so that you can view all the relevant EDA objects in the Designer and Catalog model.

A connector is used in Kafka for connecting Kafka event brokers with external systems to stream data into or out of Apache Kafka. In the Event Portal, a Kafka Connector is an application class you select to configure associated published and/or subscribed events and a set of Kafka-native attributes. When importing a Kafka-based Discovery, the connectors must be imported as applications. When importing connectors, only those published topics that were previously converted to events will be imported.

To add connectors to an application domain, follow these steps:

  1. On the Add Architecture from Discovery File page, click the Connectors tab. Notice the yellow bar beside the connectors, which indicates that the connectors are not associated with an application domain.
  2. (Optional) Filter on the Name or type of Connector using a combination of the first and second drop-down lists.
  3. Select at least one connector using the check box beside it. You can select multiple connectors to add to an Application Domain at the same time.
  4. Click Import as Application.
  5. In the dialog that appears, select an application domain and then click Import.

After a connector has been added, the yellow status beside the connector disappears, and on the Import Tool panel, the connector will be visible under the associated Application Domain.

Adding Schemas to an Application Domain

For a Kafka Discovery, you need to add schemas to an Application Domain so that you can view the schemas and their relationships in Designer and Catalog. Note that you cannot import primitive-type schemas from Discovery, but you can create them in Designer for an event type.

To add schemas to an application domain, do the following:

  1. On the Add Architecture from Discovery File page, click the Schemas tab.
  2. (Optional) Filter on the Name or type of Schema using a combination of the first and second drop-down lists.
  3. (Optional) If you have an existing Application Domain, proceed to the next step; otherwise, complete the following steps as required.
    1. Click Add Domain in the Import Tool panel.
    2. In the dialog that appears, provide the Name, Topic Domain and Description for the application domain.
    3. Click Save. Your new application domain will be visible in the Import Tool panel.
  4. Select at least one schema using the check box beside it. You can select multiple schemas to add to an application domain.
  5. Click Add to Application Domain.

  6. In the dialog that appears, do one of the following:

    • Select Existing Application Domain, choose a domain from the drop-down list, and click Add.
    • Select New Application Domain, give a Name to the domain, and click Add.

After a schema has been added, the yellow status beside the schema disappears, and on the Import Tool panel, the schema number will increment under the associated Application Domain.

Archiving a Discovery File

You have the option to archive a Discovery file or even delete the file permanently. You may want to use the archive option to reduce clutter if you have already imported the file into Event Portal.

To archive a Discovery file, follow the steps below:

  1. Click Actions on the Discovery file that you want to archive.
  2. Select Archive Discovery. The state of the file will change to archived.

After the file is archived, you have the following options:

Activate Discovery: If you activate the Discovery, the audit will run automatically.

Delete Discovery: In this case, the Discovery file will be permanently deleted from Event Portal Runtime Discoveries list.

Additional Resources