PubSub+ Event Portal Overview

The PubSub+ Event Portal (Event Portal) is an event management tool—presented through an accessible Web user interface (UI)—that enables you to discover, design, visualize, share, and manage various aspects of your event-driven architecture (EDA). On this page, we will discuss the fundamental elements of the Event Portal and provide an overview of the tools. We will also discuss some of the underlying features such as runtime discovery of EDAs, support for Kafka-native objects, event sharing, version control, REST API, AsyncAPI, and other essential features.

To get started, let's first understand the Foundational Elements of the Event Portal.

Foundational Elements of the Event Portal

When designing an event-driven architecture (EDA), it is essential to model the enterprise or its interworking systems as a whole, which means dividing them into smaller, more manageable pieces. The Event Portal calls each of these pieces an Application Domain; within each Application Domain, you can create a set of event-driven entities or objects (schemas, events, and applications) that represent the runtime interactions.

The following are the four foundational elements of the Event Portal:

Schemas, Events, and Applications are often referred to as objects in the Event Portal.

Application Domain

An application domain represents a namespace where applications, events, and schemas can live. Within this namespace, you can create a suite of applications, events and schemas that are independent of other application domains. This provides a way to create independent event-driven architectures for different products. You can also share events and schemas across application domains by creating the event or schema with the shared option selected.

Furthermore, application domains define a topic domain. The topics of all events in an application domain should begin with that application domain's topic domain. For example, if the topic domain is solace/weather, then an event in that application domain might use the topic solace/weather/blizzard/started.
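
As a minimal illustration of this convention (the helper function below is our own, not part of Event Portal), a topic's membership in a topic domain reduces to a prefix check:

```python
def in_topic_domain(topic: str, topic_domain: str) -> bool:
    """Return True if the topic begins with the topic domain prefix."""
    return topic == topic_domain or topic.startswith(topic_domain + "/")

# The example from above: the event's topic starts with the domain prefix.
assert in_topic_domain("solace/weather/blizzard/started", "solace/weather")
assert not in_topic_domain("solace/traffic/jam/started", "solace/weather")
```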

Schema

In simple terms, a schema represents the contract that describes the payload of an event. Producers and consumers of an event can trust that the event's payload matches the schema definition assigned to that event. Schemas define a type of payload through JSON, AVRO, XML, Binary, or Text. JSON, AVRO, and XML schemas have content that describes each property of the payload; that content is expressed in JSON Schema, AVRO Schema, or XSD/DTD format, respectively.
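
For instance, here is a minimal sketch of a JSON Schema acting as an event payload contract; the schema content and field names are illustrative, and the jsonschema package is our assumption, not something Event Portal requires:

```python
from jsonschema import validate  # pip install jsonschema

# Illustrative contract for a hypothetical event payload.
blizzard_schema = {
    "type": "object",
    "properties": {
        "region": {"type": "string"},
        "severity": {"type": "integer"},
    },
    "required": ["region", "severity"],
}

# A consumer can trust any payload that validates against the schema.
validate(instance={"region": "Ottawa", "severity": 4}, schema=blizzard_schema)
```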

Event

The event represents a business moment or an action that can be communicated to zero or more interested applications. The event is where you define metadata that describes and categorizes it. An event is produced on a specific topic that must be defined for that event. From a modelling perspective, events reference payload schemas and are referenced by applications, thus forming the glue that binds applications together.

It is important to note that an event (or event type), as referred to in the Event Portal, is different from an event instance. For more information on this topic, refer to Event Type vs Event Instance.

Application

An application represents a piece of software that produces and consumes events. Applications connect to the event broker in an event-driven architecture and communicate with other applications via events. A single application represents a class of applications that run the same code base; therefore, there is no need to create a separate application for each instance in a cluster.

When creating an application, you can set an attribute called Application Class to specify a Kafka Connector, Kafka Application, or leave it as Unspecified. If you specify Kafka Connector or Kafka Application, you can add additional information.

Kafka Connector

In Kafka, a connector is used to connect Kafka brokers with external systems in order to stream data into or out of Apache Kafka. In the Event Portal, Kafka Connector is an application class you can select to configure the associated published and/or subscribed events, along with a set of Kafka-native attributes described below.

  • Connector Type: Specifies whether the connector is a source or a sink connector.
  • Connector Class: The main Java class that serves as the entry point to launch the connector. From the connector class's fully qualified name, you can identify the company or author of the connector and look up its implementation details in the source code.
  • Cluster ID: The ID of the cluster the connector runs on. In most cases, this is a dedicated cluster provided specifically for Connect capabilities.
  • Maximum Tasks: Configuration property that sets the maximum number of concurrent tasks launched to complete the ETL job on time.
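
These attributes map onto familiar Kafka Connect configuration. As a hedged sketch (the connector name, class, topic, and Connect URL are all placeholders, not values from this documentation), a source connector definition submitted to the Kafka Connect REST API might look like this:

```python
import requests  # pip install requests

# Hypothetical source connector definition; all values are placeholders.
connector = {
    "name": "weather-source",
    "config": {
        # Connector Class: the entry-point Java class; its full name reveals the author.
        "connector.class": "com.example.weather.WeatherSourceConnector",
        # Maximum Tasks: upper bound on concurrent tasks launched for the job.
        "tasks.max": "3",
        "topic": "blizzard-started",
    },
}

# Kafka Connect's REST interface conventionally listens on port 8083.
requests.post("http://localhost:8083/connectors", json=connector).raise_for_status()
```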

To learn more about Kafka connectors, refer to the discussion of connectors in Kafka's documentation.

Kafka Application

You can select Kafka Application as the application class to add consumer groups with which you can associate event subscriptions. Refer to Consumer Groups to learn how they are supported in the Event Portal.

Event Portal Tools

Use these tools to create, design, manage, discover, visualize, and share all events within your ecosystem.

Designer: Design and view all aspects of your event-driven architecture.

Catalog: Browse for events, schemas, and applications defined in your environment using a searchable, sortable, and filterable interface.

Discovery: Discover and visualize the events in your event-driven architecture from your event brokers. Initial support covers the discovery and import of events, schemas, and application interactions from Apache Kafka, Confluent, and Amazon MSK. Support for additional broker types will follow.

Designer

Designer is a tool to design and view all aspects of your event-driven architectures. It is through Designer that you can create new events and associate the payload schema to these events. It provides a graphical representation of application domains, applications, schemas, and events. Use Designer to visualize the interaction of events between applications and to provision your architecture into the Event Portal.

Catalog

Catalog acts as a storefront for all the applications, events, and schemas you've created in the Event Portal. Objects created in Designer are automatically available in Catalog. Using Catalog's searchable interface, you can access all the existing events, schemas, and applications.

Discovery

Discovery is a tool to discover the events, schemas, and application interactions running over your event brokers. You can use it to discover, import, and then visualize your event-driven architecture (EDA), including all associated applications, events, and schemas and their relationships, from event messaging clusters such as Apache Kafka, Confluent, or Amazon MSK. You can run a Runtime Discovery multiple times; on each run, it discovers the data, compares it to what is in Designer, lets you enrich it, and then imports it into the Event Portal's data model. Once imported, the data is available in Catalog and Designer so that you can visualize and understand your event-driven architecture.

The Discovery functionality includes a component called the Runtime Discovery Agent, which gathers runtime data from the event brokers.


In the subsequent topics, we will discuss some of the additional concepts and features of the Event Portal that are important to understand.

Event Type vs Event Instance

An event type (or event, as it is referred to in the Event Portal) represents a class of events produced in an event-driven architecture. An event type is made up of a topic and a schema that defines the allowed payload for the event.

An event instance is a specific instance of an event that is produced. An event instance has an event type. It conforms to the schema of the event type and is produced on a topic defined for the event type. Over the lifecycle of an application, many event instances are produced and consumed.

Shared Events and Schemas

By default, you can only reference events and schemas within their own application domain. For another application domain to reference your events and schemas, the event or schema must be marked as shared. For example, imagine a situation where two application domains are modelled in Event Portal: Weather Events and Traffic Events. Say an event called Blizzard is created in the Weather Events application domain. Since the Traffic Events application domain should be able to consume the Blizzard event, the creator of Blizzard marks the event as shared. Now, when an application is created in the Traffic Events application domain, it can choose to publish or subscribe to the Blizzard event. If the event were not marked as shared, it could not be referenced by applications in the Traffic Events application domain.

Runtime Discovery Agent

The Runtime Discovery Agent is used to create a Discovery file; to do so, it connects to the target event broker and gathers configuration and runtime data. The following event brokers are currently supported: Apache Kafka, Confluent, and Amazon MSK.

You can install the agent locally and use it to create a Discovery file of your event-driven architecture (EDA). The gathered data is written to a JSON file, which you can upload into the Event Portal. The file contains entities (including consumer groups, connectors, topics, and schemas) that are used to stage the Discovery before it is imported into Designer and Catalog. The following data/metadata is returned in the Discovery file:

  • The data that was input to run the discovery (for connecting to brokers / connectors / schema registry). All passwords in this part of the data are redacted.
  • A list of topic names.
  • A list of consumer groups with a flag indicating if they are simple consumers.
  • A list of connectors with the following configuration data:
    • connector class
    • connector type
    • maximum task allocation
  • A list of schemas. In the case of a string schema (non-JSON format), the contents of the message are included.
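
To make the shape of that result concrete, here is a purely hypothetical sketch of such a Discovery file; the actual field names and structure are defined by the agent and may differ:

```python
import json

# Hypothetical structure for illustration only; real Discovery files are
# produced by the Runtime Discovery Agent.
discovery = {
    "broker": {"bootstrapServers": "kafka.example.com:9092", "password": "*****"},
    "topics": ["blizzard-started"],
    "consumerGroups": [{"id": "traffic-alerts", "isSimple": False}],
    "connectors": [{
        "connectorClass": "com.example.weather.WeatherSourceConnector",
        "connectorType": "source",
        "maxTasks": 3,
    }],
    "schemas": [{"topic": "blizzard-started", "format": "JSON"}],
}
print(json.dumps(discovery, indent=2))
```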

Refer to Installing the Offline Discovery Agent to learn more.

Discovery Runtime Audit

When the Discovery file is successfully staged for import into the Event Portal, an audit is performed. The audit returns the following:

Match: A match represents objects and relationships that are found to be an exact match to the corresponding items in Designer. A match percentage is provided that signifies how closely the Discovery file matches the associated Application Domains that exist in Designer.

Discrepancy: A discrepancy represents objects and relationships identified in the Discovery that do not have an exact match in Designer. The audit algorithm uses the attributes being added to Designer to determine whether existing Designer resources match the discovered objects. Discrepancies are highlighted with a yellow dot.

Staging View

The Staging View is where you can resolve discrepancies, assign objects to different Application Domains, and map undiscoverable inter-object relationships before importing them into Designer and Catalog. When you stage the Discovery file for import, an audit runs, listing the discrepancies and providing a list of possible resolutions for each one. Objects are marked with a colored discrepancy indicator: yellow indicates an item that needs action before it can be moved into the Event Portal; blue indicates an item that was discovered in the scan but is not currently present in the Event Portal. Once the discrepancies are resolved, you can commit the changes to Designer.

For more information, refer to Staging and Committing Discovery Scans into Event Portal.

Workspace

In Discovery, Application Domains are grouped by a property called Workspace. After adding a Discovery file, you must assign it a Workspace before it can be imported into the Event Portal's data model. When an audit is performed, the Workspace must be specified, and only application domains within that Workspace are considered for comparison with the Discovery file.

Take a look at Discovering Events and Event-driven Architecture to see how you can use a Workspace to compare your discovered event-driven architecture (EDA) to a specific Application Domain or set of Application Domains in a Workspace.

Consumer Groups

Event Portal supports the concept of Kafka consumer groups. Kafka uses a consumer group to group consumers into a logical subscriber for a topic. In Event Portal, you can model consumer groups in Designer, which enables the Event Portal's runtime discovery to associate a discovered consumer group with an existing application.

Kafka consumers that belong to the same consumer group share a group ID. The consumers in a group divide the topic's partitions among themselves, as fairly as possible, so that each partition is consumed by exactly one consumer in the group.
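
For example, here is a minimal consumer-group sketch using the kafka-python client; the topic name, group ID, and broker address are placeholders:

```python
from kafka import KafkaConsumer  # pip install kafka-python

# Consumers that share this group_id form one logical subscriber:
# Kafka divides the topic's partitions among them.
consumer = KafkaConsumer(
    "blizzard-started",                         # placeholder topic
    group_id="traffic-alerts",                  # shared group ID
    bootstrap_servers="kafka.example.com:9092", # placeholder broker
)
for record in consumer:
    print(record.partition, record.value)
```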

To learn how you can set up consumer groups in the Event Portal, see Configuring a Consumer Group.

Visit Kafka's documentation to learn more about consumer groups.

Topic Scheme

When creating an event, you can specify the event's Topic Scheme as Kafka, Solace (AMQP, REST, SMF), or Other. Setting the Topic Scheme to a value other than Other validates the event's topic against the format allowed by that Topic Scheme. Once the Topic Scheme is set to Kafka or Solace, you can add additional payload information: a value and, for Kafka only, a key, each of which can use either a Schema or a Primitive Type. In Event Portal, you can create and use both Key and Value payloads to accurately model Kafka topics.
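
To see what modelling both parts corresponds to at runtime, here is a sketch of a Kafka producer sending a keyed record (the topic, key, value, and broker address are illustrative), where the key is a primitive string and the value is JSON:

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# A Kafka record carries both a key and a value; Event Portal lets you model
# a schema or primitive type for each part. All names here are placeholders.
producer = KafkaProducer(
    bootstrap_servers="kafka.example.com:9092",
    key_serializer=str.encode,                          # primitive string key
    value_serializer=lambda v: json.dumps(v).encode(),  # JSON value
)
producer.send("blizzard-started", key="Ottawa", value={"severity": 4})
producer.flush()
```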

To learn how to set the Topic Scheme, refer to Creating an Event.

Primitive Types

Some message payloads consist of primitive types, such as strings and numbers, rather than more complex structures (JSON or XML). Event Portal supports primitive event types so that you can model events (for example, Kafka topics) whose payloads are simple primitives. Primitive payload types are automatically detected during Discovery and can be committed or imported into Designer; likewise, you can create and edit an event to use a primitive type in Designer. You can also generate an AsyncAPI specification for an application associated with events that have primitive payloads.

The following primitive types are currently supported in Event Portal:

  • Avro: Null, Boolean, Int, Long, Float, Double, Bytes, String
  • JSON: Null, Boolean, Number, String
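
As a small sketch of what a primitive payload contract means in practice (again assuming the jsonschema package; the payloads are illustrative), a schema can require that the entire message body be a bare JSON Number:

```python
from jsonschema import validate, ValidationError  # pip install jsonschema

# A primitive payload contract: the whole message body is one JSON Number.
primitive_schema = {"type": "number"}

validate(instance=21.5, schema=primitive_schema)  # a valid primitive payload
try:
    validate(instance={"temp": 21.5}, schema=primitive_schema)
except ValidationError:
    print("structured payload rejected: the contract allows only a bare number")
```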

To learn how to use a Primitive Type when designing an event, see Creating an Event.

Multiple Record Types

It is common for Kafka-based applications to publish events that do not share the same schema to the same topic on the event broker. In Event Portal, you can discover, re-discover, and stage topics with more than one record type. In other words, Discovery detects topics that carry multiple records with different payload schemas, whether JSON, Avro, XML, Text, or Binary. Discovered events on the same topic are uniquely named to differentiate each event before you commit them into Designer as events. Once committed, the topic-to-schema association occurs in the backend and can then be visualized in Designer.

To learn how to discover and commit your EDA into Event Portal, refer to Discovering Events and Event-driven Architecture.

Tags

Applications, events, and schemas can all have tags associated with them. Searching for a tag name in Catalog finds all the objects associated with that tag.

Tags can be a great way to share information with other members of your organization by grouping sets of objects together. Here are some examples of how you can use tags:

  • Create a tag called Java and set it against all of your Java-based applications
  • Create a tag called User and tag all of your schemas related to users
  • Create a tag called Sign in Flow and tag the series of events and applications that are involved in the sign in flow for your application

In a future release of the Event Portal, Designer will be able to filter applications and events based on the tags applied to them. This will give you another layer of customization when visualizing your event-driven architectures.

Revision History

Revision history tracks the changes to objects in Event Portal. Any time an application, event, or schema is updated, a new revision is created. For schemas, you can perform additional tasks, such as creating multiple versions of a particular schema and advanced version control management. Refer to Schema Versions and Version Control for more information.

You can restore an old revision of an object at any point. Restoring an object to a previous revision creates a new revision of that object. For example, an object whose current revision is Rev 2, when restored to Rev 1, ends up with three revisions; the new Rev 3 is identical to Rev 1, since it was restored from it.

There are a few exceptions to consider when using the revision history of an object:

  • Owner and tag information of an object is not included in the revision history. Changing the owners or tags of an object will not create a new revision of the object.
  • Restoring a revision of an object whose associations have changed will not change the associated objects. For example, imagine an event and a schema that have both undergone ten revisions. At the time of the event's second revision, the schema was on its first revision. Reverting the event to its second revision will not return the schema to its first revision; the schema remains at its tenth revision.
  • Application domains do not track revision history. If you need to revert an application domain and all of its contained objects to a point in time, use the import/export functionality instead.

For tutorials, refer to Managing Object Revisions.

Schema Versions and Version Control

You can create schemas with or without versions. When using schema versions, you can create and store multiple versions of the same schema, which is useful for EDAs that require two or more versions of a particular schema. Revision history is supported with schema version control, including saving revisions of each schema, viewing changes between revisions, and reverting to an older revision.

For additional information and tutorials, see Creating a Schema and Designing and Managing Events.

Archiving Objects and Revisions

When you delete an object, it is archived for ninety days, along with its associated revisions. You can restore the deleted object and its associated revisions at any time within those ninety days.

To learn more, refer to Deleting an Object.

AsyncAPI

AsyncAPI is an open-source initiative that seeks to improve the current state of Event-Driven Architectures (EDA). Using AsyncAPI allows development teams to create applications that communicate asynchronously through events more easily. The core of AsyncAPI is a specification that describes how to create a document that explains the inner workings of an event-driven API.

You can use an AsyncAPI specification document for many functions, such as:

  • Generating documentation
  • Generating code
  • Validating events received by your application
  • Applying API management policies

Learn more about AsyncAPI on their website at https://www.asyncapi.com.

AsyncAPI and the Event Portal

Event Portal natively supports the AsyncAPI 2.0.0 specification, and applications can be exported as AsyncAPI documents. You can export applications in JSON and YAML, the two formats AsyncAPI supports.
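
For illustration, here is a minimal AsyncAPI 2.0.0 document of the general kind such an export produces, embedded as YAML in a Python string; the application, channel, and message names are hypothetical, not the exporter's exact output:

```python
import yaml  # pip install pyyaml

# Hypothetical AsyncAPI 2.0.0 document; all values are illustrative.
spec_yaml = """
asyncapi: '2.0.0'
info:
  title: Traffic Alerts Application
  version: 1.0.0
channels:
  solace/weather/blizzard/started:
    subscribe:
      message:
        name: Blizzard
        payload:
          type: object
          properties:
            region:
              type: string
"""
spec = yaml.safe_load(spec_yaml)
assert spec["asyncapi"] == "2.0.0"
```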

To learn how to generate an AsyncAPI document for an application, refer to Generating an AsyncAPI.

REST API

Event Portal provides a RESTful API that you can use to manage your data in PubSub+ Cloud. Use the REST API to integrate other applications, systems, or client applications with Event Portal, and to model or retrieve your event-driven architectures from your own client applications.
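
As a hedged sketch (the base URL, path, and token below are placeholders; consult the Event Portal REST API documentation for the actual endpoints and authentication scheme), retrieving data from your own client might look like this:

```python
import requests  # pip install requests

API_TOKEN = "..."                      # placeholder; issued in PubSub+ Cloud
BASE_URL = "https://api.solace.cloud"  # placeholder base URL

# Hypothetical call: list application domains via the REST API.
response = requests.get(
    f"{BASE_URL}/api/v2/architecture/applicationDomains",  # illustrative path
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
response.raise_for_status()
for domain in response.json().get("data", []):
    print(domain.get("name"))
```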

For more information, refer to the Event Portal REST API documentation.

Associating Objects to Build Relationships

A key benefit of using Event Portal is its ability to track the relationships that exist in a highly decoupled event-driven architecture (EDA). It enables the reuse of schemas and events, and it graphically presents the relationships that exist between applications and events. For example, a schema may be used by many different events as the underlying business object changes through a transaction; an event may be produced by multiple producers; an application may consume and/or produce many events; and events may be shared across application domains. Thus, the Event Portal helps you manage and understand your EDA regardless of which event broker is used at runtime.

Using Event Portal in Multiple Operational Environments

As an Event Portal user, you can model your event-driven architecture (EDA) across different operational environments. For instance, many organizations run their applications in at least one of these environments: development, staging, and production. Some enterprises also operate additional environments, such as developer, quality assurance, user acceptance testing (UAT), or multiple production environments. The goal, therefore, is to keep these operational environments separate in terms of the software running in them and the user access permissions.

Event Portal allows for environment separation in the same way that event broker services are separated within enterprises. This means you can create a different cloud console or Event Portal account for each environment in your enterprise. You can then create and grant different user access permissions for each of these environments, and use them to model your EDAs as they progress from one environment to the next.

EDAs, including all associated applications, events, and payload schemas, can be promoted from one environment to the next by exporting entire Application Domains from the previous environment and importing them into the next, using the Import/Export Application Domain functionality in Event Portal.

Next steps