Understand, Manage, and Enhance Your Event-Driven Architecture Lifecycle

At a high level, managing the lifecycle of events can be categorized into the following sequence of phases and activities: Discover, Design, Develop and Operate. You can use Event Portal at any stage of your event-driven architecture (EDA) lifecycle, whether that's discovering the EDA assets, designing the architecture, documenting the EDA, or enhancing your existing EDAs.

On this page, we discuss various scenarios to help you understand when and how you can use Event Portal to address your EDA-related concerns. The information on this page is laid out based on a typical lifecycle management flow so that you can jump into a topic relevant to your EDA needs.

Discover Existing Event-Driven Architecture Assets

Most organizations are already leveraging event-driven architecture (EDA) and using one or more event brokers. Event Portal currently supports the ability to scan, catalog and reverse engineer the following event brokers:

If you have an event broker type or configuration that's not currently supported, you need to add the schemas, events, and applications to Event Portal manually using your existing documentation. While this may seem like a lot of work, it may be possible to capture this metadata and use Event Portal's APIs to automate the data ingestion. The benefits from a dependency management perspective are enormous as your EDA evolves, and doing so enables you to manage and expose the event-driven capabilities and relationships you have already implemented.
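As a sketch of what such automation could look like, the following assumes a generic REST-style ingestion endpoint; the base URL, path, and payload fields are illustrative assumptions, not Event Portal's documented API:

```python
import json
import urllib.request

# Hypothetical base URL and endpoint; consult the Event Portal API
# reference for the real paths and payload shapes.
BASE_URL = "https://api.example.com/eventportal/v2"

def build_event_payload(name, domain_id, schema_id, description=""):
    """Assemble the metadata for one manually captured event."""
    return {
        "name": name,
        "applicationDomainId": domain_id,
        "schemaId": schema_id,
        "description": description,
    }

def ingest_events(events, token, opener=urllib.request.urlopen):
    """POST each event payload to the (assumed) events endpoint."""
    for payload in events:
        req = urllib.request.Request(
            f"{BASE_URL}/events",
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json",
            },
        )
        opener(req)  # injectable so the call can be dry-run or tested
```

Feeding this from a spreadsheet or CSV export of your existing documentation turns a one-time manual effort into a repeatable import.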

Decompose Your Enterprise

Whether you perform Discovery manually or use the Discovery Agent, it is essential to consider how your enterprise is organized so that you can decompose it using the application domain construct. An Application Domain provides the ability to organize and decompose an enterprise into logical groupings. You can base these groupings on a line-of-business, related functional capabilities, or team dynamics. The benefits of doing this include:

  • Establish event sharing rules—decide which events should be shared with other application domains and which are for internal application domain usage only. This has implications from a security perspective, and it also lets you manage events that affect others outside of the application domain more efficiently.
  • Provide uniform event topic prefixes—can be used to ensure that the prefix is unique between different application domains and that topic best practices are followed.
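As an illustration of the prefix rule, a guard like the following could run in CI or a review step to keep prefixes unique per application domain; the domain names and prefixes are invented for the sketch:

```python
def validate_topic_prefix(topic, domain_prefixes, domain):
    """Check that a topic starts with its application domain's
    registered prefix and with no other domain's prefix.

    domain_prefixes maps domain name -> topic prefix, for example
    {"hr": "acme/hr/", "sales": "acme/sales/"} (illustrative values).
    """
    prefix = domain_prefixes[domain]
    if not topic.startswith(prefix):
        return False
    # Ensure no other domain also claims this topic.
    return all(
        not topic.startswith(p)
        for d, p in domain_prefixes.items()
        if d != domain
    )
```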

Data Importation

Once you have decided on the application domains that are required for your enterprise, it is time to start the data importation process. You can automate the data discovery and importation process.

Automated Discovery Process

If you have an event broker type/configuration supported by the Discovery Agent, you can automate the discovery process. This not only provides a faster path to managing and governing your existing EDA assets, but it also ensures that the data is valid and up to date.

Organize Your Event-Driven Architecture Assets

Events, Applications, and Schemas must be organized within an application domain, an essential first step in managing your event-driven architecture (EDA). Additionally, you can use Tags to organize your data based on the attributes that matter to you. The following list provides ideas for applying tags to organize the event-driven assets within Event Portal:

  • Use Case—group assets by use case so you can focus design discussions and understand the impact of changing something
  • Cluster—understand where the event originates or is available
  • Sensitivity—if you have events or schemas that communicate sensitive information, use tags to filter for quick visualization and search purposes
  • Deployment Status—tagging events and applications with states such as In-Design, In-Review, TEST, Deployed can be useful to understand what is available and filter accordingly
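A minimal sketch of how such tags can drive filtering, with illustrative tag values:

```python
def filter_by_tags(assets, required_tags):
    """Return the assets carrying every tag in required_tags.

    Each asset is a dict with a "tags" set; the tag values below
    mirror the ideas in the list above and are illustrative only.
    """
    required = set(required_tags)
    return [a for a in assets if required <= set(a["tags"])]
```

For example, filtering on {"Deployed"} quickly narrows a large catalog down to the assets that are actually available at runtime.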

Document Your Event-Driven Architecture

Events are only as good as their documentation. After all, it is up to a human to understand what something is and decide whether it provides value. This is why documentation is critical for success in event-driven architectures (EDA). Creating and maintaining good documentation that is easy to read, enjoyable to interact with, and sets up the user for success can be challenging. Writing good documentation requires effort but has significant implications on the reuse of the events within the ecosystem.

Event Portal enables you to document events easily as well as manage the decoupled relationships so that users can understand the context of an event. Before you sit down and write documentation for events, applications and schemas, it's good to consider its purpose along with who will be using it.

Know Your Audience

While software developers and engineers are often responsible for integrating events, the audience consuming the documented information is often a mix of personas. We can categorize the audience into two groups—Decision Makers and Users. The documentation must cater to both types of personas to achieve a high reuse factor for events along with developer agility.

Decision Makers

Some members of the organization assess and evaluate events and schemas to decide whether it makes sense to have the development team further explore the service. They evaluate with a problem in mind and ascertain whether the events registered within Event Portal can be used to solve that problem. In many cases, they are not the ones writing the code that solves the problem, but they drive the decision as to whether the effort to use it should be undertaken. Some of these decision makers include, but are not limited to: CTOs, Product Managers, Enterprise Architects, Data Analysts, and Data Scientists/Engineers.

Users

These are the people in your organization who directly consume and develop applications using the events and schemas defined in Event Portal. At this stage, the decision to use the events and schemas has been made, and the user needs to understand the event, how it applies to their use case, and how to integrate with it. Quality documentation is critical to enabling users, as they are always short on time and are the last link in getting an event reused. Additionally, these users create or contribute to the documentation to enable others, especially when they author a new event type or schema that will be published by an application. Therefore, they are critical to maintaining the documentation for the event-driven ecosystem. Examples of users include, but are not limited to, integration engineers, frontend developers, and backend developers.

Documentation Best Practices

Here, we cover some best practices when documenting your event-driven architecture.

Capture Business Point of View and Moment

What does this event (event type) represent? This is an important piece of information to capture; without it, a decision-maker will find it difficult to determine whether the event can provide additional business value through reuse. Make sure to document the moments in which event instances are generated or triggered, the attributes for which the event is the authoritative source, and the intended use of the event. Do not assume that the user will read the corresponding payload schema or understand much about the publishing application. Focus on documenting the event concisely and thoroughly.

Technical Requirements

This is where you provide developers with the information needed to consume the event itself. You may want to address the following questions:

  • What are some suggested client APIs that should be used to consume the event?
  • Are there important headers being used?
  • What authentication/authorization schemes are required?

Your documentation should capture all this information to ensure an easy development process.

Link to Other References

Event Portal is just one source of information within the organization. Additional information about the application may be hosted in repositories such as GitHub. Likewise, a schema may also have a corresponding GitHub or Wiki page, while an event could be part of a larger development task tracked in JIRA. You can reference all these external repositories, Wiki pages, and JIRA issues in the documentation. The point is to link all the locations where the information is captured and, ideally, link from those locations into Event Portal. In this way, no matter where you start, you can understand what's available and its state.

Provide Examples

An example is an often underutilized form of communication. By seeing an example of an event, the user can understand the concrete business moment better than from the description alone. Examples are also searchable in Event Portal, so anything within them provides better search context.
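For instance, a concrete sample payload attached to an event's documentation makes the business moment tangible; every field name here is invented for illustration:

```python
# A sample payload for a hypothetical "RideUpdated" event. Attaching
# an example like this to the event's documentation makes the business
# moment concrete and its contents searchable in the Catalog.
ride_updated_example = {
    "rideId": "a1b2c3",
    "driverId": "drv-042",
    "status": "EN_ROUTE",
    "speed": 57.0,        # km/h at the moment the event was emitted
    "timestamp": "2021-06-01T12:34:56Z",
}
```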

Terms of Use

This is the legal agreement between the event producer and any/all consumers. Talk to the API teams about their Terms of Use contracts and decide if it should be updated for event-driven API relationships. Also, think of others within the same organization and their expectations of use and document them here.

Add Tags

When in doubt, add a tag (within reason). As more events, applications, and schemas are added to the system, searching and tagging become critical for users to find the capabilities available. Browse the existing tags and identify ones that apply to your event, application, or schema. Add new tags if needed so that others can more easily filter and find your event, application, or schema.

Learn and Enhance Your Event-Driven Architecture

A critical aspect of Event Portal is capturing the event-driven architecture (EDA) design and documentation in a central place to enable cross-organizational learnings. These learnings come in multiple forms, from creating new ideas to enabling and training team members on the architecture or performing change impact analysis and more. The purpose of this section is to outline some of these scenarios and to think about ways you can incorporate them into your organization.

Ideation

To create new business value, you must imagine or conceive a new solution to an existing problem. You can arrive at these ideas from two different directions. First, start from a known problem and search for a solution. Second, look at what is available and uncover unique solutions to problems you were not actively looking to solve.

Event Portal enables learnings from both directions. It provides a central location to capture all of the available events and a way to understand whether a given event stream solves your problem. The search and filter functionality lets users perform keyword searches, ranging from data-level attributes to metadata within the description. This helps when you are aware of the problem and are looking for ideas to solve it with events. For example, let's say you operate a taxi company. You get complaints about drivers speeding and want to analyze the problem in real-time. You know the data has an attribute called "speed," but which event streams contain that data? In Event Portal, you can simply search for speed in the Catalog's schemas section, review the matches, decide which schema is of interest, and navigate to the events that capture the moment you want to analyze.

What if you don't have a specific problem and want to think about the art of the possible? This is where browsing the event catalog can be key. Maybe you want to improve an area of your business and see what events are available in that area. Filter by that application domain and view the events like a menu of business capabilities that, when combined, could fundamentally transform that business area.
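The speed-search scenario above can be sketched as a simple keyword search over schema metadata; the schema and attribute names are illustrative:

```python
def search_schemas(schemas, keyword):
    """Keyword search over schema names and attribute names,
    mimicking a Catalog search. The data used here is invented."""
    keyword = keyword.lower()
    return [
        s["name"]
        for s in schemas
        if keyword in s["name"].lower()
        or any(keyword in attr.lower() for attr in s["attributes"])
    ]
```

Starting from the matched schema, you would then navigate to the events that reference it.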

Once a new idea has been formulated, you can jump to the Design phase and start defining the new business capability in detail. Of course, in that phase, you should consider making the new capability event-driven so that your colleagues can ideate and solve more problems. The more events you have, the more ideation can occur.

Eliminate Tribal Knowledge

Organizational changes happen all the time. How ready are you to take over another group's EDA implementation? How about enabling new members of your team? What if your current architect were to resign? Are you capturing everything you should be?

In many organizations, a large amount of EDA-related information (knowledge) is known only to certain employees. This tribal knowledge is risky. The organizational changes mentioned above showcase the many scenarios that can leave the business in limbo, reverse engineering something that was already engineered. If you get into the habit of designing, documenting, and continuously validating your EDA, tribal knowledge is eliminated because the information is available centrally and kept up to date. While most organizations believe they have software development and governance processes that prevent this from happening, those processes typically comprise multiple conflicting sources of truth, none of which represents the current truth. This leads the team to frequently ask the question: so how does this actually work? The result is time wasted investigating, versus simply using a tool that captures the information in a central location and ensures it matches reality.

So next time you are faced with the questions presented above, your answer should be an emphatic YES for your event-driven architecture.

Change Impact Analysis

Changes happen; the question is—what is the effect, and who is affected? In the synchronous world, changes to an API will likely affect its clients, so when changes are rolled out, clients are notified and updated. The challenge in the EDA world is that consumers are decoupled from producers and vice versa. Furthermore, the ripple effect can be enormous. Integration through connectors and decoupled application adoption through event subscriptions can move events between different groups, which further obscures dependency management.

Consider a scenario where "Joe" in the HR team intends to deprecate an application producing "employee" type events. He may know his back-office systems that depend on these events, but he may not be aware of a connector or replicator type capability moving those events to another team. How does Joe know what would be affected downstream? Meanwhile, "Bobby" in the other team has a system that stops ordering uniforms for the employees and nobody notices until the uniform inventory goes to zero.

Decoupling is EDA's biggest strength, but also its greatest weakness when changes occur. You can use Event Portal to track these critical relationships and dependencies so that if a schema is being changed, you can understand its effects and notify those affected. The same goes for events and applications. You now have a place to perform change impact analysis early, before finding out in production that you "forgot" to let someone know and have to roll back your changes. It also lets downstream consumers learn about new capabilities that could prove valuable, for example, a data attribute that is backward compatible but extremely useful. How would they know otherwise?
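Conceptually, the impact analysis is a traversal of the dependency graph from the changed asset to everything downstream. A minimal sketch, with invented asset names echoing the HR scenario above:

```python
from collections import deque

def downstream_impact(edges, start):
    """Breadth-first walk over a dependency graph.

    edges maps an asset to the assets that consume it, for example
    schema -> events and event -> applications. Returns everything
    transitively affected by changing `start`.
    """
    affected, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for consumer in edges.get(node, []):
            if consumer not in affected:
                affected.add(consumer)
                queue.append(consumer)
    return affected
```

With the relationships captured, "who do I need to notify?" becomes a query instead of a guess.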

Design or Extend Your Event-Driven Architecture

By designing a new event-driven application or extending your event-driven architecture, you can deliver new real-time business capabilities in a decoupled and reusable fashion. However, several key elements should be considered when designing events, schemas, and applications, including topic structure best practices, options for exchanging event data, and sharing/visibility rules. Considering these things early will put you on the road to success and enable better reusability down the road.

Topic Structure Best Practices

The topic to which an event is addressed seems like a simple decision, but in reality it can have negative consequences if not planned. A topic is more than an address; it is metadata that describes the event and can be used for several purposes, such as routing, access control, and versioning. Thus, it is essential to govern and manage the topic structure correctly.

Regardless of your broker type, it is good practice to make topics structured and hierarchical, the same way a RESTful resource uses hierarchical addressing. In other words, produce hierarchical topics that run from least specific to most specific.

Some brokers and wireline protocol specifications recognize different topic delimiters; others, such as Kafka, do not recognize them at all. Where possible, the '/' character works well as a delimiter, as it enables you to write topic hierarchies just like the web does. The '.' character is also commonly used. These delimiters separate the different parameters that form an event's topic.

Topic parameters logically come in two forms: fixed/static parameters and parameters that are dynamic and sometimes unique for each event instance. Combining these different parameter types enables you to provide both organization and descriptive metadata within the topic name.

Below, we discuss a common topic structure and hierarchy, which you might need to modify to meet your use case. It is important, though, to remain consistent within an Application Domain once you've decided on a hierarchy.

Parts of the Event Topic

The event topic structure has two parts:

  • The event topic root contains enough information to describe the type of event that has occurred. Each event topic root is a static field that describes the type of event. The list of event topic roots forms a catalog of events that can be produced and consumed. This catalog could be brought into Event Portal's Catalog, listing each event type along with details about the event. Each event topic root describes the event in as much detail as necessary to map it to a single data schema.
  • The event topic properties are optional fields that further describe a particular event. This part of the topic has fields that are dynamically filled when the producer publishes the event. These fields are used to describe the specific or unique attributes of this event instance used for routing and filtering.
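The two parts can be combined by a small topic builder; the levels and the '/' delimiter choice below are illustrative:

```python
def build_topic(root_levels, properties):
    """Join a static event topic root with dynamic topic properties
    using '/' as the delimiter, ordered least to most specific."""
    return "/".join(list(root_levels) + [str(p) for p in properties])
```

The root identifies the event type (and maps to one schema), while the trailing properties carry per-instance values that consumers can use for routing and filtering.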

Payload Format Definition

There are multiple Event Exchange Patterns (EEP) that must be considered when using EDA:

Thin Event Notification—If this pattern is used, only the necessary details are provided from a data point of view. This tends to increase coupling between the event source (producer) and sink (consumers), because the provided attributes directly correlate to the needs of the use case rather than being more flexible. The benefit of this pattern is that the data is smaller in size, which can reduce latency and bandwidth usage. In general, the source of the event should be the single authoritative source for all published attributes.

Hypermedia-Driven Events—In this pattern, links provided in the event payload bridge event notifications with dynamic API backends. Use this pattern when there are multiple levels of security related to attributes of the event. Consumers are still notified in real-time of state changes, but they must invoke the hyperlink to access more data. The service can then filter the response based on the client's access level. The downside to this pattern is increased interaction latency, as not all the data is available within the event, which puts more complexity on the client and its behavior.

Event-Carried State Transfer—With this pattern, all known data is broadcast with the event (possibly the entire record), enabling the consuming system to know the state of the entire entity rather than only what has changed (as in Thin Event Notification). This is a common approach, as the subscribing application may want the entire snapshot to avoid tracking previous state changes. The challenge is that the publishing application may not be the authoritative source of all attributes published. Additionally, the event may become large, increasing latency and decreasing performance. However, the benefit is that decoupling is achieved, a variety of use cases are supported, and the publisher does not need to be aware of the client's usage of the data.
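To make the trade-offs concrete, here are three illustrative payloads for the same business moment, one per pattern; all field names and the URL are invented:

```python
# Thin Event Notification: minimal data, tighter coupling to the use case.
thin_event = {
    "employeeId": "e-17",
    "change": "ADDRESS_UPDATED",
}

# Hypermedia-Driven Event: consumers with sufficient access follow the
# link to fetch details, letting the backend filter per client.
hypermedia_event = {
    "employeeId": "e-17",
    "change": "ADDRESS_UPDATED",
    "links": {"employee": "https://hr.example.com/employees/e-17"},
}

# Event-Carried State Transfer: the full snapshot travels with the
# event, so consumers need not track prior state.
event_carried_state = {
    "employeeId": "e-17",
    "change": "ADDRESS_UPDATED",
    "employee": {
        "name": "Pat Doe",
        "address": "1 Main St",
        "department": "Sales",
    },
}
```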

Deploy Event-Driven Services

It is currently outside Event Portal's scope to handle the deployment of the business services created during the "implementation" phase. Developers must work within their organization's typical deployment process and leverage their existing tools. We recommend that you update Event Portal with the deployment state manually or automatically (through DevOps tooling integration). Future releases of Event Portal will support environments and states in a more robust and automated way.
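As a sketch of the automated route, a CI/CD step might record the new state through an API call like the following; the endpoint path, field name, and HTTP method here are assumptions, not the documented API:

```python
import json
import urllib.request

def mark_deployed(application_id, state, token,
                  base_url="https://api.example.com/eventportal/v2",
                  opener=urllib.request.urlopen):
    """Record an application's deployment state from a CI/CD step.

    The endpoint path and payload field are illustrative; check the
    Event Portal API reference for the real ones.
    """
    payload = {"deploymentState": state}
    req = urllib.request.Request(
        f"{base_url}/applications/{application_id}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PATCH",
    )
    return opener(req)  # injectable for testing / dry runs
```

Calling this from the final stage of a pipeline keeps the catalog's deployment status in step with reality without manual effort.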

Operate and Manage Your Event-Driven Services

No matter how well things are designed, something is bound to go wrong. Event Portal can be a valuable tool in the post-deployment phase of your EDA's lifecycle.

Troubleshooting

When something does go wrong, Event Portal enables you to understand the impact of the failure and trace it to its source. Some failure sources are easy to diagnose, but the impact is hard to decipher due to EDA's loose coupling.

Use Event Portal to understand downstream impacts and notify those owners before they notice the problem themselves. Conversely, people may see a problem and come to you for help. In that case, identify their application in Designer, get the missing data, trace the problem back to the publishing application, and reach out to the owner. While this may sound obvious and easy, it is only possible because you spent the time documenting the events and applications and their relationships.

Update Your Event-Driven Architecture

Nothing in an organization is static. Things are always changing, and new business capabilities are continuously created. Before you change existing schemas, events, and applications, use Event Portal to understand the impact of the change (see Change Impact Analysis for more information) and update the documentation accordingly. The Discovery Agent helps keep many of the relationships in sync, but it is up to you and your change management process to ensure that descriptions, owners, and tags are kept up to date. Content management related to events is everyone’s job and should be continually improved as these components evolve through their lifecycles.