Technology

Dual APIs – Events to APIs of the future

May 2, 2022

One thing is clear: working with relational databases is not always easy. Anyone who has had to put an application-level cache in front of a database to minimize database traffic knows that the two big problems are:

  • cache invalidation
  • race conditions, i.e. the dual-write problem.

To invalidate entries, either an invalidation algorithm or an expiry time (TTL) has to take effect. Such algorithms are mostly error-prone, and the expiry-time approach can lead to stale reads: the data in the database has already been updated while the cache still serves the old value. To meet these challenges, materialized views could be used on the database. However, this does not solve the underlying problem of reducing traffic to the database. Often that is sufficient; if not, distributed databases and shards could be used. But this level of complexity is challenging too - and therefore comes with effort.
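
To make these two problems tangible, here is a minimal sketch; the in-memory "database", "cache" and product key are purely illustrative. If the cache write fails after the database write, the two stores diverge, and until the TTL expires, readers are served the stale value.

```python
import time

db = {}          # stand-in for the relational database
cache = {}       # stand-in for the application-level cache: key -> (value, expires_at)
TTL_SECONDS = 60

def update_price(product_id, price, cache_down=False):
    db[product_id] = price                      # write 1: database succeeds
    if cache_down:                              # write 2: cache update fails
        return                                  # -> dual-write problem
    cache[product_id] = (price, time.time() + TTL_SECONDS)

def read_price(product_id):
    entry = cache.get(product_id)
    if entry and entry[1] > time.time():        # entry not yet expired
        return entry[0]                         # may be a stale read
    value = db[product_id]                      # fall back to the database
    cache[product_id] = (value, time.time() + TTL_SECONDS)
    return value

update_price("sku-1", 10)
update_price("sku-1", 12, cache_down=True)      # the database now says 12 ...
print(read_price("sku-1"))                      # ... but the cache still serves 10
```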

Event logs and projections can help

Another possibility is processing event logs, i.e. creating "materialized views" and projections on the application side. This can also be seen as a scalable application-level cache, but with the advantages of materialized views, i.e. without the problems of general caches mentioned above. Kafka, for example, could be used as the event log; with around one million writes per second, its capacity is not exactly small. This in turn brings complexity, as multiple databases now need to be orchestrated at the infrastructure level. If only one database is used, the infrastructure is back at the same level as before and the database would have to be scaled. In addition, the creation of the views must now be handled and scaled at the application level; in concrete terms, this means that event processing must be scaled accordingly. The good thing: with Kafka, for example, partitioning is not an afterthought but is considered from the start by design. In short: much of the complexity of the distributed-systems world is exposed to developers, since databases abstract away a lot of the scalability concerns. The know-how that must be available during development, e.g. for Kafka, should not be disregarded either.
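
As a sketch of what such an application-side projection could look like: the consumer below rebuilds a simple read model from a Kafka topic. It assumes the confluent-kafka Python client, a local broker, an "orders" topic and a JSON event shape ({"order_id": ..., "status": ...}), all of which are illustrative.

```python
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "order-projection",
    "auto.offset.reset": "earliest",   # rebuild the view from the beginning of the log
})
consumer.subscribe(["orders"])

order_view = {}  # the projection: current status per order id

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        order_view[event["order_id"]] = event["status"]  # apply the event to the read model
finally:
    consumer.close()
```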

Neither approach is perfect, but both have their place. When building APIs, it should therefore be taken into account that the users' host systems can differ and that different use cases have to be covered.

A dual API should therefore always offer several interfaces, such as REST and events. Companies like Solace have already integrated this and use it in their pitches. Webhooks are often used for events these days. However, they do not make sense if, as described above, the API users want to build their own projection or read model, i.e. the application-level "cache", from the consumed events, so that they are not directly affected by availability problems of the API. For that, there must be a guarantee that no events are lost and that ordering is preserved. Systems like Kafka guarantee this; webhooks do not. We will now take a detailed look at how such a dual API could be set up.


Outbound via outbox pattern

APIs that expose changes to objects to the outside are one way of implementing this. Exposing changes consistently and with guarantees to the outside world is not entirely trivial. Events are usually written to a third-party system that allows partitioning, routing and many subscribers. As with any API, writing to such a third-party system is never error-free: network errors, system failures on both sides and more can occur. From the service's point of view, both the database and the third-party system have to be written to; if one or the other write fails, inconsistencies arise - the dual-write problem again.

This is where the outbox pattern comes into play. With the outbox pattern, the applied CRUD operations are exposed to the outside by a connector after they have been written to the database. Usually, an event table (the outbox table) is written atomically together with the business data. The connector consumes these events from the table - usually from the write-ahead log (WAL), or via other query-free mechanisms for scalability - and forwards them to the third-party system. The microservice itself therefore continues to use a transaction-capable database, while the operations on the outbox table are consumed by the connector and forwarded to the third-party system. If the microservice also exposes a REST interface, a dual API is created, i.e. an API that can be consumed via REST and via events. Depending on the target system and use case, this can expand the potential of your own API and thus of your own offering. It can also stabilize communication between event-based microservices, since REST communication and reliance on external read models between microservices are disadvantageous. With this approach, transaction-capable databases can continue to be used without forgoing external events.
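
A minimal sketch of the write side of the outbox pattern could look like this. sqlite3 serves only as a stand-in for a transaction-capable database, and the table, column and event names are assumptions; the point is that the business row and the outbox entry are committed in one atomic transaction.

```python
import json
import sqlite3
import uuid

conn = sqlite3.connect("service.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("""CREATE TABLE IF NOT EXISTS outbox (
    event_id TEXT PRIMARY KEY, aggregate_id TEXT, type TEXT, payload TEXT)""")

def create_order(order_id, status="CREATED"):
    payload = json.dumps({"order_id": order_id, "status": status})
    with conn:  # one atomic transaction for both writes
        conn.execute("INSERT INTO orders (id, status) VALUES (?, ?)",
                     (order_id, status))
        conn.execute(
            "INSERT INTO outbox (event_id, aggregate_id, type, payload) "
            "VALUES (?, ?, ?, ?)",
            (str(uuid.uuid4()), order_id, "OrderCreated", payload))

create_order("order-42")
```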

If consistency between the two APIs is of high relevance in the use case, it is important that actual changes - committed operations - are forwarded to the third-party system, and not the raw WAL entries directly. Not every distributed relational or NoSQL database supports this. MongoDB, for example, offers change streams, a feature built for exactly this purpose. Not every connector supports this either. Depending on the application, it is therefore important to choose the right database and connectors for dual APIs.
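
As an illustration, consuming MongoDB change streams could look roughly like this with pymongo. The collection name is an assumption, and in practice the resume token would be persisted durably so the stream can be resumed without missing events.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

resume_token = None  # load the last persisted token here, if any

# note: change streams require a replica set or sharded cluster
with orders.watch(full_document="updateLookup",
                  resume_after=resume_token) as stream:
    for change in stream:
        # only committed operations appear here, in the order they were applied
        print(change["operationType"], change.get("fullDocument"))
        resume_token = stream.resume_token  # persist after successful handling
```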

In order to use this combination successfully in practice, connectors are required between the database and a sink - a third-party system or subsystem - from which the API users consume events. These connectors consume the CRUD operations exposed by the database, in MongoDB for example via the change streams feature, and read them with a resumable cursor so as not to miss any events. This process is known as change data capture (CDC). The connectors are distributed and fault tolerant; depending on the framework, they guarantee that events are written at-least-once or exactly-once into a subsystem. There are a number of suitable subsystems for API consumers to dock onto, including Kafka. Kafka can store events persistently, which benefits event-sourced architectures. The idea behind it is to make life easier for as many host system architectures as possible. Well-known connector frameworks include Kafka Connect, Debezium and Strimzi.
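
To make the role of such a connector tangible, here is a deliberately simplified, hand-rolled sketch that forwards entries from the outbox table of the earlier example to a Kafka topic. Real connector frameworks like Kafka Connect or Debezium read the WAL or change stream instead of polling, store their position durably and run distributed and fault tolerant; the topic name and the in-memory "offset" here are assumptions.

```python
import sqlite3
from confluent_kafka import Producer

conn = sqlite3.connect("service.db")
producer = Producer({"bootstrap.servers": "localhost:9092"})
forwarded = set()  # stand-in for a durably stored connector position

def relay_once():
    rows = conn.execute(
        "SELECT event_id, aggregate_id, type, payload FROM outbox").fetchall()
    for event_id, aggregate_id, event_type, payload in rows:
        if event_id in forwarded:
            continue
        # key by aggregate id so all events of one object land in the same partition
        producer.produce("orders", key=aggregate_id, value=payload)
        forwarded.add(event_id)
    producer.flush()

relay_once()
```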

Inbound via REST and commands

A dual API can be addressed in the ecosystem via two interfaces: a classic REST interface and a command interface, which has not seen much use to date. Depending on the application and the architecture of the consuming system, the API can then be used via whichever pattern fits.
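
A sketch of the command side: instead of (or in addition to) a REST call, the API user writes a command message to an inbound command topic of the API. The topic name and command format are assumptions for illustration.

```python
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

command = {"type": "CancelOrder", "order_id": "order-42"}
producer.produce("order-commands",
                 key=command["order_id"],       # key by order id to preserve per-order ordering
                 value=json.dumps(command))
producer.flush()
```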

REST and events combined

It is becoming more and more common for certain information to be consumed live, for example because time-critical decisions have to be made, or because certain data from the API has to be replicated on the consumer side for availability reasons. Instead of implementing polling and diffing via REST, consumable outbound queues help here.

The call for meta-APIs

The metadata of an API describe its incoming and outgoing interfaces and can also describe data formats and SLAs. With good metadata, coordination efforts such as API version upgrades can be reduced, which greatly improves development agility. Unfortunately, reading metadata directly from the API is still rare in practice. We often see Swagger files on SwaggerHub, for example, which unfortunately only describe the API endpoints. Our assumption is that the relevance of meta-APIs, integrated for example as an endpoint of the API itself, will increase with the growing scope of synchronous and asynchronous APIs, as they actively help to accelerate development cycles.

  • For REST APIs, OpenAPI is today’s meta-API standard
  • For event APIs, schema registries and AsyncAPI are today’s meta-API standards

AsyncAPI is aware that dual APIs are becoming more and more popular. For this reason, the roadmap has been aligned to unite OpenAPI, GraphQL and RPC API formats under AsyncAPI. As AsyncAPI shows, developers have recognized the importance of metadata: the metadata ecosystem is growing and standards are being established. With today’s standards, there is no getting around describing the API separately via OpenAPI and AsyncAPI. However, this picture is likely to change for the better soon.

With the metadata descriptions of the future, however, it should be possible not only to query endpoints and data formats but also to view limitations and SLAs. It should also be possible to subscribe to deprecations, updates and newly added endpoints. IDEs should be able to fetch the schemas of APIs used in the code base. Notifications in the IDE, as well as general push notifications for changes, should be subscribable to facilitate development. It should also be possible to generate mock data sets to simplify testing. Monitoring and usage information for the API could also be read via metadata interfaces.

Which outbound queue – Kafka the standard

Kafka has established itself as the standard in the field of event streaming and today, with Confluent Cloud, offers good, easily scalable options for writing inbound commands and outbound events into so-called topics or queues. While writing is very easy, consuming is a bit more complex, but highly scalable. For the outbound queues, it must therefore be considered whether the development team can handle the complexity of Kafka and whether it wants to expose that complexity to the API users. It is also important to know that Kafka is not intended for an unlimited number of subscribers. This can be worked around, for example by setting up a dedicated topic for each API key. As soon as KRaft is live and ZooKeeper is thus removed from Kafka, this should no longer be a problem. No matter which event broker ultimately sits under the API, it would be helpful if a standard were established here and the underlying system were abstracted away for the end consumer.
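
Provisioning a dedicated topic per API key could be sketched with the Kafka AdminClient roughly as follows; the naming scheme, partition count and replication factor are assumptions and depend on the actual cluster.

```python
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

def provision_topic_for_api_key(api_key: str):
    # one outbound topic per API key, e.g. "outbound.customer-123"
    topic = NewTopic(f"outbound.{api_key}", num_partitions=3, replication_factor=3)
    futures = admin.create_topics([topic])
    for name, future in futures.items():
        future.result()  # raises if topic creation failed
        print(f"created {name}")

provision_topic_for_api_key("customer-123")
```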

Conclusion: Dual APIs are becoming more and more important

Many companies have long known that APIs are part of the core of their PaaS/SaaS offering. Some have gone so far as to build API-first applications and leave building the UIs to their customers. Today, APIs are often a more important part of a company than the actual front-end applications. With the shift to API-first, it is becoming increasingly important to enable customers to use and integrate the API in a way that suits their underlying architecture - be it via REST or events. Both interfaces have their use cases, and with dual APIs the reach for the customer can be increased. We would be happy to advise you on the implementation and are available to answer any questions you may have. Simply contact us using our contact form and we will get back to you. Do you already know our blog? There you will find further texts on the topics of Stream Oriented Architectures, Big Data and Event Sourcing.
