
Making sense of message queues

Introduction

When designing complex operational trading systems composed of multiple specialised processes, ensuring reliable, effective communication among them can be a significant challenge. Traditional approaches, such as remote procedure calls or direct binary protocols, can introduce tight coupling and inefficiencies in certain scenarios. For very high-performance, low-latency links that may be acceptable, and sometimes necessary, but what about the many other pathways within a system that are less time-sensitive?

Interprocess communication techniques

To start with, it’s worth reminding ourselves of the wide range of approaches to managing communication between separate processes. Focusing on the .NET ecosystem, we can think of a few key ones that we have used at Sinara over the years:

Database Polling: One process monitors a database table for updates, while another writes data to that table (a minimal polling sketch follows this list).

WCF (Windows Communication Foundation): SOAP interfaces are exposed by one component, and others call these interfaces using auto-generated proxies.

Duplex WCF: Similar to regular WCF, but with the calling component also exposing a callback interface for two-way communication.

gRPC: Similar to WCF, but using a more language-agnostic protocol, with .NET Core support and better developer tooling.

REST APIs: One component exposes a RESTful API, and others communicate with it using HTTP requests.

SignalR: One component exposes a SignalR hub for real-time communication over WebSockets.
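
As a rough illustration of the first approach, the sketch below polls a table for new rows. It is written in Python with SQLite purely for brevity (the same idea applies in .NET against any database); the messages table, its columns, and the one-second interval are hypothetical assumptions, not part of any Sinara system.

```python
import sqlite3
import time

# Hypothetical 'messages' table with an auto-incrementing 'id' column.
# A writer process INSERTs rows; this reader process polls for new ones.
last_seen_id = 0

conn = sqlite3.connect("shared.db")

while True:
    rows = conn.execute(
        "SELECT id, payload FROM messages WHERE id > ? ORDER BY id",
        (last_seen_id,),
    ).fetchall()

    for row_id, payload in rows:
        print(f"Processing message {row_id}: {payload}")
        last_seen_id = row_id   # remember the high-water mark

    time.sleep(1)  # the polling interval trades latency against database load
```

The main drawbacks are visible even in this small sketch: the reader only sees changes as often as it polls, and every idle poll still costs a database round trip.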

The asynchronous alternative: message queues

What these methods generally have in common is that the communication needs to happen immediately; if the process you are trying to talk to is unavailable for whatever reason, the communication will fail and the message you are sending may be lost. If you want to avoid that loss, you need some way to 'queue up' your messages until the process comes back online.

We can take this idea to the next level and introduce the concept of the independent message queue. This is a mechanism to enable different software components, processes, or systems to exchange information asynchronously. It's a way to facilitate communication and coordination between various parts of a distributed or multi-threaded application, and is particularly important in modern microservice architectures, as we'll touch on later.

Instead of trying to pass data directly between components, which can lead to synchronisation issues or bottlenecks, you can use a message queue as an intermediary. Picture them as data highways, designed to manage the asynchronous flow of information between distinct components and ensure nothing gets lost.

In a nutshell, message queues enable one component to write a message into a queue, and another component, when ready, reads and processes the message. Unlike the traditional communication methods, where components might wait for responses, message queues operate asynchronously: you send a message and move on, allowing the rest of your system to remain responsive and agile.
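
As a minimal in-process sketch of that pattern (using Python's standard library purely for illustration; a real message queue sits between separate processes or machines), a producer thread drops messages onto a queue and carries on, while a consumer thread drains it at its own pace:

```python
import queue
import threading
import time

q = queue.Queue()

def producer():
    for i in range(5):
        q.put(f"order-{i}")          # send the message and move on
        print(f"produced order-{i}")

def consumer():
    while True:
        msg = q.get()                # blocks until a message is available
        time.sleep(0.5)              # simulate slow processing
        print(f"consumed {msg}")
        q.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer()
q.join()                             # wait until every message has been processed
```

The producer finishes almost instantly even though the consumer is slow; the queue absorbs the difference in pace.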

How a message queue works

Let’s break down the different conceptual elements of a message queue and consider them in turn:

Producer: The component or process that generates data or messages to be sent. It creates a message and places it in the message queue. Note that there could be more than one producer.

Queue: This is a buffer or storage area that holds the messages sent by producers. It ensures that messages are stored in the order they are received and provides a way to manage the flow of messages between producers and consumers.

Consumer: The component or process that retrieves messages from the queue and processes them (just as with producers, there could be more than one taking things from the queue). Consumers might be in charge of different tasks, like handling requests, processing data, or performing specific actions based on the messages they receive.

Asynchronous Communication: One of the key features of message queues is asynchronous communication. Producers and consumers don't need to interact in real time. Producers can continue generating messages without waiting for consumers to process them immediately, and consumers can retrieve messages from the queue whenever they are ready.

Decoupling: Message queues help decouple different parts of a system. Producers and consumers are independent, meaning changes to one component won’t necessarily disrupt the others. This flexibility allows for better scalability and fault tolerance.

Persistence: Depending on the message queue implementation, messages can be stored persistently even if a component fails. This ensures that data is not lost and can be processed once the system is back online.

Message Types: Messages can carry various types of data, such as commands, requests, notifications, or events. Different types of messages can be used to trigger specific actions or workflows within the system.
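
To make that last point concrete, one common convention (a hypothetical sketch, not a prescribed format) is to wrap every payload in a small envelope that identifies its type, so consumers can route commands, events and notifications to the appropriate handler:

```python
import json
import uuid
from datetime import datetime, timezone

def make_message(message_type: str, payload: dict) -> str:
    """Wrap a payload in a simple envelope identifying its type."""
    envelope = {
        "id": str(uuid.uuid4()),
        "type": message_type,   # e.g. "command", "event", "notification"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    }
    return json.dumps(envelope)

# A command asking another service to do something...
place_order = make_message("command.place_order", {"symbol": "VOD.L", "qty": 100})

# ...and an event announcing that something has happened.
order_filled = make_message("event.order_filled", {"order_id": "abc-123", "price": 72.5})
```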

Implementing message queues

To implement all this in practice, you would use a combination of messaging protocols and software components that work together to enable this kind of asynchronous communication between different parts of a system. Several technological options are available for implementing message queues, in the form of message broker software supporting a wide range of messaging protocols.

In any Sinara project, the choice of message broker and protocol would depend on factors such as performance requirements and integration with existing systems.

ActiveMQ / Apache Artemis

Apache ActiveMQ is the most popular open-source, multi-protocol, Java-based message broker. It supports industry-standard protocols, so users get the benefit of client choices across a broad range of languages and platforms.

Apache ActiveMQ Artemis is the next-generation broker from the ActiveMQ project, offering high-performance messaging for highly scalable microservices. It supports a wide range of messaging protocols (AMQP, STOMP, MQTT, OpenWire, HornetQ).

RabbitMQ

RabbitMQ is another versatile open-source message broker, second in popularity only to ActiveMQ.

Apache Kafka

Apache Kafka is a distributed streaming platform particularly suitable for handling large streams of data in real time, such as building data pipelines, processing events, and enabling event-driven microservices architectures. It offers high throughput and fault tolerance, and is optimised for large data volumes, real-time streaming, and continuous data processing.

Azure Service Bus

Azure Service Bus is a fully managed cloud-based messaging platform provided by Microsoft Azure. It offers reliable messaging patterns, supports both queues and topics, and integrates with other Azure services for building cloud-native applications.

Amazon MQ

Amazon MQ is a fully managed message broker service provided by Amazon Web Services (AWS). It supports multiple messaging protocols and simplifies the deployment and management of message queues. Amazon MQ makes it easy to migrate a message broker to the cloud, streamlining setup, operation and management, and removing the overhead of running the underlying infrastructure, scaling and maintenance.

SinaraTLC uses AMQP (Advanced Message Queuing Protocol) and is compatible with any message broker that supports it. On our developer and hosted environments we use Apache Artemis as the broker, while on AWS cloud-hosted environments we use ActiveMQ, provided and managed via Amazon MQ.
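
As a rough sketch of what publishing to and consuming from a broker looks like, the example below uses Python's pika client against a RabbitMQ-style AMQP 0-9-1 broker, purely to show the publish/consume pattern; the host, queue name and message body are placeholder assumptions, and this is not the SinaraTLC implementation. The producer publishes a persistent message to a durable queue, and the consumer acknowledges each message only once it has been processed.

```python
import pika

# --- Producer: publish a persistent message to a durable queue ---
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="trade-events", durable=True)     # queue survives broker restarts

channel.basic_publish(
    exchange="",
    routing_key="trade-events",
    body=b'{"type": "event.order_filled", "order_id": "abc-123"}',
    properties=pika.BasicProperties(delivery_mode=2),         # mark the message itself persistent
)
connection.close()

# --- Consumer: acknowledge only after successful processing ---
def on_message(ch, method, properties, body):
    print(f"received {body!r}")
    ch.basic_ack(delivery_tag=method.delivery_tag)            # now safe to remove from the queue

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="trade-events", durable=True)
channel.basic_consume(queue="trade-events", on_message_callback=on_message)
channel.start_consuming()
```

Because the queue and message are both marked durable, a message published while the consumer is offline simply waits on the broker until the consumer reconnects.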

Message queues in microservices architecture

Solutions like ActiveMQ and RabbitMQ, and indeed the message queue idea in general, are particularly well suited to microservices architectures. Microservices have emerged as one of the preferred approaches to designing complex systems, particularly in cloud environments. The ability to break down large applications into smaller, independent services allows for faster development, easier maintenance, and improved scalability.

However, the elegance of this architecture also brings a challenge. Since these services are decoupled and independent, they need a reliable way to exchange information and coordinate their actions. This is where message queues come into play, acting as intermediaries, facilitating asynchronous communication between microservices.

Loose Coupling: Message queues encourage loose coupling between components, which aligns well with the microservices architectural style.

Scalability: Asynchronous messaging supports future scalability by allowing the duplication of components for increased capacity (see the sketch after this list).

Future Changes: Decoupling through message queues allows easy insertion of new components into the data flow or changes to existing ones.
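
As a sketch of the scalability point (again in illustrative Python with pika, and with hypothetical names), several identical consumer processes can attach to the same queue; the broker distributes messages across them, and a prefetch limit stops a slow instance from hoarding work:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="pricing-requests", durable=True)

# Allow each consumer at most one unacknowledged message at a time,
# so work is spread evenly across however many instances are running.
channel.basic_qos(prefetch_count=1)

def on_message(ch, method, properties, body):
    print(f"worker handled {body!r}")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="pricing-requests", on_message_callback=on_message)
channel.start_consuming()
```

Running two or three copies of this process increases capacity without any change to the producer, which keeps publishing to the same queue.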

Conclusion

By implementing message queues effectively, organisations can build resilient and responsive microservices ecosystems that can adapt to changing demands and ensure the smooth flow of data and events throughout their distributed systems. Sinara’s own SinaraTLC framework is designed to harness the full potential of this kind of modern architecture and help us deliver robust, efficient, and scalable trading solutions.
