Rethinking Message Brokers on RDMA and NVM


In recent years, message brokers have become an important part of enterprise systems. As microservice architectures become more popular and the need to analyze data produced by the individual services grows, companies increasingly rely on message brokers to orchestrate the flow of events between different applications as well as between data-producing services and stream processing engines that analyze the data in real time. Current state-of-the-art message brokers such as Apache Kafka or Apache Pulsar were designed for slow networks and disk-based storage. In this work, we propose a new architecture that leverages remote direct memory access (RDMA) and non-volatile memory (NVM) to address the weaknesses of existing message brokers and further scale these systems.

In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data
Hendrik Makait
OSS Engineer

I’m a data and software engineer building systems at the intersection of large-scale data management and machine learning. I work as an OSS Engineer on the Dask Foundations team at Coiled, evolving Dask into a performant, production-grade distributed computing platform. Previously, I worked on data management platforms for machine learning at SiaSearch and Scale AI. I studied CS at TU Berlin while redesigning message brokers for modern hardware at the Hasso Plattner Institute.