Tuesday, December 07, 2010

LinkedIn's Kafka messaging project

Kudos to the LinkedIn team for making another highly focused and elegant project available as open source - Kafka. In spite of its name it is anything but Kafkaesque.

Kafka seems to be a serious attempt to address the messaging problem by starting from first principles. I haven't played with the project yet, but just from reading the design doc it looks like a well thought out system.

I have written about the scalability limits of push systems that are somewhat common to JMS implementations - here about polling from NoSql instead of push, a little here about the JMS spec needing an upgrade and vaguely here when talking about alternatives to two-phase transactions.

Alternative systems like Flume, Scribe, Hedwig, Chukwa and such are too log-file-collection focused, whereas Kafka looks more like a regular messaging system with a clean polling mechanism. Explicit polling backed by good storage automatically solves many of the problems I had written about here, like retries, slow consumers/flow control and durable subscriptions. I'm particularly glad to see that they've read the Varnish article on OS disk caching, which Redis seems to have somewhat muddled up (comment #29). Funny, zero-copy was something I was exploring just a few weeks ago with Netty.
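For the curious, the zero-copy trick the design doc leans on is available in plain Java NIO via FileChannel.transferTo(), which on Linux maps to sendfile() and lets the kernel move bytes from the page cache straight out to the destination channel without copying through user space. Here's a minimal sketch of the idea (my own toy file names, not Kafka's actual code):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopyDemo {

    // Copies src to dst using the kernel's zero-copy path
    // (sendfile on Linux); returns the number of bytes moved.
    static long zeroCopy(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.WRITE)) {
            long pos = 0;
            long size = in.size();
            // transferTo() may move fewer bytes than requested, so loop.
            while (pos < size) {
                pos += in.transferTo(pos, size - pos, out);
            }
            return pos;
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("log-segment-", ".bin");
        Path dst = Files.createTempFile("consumer-copy-", ".bin");
        Files.write(src, new byte[64 * 1024]); // pretend this is a log segment

        long moved = zeroCopy(src, dst);
        System.out.println("transferred " + moved + " bytes");
    }
}
```

The nice part is that if the file is already warm in the OS page cache - which is exactly what the Varnish article argues you should rely on instead of a second, application-level cache - the data never touches a Java byte[] at all.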

I don't, however, foresee any enterprise projects switching to Kafka immediately. Its performance and liberal license (ASL) alone might not be enough to motivate people to try it out. The deceptively simple yet clever design does require some careful reading of the docs and understanding of the APIs. Hopefully it will gain a wider user base than their other project, Voldemort - another simple and elegant piece of work.

Also be sure to have a look at their new disk store - Krati. I'm even more glad to see that all these projects are in Java (actually Scala).

Until next time!


A said...

@Ashwin - Noticed a small error in your post - Kafka is written in Scala, not Java.

Ashwin Jayaprakash said...

Corrected. Thanks.