How Kafka achieves high throughput and low latency

  sonic0002        2019-03-08 09:42:57

Kafka is a message streaming system with high throughput and low latency. It is widely adopted by many large companies. A well-configured Kafka cluster can achieve extremely high throughput, handling millions of concurrent writes. How does Kafka achieve this? This post explains some of the techniques Kafka uses.

Page cache + sequential disk writes

Every record Kafka receives is eventually written to a disk file. But if Kafka flushed to disk on every incoming record, performance would be poor. Instead, Kafka relies on a clever design here: it utilizes the operating system's page cache.

The operating system maintains an in-memory cache called the page cache (also known as the OS cache), which is managed by the kernel. When Kafka writes a record, the data goes directly into the page cache, i.e., into memory; the OS then decides when to flush the cached pages to disk.

This improves write speed considerably, because each write actually lands in memory rather than going directly to disk.

The other key part of the write path is that Kafka appends data to its log files strictly in sequential order; it never writes at a random offset inside a file. Random disk access is slow, while sequential writes let the disk operate near its maximum throughput.
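As an illustration of these two ideas, here is a minimal Java sketch (not Kafka's actual code) of an append-only log writer. Each write() lands in the OS page cache and the kernel flushes it to disk later, while APPEND mode guarantees the disk only ever sees sequential writes.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class AppendOnlyLog {
        private final FileChannel channel;

        public AppendOnlyLog(Path file) throws IOException {
            // APPEND ensures every write goes to the end of the file,
            // so the disk only ever sees sequential writes.
            this.channel = FileChannel.open(file,
                    StandardOpenOption.CREATE,
                    StandardOpenOption.WRITE,
                    StandardOpenOption.APPEND);
        }

        public void append(byte[] record) throws IOException {
            // write() copies the bytes into the OS page cache and returns;
            // the kernel decides when to flush them to the physical disk.
            channel.write(ByteBuffer.wrap(record));
            // Deliberately no channel.force(true) here: an explicit flush
            // on every record would defeat the page-cache optimization.
        }
    }

By default Kafka leaves flushing to the OS in exactly this way; the broker settings log.flush.interval.messages and log.flush.interval.ms can force more eager flushes, trading throughput for durability.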

These two techniques together give Kafka its high write throughput. What about reads? How does Kafka achieve the same there?

Zero copy

Consumers read data from Kafka, which ultimately means reading from the disk files where the records are stored and then sending them downstream over the network. Without any optimization, this process consists of several steps (a code sketch of this path follows the list):

  1. Check whether the data is in the OS cache; if not, read it from disk into the OS cache
  2. Copy the data from the OS cache into the application buffer
  3. Copy the data from the application buffer into the OS socket buffer
  4. Copy the data from the socket buffer to the network interface (NIC) buffer
  5. Send the data over the network to the downstream consumer
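
For contrast, here is a minimal Java sketch of this traditional path (an illustration, not Kafka's code). The file contents are first copied into a user-space buffer (step 2) and then written to the socket (step 3):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.SocketChannel;

    public class CopyingSender {
        // Traditional read-then-write: two extra copies through user space.
        static void send(FileChannel file, SocketChannel socket) throws IOException {
            ByteBuffer buffer = ByteBuffer.allocate(64 * 1024); // application buffer
            while (file.read(buffer) != -1) {      // copy: OS cache -> app buffer
                buffer.flip();
                while (buffer.hasRemaining()) {
                    socket.write(buffer);          // copy: app buffer -> socket buffer
                }
                buffer.clear();
            }
        }
    }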

In fact, steps 2 and 3 can be eliminated. Both take time because the data has to be copied from one buffer to another, and each copy also forces the OS to switch context between kernel mode and user mode. Kafka therefore adopts the so-called zero-copy technique to skip these two steps.

With zero copy, the data in the OS cache is transferred directly to the NIC buffer and then sent downstream. The socket buffer only holds a descriptor pointing at the data, not the data itself. Since no copy through user space is involved, this technique is called zero copy.
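On the JVM, zero copy is available through FileChannel.transferTo(), which maps to the sendfile() system call on Linux; Kafka relies on this API when serving data to consumers. Here is a minimal sketch (again, an illustration rather than Kafka's actual code):

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.channels.SocketChannel;

    public class ZeroCopySender {
        // Zero copy: the kernel moves data from the page cache to the
        // NIC buffer without ever copying it into user space.
        static void send(FileChannel file, SocketChannel socket) throws IOException {
            long position = 0;
            long remaining = file.size();
            while (remaining > 0) {
                // transferTo() may send fewer bytes than requested,
                // so loop until the whole file has been transferred.
                long sent = file.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }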

Looking back at the steps above, you will also notice that when most of the data is already in the OS cache, Kafka rarely needs to read the disk files at all, which further improves read performance.

Hopefully this gives you a sense of how Kafka combines these techniques to achieve high throughput and low latency.
