Apache Kafka: a distributed publish-subscribe messaging system
Neha Narkhede, LinkedIn, 11/11/11
Outline: Introduction to pub-sub, Kafka at LinkedIn, Hadoop and Kafka, Design, Performance
What is pub-sub? A producer calls publish(topic, msg); consumers subscribe to a topic and receive its messages.
A publish-subscribe system decouples producers from multiple subscribers. Example: a notification when a member adds a connection.
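As a minimal sketch of this contract (hypothetical Java interfaces, not Kafka's actual API):

// Hypothetical pub-sub contract, for illustration only.
interface Producer {
    void publish(String topic, byte[] message);            // fire-and-forget publish to a topic
}

interface Consumer {
    void subscribe(String topic, MessageHandler handler);  // register interest in a topic
}

interface MessageHandler {
    void onMessage(byte[] message);                         // invoked for each message on the topic
}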
Outline (next: Kafka at LinkedIn): Introduction to pub-sub, Kafka at LinkedIn, Hadoop and Kafka, Design, Performance
Motivation: activity tracking and operational metrics
Kafka: distributed, persistent, high throughput
Kafka at LinkedIn: frontends and services publish through Kafka brokers.
Every service at LinkedIn uses Kafka, and a number of real-time consumers read from it: the Hadoop data warehouse (DWH), real-time monitoring, security systems, and the news feed.
Outline (next: Hadoop and Kafka): Introduction to pub-sub, Kafka at LinkedIn, Hadoop and Kafka, Design, Performance
Hadoop data load for Kafka
Frontends in the live data center publish to Kafka; Kafka feeds real-time consumers and the dev and prod Hadoop clusters in the offline data center. Tracking data is crucial for generating business reports and measuring user engagement: unique member visits per day, ad metrics such as CTR for ad publishers, and data analytics such as new models for PYMK. If you start tracking some new data, it automatically reaches Hadoop within a few minutes.
Multi-DC data deployments
Kafka clusters in the live data centers feed real-time consumers and the Hadoop data warehouse (DWH) in the offline data centers.
Volume: 20B events/day, 3 terabytes/day, 150K events/sec
Message queues vs. log aggregators
Message queues (ActiveMQ, TIBCO) maintain secondary indexes and are tuned for very low latency, but not for high volume: throughput is low. Log aggregators (Flume, Scribe) focus on HDFS, use a push model, and offer no rewindable consumption. Kafka sits between the two.
What Kafka offers: very high performance, elastic scalability, low operational overhead, durability, and high availability (coming soon)
Architecture: producers publish to a cluster of brokers, consumers pull from the brokers, and ZooKeeper (ZK) coordinates the cluster.
Outline (next: Design): Introduction to pub-sub, Kafka at LinkedIn, Hadoop and Kafka, Design, Performance
Efficiency #1: simple storage
Each topic has an ever-growing log. A log == a list of files. A message is addressed by its log offset. One topic is enough.
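A rough sketch of offset addressing, assuming a hypothetical segment layout rather than Kafka's real on-disk format: segments are keyed by the offset of their first byte, so locating a message is a floor lookup plus one file read.

import java.io.RandomAccessFile;
import java.util.Map;
import java.util.TreeMap;

// Illustration only: a topic's log is a list of segment files, each keyed by the
// offset of its first byte, so a message is found by offset with no separate index.
class SegmentedLog {
    private final TreeMap<Long, RandomAccessFile> segments = new TreeMap<>();

    void addSegment(long baseOffset, RandomAccessFile file) {
        segments.put(baseOffset, file);
    }

    byte[] read(long offset, int length) throws Exception {
        Map.Entry<Long, RandomAccessFile> e = segments.floorEntry(offset); // segment containing the offset
        if (e == null) throw new IllegalArgumentException("offset before start of log: " + offset);
        RandomAccessFile f = e.getValue();
        f.seek(offset - e.getKey());          // position within that segment
        byte[] buf = new byte[length];
        f.readFully(buf);
        return buf;
    }
}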
Efficiency #2: careful transfer
Batch send and receive. No message caching in the JVM; rely on file system buffering (the OS page cache). Zero-copy transfer: file -> socket.
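In Java, zero-copy transfer is exposed as FileChannel.transferTo, which maps to the sendfile system call on Linux. A minimal sketch of pushing a chunk of a log file straight to a consumer's socket:

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

// Sketch: send `count` bytes of a log segment directly to a consumer's socket.
// FileChannel.transferTo maps to sendfile(2) on Linux, so the bytes never pass
// through user space or the JVM heap.
final class ZeroCopySend {
    static long sendChunk(FileChannel log, SocketChannel consumer,
                          long position, long count) throws IOException {
        long sent = 0;
        while (sent < count) {
            long n = log.transferTo(position + sent, count - sent, consumer);
            if (n <= 0) break;       // socket buffer full or end of file reached
            sent += n;
        }
        return sent;
    }
}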
Multiple subscribers
One file system operation per request, so consumption is cheap. SLA-based message retention. Rewindable consumption: a consumer can go back and re-read older messages from a saved offset.
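Rewinding simply means re-reading from an older offset while the data is still retained. A sketch using today's Kafka Java client (not the 0.7-era API from this talk); the topic name and broker address are placeholders:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Sketch: rewind a consumer to the start of the retained log and re-read messages.
// "page-views" and localhost:9092 are placeholders for this example.
public class RewindExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("page-views", 0);
            consumer.assign(Collections.singletonList(tp));          // manual assignment, no group rebalance
            consumer.seekToBeginning(Collections.singletonList(tp)); // rewind to the oldest retained offset
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> r : records) {
                System.out.println(r.offset() + ": " + r.value());
            }
        }
    }
}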
Guarantees
Data integrity checks (message checksums). At-least-once delivery. In-order delivery within a partition.
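Because delivery is at least once, a consumer may see the same message twice (for example after a restart from the last saved offset). Consumers that need exactly-once effects can deduplicate on their side; a minimal sketch that relies on the per-partition ordering guarantee above:

import java.util.HashMap;
import java.util.Map;

// Sketch: make at-least-once delivery effectively idempotent by remembering the
// highest offset already applied per partition and skipping anything at or below it.
// Correct only because delivery within a partition is in order.
class IdempotentApplier {
    private final Map<Integer, Long> highestApplied = new HashMap<>(); // partition -> offset

    boolean apply(int partition, long offset, Runnable sideEffect) {
        long last = highestApplied.getOrDefault(partition, -1L);
        if (offset <= last) {
            return false;            // duplicate delivery, already applied
        }
        sideEffect.run();            // perform the real work once
        highestApplied.put(partition, offset);
        return true;
    }
}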
Automatic load balancing
Producers are balanced across the brokers, and partitions are rebalanced across the consumers in a group, coordinated through ZooKeeper.
Auditing
Verify end to end that the number of events published equals the number of events consumed.
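One simple way to perform such a check (a sketch of the idea, not LinkedIn's actual audit pipeline) is for producers and consumers to each count events per topic and time bucket, and to compare the two sides:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch of a count-based audit: producers and consumers each keep a count per
// (topic, time bucket); a separate job compares the two sides bucket by bucket.
class AuditCounter {
    private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

    void record(String topic, long timestampMs) {
        long bucket = timestampMs / 60_000;                 // one-minute buckets
        counts.computeIfAbsent(topic + "@" + bucket, k -> new LongAdder()).increment();
    }

    long count(String topic, long bucket) {
        LongAdder a = counts.get(topic + "@" + bucket);
        return a == null ? 0 : a.sum();
    }
}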
Outline (next: Performance): Introduction to pub-sub, Kafka at LinkedIn, Hadoop and Kafka, Design, Performance
Performance test setup
Two Linux boxes, each with multiple GHz-class cores, SATA drives in a RAID 10 configuration, 24GB of memory, and a 1Gb network link. 200-byte messages, producer batch size of 200 messages.
Basic performance metrics
Producer batch size = 40K, consumer batch size = 1MB, 100 topics, broker flush interval = 100K. Producer throughput = 90 MB/sec, consumer throughput = 60 MB/sec, consumer latency = 220 ms.
Chart: Latency vs throughput (100 topics, 1 producer, 1 broker)
Chart: Scalability (10 topics, broker flush interval 100K)
Chart: Throughput vs unconsumed data
State of the system
4 clusters per colo, 4 servers each. 850 socket connections per server, 20 TB of data, 430 topics. Batched frontend-to-offline-datacenter latency: 6-10 seconds; frontend-to-Hadoop latency: 5 minutes. Batching on the frontends and on both Kafka clusters, tuned for throughput.
State of the system
Successfully deployed in production at LinkedIn and at other startups. Accepted into the Apache Incubator. 0.7 release: compression, cluster mirroring.
Replication (coming soon)
Some project ideas: security, long poll, more compression codecs, locality of consumption
Team: Jay Kreps, Jun Rao, Neha Narkhede, Joel Koshy, Chris Burroughs
Thank you
http://incubator.apache.org/kafka/index.html
@nehanarkhede #kafka
Producer performance comparison
Setup: a single producer sends 10 million messages of 200 bytes each, with a single consumer thread, a flush interval of 10K, and no producer ACKs. ActiveMQ was configured with syncOnWrite=false and KahaDB; RabbitMQ with mandatory=true, immediate=false, and file persistence. Kafka produces at 50K messages/sec with a batch size of 1 and 400K messages/sec with a batch size of 50. These numbers are orders of magnitude higher than ActiveMQ's and at least twice as high as RabbitMQ's. The reasons are a more compact storage format (9 bytes/message of overhead in Kafka vs. 144 bytes/message in ActiveMQ) and the cost of maintaining heavy index structures: one thread in ActiveMQ spent most of its time accessing a B-Tree to maintain message metadata and state.
Consumer performance comparison
Kafka consumes at 22K messages/sec, more than 4 times the rate of ActiveMQ and RabbitMQ. The reasons: a more efficient storage format means fewer bytes are transferred from broker to consumer; the ActiveMQ and RabbitMQ brokers had to maintain delivery status for each message (one of the threads in ActiveMQ kept writing KahaDB pages to disk during the test); and Kafka uses the sendfile API to reduce transmission overhead.
API
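As an illustration of the publish side of the API, a minimal example using today's Kafka Java client (the 0.7-era API from this talk differed); the topic name and broker address are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Sketch: publish a single message with the modern Java client.
// "page-views" and localhost:9092 are placeholders for this example.
public class PublishExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("page-views", "memberId-123", "profile-view"));
            producer.flush();        // ensure the batch is actually sent before exiting
        }
    }
}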