Spark Streaming integration with Kafka 0.10 offsets
2018-06-01
稻草人_d41b
In the Spark Streaming + Kafka 0.10 integration, the default mapping between Spark partitions and Kafka partitions is 1:1, so each Spark partition corresponds to exactly one Kafka partition. The Kafka consumer inside each Spark partition is cached, so when new data is fetched, the existing CachedKafkaConsumer can be reused for consumption; only the offsetRange needs to be updated. Attached below: the structure of the spark-streaming-kafka-0-10 jar.
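As a concrete illustration of the 1:1 partition mapping and offset handling described above, here is a minimal sketch using the public spark-streaming-kafka-0-10 API (`KafkaUtils.createDirectStream`, `HasOffsetRanges`, `CanCommitOffsets`). The broker address, topic name, and group id are placeholder assumptions; this is not runnable without a Spark cluster and a Kafka broker.

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object DirectStreamExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-0.10-offsets")
    val ssc  = new StreamingContext(conf, Seconds(5))

    // Placeholder connection settings; adjust for your environment.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "example-group",
      "auto.offset.reset"  -> "latest",
      // Disable auto-commit so offsets are committed only after processing.
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val topics = Array("example-topic")

    // One Spark partition per Kafka partition; each executor caches
    // its KafkaConsumer (the CachedKafkaConsumer mentioned above).
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](topics, kafkaParams))

    stream.foreachRDD { rdd =>
      // Each OffsetRange describes the slice of one Kafka partition
      // that backs the corresponding Spark partition of this batch.
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

      rdd.foreachPartition { records =>
        records.foreach(record => println(s"${record.key} -> ${record.value}"))
      }

      // Commit the consumed offsets back to Kafka after processing.
      stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Because only the offsetRange advances between batches while the consumer object is reused, repeated connection setup per batch is avoided; this is the caching behavior the paragraph above describes.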
![](https://img.haomeiwen.com/i12472907/cc5e910e58192bff.png)
![](https://img.haomeiwen.com/i12472907/eb5ea7b50193f96a.png)
![](https://img.haomeiwen.com/i12472907/e4ea3b0be1a43bbc.png)
![](https://img.haomeiwen.com/i12472907/01802508b9ffe849.png)
![](https://img.haomeiwen.com/i12472907/831ccad09b206f9b.png)
![](https://img.haomeiwen.com/i12472907/a1aa28cc35adc7bc.png)
![](https://img.haomeiwen.com/i12472907/1e4bfad1892866bf.png)
![](https://img.haomeiwen.com/i12472907/8920337fe8e74395.png)
![](https://img.haomeiwen.com/i12472907/bb1aa2d5a53b7aef.png)
![](https://img.haomeiwen.com/i12472907/4d12247889ff8f78.png)
![](https://img.haomeiwen.com/i12472907/ba7657e62199e026.png)
![](https://img.haomeiwen.com/i12472907/c5d5b02280804941.png)