max.request.size error: RecordTooLargeException

2019-07-24  MGary

Resolution steps

1. In the Kafka broker configuration file server.properties, add:

message.max.bytes=100000000 (100 MB)
replica.fetch.max.bytes=100000000 (100 MB): the number of message bytes each partition attempts to fetch during replication; must be greater than or equal to message.max.bytes
2. In the producer configuration file producer.properties, add:

max.request.size=100000000 (100 MB): the maximum size of a request in bytes; should not exceed message.max.bytes
3. In the consumer configuration file consumer.properties, add:

fetch.message.max.bytes=100000000 (100 MB): the number of message bytes fetched per topic partition in each fetch request; must be greater than or equal to message.max.bytes
4. Spring Boot configuration:
 kafka:
    bootstrap-servers: localhost:9092
    producer:
      properties:
        max.request.size: 100000000
5. Once the change takes effect, the producer prints the following on startup:
ProducerConfig values: 
    acks = 1
    batch.size = 100000000
    bootstrap.servers = [127.0.0.1:9092]
    buffer.memory = 100000000
    client.id = 
    compression.type = none
    connections.max.idle.ms = 540000
    enable.idempotence = false
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 100000000
    metadata.max.age.ms = 300000
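The size relationships described in the steps above (producer max.request.size not larger than broker message.max.bytes, and both replica.fetch.max.bytes and the consumer fetch size at least message.max.bytes) can be sketched as a small validation helper. This is only an illustration: the dictionary keys mirror Kafka property names, but the `check_sizes` function is an assumption of this write-up, not part of any Kafka client.

```python
# Illustrative sanity check for the four size settings discussed above.
# The keys mirror Kafka property names; the checker itself is hypothetical.

def check_sizes(cfg: dict) -> list:
    """Return a list of violated constraints (empty means consistent)."""
    problems = []
    if cfg["max.request.size"] > cfg["message.max.bytes"]:
        problems.append("producer max.request.size exceeds broker message.max.bytes")
    if cfg["replica.fetch.max.bytes"] < cfg["message.max.bytes"]:
        problems.append("replica.fetch.max.bytes is smaller than message.max.bytes")
    if cfg["fetch.message.max.bytes"] < cfg["message.max.bytes"]:
        problems.append("consumer fetch.message.max.bytes is smaller than message.max.bytes")
    return problems

cfg = {
    "message.max.bytes": 100_000_000,        # broker limit, 100 MB
    "replica.fetch.max.bytes": 100_000_000,  # broker replication fetch
    "max.request.size": 100_000_000,         # producer request limit
    "fetch.message.max.bytes": 100_000_000,  # consumer fetch limit
}
print(check_sizes(cfg))  # [] when all four values are consistent
```

Setting all four values to the same number, as this post does, trivially satisfies every constraint.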