I use ConsumerGroups to consume Kafka messages. I have two instances running on Kubernetes, and I give both the same group ID for fault tolerance (if one pod goes down, the other should be able to keep consuming). What happens is: the two instances run side by side, and one of them consumes messages fine. But when I kill that pod, the other one doesn't take over consuming. My guess is that the broker only accepts one pod's ConsumerGroup and just ignores the other one? Could someone please clarify this behavior?
Does your current topic have only one partition? If so, only one consumer will be able to consume messages. Could you check the number of partitions in your topic?
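In case it helps, the partition count can be checked with the `kafka-topics.sh` tool shipped with Kafka (the broker address `localhost:9092` and topic name `my-topic` below are placeholders; substitute your own):

```shell
# Describe the topic; the output's "PartitionCount" field shows how many
# partitions it has, and one line per partition lists its leader/replicas.
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic my-topic
```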
I mean, even if this were the case, Kafka should redistribute the partitions to the remaining consumers if one of them fails, right?
Thank you for your reply. I apologize for the misunderstanding. You are correct that Kafka should redistribute partitions among the remaining consumers in the group when one of them fails.