
Using Same Group ID with Multiple Consumer Groups Causes Problems #2975

Open
ugrkm opened this issue Aug 29, 2024 · 3 comments
ugrkm commented Aug 29, 2024

Description

I use ConsumerGroups to consume Kafka messages. I have two instances running on Kubernetes, and I give the same group ID to both. This is for fault tolerance (if one of the pods goes down, the other should be able to keep consuming). What happens is that the two instances run side by side, and one of them consumes messages fine. But when I kill that pod, the other one does not start consuming. My guess is that the broker only accepts one pod's ConsumerGroup and just ignores the other one? Could someone please clarify the behavior?

Versions
Sarama: v1.43.2
Kafka: 3.5.1
Go: 1.22.2
Configuration
config.Consumer.Offsets.AutoCommit.Enable = false
config.Consumer.Offsets.Initial = sarama.OffsetOldest
config.Consumer.Return.Errors = true
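For reference, here is a minimal sketch of what a Sarama consumer-group setup with this configuration could look like. The broker address, group ID, and topic name are placeholders, not taken from the issue. One detail worth checking against the reported symptom: `Consume` returns after every rebalance, so it must be called in a loop, otherwise the surviving instance stops consuming exactly when the other pod dies.

```go
package main

import (
	"context"
	"log"

	"github.com/IBM/sarama" // import path for Sarama v1.40+
)

// handler implements sarama.ConsumerGroupHandler.
type handler struct{}

func (handler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (handler) Cleanup(sarama.ConsumerGroupSession) error { return nil }

func (handler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		log.Printf("partition=%d offset=%d", msg.Partition, msg.Offset)
		sess.MarkMessage(msg, "")
		sess.Commit() // explicit commit, since AutoCommit.Enable = false
	}
	return nil
}

func main() {
	config := sarama.NewConfig()
	config.Version = sarama.V3_5_1_0 // match the broker version
	config.Consumer.Offsets.AutoCommit.Enable = false
	config.Consumer.Offsets.Initial = sarama.OffsetOldest
	config.Consumer.Return.Errors = true

	// "my-group", "my-topic", and the broker list are placeholders.
	group, err := sarama.NewConsumerGroup([]string{"localhost:9092"}, "my-group", config)
	if err != nil {
		log.Fatal(err)
	}
	defer group.Close()

	ctx := context.Background()
	for {
		// Consume returns whenever the group rebalances (e.g. a member
		// leaves), so it must run in a loop for the instance to rejoin
		// and pick up the reassigned partitions.
		if err := group.Consume(ctx, []string{"my-topic"}, handler{}); err != nil {
			log.Printf("consume error: %v", err)
			return
		}
	}
}
```

If the surviving pod's code calls `Consume` only once, it will exit the consume session on the rebalance triggered by the dead pod and never resume, which matches the behavior described above.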

lbydev commented Sep 14, 2024

Does your current topic have only one partition? If so, only one consumer will be able to consume messages. Could you check the number of partitions in your topic?
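The partition count can be checked with the `kafka-topics.sh` tool that ships with Kafka; the broker address and topic name below are placeholders:

```shell
# Describe the topic: the output lists one line per partition,
# plus the partition count in the summary line.
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic my-topic
```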

puellanivis (Contributor) replied:

> Does your current topic have only one partition? If so, only one consumer will be able to consume messages. Could you check the number of partitions in your topic?

I mean, even if this were the case, it should redistribute partitions to the consumers if one of them fails, right?


lbydev commented Sep 14, 2024

Thank you for your reply. I apologize for the misunderstanding. You are correct that Kafka should redistribute partitions among the remaining consumers in the group when one of them fails.
