Releases: qntfy/frafka
v0.4.0
Breaking configuration changes for Kafka clients
- `KAFKA_CONFIG_FILE` takes the path of a config file for Kafka client parameters (see the provided example file and discussion in the README).
- The following previously optional config variables for setting specific Kafka parameters are no longer supported, and must be set using `KAFKA_CONFIG` or `KAFKA_CONFIG_FILE`:
| Prior Config | Kafka Parameter to Set | Source/Sink | Default if unset |
|---|---|---|---|
| `KAFKA_MAX_BUFFER_KB` | `queued.max.messages.kbytes` | Both | 16384 (16 MB) |
| `KAFKA_CONSUME_LATEST_FIRST` | `auto.offset.reset` | Source | `earliest` |
| `KAFKA_COMPRESSION` | `compression.type` | Sink | `none` |
Required configuration (`KAFKA_BROKERS`, `KAFKA_TOPICS`, `KAFKA_CONSUMER_GROUP`) should still be set via environment variables.
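
As an illustration, the parameters from the table above can be supplied through `KAFKA_CONFIG` in the space-separated `key=value` format introduced in v0.3.0. This is only a minimal sketch with hypothetical broker, topic, and group names, not a prescribed setup:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Required connection settings remain plain environment variables.
	os.Setenv("KAFKA_BROKERS", "broker1:9092,broker2:9092") // hypothetical brokers
	os.Setenv("KAFKA_TOPICS", "events")                     // hypothetical topic
	os.Setenv("KAFKA_CONSUMER_GROUP", "example-group")      // hypothetical group

	// Parameters previously set via KAFKA_MAX_BUFFER_KB, KAFKA_CONSUME_LATEST_FIRST,
	// and KAFKA_COMPRESSION are now passed directly as librdkafka settings.
	os.Setenv("KAFKA_CONFIG",
		"queued.max.messages.kbytes=16384 auto.offset.reset=earliest compression.type=gzip")

	fmt.Println("KAFKA_CONFIG =", os.Getenv("KAFKA_CONFIG"))
}
```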
v0.3.0
- Adds support for setting arbitrary librdkafka configuration on the underlying Kafka client via `KAFKA_CONFIG`, using the `key1=value1 key2=value2 ...` format (see the sketch after this list).
- Adds a shortcut, `KAFKA_COMPRESSION`, for setting compression on a sink; valid values are `none`, `gzip`, `snappy`, `lz4`, and `zstd`.
- Bumps to Go 1.14 and updates the versions of a few underlying dependencies.
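
A minimal sketch of how the space-separated `key=value` format might be split into librdkafka settings; `parseKafkaConfig` is a hypothetical helper for illustration, not frafka's actual parser:

```go
package main

import (
	"fmt"
	"strings"
)

// parseKafkaConfig splits a KAFKA_CONFIG-style string ("key1=value1 key2=value2 ...")
// into a map of librdkafka settings. Hypothetical helper, for illustration only.
func parseKafkaConfig(raw string) map[string]string {
	cfg := make(map[string]string)
	for _, pair := range strings.Fields(raw) {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) == 2 {
			cfg[kv[0]] = kv[1]
		}
	}
	return cfg
}

func main() {
	fmt.Println(parseKafkaConfig("compression.type=gzip queued.max.messages.kbytes=16384"))
}
```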
v0.2.0
Update the underlying confluent-kafka-go/librdkafka version from 0.11.6 to 1.4.2. With this change, the librdkafka binary is bundled and no longer needs to be installed separately. Note that the library must still be compiled with `CGO_ENABLED=1`, and you should use `-tags musl` when building on Alpine or other musl-based distributions.
v0.1.3
Add `KAFKA_MAX_BUFFER_KB` configuration, with a default of 16 MB (per topic+partition being consumed, plus each producer). Previously this used the librdkafka default of 1 GB, which can result in OOM errors when consuming a backlog in memory-constrained environments.
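
Since the buffer applies per consumed topic+partition and per producer, worst-case memory use scales with partition count. A back-of-the-envelope calculation with hypothetical counts:

```go
package main

import "fmt"

func main() {
	const defaultBufferKB = 16384 // frafka default for KAFKA_MAX_BUFFER_KB (16 MB)

	// Hypothetical deployment: 3 topics with 8 partitions each consumed, plus 2 producers.
	consumedPartitions := 3 * 8
	producers := 2

	totalMB := (consumedPartitions + producers) * defaultBufferKB / 1024
	fmt.Printf("worst-case buffer usage: ~%d MB\n", totalMB) // ~416 MB
}
```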
v0.1.2
Support `Sink.Close()` being called multiple times on the same object (previously it hung on subsequent calls).
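
One common way to make a `Close()` method safe to call repeatedly is `sync.Once`; the sketch below illustrates the pattern and is not frafka's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// sink is a stand-in type illustrating an idempotent Close; not frafka's Sink.
type sink struct {
	closeOnce sync.Once
	done      chan struct{}
}

func (s *sink) Close() error {
	s.closeOnce.Do(func() {
		close(s.done) // tear down exactly once
	})
	return nil
}

func main() {
	s := &sink{done: make(chan struct{})}
	fmt.Println(s.Close()) // first call closes
	fmt.Println(s.Close()) // second call returns immediately instead of hanging
}
```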
v0.1.1
Add a `Source.Ping()` method to check connectivity to the configured brokers and topics. It is run during `Init.Source()` to ensure errors are detected at start-up.
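
One way such a connectivity check can be implemented with confluent-kafka-go is to request metadata for each configured topic; the `ping` helper and the broker/topic names below are hypothetical, not frafka's actual implementation:

```go
package main

import (
	"fmt"
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

// ping requests metadata for each topic and fails if any lookup errors,
// surfacing connectivity problems at start-up. Hypothetical helper.
func ping(c *kafka.Consumer, topics []string) error {
	for _, t := range topics {
		topic := t
		if _, err := c.GetMetadata(&topic, false, 5000); err != nil {
			return fmt.Errorf("ping failed for topic %q: %w", t, err)
		}
	}
	return nil
}

func main() {
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092", // hypothetical broker
		"group.id":          "example-group",  // hypothetical consumer group
	})
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	if err := ping(c, []string{"events"}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("brokers and topics reachable")
}
```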
v0.1.0
Update to librdkafka v0.11.6
Includes the latest underlying Kafka client. Bumping the minor version as this may introduce new issues with static builds; see confluentinc/confluent-kafka-go#151.
v0.0.2
Does not send `kafka.OffsetsCommitted` or `kafka.PartitionEOF` events to the `Events()` channel unless they also report an error.
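
In confluent-kafka-go terms, this amounts to dropping those event types unless their `Error` field is set. The sketch below illustrates the idea with a hypothetical `forwardEvent` helper, not frafka's actual code:

```go
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

// forwardEvent drops kafka.OffsetsCommitted and kafka.PartitionEOF events
// unless they report an error; everything else is passed through.
// Hypothetical illustration, not frafka's implementation.
func forwardEvent(e kafka.Event, out chan<- kafka.Event) {
	switch ev := e.(type) {
	case kafka.OffsetsCommitted:
		if ev.Error != nil {
			out <- ev
		}
	case kafka.PartitionEOF:
		if ev.Error != nil {
			out <- ev
		}
	default:
		out <- ev
	}
}

func main() {
	out := make(chan kafka.Event, 2)
	forwardEvent(kafka.OffsetsCommitted{}, out)                                    // dropped: no error
	forwardEvent(kafka.OffsetsCommitted{Error: fmt.Errorf("commit failed")}, out) // forwarded
	close(out)
	for e := range out {
		fmt.Println(e)
	}
}
```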
v0.0.1
Frafka is now open source!