This release is the first in which we added a feature that mutates state within your Apache Kafka cluster: Consumer Group management. This blog post highlights a few of the new features brought in this release; the full changelog can be found in the release notes.

Consumer Group Management

Sometimes you may want to edit a consumer group's offsets to reprocess or skip certain messages. In this release we added the possibility to edit a group's offsets for all topics with an active group offset, for a single topic, or even for a single partition. You can set the offset to Start, End, or a specific timestamp, or you can copy the offsets from another group.
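
To make the semantics concrete, here is a minimal sketch of what such an offset edit amounts to against the Kafka Admin API, using the Java AdminClient. The topic, partition, group name, and timestamp are placeholders; Kowl performs the equivalent operations through its own Kafka client.

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ResetOffsetExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicPartition tp = new TopicPartition("orders", 0);

            // Resolve the offset that corresponds to a timestamp.
            // OffsetSpec.earliest() / OffsetSpec.latest() would map to "Start" / "End".
            ListOffsetsResult.ListOffsetsResultInfo info = admin
                .listOffsets(Map.of(tp, OffsetSpec.forTimestamp(1609459200000L)))
                .partitionResult(tp)
                .get();

            // Commit the resolved offset on behalf of the (inactive) consumer group.
            admin.alterConsumerGroupOffsets(
                "my-consumer-group",
                Map.of(tp, new OffsetAndMetadata(info.offset()))
            ).all().get();
        }
    }
}
```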

Copying offsets from another group is a handy way to migrate a consumer group from one name to another. When you choose this option, you can also specify whether you want to copy all offsets from the source group or only those for topics/partitions that both groups currently have offsets for.
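
For reference, copying a group's offsets boils down to reading the committed offsets of one group and committing them under another name. A minimal AdminClient sketch follows; the group names are placeholders, and this version copies all offsets rather than only the overlapping topics/partitions.

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CopyGroupOffsetsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Read the committed offsets of the source group...
            Map<TopicPartition, OffsetAndMetadata> sourceOffsets = admin
                .listConsumerGroupOffsets("old-group-name")
                .partitionsToOffsetAndMetadata()
                .get();

            // ...and commit them unchanged under the new group name.
            admin.alterConsumerGroupOffsets("new-group-name", sourceOffsets)
                .all()
                .get();
        }
    }
}
```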

Edit Consumer Group Offsets

Reworked Config Pages

It is well known that topic configurations can be inherited from brokers and from the default Kafka configuration. But did you know that there are various ConfigSources that can be used to configure the brokers, and that some take precedence over others if multiple are set for the same key? The reason is the different modes (static / dynamic) and scopes (per-broker vs. cluster-wide) a config may be applied at. Accordingly, the following so-called ConfigSources exist in Kafka (see the sketch after the list for how to inspect them yourself):

  1. Dynamic Topic Config
  2. Dynamic Broker Config
  3. Dynamic Default Broker Config
  4. Static Broker Config
  5. Default Config
  6. Dynamic Broker Logger Config
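
If you want to inspect these sources yourself, Kafka's Java AdminClient reports the ConfigSource of every effective config entry. A minimal sketch, where the bootstrap address and broker id are placeholders:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DescribeBrokerConfigExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Broker configs are addressed by broker id ("1" is just an example).
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");

            Config config = admin.describeConfigs(List.of(broker)).all().get().get(broker);
            for (ConfigEntry entry : config.entries()) {
                // entry.source() tells you which ConfigSource the effective value comes from,
                // e.g. DYNAMIC_BROKER_CONFIG, STATIC_BROKER_CONFIG or DEFAULT_CONFIG.
                System.out.printf("%s = %s (source: %s, sensitive: %s)%n",
                    entry.name(),
                    entry.isSensitive() ? "<redacted>" : entry.value(),
                    entry.source(),
                    entry.isSensitive());
            }
        }
    }
}
```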

We reworked the config pages to reflect these options. Our goal was to let you understand what configuration is currently active and which settings were inherited or overridden at each level. Sensitive config options (such as passwords) are marked as such and are, of course, never sent to the frontend.

Reworked config page for Brokers

Protobuf Schema Registry Support

Kowl already added Protobuf support in v1.3.0 a couple of weeks ago. However, if you use Protobuf along with Confluent's Schema Registry, there is a subtle difference in the serialized message that is written to Kafka: when you serialize Protobuf messages using Confluent's KafkaProtobufSerializer, the Protobuf message is wrapped into an additional binary envelope. This envelope contains the registered schema id and the message type that is used for this message. Accordingly, Kowl used to fail to deserialize these messages because it was not aware of this wrapper.
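
To illustrate why a plain Protobuf deserializer trips over these records, here is a rough sketch of the framing that the serializer prepends to every value. The helper below is hypothetical and only meant to visualize the byte layout.

```java
import java.nio.ByteBuffer;

public class ConfluentProtobufEnvelope {
    /**
     * Inspects the header of a Confluent-framed Protobuf record.
     * Layout: 1 magic byte (0x00), a 4-byte big-endian schema id, then a
     * varint-encoded list of message indexes identifying which message type
     * inside the registered .proto file was serialized, followed by the
     * actual Protobuf payload.
     */
    public static int readSchemaId(byte[] recordValue) {
        ByteBuffer buf = ByteBuffer.wrap(recordValue);

        byte magicByte = buf.get();
        if (magicByte != 0) {
            throw new IllegalArgumentException("Not a schema-registry framed record");
        }

        int schemaId = buf.getInt(); // 4-byte big-endian schema id
        // The message-index list and the Protobuf payload follow; decoding them
        // is exactly what requires schema registry awareness in the first place.
        return schemaId;
    }
}
```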

Now that we've added schema registry support for Protobuf, you only need to configure the schema registry and Kowl will automatically deserialize the messages into a human-readable JSON format. In contrast to manually managed Protobuf schemas, you don't need to provide any mappings or proto schemas that tell Kowl which message type to use for each topic. Kowl will pull that information from any schema registry that is API-compatible with Confluent's Schema Registry.
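
As a rough idea of what that could look like in Kowl's YAML configuration (the exact keys shown here are an assumption; please check the reference config in the Kowl repository):

```yaml
kafka:
  brokers:
    - "localhost:9092"
  schemaRegistry:
    enabled: true
    urls:
      - "http://my-schema-registry:8081"
```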

What's planned for the next release

The biggest planned feature for the next release is certainly support for Kafka Connect. We heard your feedback, and we want to enable you to manage your Kafka Connect clusters in the most efficient and comfortable way. Additionally, we want to continue adding features that mutate state, e.g. topic and record deletion. You can track the progress by looking at our GitHub milestone for release 1.5.