It's been a bit more than four months since the last Kowl release and we are happy to finally release version 1.3.0. In this post we will provide more details and explanations about the new features than we do in the release notes.

Protobuf Support

Ever since Kowl was first published, Protobuf support has been one of the most requested features among our user base. Because we wanted to keep it as easy as possible for our users, it was not an easy task for us to implement. Instead of requiring users to generate descriptor files from the .proto schemas using the protoc binary, we went the extra mile so that users only have to provide the .proto files and Kowl takes care of the rest. Internally, Kowl compiles the schemas into Go descriptor files, but that is a detail you no longer have to worry about.

The .proto schemas can be made available via a Git repository that Kowl clones and periodically pulls for new changes. Mappings from a proto type to the respective Kafka topic must be configured, as it's technically not possible to infer these mappings automatically (without using a schema registry):

```yaml
kafka:
  protobuf:
    enabled: true
    mappings:
      - topicName: owlshop-orders-protobuf
        valueProtoType: fake_models.Order
        # keyProtoType: A proto type for the key could be set as well
    # Other file providers besides a git repo might be added
    git:
      enabled: true
      refreshInterval: 5m
      basicAuth:
        enabled: true
        username: token
        password: redacted # Can also be set via environment variable `KAFKA_PROTOBUF_GIT_BASICAUTH_PASSWORD`
```

In the future we will add further providers that you can use to mount the .proto files, the most obvious one being a provider that accesses the local filesystem. Apart from that, we will look into schema registry support for Protobuf, depending on demand across the community.

kafka protobuf message view
Kowl renders Protobuf encoded messages in JSON

Reassigning Partitions

We implemented a three-step wizard that allows users to reassign partitions to different brokers. With KIP-455, Kafka added a new interface that allows Kafka clients to reassign replicas from one broker to another; it was released with Kafka v2.4.0. Before that release, replica reassignments were only possible via ZooKeeper.

Kafka itself ships a CLI tool (bin/kafka-reassign-partitions.sh) that accepts a JSON file with the desired partition reassignments. The JSON file for a single partition reassignment looks like this:

```json
{
  "version": 1,
  "partitions": [
    {
      "topic": "my-topic",
      "partition": 0,
      "replicas": [1, 2, 3]
    }
  ]
}
```
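To give an idea of what generating such a file involves, here is a small, self-contained Go sketch. The types and the `decommission` helper are our own illustration (not part of Kowl or Kafka); it simply rewrites every replica list to move all partitions off one broker:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// partitionAssignment mirrors one entry of the reassignment JSON
// accepted by kafka-reassign-partitions.sh.
type partitionAssignment struct {
	Topic     string `json:"topic"`
	Partition int    `json:"partition"`
	Replicas  []int  `json:"replicas"`
}

// reassignmentPlan mirrors the top-level JSON document.
type reassignmentPlan struct {
	Version    int                   `json:"version"`
	Partitions []partitionAssignment `json:"partitions"`
}

// decommission replaces every occurrence of oldBroker in the replica
// lists with newBroker and returns the resulting plan.
func decommission(current []partitionAssignment, oldBroker, newBroker int) reassignmentPlan {
	plan := reassignmentPlan{Version: 1}
	for _, pa := range current {
		replicas := make([]int, len(pa.Replicas))
		for i, b := range pa.Replicas {
			if b == oldBroker {
				b = newBroker
			}
			replicas[i] = b
		}
		plan.Partitions = append(plan.Partitions, partitionAssignment{
			Topic: pa.Topic, Partition: pa.Partition, Replicas: replicas,
		})
	}
	return plan
}

func main() {
	current := []partitionAssignment{
		{Topic: "my-topic", Partition: 0, Replicas: []int{1, 2, 3}},
		{Topic: "my-topic", Partition: 1, Replicas: []int{3, 1, 2}},
	}
	// Move everything hosted on broker 3 over to broker 4.
	out, err := json.MarshalIndent(decommission(current, 3, 4), "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

The resulting file could then be passed to kafka-reassign-partitions.sh via `--reassignment-json-file` together with `--execute`.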

If you want to use this tool to balance the partition count (or leadership) across the existing brokers, you are faced with the challenge of generating that JSON. If you want to decommission a broker and move its partitions to one or more other brokers, you also have to generate the JSON file first. To make this as easy as possible we created the three-step wizard:

  • Step 1: Choose the topics / partitions you want to reassign
  • Step 2: Choose one or more brokers that shall host the replicas of your partition
  • Step 3: Review the reassignment plan Kowl creates for you, configure a throttle rate if desired and start

Most of the reassignment logic happens after step 2, when Kowl creates the assignment plan. It makes some assumptions and generally considers these things, in descending order of priority, when creating the plan:

  1. Try to keep a replica on its current broker if that broker is among the selected target brokers
  2. Try to keep replicas in the same rack as before
  3. Assign to the broker with the fewest partitions
  4. Prefer the broker with the least used disk space
  5. Rotate partition leadership among the resulting assignments so that partition leaders are balanced across all brokers
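Priorities 3 and 4 boil down to ranking the candidate brokers. The following Go sketch shows one way this ranking could work; the `brokerStats` type and `pickBroker` function are our own illustration, not Kowl's internal implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// brokerStats is a simplified view of what a planner might consider
// when ranking candidate brokers for a new replica.
type brokerStats struct {
	ID             int
	PartitionCount int
	UsedDiskBytes  int64
}

// pickBroker orders the candidates by partition count first and used
// disk space second (priorities 3 and 4 above) and returns the ID of
// the best candidate.
func pickBroker(candidates []brokerStats) int {
	sorted := append([]brokerStats(nil), candidates...)
	sort.Slice(sorted, func(i, j int) bool {
		if sorted[i].PartitionCount != sorted[j].PartitionCount {
			return sorted[i].PartitionCount < sorted[j].PartitionCount
		}
		return sorted[i].UsedDiskBytes < sorted[j].UsedDiskBytes
	})
	return sorted[0].ID
}

func main() {
	candidates := []brokerStats{
		{ID: 1, PartitionCount: 40, UsedDiskBytes: 9 << 30},
		{ID: 2, PartitionCount: 35, UsedDiskBytes: 12 << 30},
		{ID: 3, PartitionCount: 35, UsedDiskBytes: 7 << 30},
	}
	// Brokers 2 and 3 tie on partition count; 3 wins on disk usage.
	fmt.Println(pickBroker(candidates)) // prints 3
}
```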

Another unique feature: on the first wizard page you can monitor in-progress reassignments, adapt the throttle rate as you like and, of course, set up further reassignments.

Showing the Kafka Version

Did you ever wonder what Kafka version is running in the cluster you work with? Wonder no more! In the Brokers tab we now show the version of each broker Kowl is connected to. Even though this is a minor feature, it comes in handy in larger organizations with dedicated platform teams, where the users who work with Kafka may not know what Kafka version, and therefore what features, are available.

Use Timestamp as Start Offset

You can now use a specific timestamp as the start offset for consuming messages in Kowl. Only messages with a timestamp at or after the given timestamp will be sent to the frontend.

kafka timestamp start offset
Consume messages right after a specific timestamp

Downloading messages

If you want to further process the messages from a search response, you can now download them as a JSON-formatted file. You can find the download button at the end of the message list.

kowl download messages
Download messages in JSON format from Kafka

New Kafka Library

Until this release we were using the popular and battle-tested Kafka library sarama. While it served the purpose of consuming and producing messages very well, we were looking for a library that offers access to more low-level features. Sarama was also lacking support for newer versions and features that had been introduced in Kafka v2.0.0+. Luckily we found franz-go, which seemed to meet all our requirements and wishes. We've ported the code base to franz-go and we are super happy with the new library.

As far as I know, this is also the only non-Java client that is feature complete. If you plan to write a Go application that talks to Kafka, you should definitely give it a try!


As you can see, we've added quite a few new features and made larger changes and bug fixes that are not even mentioned here. In the future we aim to cut releases more often. If you are curious what we'll be working on in upcoming releases, keep an eye on our release milestones on GitHub.