Robust Message Serialization in Apache Kafka Using Apache Avro, Part 3

Categories: Avro CDH How-to Kafka

Part 3: Configuring Clients

Earlier, we introduced Kafka Serializers and Deserializers capable of writing and reading Kafka records in Avro format. In this part, we will see how to configure producers and consumers to use them.

Setting up a Kafka Topic for use as a Schema Store

KafkaTopicSchemaProvider works with a Kafka topic as its persistent store. This topic will contain at most a few thousand records: the schemas. It does not need multiple partitions, but it must remain available even when one of the brokers is down. That’s why we configure it with a replication factor of 3 and a minimum of 2 in-sync replicas.
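Assuming the standard Kafka command-line tools, such a topic could be created along these lines; the topic name, ZooKeeper address, and partition count are placeholders, not values from the original post:

```shell
# Create a single-partition, highly available topic for schema storage.
# Topic name and ZooKeeper address are placeholders for your environment.
kafka-topics --create \
  --zookeeper zk1:2181 \
  --topic avro-schemas \
  --partitions 1 \
  --replication-factor 3 \
  --config min.insync.replicas=2
```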

Of course, in a production environment, we would set up Apache Sentry rules to allow only certain principals to add schemas.

As a next step, let’s add a schema to this topic with the administration tool created in Part 2.
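The exact invocation depends on the tool built in Part 2; a hypothetical call (the class name, options, and schema file name are all illustrative) might look like this:

```shell
# Hypothetical invocation of the Part 2 admin tool; adjust names to your build.
java -cp schema-admin-tool.jar com.example.schemaprovider.SchemaAdminTool \
  --bootstrap-servers broker1:9092 \
  --schema-topic avro-schemas \
  --add user_v1.avsc
```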

Example Schema

The first version of our schema is a simplistic record that captures some attributes of a user.
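A minimal sketch of such a first version; the field names and namespace are illustrative, not taken from the original post:

```json
{
  "type": "record",
  "name": "User",
  "namespace": "com.example.model",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "age", "type": "int"}
  ]
}
```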

The later version defines a new field and changes some types. For convenience, we set the record’s name to User2 in that schema so we can generate classes for both of them in the same project. In a real-life scenario, however, User2 would be a later version of the same class rather than a different class coexisting with User.
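Continuing the sketch above, the later version might widen `age` from `int` to `long` (a promotion Avro’s schema resolution allows) and add an optional `email` field; again, these fields are illustrative:

```json
{
  "type": "record",
  "name": "User2",
  "namespace": "com.example.model",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "age", "type": "long"},
    {"name": "email", "type": ["null", "string"], "default": null}
  ]
}
```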

Configuring the Producer

We add some general producer config:
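A sketch of that general configuration, using plain string keys; the broker address and the specific settings are placeholders for your environment:

```java
import java.util.Properties;

public class ProducerConfigExample {
    // Base producer settings; broker address is a placeholder.
    public static Properties baseProducerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("acks", "all");   // wait for all in-sync replicas
        props.put("retries", "3");
        return props;
    }
}
```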

Then we configure our producer to use our Serializer:
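A sketch of that step; the value serializer’s class name is an assumption standing in for the one built in Part 2:

```java
import java.util.Properties;

public class SerializerConfigExample {
    // Point the producer at our Avro serializer.
    // The value.serializer class name is assumed, not from the original post.
    public static Properties withSerializers(Properties props) {
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "com.example.serde.KafkaSpecificRecordSerializer");
        return props;
    }
}
```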

Next, we configure the Serializer to serialize User objects:
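A sketch of that setting; the configuration key is hypothetical, so use the constant your serializer actually defines:

```java
import java.util.Properties;

public class ValueClassConfigExample {
    // Tell the serializer which generated Avro class it will receive.
    // The key "value.record.class" is illustrative, not a real Kafka property.
    public static Properties withValueClass(Properties props) {
        props.put("value.record.class", "com.example.model.User");
        return props;
    }
}
```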

Next, we configure the SchemaProvider to use Kafka for schema storage and set the topic name. We also have to set the bootstrap servers; this allows us to use a different Kafka cluster as the schema provider backend than the one we are producing to.
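A sketch of those settings; all of the keys below are illustrative, so substitute the constants KafkaTopicSchemaProvider actually defines:

```java
import java.util.Properties;

public class SchemaProviderConfigExample {
    // Configure the Kafka-backed schema provider. Every key here is
    // illustrative; use the constants your SchemaProvider defines.
    public static Properties withSchemaProvider(Properties props) {
        props.put("schemaprovider.class",
                "com.example.schemaprovider.KafkaTopicSchemaProvider");
        props.put("schemaprovider.topic", "avro-schemas");
        // May point at a different cluster than the one we produce to.
        props.put("schemaprovider.bootstrap.servers", "schemabroker1:9092");
        return props;
    }
}
```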

We can now use this configuration to create a producer and produce a User object to a Kafka topic.
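Putting the pieces together, producing might look like the following sketch. It requires the kafka-clients library and an Avro-generated `User` class (with the illustrative fields sketched earlier), so it is not runnable as-is; the topic name is also a placeholder:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProduceUserExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // ... all the producer, serializer, and schema provider settings from above ...
        try (KafkaProducer<String, User> producer = new KafkaProducer<>(props)) {
            // Avro-generated SpecificRecord classes provide a builder.
            User user = User.newBuilder().setName("alice").setAge(30).build();
            producer.send(new ProducerRecord<>("users", user.getName(), user));
        }
    }
}
```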

Configuring the Consumer

We set up the consumer in quite a similar way. First comes some general configuration:
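A sketch of that configuration; the broker address and group id are placeholders:

```java
import java.util.Properties;

public class ConsumerConfigExample {
    // Base consumer settings; broker address and group id are placeholders.
    public static Properties baseConsumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("group.id", "user-consumer-group");
        props.put("auto.offset.reset", "earliest");
        return props;
    }
}
```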

Then we specify our Deserializer and the class it will read. We use a different version of the class: User2.
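A sketch of that step; as before, the deserializer class name and the record-class key are assumptions standing in for the ones built in Part 2:

```java
import java.util.Properties;

public class DeserializerConfigExample {
    // Point the consumer at our Avro deserializer and tell it which
    // generated class to produce. Class name and key are assumed.
    public static Properties withDeserializers(Properties props) {
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "com.example.serde.KafkaSpecificRecordDeserializer");
        props.put("value.record.class", "com.example.model.User2");
        return props;
    }
}
```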

Then we need to set up the Schema provider just like we did above.

With this configuration, we can set up a consumer and start polling for records.
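A sketch of such a poll loop; like the producer example, it requires the kafka-clients library and the generated `User2` class, and the topic name is a placeholder:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumeUserExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // ... all the consumer, deserializer, and schema provider settings from above ...
        try (KafkaConsumer<String, User2> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("users"));
            while (true) {
                ConsumerRecords<String, User2> records =
                        consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, User2> record : records) {
                    System.out.println(record.key() + " -> " + record.value());
                }
            }
        }
    }
}
```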


We have shown how Avro can be used in conjunction with Kafka to track evolving versions of a schema over time. In situations where different groups manage the data over its lifespan, this technique provides a clean way for each group to use schemas to manage the data-evolution handoff.

The code for this blog post can be found in Cloudera’s kafka-examples GitHub repository.


