It is based on the four-part blog series Streams and Tables in Apache Kafka.

1 A Primer

Kafka is an event streaming platform that provides three key capabilities in a scalable, fault-tolerant, and reliable manner:

  • publish and subscribe to events
  • store events for as long as you want
  • process and analyze events

An event records the fact that “something happened” in the world. An event usually has a key, a value, and a timestamp. An event stream records the history of what has happened as a sequence of events. This history is an ordered sequence, or chain, of events, and new events are constantly appended to it.
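As a rough illustration of this structure (an illustrative type only, not part of Kafka’s client API; field names are made up), an event can be thought of as:

    // A minimal sketch of an event: a key, a value, and a timestamp.
    record PaymentEvent(String key, String value, long timestampMs) {}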

Compared to an event stream, a table is mutable and represents the state of the world at a particular point in time. A stream is append-only, and its existing events are immutable. Both streams and tables are persistent, durable, and fault-tolerant. Events in a stream can be keyed, and one key may have many events – a key does not uniquely identify an event.

2 Topics, Partitions, and Storage

A topic is a storage-layer concept: an unbounded sequence of serialized events, where each event is represented as an encoded key-value pair or “message”. It is where events are stored, subject to a configurable storage policy. A topic has a name.

Events are serialized when they are written to a topic and deserialized when they are read. There is no single storage format in Kafka because different clients may use different formats such as protobuf or JSON.

The components that store and serve the data are called Kafka brokers. A broker runs on a single server. Brokers are agnostic to the serialization format or “type” of a stored event: all they see is a pair of raw bytes for the event key and event value (<byte[], byte[]>) in both write and read operations.

Kafka topics are partitioned, in the sense that a topic is spread over a number of “buckets” located on different brokers. You set the number of partitions when creating a topic. Each partition can be replicated (commonly with a factor of 3), even across geo-regions or data centers. The partition is a fundamental concept in Kafka: operations such as reading, writing, processing, joining, storing, replicating, and ordering are all performed per partition.

Kafka decouples event producers from event consumers. Producers determine event partitioning: the topic name, event key, and cluster metadata are used to determine the target partition. The default partitioning function is ƒ(event.key, event.value) = hash(event.key) % numTopicPartitions. The primary goal of partitioning is the ordering of events: a producer should send “related” events to the same partition, because Kafka guarantees the ordering of events only within a given partition of a topic, not across partitions of the same topic. Events with the same key go to the same partition. There is another reason why partitioning matters: consumers in a Kafka consumer group read from the same topic(s) to process the data collaboratively in parallel, and in such cases it is important to be able to control which partitions go to which participants within the group.
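For example, a producer keeps related events ordered by giving them the same key. The sketch below assumes a topic named payments and a broker at localhost:9092; note that Kafka’s built-in default partitioner actually hashes the serialized key with murmur2, which behaves like the hash-modulo function above.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PaymentProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Both events share the key "alice", so the default partitioner sends
                // them to the same partition and their relative order is preserved.
                producer.send(new ProducerRecord<>("payments", "alice", "debit:10"));
                producer.send(new ProducerRecord<>("payments", "alice", "credit:5"));
            }
        }
    }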

Events with the same key may end up in different partitions for two common reasons:

  • Topic configuration: for example, the number of partitions of a topic changes.
  • Producer configuration: a producer uses a custom partitioning function that does not depend purely on the event key (see the sketch after this list).
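A hypothetical custom partitioner illustrating the second cause; the class name and routing rule are made up for this sketch, and it deliberately ignores the event key:

    import java.util.Map;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;

    // Routes events by value length rather than by key, so events with the
    // same key can land on different partitions.
    public class ValueLengthPartitioner implements Partitioner {
        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            int numPartitions = cluster.partitionsForTopic(topic).size();
            return (valueBytes == null ? 0 : valueBytes.length) % numPartitions;
        }

        @Override
        public void configure(Map<String, ?> configs) {}

        @Override
        public void close() {}
    }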

The author recommends using 30 partitions per topic so that the number of partitions rarely needs to be changed later.

3 Processing

Kafka Streams and ksqlDB are tools that turn events stored in “raw” topics into streams and tables. An event stream is a topic with a schema. A table is a materialized view that relies on changes being made elsewhere rather than being directly updatable itself. Both streams and tables are partitioned.
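With the Kafka Streams API, for example, the same kind of topic can be read either as a stream or as a table; the topic names below are assumptions for illustration:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;

    StreamsBuilder builder = new StreamsBuilder();

    // Read the "payments" topic as an event stream: every event is kept.
    KStream<String, String> payments =
        builder.stream("payments", Consumed.with(Serdes.String(), Serdes.String()));

    // Read the "balances" topic as a table: only the latest value per key is kept.
    KTable<String, String> balances =
        builder.table("balances", Consumed.with(Serdes.String(), Serdes.String()));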

The producer and consumer must agree on a data contract so that a schema can be enforced. A schema registry is one common way to achieve this. Broker-side schema validation allows the contract to be enforced centrally before data is written.
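One sketch of such a contract in practice: a producer configured against Confluent Schema Registry, so value schemas are registered and checked on write (the registry URL and broker address are assumed values):

    import java.util.Properties;

    // Producer settings that register/validate value schemas via Schema Registry.
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
    props.put("schema.registry.url", "http://localhost:8081");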

Data in a topic is processed per partition. A Kafka consumer group reads from the same input topic(s). Group membership is configured with one of the following (a minimal configuration sketch follows the list):

  • application.id for Kafka Streams applications
  • ksql.service.id for ksqlDB servers
  • group.id for lower-level consumer clients and for worker nodes in a Kafka Connect cluster
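For instance, a minimal sketch of the first and last options (the ids and broker address are assumed values):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    // Kafka Streams application: application.id doubles as the consumer group id.
    Properties streamsProps = new Properties();
    streamsProps.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-enricher");
    streamsProps.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

    // Plain consumer client: group.id defines the consumer group.
    Properties consumerProps = new Properties();
    consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-consumers");
    consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");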

Within a group, every member instance gets an exclusive slice of the data - one or more topic partitions - which it processes in isolation from the other instances. The group uses the consumer group protocol to detect whenever new instances join or existing instances leave, and it then automatically redistributes the workload and partition assignments. This is called rebalancing in Kafka.

The unit of parallelism for processing/rebalancing is a stream task. An application instance can run zero, one, or multiple tasks during its lifecycle. Input partitions from topics/streams/tables are assigned 1:1 to these tasks for processing. To recap: each task is responsible for processing one partition. There are exactly as many stream tasks as there are input partitions. These stream tasks are evenly assigned by Kafka to the running instances of your application to allow for scalable and parallel processing of the data.
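As a worked example (the partition and thread counts are assumed): an input topic with 6 partitions yields 6 stream tasks; running 2 application instances with 3 processing threads each gives every thread exactly one task.

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    // Each instance runs 3 stream threads; Kafka assigns 3 of the 6 tasks to each instance.
    Properties props = new Properties();
    props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 3);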

Tables use state stores to maintain their state between events. State stores are materialized on local disk inside your application instances or ksqlDB servers for fast access.
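For instance, a Kafka Streams sketch in which a running count per key is maintained in a local state store (the topic and store names are assumed):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, String> payments =
        builder.stream("payments", Consumed.with(Serdes.String(), Serdes.String()));

    // The resulting table is backed by a local state store named "payment-counts",
    // so the count per key is kept between events.
    KTable<String, Long> counts =
        payments.groupByKey().count(Materialized.as("payment-counts"));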

Global tables in Kafka Streams are an exception: they are not partitioned, and each application instance holds a full copy of the table’s data.

4 Advanced Topics

Kafka data is stored reliably and durably. Any data stored in a table is also stored remotely in Kafka.

Application instances can be configured to maintain passive replicas of another instance’s table data.
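In Kafka Streams this is done with standby replicas; a minimal sketch (the replica count of 1 is an assumed choice):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    // Keep one passive copy of each instance's local table/state store data on
    // another instance, so failover does not have to rebuild the state from scratch.
    Properties props = new Properties();
    props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);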