Kafka 2.8 is out in the wild and does not need ZooKeeper anymore

KIP-500 is implemented and Kafka is now completely standalone

It has been a long time coming, but KIP-500[1] has finally landed. It’s official: Apache Kafka no longer requires ZooKeeper. Note, however, that KRaft, the Kafka Raft implementation, is not yet recommended for production use. The full announcement from Confluent is here[2].

Regardless, this is a fantastic milestone, and kudos to all the contributors for making it happen: the operational simplification will be significant.

Taking it for a test drive

First, generate a new cluster ID:

$ ~/dev/kafka-2.8/bin/kafka-storage.sh random-uuid
HCsQovjcTs-8xhS1DSU5Gw

Next, format the storage directory. It defaults to /tmp/kraft-combined-logs and is set by the log.dirs property in the new config/kraft/server.properties file:

$ ~/dev/kafka-2.8/bin/kafka-storage.sh format -t HCsQovjcTs-8xhS1DSU5Gw -c ~/dev/kafka-2.8/config/kraft/server.properties
Formatting /tmp/kraft-combined-logs
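
The ZooKeeper-free mode is driven by a handful of new properties in that same config/kraft/server.properties file; notably, there is no zookeeper.connect at all. From memory, the relevant defaults in the 2.8 distribution look roughly like the excerpt below (treat it as illustrative and check your own copy, as the exact values may differ):

# single node acting as both broker and Raft controller
process.roles=broker,controller
# this node's ID; must appear in controller.quorum.voters
node.id=1
# the Raft quorum, in id@host:controller-port form
controller.quorum.voters=1@localhost:9093
# broker listener plus a dedicated controller listener
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
# where the data and the @metadata log live
log.dirs=/tmp/kraft-combined-logs

With process.roles set to both broker and controller, a single combined node is started, which is exactly what this test drive needs.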

And simply start the broker:

$ ~/dev/kafka-2.8/bin/kafka-server-start.sh ~/dev/kafka-2.8/config/kraft/server.properties

Which produces the following output:

[2021-03-31 23:25:56,097] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-03-31 23:25:56,425] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-03-31 23:25:56,664] INFO [Log partition=@metadata-0, dir=/tmp/kraft-combined-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-03-31 23:25:56,727] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2021-03-31 23:25:56,914] INFO [RaftManager nodeId=1] Completed transition to Unattached(epoch=0, voters=[1], electionTimeoutMs=1165) (org.apache.kafka.raft.QuorumState)
[2021-03-31 23:25:56,917] INFO [RaftManager nodeId=1] Completed transition to Candidate(localId=1, epoch=1, retries=1, electionTimeoutMs=1680) (org.apache.kafka.raft.QuorumState)
[2021-03-31 23:25:56,921] INFO [RaftManager nodeId=1] Completed transition to Leader(localId=1, epoch=1, epochStartOffset=0) (org.apache.kafka.raft.QuorumState)
[2021-03-31 23:25:57,004] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2021-03-31 23:25:57,009] INFO [kafka-raft-outbound-request-thread]: Starting (kafka.raft.RaftSendThread)
...
[2021-03-31 23:25:57,959] INFO Kafka version: 2.8.0-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser)
[2021-03-31 23:25:57,959] INFO Kafka commitId: 08849bc3909d4fab (org.apache.kafka.common.utils.AppInfoParser)
[2021-03-31 23:25:57,959] INFO Kafka startTimeMs: 1617225957958 (org.apache.kafka.common.utils.AppInfoParser)
[2021-03-31 23:25:57,959] INFO [Controller 1] The request from broker 1 to unfence has been granted because it has caught up with the last committed metadata offset 1. (org.apache.kafka.controller.BrokerHeartbeatManager)
[2021-03-31 23:25:57,960] INFO Kafka Server started (kafka.server.KafkaRaftServer)
[2021-03-31 23:25:57,963] INFO [Controller 1] Unfenced broker: UnfenceBrokerRecord(id=1, epoch=0) (org.apache.kafka.controller.ClusterControlManager)
[2021-03-31 23:25:57,989] INFO [BrokerLifecycleManager id=1] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)

It’s alive. A topic can be created using the usual tooling:

$ ~/dev/kafka-2.8/bin/kafka-topics.sh --create \
    --topic test-topic \
    --partitions 1 \
    --replication-factor 1 \
    --bootstrap-server localhost:9092
[2021-03-31 23:28:21,836] INFO [Controller 1] createTopics result(s): CreatableTopic(name='test-topic', numPartitions=1, replicationFactor=1, assignments=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager)
[2021-03-31 23:28:21,838] INFO [Controller 1] Created topic test-topic with ID H6YadkN7SUGXheMu6MJ1uA. (org.apache.kafka.controller.ReplicationControlManager)
[2021-03-31 23:28:21,838] INFO [Controller 1] Created partition H6YadkN7SUGXheMu6MJ1uA:0 with PartitionControlInfo(replicas=[1], isr=[1], removingReplicas=null, addingReplicas=null, leader=1, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager)
[2021-03-31 23:28:21,904] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(test-topic-0) (kafka.server.ReplicaFetcherManager)
[2021-03-31 23:28:21,926] INFO [Log partition=test-topic-0, dir=/tmp/kraft-combined-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-03-31 23:28:21,930] INFO Created log for partition test-topic-0 in /tmp/kraft-combined-logs/test-topic-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-03-31 23:28:21,931] INFO [Partition test-topic-0 broker=1] No checkpointed highwatermark is found for partition test-topic-0 (kafka.cluster.Partition)
[2021-03-31 23:28:21,932] INFO [Partition test-topic-0 broker=1] Log loaded for partition test-topic-0 with initial high watermark 0 (kafka.cluster.Partition)
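
As a quick sanity check (not part of the run above), the same kafka-topics.sh tooling can describe the freshly created topic; it should report a single partition with broker 1 as leader and sole replica:

$ ~/dev/kafka-2.8/bin/kafka-topics.sh --describe \
    --topic test-topic \
    --bootstrap-server localhost:9092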

Impressive.