Monday, May 15, 2017

Securing Apache Kafka with Kerberos

Last year, I wrote a series of blog articles on securing Apache Kafka. The articles covered how to secure access to the Apache Kafka broker using TLS client authentication, and how to implement authorization policies using Apache Ranger and Apache Sentry. Recently I wrote another article giving a practical demonstration of how to secure HDFS using Kerberos. In this post I will look at how to secure Apache Kafka using Kerberos, with a test-case based on Apache Kerby. For more information on securing Kafka with Kerberos, see the Kafka security documentation.

1) Set up a KDC using Apache Kerby

A github project that uses Apache Kerby to start up a KDC is available here:
  • bigdata-kerberos-deployment: This project contains some tests which can be used to test Kerberos with various big data deployments, such as Apache Hadoop.
The KDC is a simple JUnit test that is available here. To run it, just comment out the "org.junit.Ignore" annotation on the test method. It uses Apache Kerby to define the following principals:
  • zookeeper/localhost@kafka.apache.org
  • kafka/localhost@kafka.apache.org
  • client@kafka.apache.org
Keytabs are created in the "target" folder. Kerby is configured to launch the KDC on a random port each time, and it will create a "krb5.conf" file containing that random port number in the target directory.
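Since the generated "krb5.conf" lives under "target", any command-line Kerberos tooling needs to be pointed at it. As a quick sanity check (a sketch assuming the MIT Kerberos client utilities are installed, and using this tutorial's placeholder paths), you can list the entries in one of the exported keytabs:

```shell
# Point Kerberos tools at the Kerby-generated configuration (placeholder path).
export KRB5_CONFIG=/path.to.kerby.project/target/krb5.conf

# List the principal entries (with timestamps) in the exported Kafka keytab;
# you should see kafka/localhost@kafka.apache.org.
klist -k -t /path.to.kerby.project/target/kafka.keytab
```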

2) Configure Apache Zookeeper

Download Apache Kafka and extract it (0.10.2.1 was used for the purposes of this tutorial). Edit 'config/zookeeper.properties' and add the following properties:
  • authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
  • requireClientAuthScheme=sasl 
  • jaasLoginRenew=3600000
Now create 'config/zookeeper.jaas' with the following content:

Server {
    com.sun.security.auth.module.Krb5LoginModule required
    refreshKrb5Config=true
    useKeyTab=true
    keyTab="/path.to.kerby.project/target/zookeeper.keytab"
    storeKey=true
    principal="zookeeper/localhost";
};

Before launching Zookeeper, we need to point to the JAAS configuration file above, as well as to the krb5.conf file generated by the Kerby test-case. This can be done by setting the "KAFKA_OPTS" environment variable with the following JVM arguments:
  • -Djava.security.auth.login.config=/path.to.zookeeper/config/zookeeper.jaas 
  • -Djava.security.krb5.conf=/path.to.kerby.project/target/krb5.conf
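For example, in the shell used to launch Zookeeper (a sketch using this tutorial's placeholder paths, which you should adjust to your own checkout):

```shell
# Pass the JAAS and krb5 configuration to the JVM that the start script launches.
export KAFKA_OPTS="-Djava.security.auth.login.config=/path.to.zookeeper/config/zookeeper.jaas -Djava.security.krb5.conf=/path.to.kerby.project/target/krb5.conf"

# Confirm the value that will be handed to the JVM.
echo "$KAFKA_OPTS"
```

The Kafka start scripts (including bin/zookeeper-server-start.sh) pass "KAFKA_OPTS" through to the JVM via kafka-run-class.sh, so no further wiring is needed.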
Now start Zookeeper via:
  • bin/zookeeper-server-start.sh config/zookeeper.properties 
3) Configure Apache Kafka broker

Create 'config/kafka.jaas' with the content:

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    refreshKrb5Config=true
    useKeyTab=true
    keyTab="/path.to.kerby.project/target/kafka.keytab"
    storeKey=true
    principal="kafka/localhost";
};

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    refreshKrb5Config=true
    useKeyTab=true
    keyTab="/path.to.kerby.project/target/kafka.keytab"
    storeKey=true
    principal="kafka/localhost";
};

The "Client" section is used by the broker to talk to Zookeeper. Now edit 'config/server.properties' and add the following properties:
  • listeners=SASL_PLAINTEXT://localhost:9092
  • security.inter.broker.protocol=SASL_PLAINTEXT 
  • sasl.mechanism.inter.broker.protocol=GSSAPI 
  • sasl.enabled.mechanisms=GSSAPI 
  • sasl.kerberos.service.name=kafka 
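If you prefer, the property additions above can be appended to the file in one step; a minimal sketch, assuming the working directory is the root of the extracted Kafka distribution:

```shell
# config/ already exists in a real Kafka distribution; this is a no-op there.
mkdir -p config

# Append the SASL broker settings from this tutorial.
cat >> config/server.properties <<'EOF'
listeners=SASL_PLAINTEXT://localhost:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
EOF

# Verify the settings landed in the file.
grep '^sasl.enabled.mechanisms' config/server.properties   # prints: sasl.enabled.mechanisms=GSSAPI
```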
We will just concentrate on using SASL for authentication, and hence we are using "SASL_PLAINTEXT" as the protocol. For "SASL_SSL", please follow the keystore generation steps outlined in my earlier article on securing Kafka with TLS. Again, we need to set the "KAFKA_OPTS" environment variable with the JVM arguments:
  • -Djava.security.auth.login.config=/path.to.kafka/config/kafka.jaas 
  • -Djava.security.krb5.conf=/path.to.kerby.project/target/krb5.conf
Now we can start the server and create a topic as follows:
  • bin/kafka-server-start.sh config/server.properties
  • bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
4) Configure Apache Kafka producers/consumers

To make the test-case simpler we added a single principal "client" in the KDC for both the producer and consumer. Create a file called "config/client.jaas" with the content:

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    refreshKrb5Config=true
    useKeyTab=true
    keyTab="/path.to.kerby.project/target/client.keytab"
    storeKey=true
    principal="client";
};

Edit *both* 'config/producer.properties' and 'config/consumer.properties' and add:
  • security.protocol=SASL_PLAINTEXT
  • sasl.mechanism=GSSAPI 
  • sasl.kerberos.service.name=kafka
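Since the same three properties go into both files, they can be appended in one loop; a sketch, again assuming the working directory is the root of the extracted Kafka distribution:

```shell
# config/ already exists in a real Kafka distribution; this is a no-op there.
mkdir -p config

# Append the client-side SASL settings to both the producer and consumer configs.
for f in config/producer.properties config/consumer.properties; do
  cat >> "$f" <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
EOF
done

# Verify one of the files picked up the settings.
grep '^sasl.mechanism=' config/consumer.properties   # prints: sasl.mechanism=GSSAPI
```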
Now set the "KAFKA_OPTS" environment variable with the JVM arguments:
  • -Djava.security.auth.login.config=/path.to.kafka/config/client.jaas 
  • -Djava.security.krb5.conf=/path.to.kerby.project/target/krb5.conf
We should now be all set. Start the producer and consumer via:
  • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
  • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
