1) Install the Apache Ranger Kafka plugin
The first step is to download Apache Ranger (0.6.1-incubating was used in this post). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting plugin to a location where you will configure and install it:
- tar zxvf apache-ranger-incubating-0.6.1.tar.gz
- cd apache-ranger-incubating-0.6.1
- mvn clean package assembly:assembly -DskipTests
- tar zxvf target/ranger-0.6.1-kafka-plugin.tar.gz
- mv ranger-0.6.1-kafka-plugin ${ranger.kafka.home}
Next, edit 'install.properties' in the plugin directory and change the following properties:
- COMPONENT_INSTALL_DIR_NAME: The location of your Kafka installation
- POLICY_MGR_URL: Set this to "http://localhost:6080"
- REPOSITORY_NAME: Set this to "KafkaTest"
Now install the plugin by running the "enable-kafka-plugin.sh" script as root.
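For reference, the relevant portion of 'install.properties' would look something like the following (the Kafka installation path shown here is an assumption; adjust it to your environment):

```properties
# Location of the Kafka installation (example path)
COMPONENT_INSTALL_DIR_NAME=/opt/kafka

# URL of the Ranger admin service
POLICY_MGR_URL=http://localhost:6080

# Must match the service name defined in the Ranger admin service
REPOSITORY_NAME=KafkaTest
```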
2) Configure authorization in the broker
Configure Apache Kafka as per the first tutorial. There are a number of steps we need to follow to configure the Ranger Kafka plugin before it is operational:
- Edit 'config/server.properties' and add the following: authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer
- Add the Kafka "config" directory to the classpath, so that we can pick up the Ranger configuration files: export CLASSPATH=$KAFKA_HOME/config
- Copy the Apache Commons Logging jar into $KAFKA_HOME/libs.
- By default, the Ranger plugin stores downloaded policies in "/etc/ranger/KafkaTest/policycache". As we installed the plugin as "root", make sure that this directory is accessible to the user that is running the broker.
- Now start the broker: bin/kafka-server-start.sh config/server.properties
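Taken together, the broker-side changes amount to something like the following in 'config/server.properties'. The SSL listener settings shown here are assumptions carried over from the first tutorial, and the keystore paths are placeholders:

```properties
# Delegate authorization decisions to the Ranger plugin
authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer

# SSL listener configuration (assumed, as set up in the first tutorial)
listeners=SSL://localhost:9092
security.inter.broker.protocol=SSL
ssl.keystore.location=/path/to/broker.keystore.jks
ssl.keystore.password=changeit
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit
```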
3) Install the Apache Ranger admin service
At this point the broker is configured so that the Apache Ranger plugin communicates with the Apache Ranger admin service to download authorization policies. So the next step is to install and configure the admin service; please refer to this blog post for how to do this. Assuming the admin service is already installed, start it via "sudo ranger-admin start". Open a browser and log on to "localhost:6080" with the credentials "admin/admin".
First, let's add some new users that match the SSL principals created in the first tutorial. Click on "Settings" and then "Users/Groups", and add new users for the principals:
- CN=Client,O=Apache,L=Dublin,ST=Leinster,C=IE
- CN=Service,O=Apache,L=Dublin,ST=Leinster,C=IE
- CN=Broker,O=Apache,L=Dublin,ST=Leinster,C=IE
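Policies for these users are then created under the "KafkaTest" Kafka service in the admin UI (matching the REPOSITORY_NAME configured earlier). As an illustrative sketch, a policy granting the client principal publish and consume rights on the "test" topic could be expressed as the following JSON, which can be posted to the Ranger REST API at "/service/public/v2/api/policy". The policy name and the exact set of accesses are assumptions for illustration:

```json
{
  "service": "KafkaTest",
  "name": "TestTopicPolicy",
  "resources": {
    "topic": { "values": ["test"] }
  },
  "policyItems": [
    {
      "users": ["CN=Client,O=Apache,L=Dublin,ST=Leinster,C=IE"],
      "accesses": [
        { "type": "publish", "isAllowed": true },
        { "type": "consume", "isAllowed": true }
      ]
    }
  ]
}
```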
4) Test authorization
Now let's test the authorization logic. Bear in mind that by default the Kafka plugin reloads policies from the admin service every 30 seconds, so you may need to wait that long, or restart the broker, to pick up newly created policies. Start the producer:
- bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
Type some messages into the producer console, and then consume them:
- bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
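The producer and consumer configuration files referenced above are assumed to carry the SSL settings from the first tutorial, along these lines (keystore paths and passwords are placeholders):

```properties
security.protocol=SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```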