Tuesday, February 12, 2019

Deploying an Apache Camel route to Apache Karaf

In the previous blog post, we showed how to use Apache Camel to consume messages from an Apache Kafka broker secured using kerberos. In this post, we will build on the previous blog post by showing how to deploy our Camel route to Apache Karaf. Karaf is an application runtime container that makes it incredibly easy to deploy simple applications via its "hot deploy" feature. As always, there are a few slightly tricky considerations when using kerberos, which is what this post will cover.

As a pre-requisite to this article, please follow the previous blog post to set up Apache Kafka using kerberos, and test that the Camel route can successfully retrieve messages from the topic we created.

1) Configuring the Kerberos JAAS Login Module in Karaf

Download and extract the latest version of the Apache Karaf runtime (4.2.3 was used in this post). Before starting Karaf, we need to pass in a system property pointing to the krb5.conf file created by our Kerby KDC. This step is not necessary if you are using the standard filesystem location for krb5.conf. Open 'bin/karaf' and add the following to the list of system properties:
  • -Djava.security.krb5.conf=/path.to.kerby.project/target/krb5.conf \
Now start Karaf via "bin/karaf". Karaf uses JAAS for authentication (see the documentation here). In the console, enter "jaas:" and hit 'tab' to see the possibilities. For example, "jaas:realm-list" displays the JAAS realms that are currently configured.

Recall that our Camel route needs to configure a JAAS LoginModule for Kerberos. In the example given in the previous post, this was configured by setting the Java System property "java.security.auth.login.config" to point to the JAAS configuration file. We don't want to do that with Karaf, as otherwise we will end up overriding the other JAAS LoginModules that are installed.

Instead, we will take advantage of Karaf's "hot deploy" feature to add the Kerberos Login Module we need to Karaf. Drop a blueprint XML file along the following lines into Karaf's deploy directory, replacing the keytab location with the correct path to the keytab file.
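A minimal sketch of such a blueprint file (the realm name "KafkaClient" is the JAAS context name the Kafka client looks up by default; the keytab path and principal are placeholders that must match your Kerby setup):

  <?xml version="1.0" encoding="UTF-8"?>
  <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
             xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.1.0">

      <!-- Register a "KafkaClient" JAAS realm backed by the Kerberos LoginModule.
           The keytab path and principal are placeholders for your Kerby setup. -->
      <jaas:config name="KafkaClient">
          <jaas:module className="com.sun.security.auth.module.Krb5LoginModule" flags="required">
              useKeyTab=true
              storeKey=true
              keyTab=/path.to.kafka.project/config/client.keytab
              principal=client@kafka.apache.org
          </jaas:module>
      </jaas:config>

  </blueprint>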

For Karaf to pick this up, we need to register the blueprint feature via "feature:install aries-blueprint". Now we should see our LoginModule configured via "jaas:realm-list".


2) Configuring the Camel route in Karaf

Next we will hot deploy our Camel route as a blueprint file in Karaf. Copy the following file into the deploy directory:
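A minimal sketch of what this blueprint route could look like (the topic name, group id and output directory are example values; the kerberos-related endpoint options are the ones we used in the previous post):

  <?xml version="1.0" encoding="UTF-8"?>
  <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

      <camelContext xmlns="http://camel.apache.org/schema/blueprint">
          <route>
              <!-- Consume from the "test" topic using the kerberos settings from the previous post -->
              <from uri="kafka:test?brokers=localhost:9092&amp;groupId=kafkaGroup&amp;saslKerberosServiceName=kafka&amp;securityProtocol=SASL_PLAINTEXT"/>
              <log message="Received: ${body}"/>
              <!-- Write each message body out to the filesystem -->
              <to uri="file:/tmp/kafka-results"/>
          </route>
      </camelContext>

  </blueprint>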

Then we need to install a few dependencies in Karaf. Add the Camel repo via "repo-add camel 2.23.1", and install the relevant Camel dependencies via "feature:install camel camel-kafka". Our Camel route should then automatically start, and will retrieve the messages from the Kafka topic and write them to the filesystem, as configured in the route. The message payload and headers are logged in "data/log/karaf.log".

Thursday, February 7, 2019

Using the Apache Camel Kafka component with Kerberos

Apache Camel is a well-known integration framework available at the Apache Software Foundation. It comes with a huge number of components to integrate with pretty much anything you can think of. Naturally, it has a dedicated component to communicate with the popular Apache Kafka project. In this blog entry, we'll show first how to use Apache Camel as a consumer for a Kafka topic. Then we will show how to configure things when we are securing the Kafka broker with kerberos, something that often causes problems.

1) Setting up Apache Kafka

First let's set up Apache Kafka. Download and install it (this blog post uses Kafka 2.0.0), then start up Zookeeper and the broker, and create a "test" topic and a console producer for that topic, as follows:
  • bin/zookeeper-server-start.sh config/zookeeper.properties
  • bin/kafka-server-start.sh config/server.properties
  • bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
  • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
Type a few messages into the producer console to make sure that it is working.

2) Consuming from Kafka using Apache Camel

Now we'll look at how to set up Apache Camel to consume from Kafka. I put a project up on github here for this purpose. The Camel route is defined in Spring, and uses the Camel Kafka component to retrieve messages from the broker, and to write them out to the target/results folder:
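Something along the lines of the following (a sketch; the exact endpoint options in the github project may differ slightly):

  <beans xmlns="http://www.springframework.org/schema/beans"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                             http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">

      <camelContext xmlns="http://camel.apache.org/schema/spring">
          <route>
              <!-- Consume from the "test" topic on the local broker -->
              <from uri="kafka:test?brokers=localhost:9092&amp;groupId=kafkaGroup&amp;autoOffsetReset=earliest"/>
              <!-- Write each message body out to the target/results folder -->
              <to uri="file:target/results"/>
          </route>
      </camelContext>

  </beans>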
Simply run "mvn clean install" and observe the logs indicating that Camel has retrieved the messages you put into the topic with the producer above. Then check "target/results" to see the files containing the message bodies.

3) Securing Apache Kafka with Kerberos

So far so good. Now let's look at securing the Kafka broker using kerberos. I wrote a previous blog post to show how to use Apache Kerby as a KDC with Kafka, so please follow the steps outlined here, skipping the parts about configuring the consumer.

4) Consuming from Kafka using Apache Camel and Kerberos

To make our Camel route work with Kafka and Kerberos, a few changes are required. Just as we did for the Kafka producer, we need to set the "java.security.auth.login.config" and "java.security.krb5.conf" system properties for Camel. You can do this in the example by editing the "pom.xml" and adding something like this under "systemPropertyVariables" of the surefire configuration:
  • <java.security.auth.login.config>/path.to.kafka.project/config/client.jaas</java.security.auth.login.config>
  • <java.security.krb5.conf>/path.to.kerby.project/target/krb5.conf</java.security.krb5.conf>
Replace the paths to Kafka and Kerby appropriately (refer to the previous blog post on Kafka + Kerberos if this does not make sense). Next we need to make some changes to the Camel route itself. Append the following configuration to the Kafka endpoint URI in the Camel route:
  • &amp;saslKerberosServiceName=kafka&amp;securityProtocol=SASL_PLAINTEXT
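The full "from" endpoint URI in the Spring route then looks something like this:

  <from uri="kafka:test?brokers=localhost:9092&amp;groupId=kafkaGroup&amp;saslKerberosServiceName=kafka&amp;securityProtocol=SASL_PLAINTEXT"/>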
Camel uses "GSSAPI" as the default SASL mechanism, and so we don't have to configure that. Now re-run "mvn clean install" and you will see the Camel route get a ticket from the Kerby KDC and consuming messages successfully from the Kafka topic.

Wednesday, February 6, 2019

Validating kerberos tokens from different realms in Apache CXF

We've covered on this blog before how to configure an Apache CXF service to validate kerberos tokens. However, what if we have a use-case where we want to have multiple endpoints validate kerberos tokens that are in different realms? As Java uses system properties to configure kerberos, things can get a bit tricky if we want to co-locate the services in the same JVM. In this article we'll show how it's done.

1) The test scenario

The scenario is that we have two KDCs. The first KDC has realm "realma.apache.org", with users "alice" and "bob/service.realma.apache.org". The second KDC has realm "realmb.apache.org", with users "carol" and "dave/service.realmb.apache.org". We have a single service with two different endpoints - one which will authenticate users in "realma.apache.org", and the second that will authenticate users in "realmb.apache.org". Both endpoints have keytabs that we have exported from the KDC for "bob" and "dave".

2) Kerberos configuration

Both endpoints have to share the same Kerberos configuration, due to the fact that Java uses system properties to set up JAAS with the Krb5LoginModule. We need to set the following system properties:
  • java.security.auth.login.config - The path to the JAAS configuration file for the Krb5LoginModule
  • java.security.krb5.conf - The path to the krb5.conf kerberos configuration file
The JAAS configuration file for our service looks like the following:
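A sketch of what it contains (the keytab paths are placeholders for wherever you exported the keytabs):

  // The context names ("bob" and "dave") must match the JAAS context names
  // configured on the KerberosTokenValidators below. Keytab paths are placeholders.
  bob {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      storeKey=true
      keyTab="/path.to.keytabs/bob.keytab"
      principal="bob/service.realma.apache.org@realma.apache.org";
  };

  dave {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      storeKey=true
      keyTab="/path.to.keytabs/dave.keytab"
      principal="dave/service.realmb.apache.org@realmb.apache.org";
  };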


Here we have two entries for "bob" and "dave", each pointing to a keytab file. Note that the principal contains the realm name. This is important as we have no default_realm in the krb5.conf file. The krb5.conf file looks like this:
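A sketch of this file (the KDC ports are placeholders for the two KDCs):

  # Note that there is deliberately no default_realm entry.
  # The KDC ports below are placeholders.
  [realms]
      realma.apache.org = {
          kdc = localhost:12345
      }
      realmb.apache.org = {
          kdc = localhost:12346
      }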


Here we configure how to reach both KDCs for our different realms.

3) Service configuration

Next, we'll look at how to configure the services. We will show how it's done for a JAX-WS service, but similar configuration exists for JAX-RS. The client will pass the kerberos token in a BinarySecurityToken security header in the message, according to the WS-Security specs. We'll assume the service is using a WS-SecurityPolicy that requires a kerberos token (for more details see here). Here is a sample spring configuration for an endpoint for "dave":
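A sketch of what this might look like (the endpoint implementor, address, bean ids and the CallbackHandler class here are placeholders; the validator is the WSS4J KerberosTokenValidator):

  <bean id="kerberosValidatorDave" class="org.apache.wss4j.dom.validate.KerberosTokenValidator">
      <property name="contextName" value="dave"/>
      <property name="serviceName" value="dave/service.realmb.apache.org@realmb.apache.org"/>
      <property name="usernameServiceNameForm" value="true"/>
      <property name="callbackHandler" ref="kerberosCallbackHandler"/>
  </bean>

  <!-- The CallbackHandler implementation, implementor and address below are placeholders -->
  <bean id="kerberosCallbackHandler" class="com.example.kerberos.KerberosCallbackHandler"/>

  <jaxws:endpoint xmlns:jaxws="http://cxf.apache.org/jaxws"
                  id="serviceRealmB"
                  implementor="com.example.kerberos.ServiceImpl"
                  address="http://localhost:9001/services/realmb">
      <jaxws:properties>
          <entry key="ws-security.bst.validator" value-ref="kerberosValidatorDave"/>
      </jaxws:properties>
  </jaxws:endpoint>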

We have a JAX-WS endpoint with a "ws-security.bst.validator" property which points to a KerberosTokenValidator instance. This tells CXF to process a received BinarySecurityToken with the KerberosTokenValidator.

The KerberosTokenValidator is configured with a CallbackHandler implementation, to supply a username and password (see here for a sample implementation). Note that this is not required normally when we have a keytab file, but it appears to be required when we do not define a default realm. The KerberosTokenValidator instance also defines the JAAS context name, as well as the fully qualified principal name. As this is in service name form, we have to set the property "usernameServiceNameForm" to "true" as well.

If we set up the endpoint for "bob" with similar configuration, then our krb5.conf doesn't need the "default_realm" property and we can successfully validate tickets for both realms.

Friday, September 21, 2018

Exploring Apache Knox - part VIII

This is the eighth and final post in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at how to authorize access to Apache Knox using Apache Ranger. We have also previously looked at how to achieve single sign-on using the Knox SSO service. In this post we will combine aspects of both, to show how we can use Knox SSO to achieve single sign-on for the Apache Ranger admin service UI.

As a prerequisite to this tutorial, follow the first tutorial to set up and run Apache Knox.

1) Configure the Apache Knox SSO service

First we'll make a few changes to the Apache Knox SSO Service to get it working with Apache Ranger. Copy "conf/topologies/knoxsso.xml" to "conf/topologies/knoxsso-ranger.xml". Change the "redirectToUrl" parameter in the "ShiroProvider" to redirect to "knoxsso-ranger" instead of "knoxsso". We also need to make some changes to the "KNOXSSO" service configuration, due to the fact that we have not configured the Ranger Admin Service to run on TLS. Change the "KNOXSSO" service in the topology file as follows (note: this should not be done in production as it is not secure to set "knoxsso.cookie.secure.only" to "false"):
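The KNOXSSO service should end up looking something like this (the token TTL and whitelist regex are example values):

  <service>
      <role>KNOXSSO</role>
      <!-- Example values - do not set knoxsso.cookie.secure.only to "false" in production -->
      <param>
          <name>knoxsso.cookie.secure.only</name>
          <value>false</value>
      </param>
      <param>
          <name>knoxsso.token.ttl</name>
          <value>600000</value>
      </param>
      <param>
          <name>knoxsso.redirect.whitelist.regex</name>
          <value>^https?:\/\/(localhost|127\.0\.0\.1|0:0:0:0:0:0:0:1|::1):[0-9].*$</value>
      </param>
  </service>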
Apache Ranger must be configured to trust the signing certificate of the Knox SSO service. In ${knox.home}/data/security/keystores, export the certificate from the jks file via (specifying the master secret as the password):
  • keytool -keystore gateway.jks -export-cert -file gateway.cer -alias gateway-identity -rfc
2) Configure Apache Ranger to use the Knox SSO service

Next we'll look at configuring Apache Ranger to use the Knox SSO Service. Edit 'conf/ranger-admin-site.xml' and add/edit the following properties:
  • ranger.truststore.file - ${knox.home}/data/security/keystores/gateway.jks
  • ranger.truststore.password - the truststore password
  • ranger.sso.enabled - true
  • ranger.sso.providerurl - https://localhost:8443/gateway/knoxsso-ranger/api/v1/websso
  • ranger.sso.publicKey - open the gateway.cer file we exported above and paste in the content between the BEGIN and END lines
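Each of these is added to "conf/ranger-admin-site.xml" as a standard property element, for example:

  <!-- One property element per entry in the list above -->
  <property>
      <name>ranger.sso.providerurl</name>
      <value>https://localhost:8443/gateway/knoxsso-ranger/api/v1/websso</value>
  </property>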
3) Log in to the Ranger Admin Service UI using Knox SSO

Now we're ready to log in to the Ranger Admin Service UI. Start Ranger via "sudo ranger-admin start" and open a browser at "http://localhost:6080". You will be redirected to the Knox SSO login page. Log in with the credentials "admin/admin-password". We will be redirected back to the Ranger Admin UI and logged in automatically as the "admin" user.

4) Some additional configuration parameters

Finally, there are some additional configuration parameters we can set on both the Knox and Ranger sides. It's possible to enforce that the KNOX SSO (JWT) token has a required audience claim in Ranger, by setting the "ranger.sso.audiences" configuration parameter in "conf/ranger-admin-site.xml". The audience claim can be set in the "KNOXSSO" service configuration via the "knoxsso.token.audiences" configuration property. It is also possible to change the default signature algorithm by specifying "ranger.sso.expected.sigalg" in Ranger (for example "RS512") and "knoxsso.token.sigalg" in Knox.

Wednesday, September 19, 2018

Exploring Apache Knox - part VII

This is the seventh in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at how to achieve single sign-on using the Knox SSO service, where the Knox SSO service was configured to authenticate the user to a third party SAML SSO provider. In this post we are going to move away from authenticating users, and look at how we can authorize access to Apache Knox using Apache Ranger.

As a prerequisite to this tutorial, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from.

1) Install the Apache Ranger Knox plugin

First we will install the Apache Ranger Knox plugin. Download Apache Ranger and verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting plugin to a location where you will configure and install it:
  • mvn clean package assembly:assembly -DskipTests
  • tar zxvf target/ranger-${version}-knox-plugin.tar.gz
  • mv ranger-${version}-knox-plugin ${ranger.knox.home}
Now go to ${ranger.knox.home} and edit "install.properties". You need to specify the following properties:
  • POLICY_MGR_URL: Set this to "http://localhost:6080"
  • REPOSITORY_NAME: Set this to "KnoxTest".
  • KNOX_HOME: The location of your Apache Knox installation
Save "install.properties" and install the plugin as root via "sudo ./enable-knox-plugin.sh". The Apache Ranger Knox plugin should now be successfully installed. One thing to check for is that the user who is running Apache Knox has the correct permissions to read the policy cache ("/etc/ranger/KnoxTest/policycache"). Now restart Apache Knox before proceeding.

2) Create a topology in Apache Knox for authorization

Even though we have installed the Apache Ranger plugin in Knox, we need to enable it explicitly in a topology. Copy "conf/topologies/sandbox.xml" to "conf/topologies/sandbox-ranger.xml" and add the following provider:
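The provider to add looks something like this ("XASecurePDPKnox" is the name the Ranger Knox plugin registers for the authorization provider):

  <provider>
      <!-- Delegate authorization decisions to the Apache Ranger Knox plugin -->
      <role>authorization</role>
      <name>XASecurePDPKnox</name>
      <enabled>true</enabled>
  </provider>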
Now let's try to access the file using the admin credentials:
  • curl -u admin:admin-password -kL https://localhost:8443/gateway/sandbox-ranger/webhdfs/v1/data/LICENSE.txt?op=OPEN
You should get a 403 Forbidden error due to an authorization failure.

3) Create authorization policies in the Apache Ranger Admin console

Next we will use the Apache Ranger admin console to create authorization policies for Apache Knox. Follow the steps in this tutorial to install the Apache Ranger admin service. Before starting the Ranger admin service, edit 'conf/ranger-admin-site.xml' and add the following properties:
  • ranger.truststore.file - ${knox.home}/data/security/keystores/gateway.jks
  • ranger.truststore.password - security
Start the Apache Ranger admin service with "sudo ranger-admin start" and open a browser and go to "http://localhost:6080/" and log on with "admin/admin". Add a new Knox service in the Ranger admin UI with the following configuration values:
  • Service Name: KnoxTest
  • Username: admin
  • Password: admin-password
  • knox.url: https://localhost:8443/gateway/admin/api/v1/topologies
Now click on the "KnoxTest" service that we have created. Click on the policy that is automatically created, and note that the "admin" user already has the "Allow" permission for all Knox topologies and services. Wait for the policy to sync to the plugin, and the curl call we executed above should now work:
  • curl -u admin:admin-password -kL https://localhost:8443/gateway/sandbox-ranger/webhdfs/v1/data/LICENSE.txt?op=OPEN
whereas using the "guest" credentials ("guest"/"guest-password") should be denied, as we have not created a matching authorization policy in Ranger.

Wednesday, September 12, 2018

Exploring Apache Knox - part VI

This is the sixth in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at how to achieve single sign-on using the Knox SSO service, where the Knox SSO service was configured to authenticate the user to a third party Identity Provider using OpenId Connect. In this post we will show instead how to configure Knox SSO to redirect the user instead to a SAML SSO Identity Provider.

As a prerequisite to this tutorial, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from.

1) Configure the Apache Knox SSO service

For the purposes of this tutorial we are going to use the www.testshib.org SAML SSO Identity Provider. First we'll configure the Knox SSO service. Copy "conf/topologies/knoxsso.xml" to "conf/topologies/knoxssopac4jsaml.xml". Now edit it, delete the "ShiroProvider" provider, and add the following provider instead (which leverages the Pac4j project):
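A sketch of the provider configuration (the local path for the generated RP metadata is a placeholder, and the TestShib IdP metadata URL is assumed to be the standard one):

  <provider>
      <role>federation</role>
      <name>pac4j</name>
      <enabled>true</enabled>
      <param>
          <name>pac4j.callbackUrl</name>
          <value>https://localhost:8443/gateway/knoxssopac4jsaml/api/v1/websso</value>
      </param>
      <param>
          <name>clientName</name>
          <value>SAML2Client</value>
      </param>
      <param>
          <name>saml.identityProviderMetadataPath</name>
          <value>https://www.testshib.org/metadata/testshib-providers.xml</value>
      </param>
      <!-- Knox will generate the RP metadata file at this (placeholder) location on the first call -->
      <param>
          <name>saml.serviceProviderMetadataPath</name>
          <value>/tmp/sp-metadata.xml</value>
      </param>
      <param>
          <name>saml.serviceProviderEntityId</name>
          <value>https://localhost:8443/gateway/knoxssopac4jsaml/api/v1/websso?pac4jCallback=true&amp;client_name=SAML2Client</value>
      </param>
  </provider>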
Note that one of the configuration parameters references the SAML SSO RP (Relying Party) Metadata file, which we will need in order to configure the IdP. Luckily Apache Knox will generate this for us on the first call. Open a browser and navigate to the following URL:
  • https://localhost:8443/gateway/knoxssopac4jsaml/api/v1/websso
You should see an error page on the TestShib site, as it has not yet been configured with our Metadata file. However, Knox has now generated this file at the location we specified via "saml.serviceProviderMetadataPath". Go to "https://www.testshib.org/register.html" and upload the generated metadata file.


2) Secure a topology using the "SSOCookieProvider" provider

In section 2 of this earlier tutorial, we showed how to secure a topology using the "SSOCookieProvider" provider. Copy "conf/topologies/sandbox-sso.xml" to "conf/topologies/sandbox-ssopac4jsaml.xml" and change the value of the "sso.authentication.provider.url" parameter to:
  • https://localhost:8443/gateway/knoxssopac4jsaml/api/v1/websso
Now start Apache Knox and navigate to the following URL:
  • https://localhost:8443/gateway/sandbox-ssopac4jsaml/webhdfs/v1/data/LICENSE.txt?op=OPEN
You will be redirected to the Knox SSO service and then on to the TestShib IdP, where you can authenticate with "myself" / "myself". The browser will then be redirected back to the "sandbox-ssopac4jsaml" topology and "LICENSE.txt" should be successfully downloaded.

Monday, September 10, 2018

Exploring Apache Knox - part V

This is the fifth in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at how to achieve single sign-on using the Knox SSO service, where the Knox SSO service was configured to authenticate the user to an LDAP backend using the "ShiroProvider" configured in "knoxsso.xml". However, Knox SSO supports more sophisticated scenarios, where the user is redirected to a third-party Identity Provider for authentication instead. In this post we will cover how to set up the Knox SSO service so that a user is redirected to an OpenId Connect IdP.

As a prerequisite to this tutorial, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from.

1) Configuring the Apache CXF Fediz OIDC IdP

For the purposes of this tutorial, we will use the Apache CXF Fediz OpenId Connect IdP deployed in docker. Follow section (1) of this post about starting the Apache CXF Fediz IdP in docker. Once the IdP has started via "docker-compose up", open a browser and navigate to "https://localhost:10002/fediz-oidc/console/clients". This is the client registration page of the Fediz OIDC IdP. Authenticate using credentials "alice" (password "ecila") and register a new client for Apache Knox using the following redirect URI:
  • https://localhost:8443/gateway/knoxssopac4j/api/v1/websso?pac4jCallback=true&client_name=OidcClient
Click on the registered client and save the client Id and Secret for later.

 
2) Configure the Apache Knox SSO service

The next step is to configure the Knox SSO service to work with our OpenId Connect IdP. Copy "conf/topologies/knoxsso.xml" to "conf/topologies/knoxssopac4j.xml". Now edit it, delete the "ShiroProvider" provider, and add the following provider instead (which leverages the Pac4j project):
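A sketch of the provider configuration (the discovery URI path for the Fediz OIDC IdP is an assumption - check what your IdP actually exposes):

  <provider>
      <role>federation</role>
      <name>pac4j</name>
      <enabled>true</enabled>
      <param>
          <name>pac4j.callbackUrl</name>
          <value>https://localhost:8443/gateway/knoxssopac4j/api/v1/websso</value>
      </param>
      <param>
          <name>clientName</name>
          <value>OidcClient</value>
      </param>
      <param>
          <name>oidc.id</name>
          <value>client-id-saved-from-fediz</value>
      </param>
      <param>
          <name>oidc.secret</name>
          <value>client-secret-saved-from-fediz</value>
      </param>
      <!-- The discovery URI path below is an assumed value for the Fediz OIDC IdP -->
      <param>
          <name>oidc.discoveryUri</name>
          <value>https://localhost:10002/fediz-oidc/.well-known/openid-configuration</value>
      </param>
  </provider>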
where the values for "oidc.id" and "oidc.secret" are the values saved from Fediz above when registering the client.

Before starting Knox we need to trust the certificate associated with the TLS endpoint of the OpenId Connect IdP (which in our demo is just a locally issued certificate). To do this we will add the certificate to our Java cacerts file (note: not a good idea in production - this is just for test purposes). Download "idp-ssl-trust.jks" which is available with the docker configuration for Fediz here and add the certificate to your Java cacerts as follows (destination password: "changeit", source password: "ispass"):
  • keytool -importkeystore -destkeystore $JAVA_HOME/jre/lib/security/cacerts -srckeystore ./idp-ssl-trust.jks

3) Secure a topology using the "SSOCookieProvider" provider

In section 2 of the previous tutorial, we showed how to secure a topology using the "SSOCookieProvider" provider. Copy "conf/topologies/sandbox-sso.xml" to "conf/topologies/sandbox-ssopac4j.xml" and change the value of the "sso.authentication.provider.url" parameter to:
  • https://localhost:8443/gateway/knoxssopac4j/api/v1/websso
Now start Apache Knox and navigate to the following URL:
  • https://localhost:8443/gateway/sandbox-ssopac4j/webhdfs/v1/data/LICENSE.txt?op=OPEN
You will be redirected to the Knox SSO service and then on to the Fediz IdP. Authenticate with "alice" / "ecila" and grant permission to issue a token. The browser will then be redirected back to the "sandbox-ssopac4j" topology and "LICENSE.txt" should be successfully downloaded.