Friday, September 21, 2018

Exploring Apache Knox - part VIII

This is the eighth and final post in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at how to authorize access to Apache Knox using Apache Ranger. We have also previously looked at how to achieve single sign-on using the Knox SSO service. In this post we will combine aspects of both, to show how we can use Knox SSO to achieve single sign-on for the Apache Ranger admin service UI.

As a prerequisite to this tutorial, follow the first tutorial to set up and run Apache Knox.

1) Configure the Apache Knox SSO service

First we'll make a few changes to the Apache Knox SSO Service to get it working with Apache Ranger. Copy "conf/topologies/knoxsso.xml" to "conf/topologies/knoxsso-ranger.xml". Change the "redirectToUrl" parameter in the "ShiroProvider" to redirect to "knoxsso-ranger" instead of "knoxsso". We also need to make some changes to the "KNOXSSO" service configuration, because we have not configured the Ranger Admin Service to use TLS. Change the "KNOXSSO" service in the topology file as follows (note: this should not be done in production, as it is not secure to set "knoxsso.cookie.secure.only" to "false"):
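Something like the following sketch works for a local setup (the token TTL and redirect whitelist values here are illustrative defaults, not requirements):
  • <service>
        <role>KNOXSSO</role>
        <!-- Allow the SSO cookie over plain HTTP, as Ranger is not using TLS -->
        <param>
            <name>knoxsso.cookie.secure.only</name>
            <value>false</value>
        </param>
        <!-- Token lifetime in milliseconds -->
        <param>
            <name>knoxsso.token.ttl</name>
            <value>600000</value>
        </param>
        <!-- Only allow redirects back to localhost -->
        <param>
            <name>knoxsso.redirect.whitelist.regex</name>
            <value>^https?:\/\/(localhost|127\.0\.0\.1):[0-9].*$</value>
        </param>
    </service>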
Apache Ranger must be configured to trust the signing certificate of the Knox SSO service. In ${knox.home}/data/security/keystores, export the certificate from the jks file as follows (specifying the master secret as the password):
  • keytool -keystore gateway.jks -exportcert -file gateway.cer -alias gateway-identity -rfc
2) Configure Apache Ranger to use the Knox SSO service

Next we'll look at configuring Apache Ranger to use the Knox SSO Service. Edit 'conf/ranger-admin-site.xml' and add/edit the following properties:
  • ranger.truststore.file - ${knox.home}/data/security/keystores/gateway.jks
  • ranger.truststore.password - the truststore password
  • ranger.sso.enabled - true
  • ranger.sso.providerurl - https://localhost:8443/gateway/knoxsso-ranger/api/v1/websso
  • ranger.sso.publicKey - the content of the "gateway.cer" file exported above, i.e. the text between the BEGIN and END markers
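For example, these properties take the standard Hadoop-style XML form in "conf/ranger-admin-site.xml" (the public key value is truncated here for readability):
  • <property>
        <name>ranger.sso.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>ranger.sso.providerurl</name>
        <value>https://localhost:8443/gateway/knoxsso-ranger/api/v1/websso</value>
    </property>
    <property>
        <!-- The signing certificate content from gateway.cer, minus the BEGIN/END lines -->
        <name>ranger.sso.publicKey</name>
        <value>MIIC...(truncated)</value>
    </property>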
3) Log in to the Ranger Admin Service UI using Knox SSO

Now we're ready to log in to the Ranger Admin Service UI. Start Ranger via "sudo ranger-admin start" and open a browser at "http://localhost:6080". You will be redirected to the Knox SSO login page. Log in with the credentials "admin/admin-password". You will then be redirected back to the Ranger Admin UI and logged in automatically as the "admin" user.

4) Some additional configuration parameters

Finally, there are some additional configuration parameters we can set on both the Knox and Ranger sides. It's possible to enforce in Ranger that the Knox SSO (JWT) token has a required audience claim, by setting the "ranger.sso.audiences" configuration parameter in "conf/ranger-admin-site.xml". The audience claim can be set in the "KNOXSSO" service configuration via the "knoxsso.token.audiences" configuration property. It is also possible to change the default signature algorithm by specifying "ranger.sso.expected.sigalg" in Ranger (for example "RS512") and "knoxsso.token.sigalg" in Knox.
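For example, matching audience settings might look as follows (the audience value "ranger" is just an illustration - it simply has to agree on both sides):
  • <!-- In the "KNOXSSO" service of knoxsso-ranger.xml -->
    <param>
        <name>knoxsso.token.audiences</name>
        <value>ranger</value>
    </param>
    <!-- In conf/ranger-admin-site.xml -->
    <property>
        <name>ranger.sso.audiences</name>
        <value>ranger</value>
    </property>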

Wednesday, September 19, 2018

Exploring Apache Knox - part VII

This is the seventh in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at how to achieve single sign-on using the Knox SSO service, where the Knox SSO service was configured to authenticate the user to a third party SAML SSO provider. In this post we are going to move away from authenticating users, and look at how we can authorize access to Apache Knox using Apache Ranger.

As a prerequisite to this tutorial, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from.

1) Install the Apache Ranger Knox plugin

First we will install the Apache Ranger Knox plugin. Download Apache Ranger and verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting plugin to a location where you will configure and install it:
  • mvn clean package assembly:assembly -DskipTests
  • tar zxvf target/ranger-${version}-knox-plugin.tar.gz
  • mv ranger-${version}-knox-plugin ${ranger.knox.home}
Now go to ${ranger.knox.home} and edit "install.properties". You need to specify the following properties:
  • POLICY_MGR_URL: Set this to "http://localhost:6080"
  • REPOSITORY_NAME: Set this to "KnoxTest".
  • KNOX_HOME: The location of your Apache Knox installation
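For example, the edited lines in "install.properties" might look like the following (the KNOX_HOME path is specific to your installation):
  • POLICY_MGR_URL=http://localhost:6080
    REPOSITORY_NAME=KnoxTest
    KNOX_HOME=/opt/knox/knox-1.1.0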
Save "install.properties" and install the plugin as root via "sudo ./enable-knox-plugin.sh". The Apache Ranger Knox plugin should now be successfully installed. One thing to check for is that the user who is running Apache Knox has the correct permissions to read the policy cache ("/etc/ranger/KnoxTest/policycache"). Now restart Apache Knox before proceeding.

2) Create a topology in Apache Knox for authorization

Even though we have installed the Apache Ranger plugin in Knox, we need to enable it explicitly in a topology. Copy "conf/topologies/sandbox.xml" to "conf/topologies/sandbox-ranger.xml" and add the following provider:
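The provider configuration itself is short - a sketch, using the authorization provider name that the Ranger Knox plugin registers:
  • <provider>
        <role>authorization</role>
        <name>XASecurePDPKnox</name>
        <enabled>true</enabled>
    </provider>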
Now let's try to access the file using the admin credentials:
  • curl -u admin:admin-password -kL https://localhost:8443/gateway/sandbox-ranger/webhdfs/v1/data/LICENSE.txt?op=OPEN
You should get a 403 Forbidden error due to an authorization failure.

3) Create authorization policies in the Apache Ranger Admin console

Next we will use the Apache Ranger admin console to create authorization policies for Apache Knox. Follow the steps in this tutorial to install the Apache Ranger admin service. Before starting the Ranger admin service, edit 'conf/ranger-admin-site.xml' and add the following properties:
  • ranger.truststore.file - ${knox.home}/data/security/keystores/gateway.jks
  • ranger.truststore.password - security
Start the Apache Ranger admin service with "sudo ranger-admin start", open a browser at "http://localhost:6080/", and log on with "admin/admin". Add a new Knox service in the Ranger admin UI with the following configuration values:
  • Service Name: KnoxTest
  • Username: admin
  • Password: admin-password
  • knox.url: https://localhost:8443/gateway/admin/api/v1/topologies
Now click on the "KnoxTest" service that we have created. Click on the policy that is automatically created, and note that the "admin" user already has the "Allow" permission for all Knox topologies and services. Wait for the policy to sync to the plugin, and the curl call we executed above should now work:
  • curl -u admin:admin-password -kL https://localhost:8443/gateway/sandbox-ranger/webhdfs/v1/data/LICENSE.txt?op=OPEN
In contrast, the same call using the "guest" credentials ("guest" / "guest-password") should be denied, as we have not created a matching authorization policy in Ranger.

Wednesday, September 12, 2018

Exploring Apache Knox - part VI

This is the sixth in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at how to achieve single sign-on using the Knox SSO service, where the Knox SSO service was configured to authenticate the user to a third party Identity Provider using OpenId Connect. In this post we will show instead how to configure Knox SSO to redirect the user to a SAML SSO Identity Provider.

As a prerequisite to this tutorial, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from.

1) Configure the Apache Knox SSO service

For the purposes of this tutorial we are going to use the www.testshib.org SAML SSO Identity Provider. First we'll configure the Knox SSO service. Copy "conf/topologies/knoxsso.xml" to "conf/topologies/knoxssopac4jsaml.xml". Now edit it and delete the "ShiroProvider" provider and add the following provider instead (which leverages the Pac4j project):
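A sketch of the provider follows; the metadata paths and entity id values here are assumptions for this local setup:
  • <provider>
        <role>federation</role>
        <name>pac4j</name>
        <enabled>true</enabled>
        <!-- The URL the IdP redirects back to after authentication -->
        <param>
            <name>pac4j.callbackUrl</name>
            <value>https://localhost:8443/gateway/knoxssopac4jsaml/api/v1/websso</value>
        </param>
        <param>
            <name>clientName</name>
            <value>SAML2Client</value>
        </param>
        <!-- Where to fetch the TestShib IdP metadata from -->
        <param>
            <name>saml.identityProviderMetadataPath</name>
            <value>https://www.testshib.org/metadata/testshib-providers.xml</value>
        </param>
        <!-- Knox generates our RP metadata at this local path on the first call -->
        <param>
            <name>saml.serviceProviderMetadataPath</name>
            <value>/tmp/sp-metadata.xml</value>
        </param>
        <param>
            <name>saml.serviceProviderEntityId</name>
            <value>https://localhost:8443/gateway/knoxssopac4jsaml/api/v1/websso?pac4jCallback=true&amp;client_name=SAML2Client</value>
        </param>
    </provider>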
Note that one of the configuration parameters references the SAML SSO RP (Relying Party) Metadata file, which we will need in order to configure the IdP. Luckily, Apache Knox will generate this for us on the first call. Open a browser and navigate to the following URL:
  • https://localhost:8443/gateway/knoxssopac4jsaml/api/v1/websso
You should see an error page on the TestShib site, as it has not yet been configured with our Metadata file. However, Knox has now generated this file at the location we specified via "saml.serviceProviderMetadataPath". Go to "https://www.testshib.org/register.html" and upload the generated metadata file.

2) Secure a topology using the "SSOCookieProvider" provider

In section 2 of this earlier tutorial, we showed how to secure a topology using the "SSOCookieProvider" provider. Copy "conf/topologies/sandbox-sso.xml" to "conf/topologies/sandbox-ssopac4jsaml.xml" and change the value of the "sso.authentication.provider.url" parameter to:
  • https://localhost:8443/gateway/knoxssopac4jsaml/api/v1/websso
Now start Apache Knox and navigate to the following URL:
  • https://localhost:8443/gateway/sandbox-ssopac4jsaml/webhdfs/v1/data/LICENSE.txt?op=OPEN
You will be redirected to the Knox SSO service and then on to the TestShib IdP, where you can authenticate with "myself" / "myself". The browser will then be redirected back to the "sandbox-ssopac4jsaml" topology and "LICENSE.txt" should be successfully downloaded.

Monday, September 10, 2018

Exploring Apache Knox - part V

This is the fifth in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at how to achieve single sign-on using the Knox SSO service, where the Knox SSO service was configured to authenticate the user to an LDAP backend using the "ShiroProvider" configured in "knoxsso.xml". However, Knox SSO supports more sophisticated scenarios, where the user is redirected to a third-party Identity Provider for authentication instead. In this post we will cover how to set up the Knox SSO service so that a user is redirected to an OpenId Connect IdP.

As a prerequisite to this tutorial, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from.

1) Configuring the Apache CXF Fediz OIDC IdP

For the purposes of this tutorial, we will use the Apache CXF Fediz OpenId Connect IdP deployed in docker. Follow section (1) of this post about starting the Apache CXF Fediz IdP in docker. Once the IdP has started via "docker-compose up", open a browser and navigate to "https://localhost:10002/fediz-oidc/console/clients". This is the client registration page of the Fediz OIDC IdP. Authenticate using credentials "alice" (password "ecila") and register a new client for Apache Knox using the following redirect URI:
  • https://localhost:8443/gateway/knoxssopac4j/api/v1/websso?pac4jCallback=true&client_name=OidcClient
Click on the registered client and save the client Id and Secret for later.

2) Configure the Apache Knox SSO service

The next step is to configure the Knox SSO service to work with our OpenId Connect IdP. Copy "conf/topologies/knoxsso.xml" to "conf/topologies/knoxssopac4j.xml". Now edit it and delete the "ShiroProvider" provider and add the following provider instead (which leverages the Pac4j project):
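A sketch of the provider (the discovery URI here is an assumption, based on where the Fediz OIDC service is running in docker):
  • <provider>
        <role>federation</role>
        <name>pac4j</name>
        <enabled>true</enabled>
        <!-- The URL the IdP redirects back to after authentication -->
        <param>
            <name>pac4j.callbackUrl</name>
            <value>https://localhost:8443/gateway/knoxssopac4j/api/v1/websso</value>
        </param>
        <param>
            <name>clientName</name>
            <value>OidcClient</value>
        </param>
        <param>
            <name>oidc.id</name>
            <value>(client id saved from Fediz)</value>
        </param>
        <param>
            <name>oidc.secret</name>
            <value>(client secret saved from Fediz)</value>
        </param>
        <!-- Standard OIDC discovery document of the Fediz IdP -->
        <param>
            <name>oidc.discoveryUri</name>
            <value>https://localhost:10002/fediz-oidc/.well-known/openid-configuration</value>
        </param>
    </provider>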
where the values for "oidc.id" and "oidc.secret" are the values saved from Fediz above when registering the client.

Before starting Knox we need to trust the certificate associated with the TLS endpoint of the OpenId Connect IdP (which in our demo is just a locally issued certificate). To do this we will add the certificate to our Java cacerts file (note: not a good idea in production - this is just for test purposes). Download "idp-ssl-trust.jks" which is available with the docker configuration for Fediz here and add the certificate to your Java cacerts as follows (destination password: "changeit", source password: "ispass"):
  • keytool -importkeystore -srckeystore ./idp-ssl-trust.jks -destkeystore $JAVA_HOME/jre/lib/security/cacerts

3) Secure a topology using the "SSOCookieProvider" provider

In section 2 of the previous tutorial, we showed how to secure a topology using the "SSOCookieProvider" provider. Copy "conf/topologies/sandbox-sso.xml" to "conf/topologies/sandbox-ssopac4j.xml" and change the value of the "sso.authentication.provider.url" parameter to:
  • https://localhost:8443/gateway/knoxssopac4j/api/v1/websso
Now start Apache Knox and navigate to the following URL:
  • https://localhost:8443/gateway/sandbox-ssopac4j/webhdfs/v1/data/LICENSE.txt?op=OPEN
You will be redirected to the Knox SSO service and then on to the Fediz IdP. Authenticate with "alice" / "ecila" and grant permission to issue a token. The browser will then be redirected back to the "sandbox-ssopac4j" topology and "LICENSE.txt" should be successfully downloaded.

Friday, September 7, 2018

Exploring Apache Knox - part IV

This is the fourth in a series of blog posts exploring some of the security features of Apache Knox. The previous couple of posts looked at authenticating to the REST API of Apache Knox using a token, obtained from either the Apache Knox token service or a third party JWT provider. Authenticating using a token works well when we have a client application invoking on Apache Knox, but what if we want to use a browser instead? In this post we will look at how to achieve single sign-on using the Knox SSO service.

1) Set up the Apache Knox SSO service

To start with, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from. There is no need to create a new topology file in Apache Knox for the Knox SSO service, as it already ships with a "knoxsso.xml" file. Note that it contains a "KNOXSSO" service as well as a "knoxauth" application. The idea is that the user first browses to the Knox topology secured with a special provider that redirects the browser to the Knox SSO service. The user then authenticates to the LDAP backend using a form (knoxauth). The Knox SSO service then issues a cookie that can be used to access the desired service, and redirects back to the service.

2) Secure a topology using the "SSOCookieProvider" provider

Next we need to create a topology which is secured using a cookie issued by Knox SSO. Copy "conf/topologies/sandbox.xml" to "conf/topologies/sandbox-sso.xml". Remove the existing Shiro authentication provider and instead add the "SSOCookieProvider" as follows:
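A sketch of the provider (the provider URL points at the out-of-the-box "knoxsso" topology):
  • <provider>
        <role>federation</role>
        <name>SSOCookieProvider</name>
        <enabled>true</enabled>
        <!-- Where to redirect the browser for authentication -->
        <param>
            <name>sso.authentication.provider.url</name>
            <value>https://localhost:8443/gateway/knoxsso/api/v1/websso</value>
        </param>
    </provider>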
Note that this is configured with a parameter ("sso.authentication.provider.url") which corresponds to the URL to redirect the browser to for authentication. Now open a browser and navigate to:
  • https://localhost:8443/gateway/sandbox-sso/webhdfs/v1/data/LICENSE.txt?op=OPEN
Authenticate to Knox SSO using the LDAP credentials "guest" and "guest-password" and click on "Sign in". A cookie will be created and the browser redirected to the original URL where "LICENSE.txt" can be downloaded.

Thursday, September 6, 2018

Exploring an interop test scenario with Apache CXF Fediz in docker

I recently covered on this blog how to deploy the Apache CXF Fediz IdP in docker, as well as a "hello-world" application which uses the IdP for authentication (using either the WS-Federation or SAML SSO protocols). In that post, the user is instructed to select "realm A" (the home realm of the IdP) when prompted, and so the user is authenticated locally. In this article, we are going to take things a step further, and instead authenticate the user in "realm B", which will be an Apache CXF Fediz OpenId Connect IdP, also deployed in docker. So the "hello world" web application will speak WS-Federation to the first IdP, which in turn will redirect the user to the second IdP for authentication using the OpenId Connect protocol.

1) Set up the "realm B" IdP

First we'll look at setting up the "realm B" IdP. As our "realm A" IdP will be deployed on "localhost", we will deploy our "realm B" IdP on the domain "www.fediz.org" to avoid problems with cookies. Create an entry in your '/etc/hosts' mapping "www.fediz.org" to your localhost IP address (a sample entry is shown below). Now clone the following project on github:
  • fediz-idp: A sample project to deploy the Fediz IdP
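The '/etc/hosts' entry mentioned above might simply be:
  • 127.0.0.1   www.fediz.org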
Create the STS image (no changes needed here from the previous blog entry):
  • cd sts; docker build -t coheigea/fediz-sts .
We'll need to change the IdP to instead load the "entities-realmb.xml" definitions. This contains two applications - the OpenId Connect IdP as well as the realm A IdP. Edit the Dockerfile and uncomment the line about copying "realm.properties" - this will switch the IdP to use "entities-realmb.xml" instead. In addition, change the references to "sts" to "stsrealmb". Now rebuild with:
  • cd idp; docker build -t coheigea/fediz-idp-realmb .
We also need to make a small change to the OIDC image to reflect the fact that it is running on "www.fediz.org" instead of "localhost". Edit "fediz_config.xml" and change the "Issuer" URL to:
  • https://www.fediz.org:20001/fediz-idp/federation (changing both the domain + port)
Now rebuild with:
  • cd oidc; docker build -t coheigea/fediz-oidc .
Finally, we copy the following docker-compose.yml and launch the IdP via "docker-compose up":
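A sketch of what the compose file contains (the service names and port mappings here are assumptions, based on the images built above and the URLs used in this post):
  • version: '2'
    services:
      # The realm B STS, referenced as "stsrealmb" by the IdP
      stsrealmb:
        image: coheigea/fediz-sts
      # The realm B IdP, reachable externally on port 20001
      idp:
        image: coheigea/fediz-idp-realmb
        ports:
          - "20001:10001"
      # The realm B OIDC service, reachable externally on port 20002
      oidc:
        image: coheigea/fediz-oidc
        ports:
          - "20002:10002"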
Once the IdP has started, open a browser and navigate to "https://www.fediz.org:20002/fediz-oidc/console/clients". This is the client registration page of the Fediz OIDC IdP. Authenticate using credentials "ALICE" (password "ECILA") and register a new client for the "realm A" IdP using the redirect URI "https://localhost:10001/fediz-idp/federation". Click on the registered client and save the client Id and Secret for later.

2) Set up the "realm A" IdP

Now we will set up the "realm A" IdP. Discard the changes that were made in the "idp" directory. A pre-configured "entities-realma-oidc.xml" is available which contains the configuration necessary to connect to the realm B OIDC service. Edit this file and search for the "trusted-idp-realmB" bean definition. Change the "client.id" and "client.secret" values to match those saved above when creating the client in the "realm B" client registration page. Next edit the Dockerfile and add the following line to copy the "entities-realma-oidc.xml" into the IdP configuration:
  • COPY entities-realma-oidc.xml $TOMCAT_HOME/webapps/fediz-idp/WEB-INF/classes/entities-realma.xml
Then rebuild the IdP image:
  • docker build -t coheigea/fediz-idp .
Before launching the "realm A" IdP via the docker-compose.yml in github, we need to edit it so that it launches on the same network as the "realm B" IdP in order to be able to reach it. Find the running docker instances via "docker ps" and then run "docker inspect" on one of the container IDs. Look for the "Network" section and note the network name (for example: "tmp_default").

Now add the following configuration to docker-compose.yml and then launch the IdP via "docker-compose up":
  • networks:
      default:
        external:
          name: tmp_default
3) Run the "fediz-helloworld" application

Finally, we need to make one small tweak to the "fediz-helloworld" application. Edit the "fediz_config.xml" file and change the role "ClaimType" to be optional (a sketch is shown after the commands below). This is because the "STS" in "realm A" is not configured to map or retrieve a role claim for users in "realm B". Rebuild and launch the helloworld application:
  • docker build -t coheigea/fediz-helloworld .
  • docker run -p 8443:8443 coheigea/fediz-helloworld
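The tweaked claim in "fediz_config.xml" might look something like the following sketch of the Fediz plugin claim configuration:
  • <claimTypesRequested>
        <!-- optional="true" means authentication succeeds even if no role claim is returned -->
        <claimType type="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" optional="true" />
    </claimTypesRequested>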
Open a browser and navigate to "https://localhost:8443/fedizhelloworld/secure/fedservlet". Select "Realm B Description" when asked to choose a home realm in the "realm A" IdP. The browser is then redirected to the "Realm B" IdP (authenticate using "ALICE" and "ECILA"). The "Realm A" IdP will obtain an IdToken from the "Realm B" OIDC IdP for the user "ALICE", and then swap it for a SAML Token via the STS. This is then returned to the "fediz-helloworld" application via WS-Federation. Note that the landing page now shows the user as "ALICE" (whereas before the realm A user was the lowercase "alice").