Friday, August 31, 2018

Exploring Apache Knox - part III

This is the third in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at accessing a file stored in HDFS via Apache Knox, where the Apache Knox gateway authenticated the user using a (JWT) token obtained from the Knox token service. However, the token enforcement in the Knox REST API is not tightly coupled to the Knox token service; a third-party JWT provider can be used instead. In this post, we will show how to authenticate a user to Apache Knox using a token obtained from the Apache CXF Security Token Service (STS).

1) Deploy the Apache CXF STS in docker

Apache CXF ships with a powerful and flexible STS that can issue, renew, validate, and cancel tokens of different types via the (SOAP) WS-Trust interface. In addition, it also has a flexible REST interface. I created a sample GitHub project which builds the CXF STS with the REST interface enabled:
  • sts-rest: Project to deploy a CXF REST STS web application in docker
The STS is configured to authenticate users via HTTP Basic authentication, and it can issue both JWT and SAML tokens. Clone the project, and then build and deploy the project in docker using Apache Tomcat as follows:
  • mvn clean install
  • docker build -t coheigea/cxf-sts-rest .
  • docker run -p 8080:8080 coheigea/cxf-sts-rest
To test that it's working correctly, open a browser and obtain a SAML and a JWT token respectively via the following GET requests (authenticating with the username "alice" and password "security"):
  • http://localhost:8080/cxf-sts-rest/SecurityTokenService/token/saml
  • http://localhost:8080/cxf-sts-rest/SecurityTokenService/token/jwt
2) Invoking on the REST API of Apache Knox using a token issued by the STS

Now we'll look at how to modify the previous tutorial so that the REST API is secured by a token issued by the Apache CXF STS, instead of the Knox token service. To start with, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from. Then follow part (2) of the previous tutorial to set up the "sandbox-token" topology. Now copy "conf/topologies/sandbox-token.xml" to "conf/topologies/sandbox-token-cxf.xml". We need to make a few changes to the "JWTProvider" to support validating tokens issued by the CXF STS.

Edit "conf/topologies/sandbox-token-cxf.xml" and add the following parameters to the "JWTProvider", i.e.:
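A sketch of the resulting provider configuration is shown below. The PEM value is truncated here, and the expected issuer value depends on what the STS puts in the "iss" claim of its tokens (the value shown is illustrative only):

```xml
<provider>
    <role>federation</role>
    <name>JWTProvider</name>
    <enabled>true</enabled>
    <param>
        <name>knox.token.verification.pem</name>
        <!-- Truncated: paste the full PEM content (without the BEGIN/END lines) here -->
        <value>MIIC...snipped...</value>
    </param>
    <param>
        <name>jwt.expected.issuer</name>
        <!-- Illustrative value: must match the "iss" claim in tokens issued by the STS -->
        <value>STS</value>
    </param>
</provider>
```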
"knox.token.verification.pem" is the PEM encoding of the certificate to be used to verify the signature on the received token. You can obtain this in the sts-rest project on GitHub here; simply paste the content between "-----BEGIN/END CERTIFICATE-----" into the parameter value. "jwt.expected.issuer" is a constraint on the "iss" claim of the token.

Now save the topology file and we can get a token from CXF STS using curl as follows:
  • curl -u alice:security -H "Accept: text/plain" http://localhost:8080/cxf-sts-rest/SecurityTokenService/token/jwt
Save the (raw) token that is returned. Then invoke on the REST API using the token as follows:
  • curl -kL -H "Authorization: Bearer <access token>" https://localhost:8443/gateway/sandbox-token-cxf/webhdfs/v1/data/LICENSE.txt?op=OPEN

Wednesday, August 29, 2018

Exploring Apache Knox - part II

This is the second in a series of blog posts exploring some of the security features of Apache Knox. The first post looked at accessing a file stored in HDFS via Apache Knox, where the Apache Knox gateway authenticated the user via Basic Authentication. In this post we will look at authenticating to the REST API of Apache Knox using a token rather than using Basic Authentication. Apache Knox ships with a token service which allows an authenticated user to obtain a token, which can then be used to invoke on the REST API.

1) Set up the Apache Knox token service

To start with, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from. Now we will create a new topology configuration file in Apache Knox to launch the token service. Copy "conf/topologies/sandbox.xml" to a new file called "conf/topologies/token.xml". Leave the 'gateway/provider' section as it is, as we want the user to authenticate to the token service using basic authentication as for the REST API in the previous post. Remove all of the 'service' definitions and add a service definition for the Knox token service, e.g.:
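A minimal sketch of such a service definition, based on the Knox documentation (the "knox.token.ttl" parameter, specifying the token lifetime in milliseconds, is optional):

```xml
<service>
    <role>KNOXTOKEN</role>
    <param>
        <!-- Optional: token lifetime in milliseconds -->
        <name>knox.token.ttl</name>
        <value>36000000</value>
    </param>
</service>
```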
Restart Apache Knox. We can then obtain a token via the token service as follows using curl:
  • curl -u guest:guest-password -k https://localhost:8443/gateway/token/knoxtoken/api/v1/token
This returns a JSON structure containing an access token (in JWT format), as well as a "token_type" attribute of "Bearer" and an expiry timestamp. The access token itself can be introspected (via e.g. https://jwt.io/). In the example above, its header indicates that it is a signed token ("RS256", i.e. RSA with SHA-256), and its payload contains attributes identifying the subject ("guest"), the issuer ("KNOXSSO") and an expiry timestamp.
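As a quick aside, the header and payload of a JWT can also be decoded from the command line. The token value below is a hypothetical, truncated example containing just the header and payload attributes described above (and note that the signature is not verified here, this is inspection only):

```shell
#!/bin/sh
# Hypothetical, truncated JWT - replace with the access token returned by Knox
TOKEN="eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJndWVzdCIsImlzcyI6IktOT1hTU08ifQ.c2ln"

decode_part() {
  # Extract the requested dot-separated part and map base64url to base64
  part=$(echo "$TOKEN" | cut -d '.' -f "$1" | tr '_-' '/+')
  # Re-add the base64 padding stripped by the JWT encoding
  while [ $(( ${#part} % 4 )) -ne 0 ]; do part="${part}="; done
  echo "$part" | base64 -d
  echo
}

decode_part 1   # header  -> {"alg":"RS256"}
decode_part 2   # payload -> {"sub":"guest","iss":"KNOXSSO"}
```

This decodes only the two readable parts of the token; verifying the "RS256" signature requires the issuer's public key, which is what the "knox.token.verification.pem" parameter is for in the following post.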

2) Invoking on the REST API using a token

The next step is to invoke on the REST API using a token, instead of using basic authentication as in the example given in the previous tutorial. Copy "conf/topologies/sandbox.xml" to "conf/topologies/sandbox-token.xml". Remove the Shiro provider and instead add the following provider:
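A minimal sketch of the JWT federation provider (the "federation" role and "JWTProvider" name are taken from the Knox documentation):

```xml
<provider>
    <role>federation</role>
    <name>JWTProvider</name>
    <enabled>true</enabled>
</provider>
```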
Now restart the Apache Knox gateway again (edit: as Larry McCay points out in the comments, this is not required as long as we are not using Ambari to manage the topologies). First obtain a token using curl:
  • curl -u guest:guest-password -k https://localhost:8443/gateway/token/knoxtoken/api/v1/token
Copy the access token that is returned. Then you can invoke on the REST API using the token as follows:
  • curl -kL -H "Authorization: Bearer <access token>" https://localhost:8443/gateway/sandbox-token/webhdfs/v1/data/LICENSE.txt?op=OPEN

Tuesday, August 28, 2018

Exploring Apache Knox - part I

Apache Knox is an application gateway that works with the REST APIs and User Interfaces of a large number of the most popular big data projects. It can be convenient to require that REST or browser clients interact with Apache Knox rather than directly with the various components of, for example, an Apache Hadoop cluster. In particular, Apache Knox supports a wide range of different mechanisms for securing access to the backend cluster. In this series of posts, we will look at different ways of securing access to an Apache Hadoop filesystem via Apache Knox. In this first post we will look at accessing a file stored in HDFS via Apache Knox, where the Apache Knox gateway authenticates the user via Basic Authentication.

1) Set up Apache Hadoop

To start we assume that an Apache Hadoop cluster is already running, with a file stored in "/data/LICENSE.txt" that we want to access. To see how to set up Apache Hadoop in such a way, please refer to part 1 of this earlier post. Ensure that you can download the LICENSE.txt file in a browser directly from Apache Hadoop via:
  • http://localhost:9870/webhdfs/v1/data/LICENSE.txt?op=OPEN
Note that the default port for Apache Hadoop 2.x is "50070" instead.

2) Set up Apache Knox

Next we will see how to access the file above via Apache Knox. Download and extract Apache Knox (Gateway Server binary archive - version 1.1.0 was used in this tutorial). First we create a master secret via:
  • bin/knoxcli.sh create-master
Next we start a demo LDAP server that ships with Apache Knox for convenience:
  • bin/ldap.sh start
We can authenticate using the credentials "guest" and "guest-password" that are stored in the LDAP backend.

Apache Knox stores the "topologies" configuration in the directory "conf/topologies". We will re-use the default "sandbox.xml" configuration for the purposes of this post. This configuration maps to the URI "gateway/sandbox". It contains the authentication configuration for the topology (HTTP basic authentication), which maps the received credentials to the LDAP backend we have started above. It then defines the backend services that are supported by this topology. We are interested in the "WEBHDFS" service, which maps to "http://localhost:50070/webhdfs". Change this port to "9870" if using Apache Hadoop 3.x, as in the first section of this post. Then start the gateway via:
  • bin/gateway.sh start
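For reference, after the port change the "WEBHDFS" service definition in "sandbox.xml" should look roughly like this (assuming Apache Hadoop 3.x listening on port 9870):

```xml
<service>
    <role>WEBHDFS</role>
    <url>http://localhost:9870/webhdfs</url>
</service>
```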
Now we can access our file directly via Knox, using credentials of "guest" / "guest-password" via:
  • https://localhost:8443/gateway/sandbox/webhdfs/v1/data/LICENSE.txt?op=OPEN
Or alternatively using Curl:
  • curl -u guest:guest-password -kL https://localhost:8443/gateway/sandbox/webhdfs/v1/data/LICENSE.txt?op=OPEN

Friday, August 24, 2018

OpenId Connect support for the Apache Syncope admin console

Apache Syncope is a powerful open source Identity Management project at the Apache Software Foundation. Last year I wrote a blog entry about how to log in to the Syncope admin and end-user web consoles using SAML SSO, showing how it works using Apache CXF Fediz as the SAML SSO IdP. In addition to SAML SSO, Apache Syncope supports logging in using OpenId Connect from version 2.0.9. In this post we will show how to configure this using the docker image for Apache CXF Fediz that we covered recently.

1) Configuring the Apache CXF Fediz OIDC IdP

First we will show how to set up the Apache CXF Fediz OpenId Connect IdP. Follow section (1) of this post about starting the Apache CXF Fediz IdP in docker. Once the IdP has started via "docker-compose up", open a browser and navigate to "https://localhost:10002/fediz-oidc/console/clients". This is the client registration page of the Fediz OIDC IdP. Authenticate using credentials "alice" (password "ecila") and register a new client for Apache Syncope using the redirect URI "http://localhost:9080/syncope-console/oidcclient/code-consumer". Click on the registered client and save the client Id and Secret for later:

2) Configuring Apache Syncope to support OpenId Connect

In this section, we will cover setting up Apache Syncope to support OpenId Connect. Download and extract the most recent standalone distribution release of Apache Syncope (2.1.1 was used in this post). Before starting Apache Syncope, we need to configure a truststore corresponding to the certificate used by the Apache CXF Fediz OIDC IdP. This can be done on Linux via, for example:
  • export CATALINA_OPTS="-Djavax.net.ssl.trustStore=./idp-ssl-trust.jks -Djavax.net.ssl.trustStorePassword=ispass"
where "idp-ssl-trust.jks" is available with the docker configuration for Fediz here. Start the embedded Apache Tomcat instance and then open a web browser and navigate to "http://localhost:9080/syncope-console", logging in as "admin" and "password".

Apache Syncope is configured with some sample data to show how it can be used. Click on "Users" and add a new user called "alice" by clicking on the subsequent "+" button. Specify a password for "alice" and then select the default values wherever possible (you will need to specify some required attributes, such as "surname"). Now in the left-hand column, click on "Extensions" and then "OIDC Client". Add a new OIDC Client, specifying the client ID + Secret that you saved earlier and click "Next". Then specify the following values (obtained from "https://localhost:10002/fediz-oidc/.well-known/openid-configuration"):
  • Issuer: https://localhost:10002
  • Authorization Endpoint: https://localhost:10002/fediz-oidc/idp/authorize
  • Token Endpoint: https://localhost:10002/fediz-oidc/oauth2/token
  • JWKS URI: https://localhost:10002/fediz-oidc/jwk/keys
Click "Next". Now we need to add a mapping between the user we authenticated at the IdP and the internal user in Syncope ("alice"). Add a mapping from the internal attribute "username" to the external attribute "preferred_username" as follows:

Now log out and select the "Open Id Connect" dialogue that should have appeared. You will be redirected to the Apache CXF Fediz OIDC IdP for authentication and then redirected back to Apache Syncope, where you will be automatically logged in as the user "alice".

Thursday, August 23, 2018

SAML SSO Logout support in Apache CXF Fediz

SAML SSO support was added to the Apache CXF Fediz IdP in version 1.3.0. In addition, SAML SSO support was added to the Tomcat 8 plugin from the 1.4.4 release. However, unlike for the WS-Federation protocol, support was not included for SAML SSO logout. That's going to change from the next 1.4.5 release. In this post we will cover how logout works in general for both protocols, across both the IdP and Relying Party (RP) plugins.

1) Logging out from the Apache CXF Fediz IdP

a) WS-Federation

Follow the previous post I wrote about experimenting with Apache CXF Fediz in docker and start the Fediz IdP and the 'fedizhelloworld' application (supporting WS-Federation and not SAML SSO) in docker. Login to the 'fedizhelloworld' application (and to the IdP) by navigating to 'https://localhost:8443/fedizhelloworld/secure/fedservlet' in a browser and logging on with credentials of 'alice'/'ecila'.

We can log out directly to the IdP by navigating to 'https://localhost:10001/fediz-idp/federation?wa=wsignout1.0'. As our IdpEntity configuration in 'entities-realma.xml' has the property "rpSingleSignOutConfirmation" set to "true", a sign out confirmation page is displayed asking us if we want to log out from the 'fedizhelloworld' application.

If we click on the "Logout" button then what happens next depends on whether we supplied a "wreply" parameter or not. If no parameter is supplied then a successful logout page is shown at the IdP. Otherwise we have the option of supplying a "wreply" parameter to return to the RP application after logout is successful. For this to work, the IdPEntity configuration bean must have the property "automaticRedirectToRpAfterLogout" set to "true". In addition, the "wreply" address must match a regular expression supplied by the "logoutEndpointConstraint" property of the matching "ApplicationEntity" bean for 'fedizhelloworld'.

b) SAML SSO

Support was added to Apache CXF Fediz for SAML SSO logout in the forthcoming 1.4.5 release. The client begins by sending a signed LogoutRequest to the IdP.
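For reference, a SAML 2.0 LogoutRequest has roughly the following shape (the ID, IssueInstant, Destination and Issuer values below are illustrative, not taken from Fediz):

```xml
<!-- Illustrative SAML 2.0 LogoutRequest; attribute and element values are examples only -->
<samlp:LogoutRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
                     xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                     ID="_12345" Version="2.0"
                     IssueInstant="2018-08-23T12:00:00Z"
                     Destination="https://localhost:10001/fediz-idp/saml">
    <saml:Issuer>urn:org:apache:cxf:fediz:fedizhelloworld</saml:Issuer>
    <saml:NameID>alice</saml:NameID>
</samlp:LogoutRequest>
```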
After checking the signature and doing some validation on the request (e.g. checking the destination), a sign-out confirmation page is displayed, as in the WS-Federation case above (if the property "rpSingleSignOutConfirmation" is set to "true"). Once the user clicks on "Logout", either a logout page is displayed on the IdP, or else a LogoutResponse is returned to the client (if the property "automaticRedirectToRpAfterLogout" is set to "true"). In addition, the URL to redirect back to must be specified in the 'ApplicationEntity' configuration in "entities-realma.xml" under the property "logoutEndpoint".



2) Logging out from the RP application

a) WS-Federation

Next we'll turn our attention to logging out from the 'fedizhelloworld' application, secured by WS-Federation. Log in again to the application by navigating to 'https://localhost:8443/fedizhelloworld/secure/fedservlet'. There are a number of different ways of logging out from the application:
  • Specify a "wa=wsignout1.0" query parameter. This logs the user out and redirects to the IdP to log the user out there.
  • Specify a "wa=wsignoutcleanup1.0" query parameter. This logs the user out and either redirects to a URL supplied by the "wreply" parameter (which must match the configuration item "logoutRedirectTo" or "logoutRedirectToConstraint"), or alternatively to the "logoutRedirectTo" configuration item if no "wreply" parameter is specified. 
  • Navigate to a URL that matches the configuration item "logoutURL". The default behaviour here is to log the user out and redirect to the IdP to log the user out there as well.
Feel free to experiment with these options with 'fedizhelloworld'.

b) SAML SSO

Support for SAML SSO logout was added to the Tomcat plugin for the forthcoming 1.4.5 release. If the user navigates to the logout URL configured in fediz_config.xml ("logoutURL"), then the user is logged out and a 'LogoutRequest' is sent to the IdP. If a 'LogoutResponse' is received from the IdP, then it is processed and the user is redirected afterwards to the page specified in the "logoutRedirectTo" configuration item.

Follow the steps in the previous post to change the Fediz IdP and 'fedizhelloworld' docker images to use SAML SSO. When changing the IdP configuration, edit 'entities-realma.xml' and change the value for 'automaticRedirectToRpAfterLogout' to 'true'. Also add the following property to the ApplicationEntity bean for "srv-fedizhelloworld":
  • <property name="logoutEndpoint" value="https://localhost:8443/fedizhelloworld/index.html"/>
Now log on to the RP via 'https://localhost:8443/fedizhelloworld/secure/fedservlet' and log out via 'https://localhost:8443/fedizhelloworld/secure/logout'. You will be logged out of both the RP and the IdP and redirected to a landing page on the RP side.

Monday, August 20, 2018

Experimenting with Apache CXF Fediz in docker

I have covered the capabilities of Apache CXF Fediz many times on this blog, giving instructions on how to deploy the IdP or a sample secured web application to a container such as Apache Tomcat. However such instructions can be quite complex, ranging from building Fediz from scratch and deploying the resulting web applications, to configuring jars + keys in Tomcat, etc. Wouldn't it be great to just be able to build a few docker images and launch them instead? In this post we will show how to easily deploy the Fediz IdP and STS to docker, as well as how to deploy a sample application secured using WS-Federation. Then we show how easy it is to switch the IdP and the application to use SAML SSO instead.

1) The Apache CXF Fediz Identity Provider

The Apache CXF Fediz Identity Provider (IdP) actually consists of two web applications: the IdP itself, which can handle both WS-Federation and SAML SSO login requests, and an Apache CXF-based Security Token Service (STS) to authenticate the end users. There is also a third web application, the Apache CXF Fediz OpenId Connect IdP, but we will cover that in a future post. It is possible to build docker images for each of these components with the following project on GitHub:
  • fediz-idp: A sample project to deploy the Fediz IdP
To launch the IdP in docker, build each of the individual components and then launch using docker-compose, e.g.:
  • cd sts; docker build -t coheigea/fediz-sts .
  • cd idp; docker build -t coheigea/fediz-idp .
  • cd oidc; docker build -t coheigea/fediz-oidc .
  • docker-compose up
Please note that this project is provided as a quick and easy way to play around with the Apache CXF Fediz IdP. It should not be deployed in production as it uses default security credentials, etc.

2) The Apache CXF Fediz 'fedizhelloworld' application

Now that the IdP is configured, we will configure a sample application which is secured using the Fediz plugin (for Apache Tomcat). The project is also available on github here:
  • fediz-helloworld: Dockerfile to deploy a WS-Federation secured 'fedizhelloworld' application
The docker image can be built and run via:
  • docker build -t coheigea/fediz-helloworld .
  • docker run -p 8443:8443 coheigea/fediz-helloworld
Now just open a browser and navigate to 'https://localhost:8443/fedizhelloworld/secure/fedservlet'. You will be redirected to the IdP for authentication. Select the default home realm and use the credentials "alice" (password: "ecila") to log in. You should be successfully authenticated and redirected back to the web application.

3) Switching to use SAML SSO instead of WS-Federation

Let's also show how we can switch the security protocol to use SAML SSO instead of WS-Federation. Edit the Dockerfile for the fediz-idp project and uncomment the final two lines (to copy entities-realma.xml and mytomrpkey.cert into the docker image). 'mytomrpkey.cert' is used to validate the Signature of the SAML AuthnRequest, something that is not needed for the WS-Federation case as the client request is not signed. Rebuild the IdP image (docker build -t coheigea/fediz-idp .) and re-launch the IdP again via "docker-compose up".

To switch the 'fedizhelloworld' application, we need to make some changes to 'fediz_config.xml'. These changes are already made in the file 'fediz_config_saml.xml'.
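Going from memory of the Fediz plugin configuration schema, the essential change is switching the protocol type; a rough, incomplete sketch is below (the element values are illustrative - consult 'fediz_config_saml.xml' in the project for the exact content):

```xml
<!-- Sketch only: the key change is the protocol type (samlProtocolType
     instead of federationProtocolType); values here are illustrative -->
<protocol xsi:type="samlProtocolType" version="1.2">
    <!-- The AuthnRequest is signed, hence the IdP needs mytomrpkey.cert -->
    <signRequest>true</signRequest>
</protocol>
```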

Copy 'fediz_config_saml.xml' to 'fediz_config.xml' and rebuild the docker image:
  • docker build -t coheigea/fediz-helloworld .
  • docker run -p 8443:8443 coheigea/fediz-helloworld
Open a browser and navigate to 'https://localhost:8443/fedizhelloworld/secure/fedservlet' again. Authentication should succeed as before, but this time using SAML SSO as the authentication protocol instead of WS-Federation.