Friday, September 21, 2018

Exploring Apache Knox - part VIII

This is the eighth and final post in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at how to authorize access to Apache Knox using Apache Ranger. We have also previously looked at how to achieve single sign-on using the Knox SSO service. In this post we will combine aspects of both, to show how we can use Knox SSO to achieve single sign-on for the Apache Ranger admin service UI.

As a prerequisite to this tutorial, follow the first tutorial to set up and run Apache Knox.

1) Configure the Apache Knox SSO service

First we'll make a few changes to the Apache Knox SSO Service to get it working with Apache Ranger. Copy "conf/topologies/knoxsso.xml" to "conf/topologies/knoxsso-ranger.xml". Change the "redirectToUrl" parameter in the "ShiroProvider" to redirect to "knoxsso-ranger" instead of "knoxsso". We also need to make some changes to the "KNOXSSO" service configuration, due to the fact that we have not configured the Ranger Admin Service to run on TLS. Change the "KNOXSSO" service in the topology file as follows (note: this should not be done in production as it is not secure to set "knoxsso.cookie.secure.only" to "false"):
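A minimal sketch of what the modified "KNOXSSO" service could look like (the TTL and whitelist values here are illustrative - adjust them to your own deployment):

<service>
    <role>KNOXSSO</role>
    <param>
        <name>knoxsso.cookie.secure.only</name>
        <value>false</value>
    </param>
    <param>
        <name>knoxsso.token.ttl</name>
        <value>600000</value>
    </param>
    <param>
        <name>knoxsso.redirect.whitelist.regex</name>
        <value>^https?:\/\/(localhost|127\.0\.0\.1).*$</value>
    </param>
</service>
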
Apache Ranger must be configured to trust the signing certificate of the Knox SSO service. In ${knox.home}/data/security/keystores, export the certificate from the jks file via (specifying the master secret as the password):
  • keytool -keystore gateway.jks -export-cert -file gateway.cer -alias gateway-identity -rfc
2) Configure Apache Ranger to use the Knox SSO service

Next we'll look at configuring Apache Ranger to use the Knox SSO Service. Edit 'conf/ranger-admin-site.xml' and add/edit the following properties:
  • ranger.truststore.file - ${knox.home}/data/security/keystores/gateway.jks
  • ranger.truststore.password - the truststore password
  • ranger.sso.enabled - true
  • ranger.sso.providerurl - https://localhost:8443/gateway/knoxsso-ranger/api/v1/websso
  • ranger.sso.publicKey - Open the gateway.cer file we exported above and paste in the content between the BEGIN and END markers here.
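These are standard Hadoop-style configuration properties; for example, the SSO provider URL above would appear in "conf/ranger-admin-site.xml" as:

<property>
    <name>ranger.sso.providerurl</name>
    <value>https://localhost:8443/gateway/knoxsso-ranger/api/v1/websso</value>
</property>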
3) Log in to the Ranger Admin Service UI using Knox SSO

Now we're ready to log in to the Ranger Admin Service UI. Start Ranger via "sudo ranger-admin start" and open a browser at "http://localhost:6080". You will be re-directed to the Knox SSO login page. Log in with the credentials "admin/admin-password". You will then be redirected back to the Ranger Admin UI and logged in automatically as the "admin" user.

4) Some additional configuration parameters

Finally, there are some additional configuration parameters we can set on both the Knox and Ranger sides. It's possible to enforce that the KNOX SSO (JWT) token has a required audience claim in Ranger, by setting the "ranger.sso.audiences" configuration parameter in "conf/ranger-admin-site.xml". The audience claim can be set in the "KNOXSSO" service configuration via the "knoxsso.token.audiences" configuration property. It is also possible to change the default signature algorithm by specifying "ranger.sso.expected.sigalg" in Ranger (for example "RS512") and "knoxsso.token.sigalg" in Knox.
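As a sketch, the Knox side of this could look as follows in the "KNOXSSO" service (the audience value "ranger" is purely illustrative), with "ranger.sso.audiences" set to the same value and "ranger.sso.expected.sigalg" set to "RS512" on the Ranger side:

<param>
    <name>knoxsso.token.audiences</name>
    <value>ranger</value>
</param>
<param>
    <name>knoxsso.token.sigalg</name>
    <value>RS512</value>
</param>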

Wednesday, September 19, 2018

Exploring Apache Knox - part VII

This is the seventh in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at how to achieve single sign-on using the Knox SSO service, where the Knox SSO service was configured to authenticate the user to a third party SAML SSO provider. In this post we are going to move away from authenticating users, and look at how we can authorize access to Apache Knox using Apache Ranger.

As a prerequisite to this tutorial, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from.

1) Install the Apache Ranger Knox plugin

First we will install the Apache Ranger Knox plugin. Download Apache Ranger and verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting plugin to a location where you will configure and install it:
  • mvn clean package assembly:assembly -DskipTests
  • tar zxvf target/ranger-${version}-knox-plugin.tar.gz
  • mv ranger-${version}-knox-plugin ${ranger.knox.home}
Now go to ${ranger.knox.home} and edit "install.properties". You need to specify the following properties:
  • POLICY_MGR_URL: Set this to "http://localhost:6080"
  • REPOSITORY_NAME: Set this to "KnoxTest".
  • KNOX_HOME: The location of your Apache Knox installation
Save "install.properties" and install the plugin as root via "sudo ./enable-knox-plugin.sh". The Apache Ranger Knox plugin should now be successfully installed. One thing to check for is that the user who is running Apache Knox has the correct permissions to read the policy cache ("/etc/ranger/KnoxTest/policycache"). Now restart Apache Knox before proceeding.

2) Create a topology in Apache Knox for authorization

Even though we have installed the Apache Ranger plugin in Knox, we need to enable it explicitly in a topology. Copy "conf/topologies/sandbox.xml" to "conf/topologies/sandbox-ranger.xml" and add the following provider:
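A minimal sketch, assuming the standard provider name registered by the Ranger Knox plugin:

<provider>
    <role>authorization</role>
    <name>XASecurePDPKnox</name>
    <enabled>true</enabled>
</provider>
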
Now let's try to access the file using the admin credentials:
  • curl -u admin:admin-password -kL https://localhost:8443/gateway/sandbox-ranger/webhdfs/v1/data/LICENSE.txt?op=OPEN
You should get a 403 Forbidden error due to an authorization failure.

3) Create authorization policies in the Apache Ranger Admin console

Next we will use the Apache Ranger admin console to create authorization policies for Apache Knox. Follow the steps in this tutorial to install the Apache Ranger admin service. Before starting the Ranger admin service, edit 'conf/ranger-admin-site.xml' and add the following properties:
  • ranger.truststore.file - ${knox.home}/data/security/keystores/gateway.jks
  • ranger.truststore.password - security
Start the Apache Ranger admin service with "sudo ranger-admin start", then open a browser at "http://localhost:6080/" and log on with "admin/admin". Add a new Knox service in the Ranger admin UI with the following configuration values:
  • Service Name: KnoxTest
  • Username: admin
  • Password: admin-password
  • knox.url: https://localhost:8443/gateway/admin/api/v1/topologies
Now click on the "KnoxTest" service that we have created. Click on the policy that is automatically created, and note that the "admin" user already has the "Allow" permission for all Knox topologies and services. Wait for the policy to sync to the plugin, and the curl call we executed above should now work:
  • curl -u admin:admin-password -kL https://localhost:8443/gateway/sandbox-ranger/webhdfs/v1/data/LICENSE.txt?op=OPEN
whereas using the "guest" credentials ("guest"/"guest-password") should be denied, as we have not created a matching authorization policy in Ranger.

Wednesday, September 12, 2018

Exploring Apache Knox - part VI

This is the sixth in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at how to achieve single sign-on using the Knox SSO service, where the Knox SSO service was configured to authenticate the user to a third party Identity Provider using OpenId Connect. In this post we will show how to configure Knox SSO to redirect the user instead to a SAML SSO Identity Provider.

As a prerequisite to this tutorial, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from.

1) Configure the Apache Knox SSO service

For the purposes of this tutorial we are going to use the www.testshib.org SAML SSO Identity Provider. First we'll configure the Knox SSO service. Copy "conf/topologies/knoxsso.xml" to "conf/topologies/knoxssopac4jsaml.xml". Now edit it and delete the "ShiroProvider" provider and add the following provider instead (which leverages the Pac4j project):
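A sketch of what the pac4j provider could look like for SAML SSO - the metadata paths and entity Id here are assumptions to adapt to your own setup:

<provider>
    <role>federation</role>
    <name>pac4j</name>
    <enabled>true</enabled>
    <param>
        <name>pac4j.callbackUrl</name>
        <value>https://localhost:8443/gateway/knoxssopac4jsaml/api/v1/websso</value>
    </param>
    <param>
        <name>clientName</name>
        <value>SAML2Client</value>
    </param>
    <param>
        <name>saml.identityProviderMetadataPath</name>
        <value>https://www.testshib.org/metadata/testshib-providers.xml</value>
    </param>
    <param>
        <name>saml.serviceProviderMetadataPath</name>
        <value>/tmp/sp-metadata.xml</value>
    </param>
    <param>
        <name>saml.serviceProviderEntityId</name>
        <value>https://localhost:8443/gateway/knoxssopac4jsaml/api/v1/websso?pac4jCallback=true&amp;client_name=SAML2Client</value>
    </param>
</provider>
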
Note that one of the configuration parameters references the SAML SSO RP (Relying Party) Metadata file, which we will need in order to configure the IdP. Luckily Apache Knox will generate this for us on the first call. Open a browser and navigate to the following URL:
  • https://localhost:8443/gateway/knoxssopac4jsaml/api/v1/websso
You should see an error page on the TestShib site, as it has not yet been configured with our Metadata file. However, Knox has now generated this file at the location we specified via "saml.serviceProviderMetadataPath". Go to "https://www.testshib.org/register.html" and upload the generated metadata file.


2) Secure a topology using the "SSOCookieProvider" provider

In section 2 of this earlier tutorial, we showed how to secure a topology using the "SSOCookieProvider" provider. Copy "conf/topologies/sandbox-sso.xml" to "conf/topologies/sandbox-ssopac4jsaml.xml" and change the value of the "sso.authentication.provider.url" parameter to:
  • https://localhost:8443/gateway/knoxssopac4jsaml/api/v1/websso
Now start Apache Knox and navigate to the following URL:
  • https://localhost:8443/gateway/sandbox-ssopac4jsaml/webhdfs/v1/data/LICENSE.txt?op=OPEN
You will be redirected to the Knox SSO service and then on to the TestShib IdP, where you can authenticate with "myself" / "myself". The browser will then be redirected back to the "sandbox-ssopac4jsaml" topology and "LICENSE.txt" should be successfully downloaded.

Monday, September 10, 2018

Exploring Apache Knox - part V

This is the fifth in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at how to achieve single sign-on using the Knox SSO service, where the Knox SSO service was configured to authenticate the user to an LDAP backend using the "ShiroProvider" configured in "knoxsso.xml". However, Knox SSO supports more sophisticated scenarios, where the user is redirected to a third-party Identity Provider for authentication instead. In this post we will cover how to set up the Knox SSO service so that a user is redirected to an OpenId Connect IdP.

As a prerequisite to this tutorial, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from.

1) Configuring the Apache CXF Fediz OIDC IdP

For the purposes of this tutorial, we will use the Apache CXF Fediz OpenId Connect IdP deployed in docker. Follow section (1) of this post about starting the Apache CXF Fediz IdP in docker. Once the IdP has started via "docker-compose up", open a browser and navigate to "https://localhost:10002/fediz-oidc/console/clients". This is the client registration page of the Fediz OIDC IdP. Authenticate using credentials "alice" (password "ecila") and register a new client for Apache Knox using the following redirect URI:
  • https://localhost:8443/gateway/knoxssopac4j/api/v1/websso?pac4jCallback=true&client_name=OidcClient
Click on the registered client and save the client Id and Secret for later.

 
2) Configure the Apache Knox SSO service

The next step is to configure the Knox SSO service to work with our OpenId Connect IdP. Copy "conf/topologies/knoxsso.xml" to "conf/topologies/knoxssopac4j.xml". Now edit it and delete the "ShiroProvider" provider and add the following provider instead (which leverages the Pac4j project):
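A sketch of the pac4j provider configured for OpenId Connect (the discovery URI matches the Fediz OIDC IdP from section 1; the id and secret placeholders must be replaced):

<provider>
    <role>federation</role>
    <name>pac4j</name>
    <enabled>true</enabled>
    <param>
        <name>pac4j.callbackUrl</name>
        <value>https://localhost:8443/gateway/knoxssopac4j/api/v1/websso</value>
    </param>
    <param>
        <name>clientName</name>
        <value>OidcClient</value>
    </param>
    <param>
        <name>oidc.id</name>
        <value>(client Id)</value>
    </param>
    <param>
        <name>oidc.secret</name>
        <value>(client Secret)</value>
    </param>
    <param>
        <name>oidc.discoveryUri</name>
        <value>https://localhost:10002/fediz-oidc/.well-known/openid-configuration</value>
    </param>
</provider>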
where the values for "oidc.id" and "oidc.secret" are the values saved from Fediz above when registering the client.

Before starting Knox we need to trust the certificate associated with the TLS endpoint of the OpenId Connect IdP (which in our demo is just a locally issued certificate). To do this we will add the certificate to our Java cacerts file (note: not a good idea in production - this is just for test purposes). Download "idp-ssl-trust.jks" which is available with the docker configuration for Fediz here and add the certificate to your Java cacerts as follows (destination password: "changeit", source password: "ispass"):
  • keytool -importkeystore -destkeystore $JAVA_HOME/jre/lib/security/cacerts -srckeystore ./idp-ssl-trust.jks

3) Secure a topology using the "SSOCookieProvider" provider

In section 2 of the previous tutorial, we showed how to secure a topology using the "SSOCookieProvider" provider. Copy "conf/topologies/sandbox-sso.xml" to "conf/topologies/sandbox-ssopac4j.xml" and change the value of the "sso.authentication.provider.url" parameter to:
  • https://localhost:8443/gateway/knoxssopac4j/api/v1/websso
Now start Apache Knox and navigate to the following URL:
  • https://localhost:8443/gateway/sandbox-ssopac4j/webhdfs/v1/data/LICENSE.txt?op=OPEN
You will be redirected to the Knox SSO service and then on to the Fediz IdP. Authenticate with "alice" / "ecila" and grant permission to issue a token. The browser will then be redirected back to the "sandbox-ssopac4j" topology and "LICENSE.txt" should be successfully downloaded.

Friday, September 7, 2018

Exploring Apache Knox - part IV

This is the fourth in a series of blog posts exploring some of the security features of Apache Knox. The previous couple of posts looked at authenticating to the REST API of Apache Knox using a token, obtained from either the Apache Knox token service or a third party JWT provider. Authenticating using a token works well when we have a client application invoking on Apache Knox, but what if we want to use a browser instead? In this post we will look at how to achieve single sign-on using the Knox SSO service.

1) Set up the Apache Knox SSO service

To start with, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from. There is no need to create a new topology file in Apache Knox for the Knox SSO service, as it already ships with a "knoxsso.xml" file. Note that it contains a "KNOXSSO" service as well as a "knoxauth" application. The idea is that the user first browses to the Knox topology secured with a special provider that redirects the browser to the Knox SSO service. The user then authenticates to the LDAP backend using a form (knoxauth). The Knox SSO service then issues a cookie that can be used to access the desired service, and redirects back to the service.

2) Secure a topology using the "SSOCookieProvider" provider

Next we need to create a topology which is secured using a cookie issued by Knox SSO. Copy "conf/topologies/sandbox.xml" to "conf/topologies/sandbox-sso.xml". Remove the existing Shiro authentication provider and instead add the "SSOCookieProvider" as follows:
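A minimal sketch of the provider, pointing at the "knoxsso" topology:

<provider>
    <role>federation</role>
    <name>SSOCookieProvider</name>
    <enabled>true</enabled>
    <param>
        <name>sso.authentication.provider.url</name>
        <value>https://localhost:8443/gateway/knoxsso/api/v1/websso</value>
    </param>
</provider>
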
Note that this is configured with a parameter ("sso.authentication.provider.url") which corresponds to the URL to redirect the browser to for authentication. Now open a browser and navigate to:
  • https://localhost:8443/gateway/sandbox-sso/webhdfs/v1/data/LICENSE.txt?op=OPEN
Authenticate to Knox SSO using the LDAP credentials "guest" and "guest-password" and click on "Sign in". A cookie will be created and the browser redirected to the original URL where "LICENSE.txt" can be downloaded.


Thursday, September 6, 2018

Exploring an interop test scenario with Apache CXF Fediz in docker

I recently covered on this blog how to deploy the Apache CXF Fediz IdP in docker, as well as a "hello-world" application which uses the IdP for authentication (using either the WS-Federation or SAML SSO protocols). In that post, the user is instructed to select "realm A" (the home realm of the IdP) when prompted, and so the user is authenticated locally. In this article, we are going to take things a step further, and instead authenticate the user in "realm B", which will be an Apache CXF Fediz OpenId Connect IdP, also deployed in docker. So the "hello world" web application will speak WS-Federation to the first IdP, which in turn will redirect the user to the second IdP for authentication using the OpenId Connect protocol.

1) Setup the "realm B" IdP

First we'll look at setting up the "realm B" IdP. As our "realm A" IdP will be deployed on "localhost", we will deploy our "realm B" IdP on the domain "www.fediz.org" to avoid problems with cookies. Create an entry in your '/etc/hosts' and map "www.fediz.org" to your localhost IP address. Now clone the following project in github:
  • fediz-idp: A sample project to deploy the Fediz IdP
Create the STS image (no changes needed here from the previous blog entry):
  • cd sts; docker build -t coheigea/fediz-sts .
We'll need to change the IdP to instead load the "entities-realmb.xml" definitions. This contains two applications - the OpenId Connect IdP as well as the realm A IdP. Edit the Dockerfile and uncomment the line about copying "realm.properties" - this will switch the IdP to use "entities-realmb.xml" instead. In addition, change the references to "sts" to "stsrealmb". Now rebuild with:
  • cd idp; docker build -t coheigea/fediz-idp-realmb .
We also need to make a small change to the OIDC image to reflect the fact that it is running on "www.fediz.org" instead of "locahost". Edit "fediz_config.xml" and change the "Issuer" URL to:
  • https://www.fediz.org:20001/fediz-idp/federation (changing both the domain + port)
Now rebuild with:
  • cd oidc; docker build -t coheigea/fediz-oidc .
Finally, copy the docker-compose.yml file supplied with the project and launch the IdP via "docker-compose up".
Once the IdP has started, open a browser and navigate to "https://www.fediz.org:20002/fediz-oidc/console/clients". This is the client registration page of the Fediz OIDC IdP. Authenticate using credentials "ALICE" (password "ECILA") and register a new client for the "realm A" IdP using the redirect URI "https://localhost:10001/fediz-idp/federation". Click on the registered client and save the client Id and Secret for later.

2) Setup the "realm A" IdP

Now we will set up the "realm A" IdP. Discard the changes that were made in the "idp" directory. A pre-configured "entities-realma-oidc.xml" is available which contains the configuration necessary to connect to the realm B OIDC service. Edit this file and search for the "trusted-idp-realmB" bean definition. Change the "client.id" and "client.secret" values to match those saved above when creating the client in the "realm B" client registration page. Next edit the Dockerfile and add the following line to copy the "entities-realma-oidc.xml" into the IdP configuration:
  • COPY entities-realma-oidc.xml $TOMCAT_HOME/webapps/fediz-idp/WEB-INF/classes/entities-realma.xml
Then rebuild the IdP image:
  • docker build -t coheigea/fediz-idp .
Before launching the "realm A" IdP via the docker-compose.yml in github, we need to edit it so that it launches on the same network as the "realm B" IdP in order to be able to reach it. Find the running docker instances via "docker ps" and then run "docker inspect" on one of the Ids. Look for the "Network" section and note the network name (for example: "tmp_default").

Now add the following configuration to docker-compose.yml and then launch the IdP via "docker-compose up":
  • networks:
      default:
        external:
          name: tmp_default
3) Run the "fediz-helloworld" application

Finally, we need to make one small tweak to the "fediz-helloworld" application. Edit the "fediz_config.xml" file and change the role "ClaimType" to be optional. This is because the "STS" in "realm A" is not configured to map or retrieve a role claim for users in "realm B". Rebuild and launch the helloworld application:
  • docker build -t coheigea/fediz-helloworld .
  • docker run -p 8443:8443 coheigea/fediz-helloworld
Open a browser and navigate to "https://localhost:8443/fedizhelloworld/secure/fedservlet". Select "Realm B Description" when asked to choose a home realm in the "realm A" IdP. The browser is then redirected to the "Realm B" IdP (authencate using "ALICE" and "ECILA"). The "Realm A" IdP will obtain an IdToken from the "Realm B" OIDC for the user "ALICE", and then swap it for a SAML Token via the STS. This is then returned to the "fediz-helloworld" application via WS-Federation. Note that the landing page now shows the user as "ALICE" (whereas before the realm A user was the lowercase "alice").

Friday, August 31, 2018

Exploring Apache Knox - part III

This is the third in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at accessing a file stored in HDFS via Apache Knox, where the Apache Knox gateway authenticated the user using a (JWT) token obtained from the Knox token service. However, the token enforcement in the Knox REST API is not tightly coupled to the Knox token service; a third-party JWT provider can be used instead. In this post, we will show how to authenticate a user to Apache Knox using a token obtained from the Apache CXF Security Token Service (STS).

1) Deploy the Apache CXF STS in docker

Apache CXF ships with a powerful and flexible STS that can issue, renew, validate and cancel tokens of different types via the (SOAP) WS-Trust interface. In addition, it also has a flexible REST interface. I created a sample github project which builds the CXF STS with the REST interface enabled:
  • sts-rest: Project to deploy a CXF REST STS web application in docker
The STS is configured to authenticate users via HTTP Basic authentication, and it can issue both JWT and SAML tokens. Clone the project, and then build and deploy the project in docker using Apache Tomcat as follows:
  • mvn clean install
  • docker build -t coheigea/cxf-sts-rest .
  • docker run -p 8080:8080 coheigea/cxf-sts-rest
To test it's working correctly, open a browser and obtain a SAML and JWT token respectively via the following GET requests (authenticating using "alice" and "security"):
  • http://localhost:8080/cxf-sts-rest/SecurityTokenService/token/saml
  • http://localhost:8080/cxf-sts-rest/SecurityTokenService/token/jwt
2) Invoking on the REST API of Apache Knox using a token issued by the STS

Now we'll look at how to modify the previous tutorial so that the REST API is secured by a token issued by the Apache CXF STS, instead of the Knox token service. To start with, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from. Then follow part (2) of the previous tutorial to set up the "sandbox-token" topology. Now copy "conf/topologies/sandbox-token.xml" to "conf/topologies/sandbox-token-cxf.xml". We need to make a few changes to the "JWTProvider" to support validating tokens issued by the CXF STS.

Edit "conf/topologies/sandbox-token.xml" and add the following parameters to the "JWTProvider", i.e.:
"knox.token.verification.pem" is the PEM encoding of the certificate to be used to verify the signature on the received token. You can obtain this in the sts-rest project in github here, simply paste in the content between the "-----BEGIN/END CERTIFICATE-----" into the parameter vaue. "jwt.expected.issuer" is a constraint on the "iss" claim of the token.

Now save the topology file and we can get a token from CXF STS using curl as follows:
  • curl -u alice:security -H "Accept: text/plain" http://localhost:8080/cxf-sts-rest/SecurityTokenService/token/jwt
Save the (raw) token that is returned. Then invoke on the REST API using the token as follows:
  • curl -kL -H "Authorization: Bearer <access token>" https://localhost:8443/gateway/sandbox-token-cxf/webhdfs/v1/data/LICENSE.txt?op=OPEN

Wednesday, August 29, 2018

Exploring Apache Knox - part II

This is the second in a series of blog posts exploring some of the security features of Apache Knox. The first post looked at accessing a file stored in HDFS via Apache Knox, where the Apache Knox gateway authenticated the user via Basic Authentication. In this post we will look at authenticating to the REST API of Apache Knox using a token rather than using Basic Authentication. Apache Knox ships with a token service which allows an authenticated user to obtain a token, which can then be used to invoke on the REST API.

1) Set up the Apache Knox token service

To start with, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from. Now we will create a new topology configuration file in Apache Knox to launch the token service. Copy "conf/topologies/sandbox.xml" to a new file called "conf/topologies/token.xml". Leave the 'gateway/provider' section as it is, as we want the user to authenticate to the token service using basic authentication as for the REST API in the previous post. Remove all of the 'service' definitions and add a service definition for the Knox token service, e.g.:
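A minimal sketch of the token service definition (the TTL, in milliseconds, is illustrative):

<service>
    <role>KNOXTOKEN</role>
    <param>
        <name>knox.token.ttl</name>
        <value>36000000</value>
    </param>
</service>
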
Restart Apache Knox. We can then obtain a token via the token service as follows using curl:
  • curl -u guest:guest-password -k https://localhost:8443/gateway/token/knoxtoken/api/v1/token
This returns a JSON structure containing an access token (in JWT format), as well as a "token_type" attribute of "Bearer" and an expiry timestamp. The access token itself can be introspected (via e.g. https://jwt.io/). In the example above, it contains a header "RS256" indicating it is a signed token (RSA + SHA-256), as well as payload attributes identifying the subject ("guest"), issuer ("KNOXSSO") and an expiry timestamp.

2) Invoking on the REST API using a token

The next step is to invoke on the REST API using a token, instead of using basic authentication as in the example given in the previous tutorial. Copy "conf/topologies/sandbox.xml" to "conf/topologies/sandbox-token.xml". Remove the Shiro provider and instead add the following provider:
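A minimal sketch of the replacement provider:

<provider>
    <role>federation</role>
    <name>JWTProvider</name>
    <enabled>true</enabled>
</provider>
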
Now restart the Apache Knox gateway again (edit: as Larry McCay points out in the comments this is not required, as long as we are not using Ambari to manage the topologies). First obtain a token using curl:
  • curl -u guest:guest-password -k https://localhost:8443/gateway/token/knoxtoken/api/v1/token
Copy the access token that is returned. Then you can invoke on the REST API using the token as follows:
  • curl -kL -H "Authorization: Bearer <access token>" https://localhost:8443/gateway/sandbox-token/webhdfs/v1/data/LICENSE.txt?op=OPEN

Tuesday, August 28, 2018

Exploring Apache Knox - part I

Apache Knox is an application gateway that works with the REST APIs and User Interfaces of a large number of the most popular big data projects. It can be convenient to enforce that REST or browser clients interact with Apache Knox rather than with the different components of an Apache Hadoop cluster, for example. In particular, Apache Knox supports a wide range of different mechanisms for securing access to the backend cluster. In this series of posts, we will look at different ways of securing access to an Apache Hadoop filesystem via Apache Knox. In this first post we will look at accessing a file stored in HDFS via Apache Knox, where the Apache Knox gateway authenticates the user via Basic Authentication.

1) Set up Apache Hadoop

To start we assume that an Apache Hadoop cluster is already running, with a file stored in "/data/LICENSE.txt" that we want to access. To see how to set up Apache Hadoop in such a way, please refer to part 1 of this earlier post. Ensure that you can download the LICENSE.txt file in a browser directly from Apache Hadoop via:
  • http://localhost:9870/webhdfs/v1/data/LICENSE.txt?op=OPEN
Note that the default port for Apache Hadoop 2.x is "50070" instead.

2) Set up Apache Knox

Next we will see how to access the file above via Apache Knox. Download and extract Apache Knox (Gateway Server binary archive - version 1.1.0 was used in this tutorial). First we create a master secret via:
  • bin/knoxcli.sh create-master
Next we start a demo LDAP server that ships with Apache Knox for convenience:
  • bin/ldap.sh start
We can authenticate using the credentials "guest" and "guest-password" that are stored in the LDAP backend.

Apache Knox stores the "topologies" configuration in the directory "conf/topologies". We will re-use the default "sandbox.xml" configuration for the purposes of this post. This configuration maps to the URI "gateway/sandbox". It contains the authentication configuration for the topology (HTTP basic authentication), and which maps the received credentials to the LDAP backend we have started above. It then defines the backend services that are supported by this topology. We are interested in the "WEBHDFS" service which maps to "http://localhost:50070/webhdfs". Change this port to "9870" if using Apache Hadoop 3.0.0 as in the first section of this post. Then start the gateway via:
  • bin/gateway.sh start
Now we can access our file directly via Knox, using credentials of "guest" / "guest-password" via:
  • https://localhost:8443/gateway/sandbox/webhdfs/v1/data/LICENSE.txt?op=OPEN
Or alternatively using Curl:
  • curl -u guest:guest-password -kL https://localhost:8443/gateway/sandbox/webhdfs/v1/data/LICENSE.txt?op=OPEN

Friday, August 24, 2018

OpenId Connect support for the Apache Syncope admin console

Apache Syncope is a powerful open source Identity Management project at the Apache Software Foundation. Last year I wrote a blog entry about how to log in to the Syncope admin and end-user web consoles using SAML SSO, showing how it works using Apache CXF Fediz as the SAML SSO IdP. In addition to SAML SSO, Apache Syncope supports logging in using OpenId Connect from version 2.0.9. In this post we will show how to configure this using the docker image for Apache CXF Fediz that we covered recently.

1) Configuring the Apache CXF Fediz OIDC IdP

First we will show how to set up the Apache CXF Fediz OpenId Connect IdP. Follow section (1) of this post about starting the Apache CXF Fediz IdP in docker. Once the IdP has started via "docker-compose up", open a browser and navigate to "https://localhost:10002/fediz-oidc/console/clients". This is the client registration page of the Fediz OIDC IdP. Authenticate using credentials "alice" (password "ecila") and register a new client for Apache Syncope using the redirect URI "http://localhost:9080/syncope-console/oidcclient/code-consumer". Click on the registered client and save the client Id and Secret for later.

2) Configuring Apache Syncope to support OpenId Connect

In this section, we will cover setting up Apache Syncope to support OpenId Connect. Download and extract the most recent standalone distribution release of Apache Syncope (2.1.1 was used in this post). Before starting Apache Syncope, we need to configure a truststore corresponding to the certificate used by the Apache CXF Fediz OIDC IdP. This can be done on Linux, for example, via:
  • export CATALINA_OPTS="-Djavax.net.ssl.trustStore=./idp-ssl-trust.jks -Djavax.net.ssl.trustStorePassword=ispass"
where "idp-ssl-trust.jks" is available with the docker configuration for Fediz here. Start the embedded Apache Tomcat instance and then open a web browser and navigate to "http://localhost:9080/syncope-console", logging in as "admin" and "password".

Apache Syncope is configured with some sample data to show how it can be used. Click on "Users" and add a new user called "alice" by clicking on the subsequent "+" button. Specify a password for "alice" and then select the default values wherever possible (you will need to specify some required attributes, such as "surname"). Now in the left-hand column, click on "Extensions" and then "OIDC Client". Add a new OIDC Client, specifying the client ID + Secret that you saved earlier and click "Next". Then specify the following values (obtained from "https://localhost:10002/fediz-oidc/.well-known/openid-configuration"):
  • Issuer: https://localhost:10002
  • Authorization Endpoint: https://localhost:10002/fediz-oidc/idp/authorize
  • Token Endpoint: https://localhost:10002/fediz-oidc/oauth2/token
  • JWKS URI: https://localhost:10002/fediz-oidc/jwk/keys
Click "Next". Now we need to add a mapping from the user we authenticated at the IdP and the internal user in Syncope ("alice"). Add a mapping from internal attribute "username" to external attribute "preferred_username" as follows:

Now log out and select the "Open Id Connect" dialogue that should have appeared. You will be redirected to the Apache CXF Fediz OIDC IdP for authentication and then redirected back to Apache Syncope, where you will be automatically logged in as the user "alice".

Thursday, August 23, 2018

SAML SSO Logout support in Apache CXF Fediz

SAML SSO support was added to the Apache CXF Fediz IdP in version 1.3.0. In addition, SAML SSO support was added to the Tomcat 8 plugin from the 1.4.4 release. However, unlike for the WS-Federation protocol, support was not included for SAML SSO logout. That's going to change from the next 1.4.5 release. In this post we will cover how logout works in general for both protocols, across both the IdP and Relying Party (RP) plugins.

1) Logging out from the Apache CXF Fediz IdP

a) WS-Federation

Follow the previous post I wrote about experimenting with Apache CXF Fediz in docker and start the Fediz IdP and the 'fedizhelloworld' application (supporting WS-Federation and not SAML SSO) in docker. Login to the 'fedizhelloworld' application (and to the IdP) by navigating to 'https://localhost:8443/fedizhelloworld/secure/fedservlet' in a browser and logging on with credentials of 'alice'/'ecila'.

We can log out directly to the IdP by navigating to 'https://localhost:10001/fediz-idp/federation?wa=wsignout1.0'. As our IdpEntity configuration in 'entities-realma.xml' has the property "rpSingleSignOutConfirmation" set to "true", a sign out confirmation page is displayed asking us if we want to log out from the 'fedizhelloworld' application.

If we click on the "Logout" button then what happens next depends on whether we supplied a "wreply" parameter or not. If no parameter is supplied then a successful logout page is shown at the IdP. Otherwise we have the option of supplying a "wreply" parameter to return to the RP application after logout is successful. For this to work, the IdPEntity configuration bean must have the property "automaticRedirectToRpAfterLogout" set to "true". In addition, the "wreply" address must match a regular expression supplied by the "logoutEndpointConstraint" property of the matching "ApplicationEntity" bean for 'fedizhelloworld'.

b) SAML SSO

Support was added to Apache CXF Fediz for SAML SSO logout in the forthcoming 1.4.5 release. The client sends a LogoutRequest to the IdP as follows:
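As a rough sketch, the decoded LogoutRequest could look something like the following (the ID, IssueInstant, Destination and Issuer values here are purely illustrative):

<samlp:LogoutRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_c33ef9a7..." Version="2.0" IssueInstant="2018-08-23T10:00:00Z"
    Destination="https://localhost:10001/fediz-idp/saml">
    <saml:Issuer>urn:org:apache:cxf:fediz:fedizhelloworld</saml:Issuer>
    <saml:NameID>alice</saml:NameID>
</samlp:LogoutRequest>
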
After checking the Signature and doing some validation on the request (e.g. checking the destination), a sign out confirmation page is displayed as per the WS-Federation case above (if the property "rpSingleSignOutConfirmation" is set to "true"). Once the user clicks on "Logout" then either a logout page is displayed on the IdP, or else a LogoutResponse is returned to the client (if the property "automaticRedirectToRpAfterLogout" is set to "true"). In addition, the URL to redirect back to must be specified in the 'ApplicationEntity' configuration in "entities-realma.xml" under the property "logoutEndpoint".



2) Logging out from the RP application

a) WS-Federation

Next we'll turn our attention to logging out from the 'fedizhelloworld' application, secured by WS-Federation. Log in again to the application by navigating to 'https://localhost:8443/fedizhelloworld/secure/fedservlet'. There are a number of different ways of logging out from the application:
  • Specify a "wa=wsignout1.0" query parameter. This logs the user out and redirects to the IdP to log the user out there.
  • Specify a "wa=wsignoutcleanup1.0" query parameter. This logs the user out and either redirects to a URL supplied by the "wreply" parameter (which must match the configuration item "logoutRedirectTo" or "logoutRedirectToConstraint"), or alternatively to the "logoutRedirectTo" configuration item if no "wreply" parameter is specified. 
  • If the URL matches the configuration item "logoutURL". The default behaviour here is to log the user out and redirect to the IdP to log the user out there as well.
Feel free to experiment with these options with 'fedizhelloworld'.

b) SAML SSO

Support was added for SAML SSO logout support in the Tomcat plugin for the forthcoming 1.4.5 release. If the user navigates to the logout URL configured in fediz_config.xml ("logoutURL") then the user is logged out and a 'LogoutRequest' is sent to the IdP. If a 'LogoutResponse' is received from the IdP then it is processed and the user is redirected to the page specified in the "logoutRedirectTo" configuration item afterwards.

Follow the steps in the previous post to change the Fediz IdP and 'fedizhelloworld' docker images to use SAML SSO. When changing the IdP configuration, edit 'entities-realma.xml' and change the value for 'automaticRedirectToRpAfterLogout' to 'true'. Also add the following property to the ApplicationEntity bean for "srv-fedizhelloworld":
  • <property name="logoutEndpoint" value="https://localhost:8443/fedizhelloworld/index.html"/>
Now log on to the RP via 'https://localhost:8443/fedizhelloworld/secure/fedservlet' and log out via 'https://localhost:8443/fedizhelloworld/secure/logout'. You will be logged out of both the RP and the IdP and redirected to a landing page on the RP side.

Monday, August 20, 2018

Experimenting with Apache CXF Fediz in docker

I have covered the capabilities of Apache CXF Fediz many times on this blog, giving instructions of how to deploy the IdP or a sample secured web application to a container such as Apache Tomcat. However such instructions can be quite complex, ranging from building Fediz from scratch and deploying the resulting web applications, to configuring jars + keys in Tomcat, etc. Wouldn't it be great to just be able to build a few docker images and launch them instead? In this post we will show how to easily deploy the Fediz IdP and STS to docker, as well as how to deploy a sample application secured using WS-Federation. Then we show how easy it is to switch the IdP and the application to use SAML SSO instead.

1) The Apache CXF Fediz Identity Provider

The Apache CXF Fediz Identity Provider (IdP) actually consists of two web applications - the IdP itself which can handle both WS-Federation and SAML SSO login requests, as well as an Apache CXF-based Security Token Service (STS) to authenticate the end users. In addition, we also have a third web application, which is the Apache CXF Fediz OpenId Connect IdP, but we will cover that in a future post. It is possible to build docker images for each of these components with the following project on github:
  • fediz-idp: A sample project to deploy the Fediz IdP
To launch the IdP in docker, build each of the individual components and then launch using docker-compose, e.g.:
  • cd sts; docker build -t coheigea/fediz-sts .
  • cd idp; docker build -t coheigea/fediz-idp .
  • cd oidc; docker build -t coheigea/fediz-oidc .
  • docker-compose up
Please note that this project is provided as a quick and easy way to play around with the Apache CXF Fediz IdP. It should not be deployed in production as it uses default security credentials, etc.

2) The Apache CXF Fediz 'fedizhelloworld' application

Now that the IdP is configured, we will configure a sample application which is secured using the Fediz plugin (for Apache Tomcat). The project is also available on github here:
  • fediz-helloworld: Dockerfile to deploy a WS-Federation secured 'fedizhelloworld' application
The docker image can be built and run via:
  • docker build -t coheigea/fediz-helloworld .
  • docker run -p 8443:8443 coheigea/fediz-helloworld
Now just open a browser and navigate to 'https://localhost:8443/fedizhelloworld/secure/fedservlet'. You will be redirected to the IdP for authentication. Select the default home realm and use the credentials "alice" (password: "ecila") to log in. You should be successfully authenticated and redirected back to the web application.

3) Switching to use SAML SSO instead of WS-Federation

Let's also show how we can switch the security protocol to use SAML SSO instead of WS-Federation. Edit the Dockerfile for the fediz-idp project and uncomment the final two lines (to copy entities-realma.xml and mytomrpkey.cert into the docker image). 'mytomrpkey.cert' is used to validate the Signature of the SAML AuthnRequest, something that is not needed for the WS-Federation case as the client request is not signed. Rebuild the IdP image (docker build -t coheigea/fediz-idp .) and re-launch the IdP again via "docker-compose up".

To switch the 'fedizhelloworld' application we need to make some changes to the 'fediz_config.xml'. These changes are already made in the file 'fediz_config_saml.xml'.

Copy 'fediz_config_saml.xml' to 'fediz_config.xml' and rebuild the docker image:
  • docker build -t coheigea/fediz-helloworld .
  • docker run -p 8443:8443 coheigea/fediz-helloworld
Open a browser and navigate to 'https://localhost:8443/fedizhelloworld/secure/fedservlet' again. Authentication should succeed as before, but this time using SAML SSO as the authentication protocol instead of WS-Federation.

Wednesday, July 4, 2018

Two new security advisories for Apache CXF

Two new security advisories have been published recently for Apache CXF:
  • CVE-2018-8039: Apache CXF TLS hostname verification does not work correctly with com.sun.net.ssl.*:
It is possible to configure CXF to use the com.sun.net.ssl implementation via: System.setProperty("java.protocol.handler.pkgs", "com.sun.net.ssl.internal.www.protocol");

When this system property is set, CXF uses some reflection to try to make the HostnameVerifier work with the old com.sun.net.ssl.HostnameVerifier interface. However, the default HostnameVerifier implementation in CXF does not implement the method in this interface, and so an exception is thrown. That exception is then caught in the reflection code and not properly propagated.

What this means is that if you are using the com.sun.net.ssl stack with CXF, an error with TLS hostname verification will not be thrown, leaving a CXF client subject to man-in-the-middle attacks.
  • CVE-2018-8038: Apache CXF Fediz is vulnerable to DTD based XML attacks:
The fix for advisory CVE-2015-5175 in Apache CXF Fediz 1.1.3 and 1.2.1 prevented DoS style attacks via DTDs. However, it did not fully disable DTDs, meaning that the Fediz plugins could potentially be subject to a DTD-based XML attack.

In addition, the Apache CXF Fediz IdP is also potentially subject to DTD-based XML attacks for some of the WS-Federation request parameters.
Please upgrade to the latest releases to pick up fixes for these advisories. The full CVEs are available on the CXF security advisories page.

Wednesday, June 27, 2018

Securing web services using Talend's Open Studio for ESB - part VII

This is the seventh and final article in a series on securing web services using Talend's Open Studio for ESB. First we covered how to create and secure a SOAP service, client job and route in the Studio, and how to deploy them to the Talend runtime container. In the previous post we looked instead at how to implement a REST service and client in the Studio. In this post we will build on the previous post by showing some different ways to secure our REST service when it is deployed in the Talend container.

1) Secure the REST "double-it" webservice using HTTP B/A

Previously we saw how to secure the SOAP "double-it" service in the container using WS-Security UsernameTokens. In this section we'll also secure our REST service using a username and password that the client supplies - this time using HTTP Basic Authentication. Open the REST service we have created in the Studio, and click on the 'tRESTRequest' component. Select "Use Authentication" and then pick the default "Basic HTTP" option. Save the job and build it by right clicking on the job name and selecting "Build job".


Start the runtime container and deploy the job. Now open our REST client job in the Studio. Click on the 'tRESTClient' component and select "Use Authentication" as per 'tRESTRequest' above. Select 'tesb' for the username and password (see section 2 of the SAML tutorial for an explanation of how authentication works in the container). Now build the job and deploy it to the container. The client job should successfully run. See below for a log of a successful request where the client credentials can be seen in the "Basic" HTTP header:


2) Secure the REST "double-it" webservice using SAML

As for SOAP services, we can also secure our REST webservice using SAML. Instead of having the REST client create a SAML Assertion, we will leverage the Talend Security Token Service (STS). The REST client will use the same mechanism (WS-Trust) to authenticate and obtain a SAML Token from the Talend STS as for the SOAP case. Then the REST client inserts the SAML Token into the authorization header of the service request. The service parses the header and validates the signature on the SAML Token in exactly the same way as for the SOAP request.

In the Studio, edit the 'tRESTRequest' and 'tRESTClient' components in our jobs as for the "Basic Authentication" example above, except this time select "SAML Token" for "Use Authentication". Save the jobs and build them and deploy the service to the container. Before deploying the client job, we need to start the STS via:
  • tesb:start-sts
Then deploy the client job and it should work correctly:



Monday, June 18, 2018

Securing web services using Talend's Open Studio for ESB - part VI

This is the sixth article in a series on securing web services using Talend's Open Studio for ESB. Up to now we have seen how to create and secure a SOAP service, client job and route in the Studio, and how to deploy them to the Talend runtime container. For the remaining articles in this series, we will switch our focus to REST web services instead. In this article we will look at how to implement a REST service and client in the Studio.

1) Implement a "double-it" REST Service in the Studio

First let's look at how we can implement the "double-it" service as a REST service instead. Open the Studio and right click on "Job Designs" and select "Create job". Create a new job called "DoubleItRESTService". Drag the 'tRESTRequest', 'tXMLMap' and 'tRESTResponse' components from the palette into the central window. Connect them by right-clicking on 'tRESTRequest' and selecting "Row / New Output", dragging the link to 'tXMLMap' and calling the output 'Request'. Right-click on 'tXMLMap' and select "Row / New Output" and drag the link to 'tRESTResponse', calling the output 'Response':


Now let's design the REST endpoint by clicking on 'tRESTRequest'. Our simple "double-it" service will accept a path parameter corresponding to the number to double. It will return an XML or JSON response containing the doubled number wrapped in a "result" tag. Edit the 'REST endpoint' to add "/doubleit" at the end of the URL. In the REST API mapping, edit the "URI Pattern" to be "/{number}". Now click on the "Output Flow" for "Request" and click on the three dots that appear. Click the "+" button and change the column name to "number" and the Type to "Integer":


Click "OK" and then double-click on 'tXMLMap'. Left-click on the "Number" column on the left-hand side, and drag it over to the right-hand side to the "body" column. Select "Add Linker to Target Node". Now click on "Request.number" on the right-hand side and then on the three dots. Change the expression to "2 * Request.number" to implement the "doubling" logic. Finally, rename the "root" element to "result":


Finally click "OK", save the job and run it. We can test via a console that the job is working OK using a tool such as curl:
  • curl -H "Accept: application/xml" http://localhost:8088/doubleit/15
  • Response: <?xml version="1.0" encoding="UTF-8"?><result>30</result>
  • Response if we ask for JSON: {"result":30}
2) Implement a "double-it" REST client in the Studio

Now we'll design a client job for the "double-it" REST service in the Studio. Right-click on "Job Designs" and create a new job called "DoubleItRESTClient". Drag a 'tFixedFlowInput', 'tRESTClient' and two 'tLogRow' components from the palette into the central window. Link the components, sending the 'tRESTClient' "Response" to one 'tLogRow' component and the "Error" to the other:

Now click on 'tFixedFlowInput' and then 'Edit Schema'. Add a new column called "number" of type "Integer", and click "yes" to propagate the changes. In the inline table, add a value for the number. Finally, click on 'tRESTClient' and specify "http://localhost:8088/doubleit/" for the URL, and row1.number for the relative path. Keep the default HTTP Method of "GET" and "XML" for the "Accept Type":


Now save the job and run it. The service response should be displayed in the window of the run tab. In the next article, we'll look at how to secure this REST service in the Studio when deploying it to the Talend runtime container.

Friday, June 15, 2018

Securing web services using Talend's Open Studio for ESB - part V

This is the fifth article in a series on securing web services using Talend's Open Studio for ESB. So far we have seen how to design a SOAP service and client in the Studio, how to deploy them to the Talend runtime container, and how to secure them using a UsernameToken and SAML token. In addition to designing 'jobs', the Studio also offers the ability to create a 'route'. Routes leverage the capabilities and components of Apache Camel, which is a popular integration framework. In this article, we will design a route to invoke on the SAML-secured service we configured in the previous tutorial, instead of using a job.

1) Create a route to invoke on the "double-it" service

In the Studio, right-click on 'Routes' in the left-hand pane, and select 'Create Route' and create a new route called 'DoubleItClientRoute'. Select the 'cTimer', 'cSetBody', 'cSOAP' and 'cLog' components from the palette on the right-hand side and drag them into the route window from left to right. Link the components up by right clicking on each component, and selecting 'Row' and then 'Route' and left-clicking on the next component over:


Now let's configure each component in turn. The 'cTimer' component is used to start the route. You can run the route an arbitrary number of times with a specified delay, or else specify a start time to run the route. For now just enter '1' for 'Repeat' as we want to run the route once. Now click on the 'cSetBody' component. This is used to specify the Body of the request we are going to make on the remote (SOAP) service. For simplicity we will just hard-code the SOAP Body, so select 'CONSTANT' as the Language and input '"<ns2:DoubleItRequest xmlns:ns2=\"http://www.talend.org/service/\">60</ns2:DoubleItRequest>"' for the expression:


Now we will configure the 'cSOAP' component. First, deploy the SAML-secured SOAP service on the container (see previous tutorial) so that we have access to the WSDL. Double-click 'cSOAP' and enter 'http://localhost:8040/services/DoubleIt?wsdl' for the WSDL and hit the reload icon on the right-hand side and click 'Finish'. We will use the default dataformat of 'PAYLOAD' (the SOAP Body contents we set in 'cSetBody'). Select 'Use Authentication' and then pick "SAML Token". Input 'tesb' for the Username and Password values, and save the route.


2) Deploy the route to the container

Right click on the route name in the left-hand pane and select 'Build Route' to build the .kar file. In the container where the SAML-secured service should already be running, start the STS with 'tesb:start-sts', and then copy the client route .kar file into the 'deploy' folder. Consult the log in 'log/tesb.log' and you will see the successful service response as follows:


Wednesday, June 13, 2018

Combining Keycloak with the Apache CXF STS

The Apache CXF STS (Security Token Service) is a web service (both SOAP and REST are supported) that issues tokens (e.g. SAML, JWT) to authenticated users. It can also validate, renew and cancel tokens. To invoke successfully on the STS, a user must present credentials to the STS for authentication. The STS must be configured in turn to authenticate the user credentials to some backend. Another common requirement is to retrieve claims relating to the authenticated user from some backend to insert into the issued token.

In this post we will look at how the STS could be combined with Keycloak to both authenticate users and to retrieve the roles associated with a given user. Typically, Keycloak is used as an IdM for authentication using the SAML SSO or OpenId Connect protocols. However in this post we will leverage the Admin REST API.

I have created a project on github to deploy the CXF STS and Keycloak via docker here.

1) Configuring the STS

Checkout the project from github. The STS is a web application that is contained in the 'src' folder. The WSDL defines a single endpoint with a security policy that requires the user to authenticate via a WS-Security UsernameToken. The STS is configured in Spring. Essentially we define a custom 'validator' to validate the UsernameToken, as well as a custom ClaimsHandler to handle retrieving role claims from Keycloak. We also configure the STS to issue SAML tokens.

UsernameTokens are authenticated via the KeycloakUTValidator in the project source. This class is configured with the Keycloak address and realm and authenticates received tokens as follows:

Here we use the Keycloak REST API to search for the user matching the given username, using the given username and password as credentials. What the client API is actually doing behind the scenes here is to obtain an access token from Keycloak using the OAuth 2.0 resource owner password credentials grant, something that can be replicated with a tool like curl as follows:
  • curl --data "client_id=admin-cli&grant_type=password&username=admin&password=password" http://localhost:9080/auth/realms/master/protocol/openid-connect/token -v
  • curl -H "Authorization: bearer <access token>" http://localhost:9080/auth/admin/realms/master/users -H "Accept: application/json" -v
Keycloak will return an HTTP status code of 401 if authentication fails. We allow the case that Keycloak returns 403 (Forbidden), as the user may not be authorized to invoke on the admin-cli client. A better approach would be to emulate Apache Syncope and have a "users/self" endpoint to allow users to retrieve information about themselves, but I could not find an analogous endpoint in Keycloak.

Role claims are retrieved via the KeycloakRoleClaimsHandler. This uses the admin credentials to search for the (already authenticated) user, and obtains the effective "realm-level" roles to add to the claim.

2) Running the testcase in docker

First build the STS war and create a docker image for the STS as follows:
  • mvn clean install
  • docker build -t coheigea/cxf-sts-keycloak . 
This latter command just deploys the war that was built into a Tomcat docker image via this Dockerfile. Then pull the official Keycloak docker image and start both via docker-compose (see here):
  • docker pull jboss/keycloak
  • docker-compose up
This starts the STS on port 8080 and Keycloak on port 9080. Log on to the Keycloak administration console at http://localhost:9080/auth/ using the username "admin" and password "password". Click on "Roles" and add a role for a user (e.g. "employee"). Then click on "Users" and add a new user. After saving, click on "Credentials" and specify a password (unselecting "Temporary"). Then click on "Role Mappings" and select the role you created above for the user.

Now we will use SoapUI to invoke on the STS. Download it and create a new SOAP project using the WSDL of the STS (http://localhost:8080/cxf-sts-keycloak/UT?wsdl). Click on 'Issue' and select the request. We need to edit the SOAP Body of the request to instruct the STS to issue a SAML Token with a Role Claim using the standard WS-Trust parameters:

<ns:RequestSecurityToken>
     <t:TokenType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</t:TokenType>
     <t:KeyType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</t:KeyType>
     <t:RequestType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</t:RequestType>
     <t:Claims xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512" Dialect="http://schemas.xmlsoap.org/ws/2005/05/identity">
        <ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role"/>
     </t:Claims>
</ns:RequestSecurityToken>

Click in the Properties box in the lower left-hand corner and specify the username and password for the user you created in Keycloak. Finally, right click on the request and select "Add WSS UsernameToken" and hit "OK" and send the request. If the request was successful you should see the SAML Assertion issued by the STS on the right-hand side. In particular, note that the Assertion contains a number of Attributes corresponding to the roles of that particular user.


Monday, June 11, 2018

Running the Apache Kerby KDC in docker

Apache Kerby is a subproject of the Apache Directory project, and is a complete open-source KDC written entirely in Java. Apache Kerby 1.1.1 has been released recently. Last year I wrote a blog post about how to configure and launch Apache Kerby, by first obtaining the source distribution and building it using Apache Maven. In this post we will cover an alternative approach, which is to download and run a docker image I have prepared which is based on Apache Kerby 1.1.1.

The project is available on github here and the resulting docker image is available here. Note that this is not an official docker image - it is provided just for testing or experimentation purposes. First clone the github repository and either build the image from scratch or download it from dockerhub:
  • docker build . -t coheigea/kerby
 or:
  • docker pull coheigea/kerby
The docker image builds a KDC based on Apache Kerby and runs it when started. It expects a directory containing the configuration files for Kerby to be supplied as the first argument (defaulting to '/kerby-data/conf'). The github repository contains the relevant files in the 'kerby-data' directory. As well as the configuration files, this directory stores the admin keytab and a JSON file containing the default principals for the KDC.

Start the KDC by mapping the kerby-data directory to a volume on the container:
  • docker run -it -p 4000:88 -v `pwd`/kerby-data:/kerby-data coheigea/kerby
Now we can log into the docker image and create a user for our tests:
  • docker exec -it <id> bash
  • stty rows 24 columns 80 (required to run jline in docker)
  • sh bin/kadmin.sh /kerby-data/conf/ -k /kerby-data/keytabs/admin.keytab
  • Then: addprinc -pw password alice@EXAMPLE.COM
To test the KDC from outside the container you can use the MIT kinit tool. Set the KRB5_CONFIG environment variable to point to the "krb5.conf" file included in the github repository, e.g.:
  • export KRB5_CONFIG=`pwd`/krb5.conf
  • kinit alice
This will get you a ticket for "alice", which can be inspected via "klist".
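The "krb5.conf" in question looks something like the following (the realm and KDC port match the commands above; "udp_preference_limit = 1" forces TCP, which is what the "-p 4000:88" port mapping exposes by default):

[libdefaults]
    default_realm = EXAMPLE.COM
    udp_preference_limit = 1

[realms]
    EXAMPLE.COM = {
        kdc = localhost:4000
    }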

Friday, June 8, 2018

Securing web services using Talend's Open Studio for ESB - part IV

This is the fourth article in a series on securing web services using Talend's Open Studio for ESB. In the previous article, we looked at how to secure a SOAP webservice in the Talend container, by requiring the client to authenticate using a WS-Security UsernameToken. In this post we will look at an alternative means of authenticating clients using a SAML token, which the client obtains from a Security Token Service (STS) also deployed in the Talend container. This is more sophisticated than the UsernameToken approach, as we can embed claims as attributes in the SAML Assertion, thus allowing the service provider to also make authorization decisions. However, in this article we will just focus on authentication.

1) Secure the "double-it" webservice by requiring clients to authenticate

As in the previous article, first we will secure the "double-it" webservice we designed in the Studio in the first article, this time by requiring clients to authenticate using a SAML Token, which is conveyed in the security header of the request. SAML authentication can be configured for a service in the Studio by right-clicking on the "DoubleIt 0.1" Service in the left-hand menu and selecting "ESB Runtime Options". Under "ESB Service Security" select "SAML Token". Select "OK" and export the service again as detailed in the second article.

Now start the container and deploy the modified service. Note that what selecting "SAML Token" actually does in the container is to enforce the policy stored in 'etc/org.talend.esb.job.saml.policy', a WS-SecurityPolicy assertion requiring that a SAML 2.0 token, containing an X.509 certificate associated with the client (subject), be sent to the service. In addition, a Timestamp must be included in the security header of the request, and signed by the private key associated with the X.509 certificate in the Assertion.
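For illustration, a requirement of this kind can be sketched in WS-SecurityPolicy via an endorsing SAML token (a heavily simplified sketch, not the actual contents of the Talend policy file - with an endorsing supporting token, the key carried in the token must produce a signature, covering the Timestamp here):

<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
            xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
   <!-- Illustrative only: a SAML 2.0 token must be sent to the service, and the
        key it carries must endorse (sign) part of the message -->
   <sp:EndorsingSupportingTokens>
      <wsp:Policy>
         <sp:SamlToken sp:IncludeToken="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient">
            <wsp:Policy>
               <sp:WssSamlV20Token11/>
            </wsp:Policy>
         </sp:SamlToken>
      </wsp:Policy>
   </sp:EndorsingSupportingTokens>
</wsp:Policy>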

2) Update the client job to include a SAML Token in the request

Next we have to update the client job in the Studio to include a SAML Token in the request. Open the "tESBConsumer" component and select "Use Authentication", and then select the "SAML Token" authentication type. The propagation options are not required for this task - they are used when a SOAP Service is an intermediary service and wishes to get a new SAML Token "On Behalf Of" a token that it received. Enter "tesb" for the username and password values (this is one of the default users defined in 'etc/users.properties' in the container). Now save the job and build it.



3) Start the STS in the container and deploy the client job

Once the client job has been deployed to the container, it will first attempt to get a SAML Token from the STS. Various properties used by the client to communicate with the STS are defined in 'etc/org.talend.esb.job.client.sts.cfg'. The Talend runtime container ships with a fully fledged STS. Clients can obtain a SAML Token by including a username/password in the request, which the STS in turn authenticates using JAAS (see section 2 in the previous article). Start the STS in the container via:
  • tesb:start-sts
Now deploy the client job, and it should succeed, with the response message printed in the console. The log 'log/tesb.log' includes the client request and service response messages - in the client request you can see the SAML Assertion included in the security header of the message.

Tuesday, May 29, 2018

Securing web services using Talend's Open Studio for ESB - part III

This is the third article in a series on securing web services using Talend's Open Studio for ESB. In the first article, we looked at how to design and test a SOAP web service in the Studio, and how to create a client job to invoke on it. In the second article we looked at deploying the jobs in the Talend ESB runtime container. In this article, we will look at how to secure the SOAP webservice we are deploying in the container, by requiring the client to authenticate using a WS-Security UsernameToken.

1) Secure the "double-it" webservice by requiring clients to authenticate

First we will secure the "double-it" webservice we designed in the Studio in the first article, by requiring clients to authenticate using a WS-Security UsernameToken. Essentially this means that the client adds a SOAP header to the request containing username and password values, which must then be authenticated by the service. UsernameToken authentication can be configured for a service in the Studio by right-clicking on the "DoubleIt 0.1" Service in the left-hand menu and selecting "ESB Runtime Options". Under "ESB Service Security" select "Username/Password". Select "OK" and export the service again as detailed in the second article.

Now start the container and deploy the modified service. Note that what selecting "Username/Password" actually does in the container is to enforce the policy stored in 'etc/org.talend.esb.job.token.policy', a WS-SecurityPolicy assertion requiring that a UsernameToken always be sent to the service. Now deploy the client job - you will see an error in the Console along the lines of:

{http://schemas.xmlsoap.org/soap/envelope/}Server|These policy alternatives can not be satisfied:
{http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702}SupportingTokens
{http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702}UsernameToken

This is due to the fact that we have not yet configured the client job to send a UsernameToken in the request.
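For reference, a policy of this kind can be expressed as follows (a simplified sketch, not necessarily the exact contents of the Talend policy file, but matching the SupportingTokens/UsernameToken assertions named in the error above):

<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
            xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
   <sp:SupportingTokens>
      <wsp:Policy>
         <!-- the client must always include a UsernameToken in the request -->
         <sp:UsernameToken sp:IncludeToken="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient"/>
      </wsp:Policy>
   </sp:SupportingTokens>
</wsp:Policy>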

2) How authentication works in the container

So far we have required clients to authenticate to the service, but we have not said anything about how the service actually authenticates the credentials it receives. Apache Karaf uses JAAS realms to handle authentication and authorization. Typing "jaas:realm-list" in the container shows the list of JAAS realms that are installed.

In the output we can see that the (default) JAAS realm of "karaf" has been configured with a number of JAAS Login Modules. In particular, at index 1, the PropertiesLoginModule authenticates users against entries in 'etc/users.properties'. This file contains entries that map a username to a password, as well as an optional list of groups; it also contains entries mapping groups to roles. In this example though we are solely concerned with authentication. The service will extract the username and password from the security header of the request and compare them to the values in 'etc/users.properties'. If there is a match, the user is deemed to be authenticated and the request can proceed.
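Entries in 'etc/users.properties' take a form along these lines (illustrative values, not the shipped defaults):

# user = password [, group | role] ...
tesb = tesb, _g_:admingroup
# a group entry ("_g_:" prefix, with the colon escaped) maps the group to roles
_g_\:admingroup = group, admin, manager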

In a real-world deployment, we can authenticate users stored in a database or in an LDAP directory server, by configuring a JAAS Realm with the appropriate LoginModules (see the Karaf security guide for a list of available Login Modules).

3) Update the client job to include a UsernameToken

Finally we have to update the client job to include a UsernameToken in the Studio. Open the "tESBConsumer" component and select "Use Authentication", and then select the "Username Token" authentication type. Enter "tesb" for the username and password values (this is one of the default users defined in 'etc/users.properties' in the container).



Now save the job, then build and deploy it as per the second article. The job request should succeed, with the response message printed in the console. Examining 'log/tesb.log' it is possible to see what the client request looks like:
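The security header of the request will contain a UsernameToken along these lines (an illustrative sketch, not the exact log output):

<soap:Header xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
   <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <wsse:UsernameToken>
         <wsse:Username>tesb</wsse:Username>
         <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">tesb</wsse:Password>
      </wsse:UsernameToken>
   </wsse:Security>
</soap:Header>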

In the next article we'll look at authentication using SAML tokens.

Monday, May 21, 2018

Securing web services using Talend's Open Studio for ESB - part II

This is the second article in a series on securing web services using Talend's Open Studio for ESB. In the first article, we looked at how Talend's Open Studio for ESB can be used to design and test a SOAP web service, and also how we can create a client job that invokes on this service. In this article, we will show how to deploy the service and client we created previously in the Talend ESB runtime container.

1) The Talend ESB runtime container

When we downloaded Talend Open Studio for ESB (see the first article), we launched the Talend Studio via the "Studio" directory to design and test our "double it" SOAP service. However, the ability to "Run" the SOAP Service in the Studio is only suitable for testing the design of the service. Once we are ready to deploy a service or client we have created in the Studio, we need a suitable runtime container, which is available in the "Runtime_ESBSE" directory of the Talend Open Studio for ESB distribution. The runtime container in question is a powerful, enterprise-ready container based on Apache Karaf. We can start it from the "Runtime_ESBSE/container" directory via "bin/trun".

The Talend ESB runtime starts with a set of default "bundles" (which can be viewed with "la"). All of the libraries that we require will be started automatically, so no further work is needed here.

2) Export the service and client job from the Studio

To deploy the SOAP "double it" service and client job, we need to export them from the Studio. Right-click on the "Double It" service in the left-hand menu and first select "ESB Runtime Options", ticking "Log Messages" so that we can see the input/output messages of the service when we look at the logs. Then right-click again on "Double It", select "Export Service", and save the resulting .kar file locally.

Before exporting the client job, we need to make one minor change. The default port that the Studio used for the "double it" SOAP service (8090) is different to that of Karaf (8040). Click on "tESBConsumer" and change the port number in the address to "8040". Then, after saving, right-click on the "double it" client job and select "Build job". Under "Build Type" select "OSGI bundle for ESB", and click "Finish" to export the job.

3) Deploy the service and client jobs to the Runtime Container

Finally, we need to deploy the service and client jobs to the Runtime Container. First, copy the service .kar file into "Runtime_ESBSE/container/deploy". This will automatically deploy the service in Karaf (something that can be verified by running "la" in the console - you should see the service as the last bundle in the list). Then copy the client jar into the "deploy" directory as well. The response will be output in the console window (due to the tLogRow component), and the full message can be seen in the server logs ("log/tesb.log").