Open Source Security - Colm O hEigeartaigh

<h2 style="text-align: left;">Improving license detection when generating SBOMs (2024-03-07)</h2><p>I blogged last year about <a href="https://coheigea.blogspot.com/2023/10/publishing-sboms-for-open-source.html">generating a Software Bill of Material (SBOM)</a> for an <a href="https://maven.apache.org/">Apache Maven</a> project using the cyclonedx-maven-plugin. It's ideal to generate an SBOM at build time in this way, as you have access to an accurate dependency graph (from Maven in this case). However, sometimes you want to create an SBOM from a third-party binary artifact, such as a jar, zip or docker image. <a href="https://github.com/anchore/syft">Anchore Syft</a> is well suited to this task, but I found that it generated somewhat limited licensing information for jars. In this post I'll examine a series of contributions I made to Syft towards the end of 2023 that greatly improved this. As an aside, I found the Syft community to be very helpful and responsive, so it was an enjoyable process!<br /></p><h4 style="text-align: left;">Improvements Contributed<br /></h4><div><p>The initial Syft release I looked at (Syft v0.92.0) only detected a license for a jar if the MANIFEST.MF contained in the jar had an OSGi Bundle-License tag detailing the license used.
Here are the improvements I contributed and the versions they were released in:</p><ul style="text-align: left;"><li>v0.93.0: <a href="https://github.com/anchore/syft/pull/2115">Added support</a> to get the license if specified in a pom.xml included in the jar.</li><li>v0.94.0: <a href="https://github.com/anchore/syft/pull/2213">Added support</a> to read a license file in the root directory or in META-INF and <a href="https://github.com/anchore/syft/pull/2227">added support</a> for different common license filenames.</li><li>v0.95.0: <a href="https://github.com/anchore/syft/pull/2235">Perform</a> case-insensitive matching on Java license files, go to Maven Central to <a href="https://github.com/anchore/syft/pull/2228">find a license defined in a parent pom</a>, and <a href="https://github.com/anchore/syft/pull/2231">parse multiple poms in a jar</a>. Also <a href="https://github.com/anchore/syft/pull/2274">added recursive support</a> to find a license from parent poms in Maven Central.</li><li>v0.96.0: Also <a href="https://github.com/anchore/syft/pull/2302">check Maven Central for licenses</a> defined in parent poms for embedded dependencies.</li><li>v0.97.0: If there is no pom.xml or pom.properties, <a href="https://github.com/anchore/syft/pull/2295">fall back to using the Java metadata</a> to find the correct artifact in Maven Central.<br /></li></ul><p>An additional improvement (not by me) was made in a subsequent release (v0.103.1) to fix <a href="https://github.com/anchore/syft/issues/2563">a bug with underscores in artifacts</a> that resulted in licenses not being found.</p><p>One point to note is that going to Maven Central to find poms with license information is not enabled by default. This is what I have in a local .syft.yaml:</p><p>java:<br />  maven-url: "https://repo1.maven.org/maven2"<br />  max-parent-recursive-depth: 8<br />  use-network: true<br /></p><h4 style="text-align: left;">Testcase</h4><p style="text-align: left;">As a test case, I chose <a href="https://spark.apache.org/">Apache Spark</a> as an example of a project containing a large number of third-party (Java-based) dependencies, specifically the distribution spark-3.5.1-bin-hadoop3.tgz. Using Syft v0.92.0 as a starting point, I generated a cyclonedx-json SBOM using Syft via:</p><ul style="text-align: left;"><li style="text-align: left;">syft packages ./spark-3.5.1-bin-hadoop3.tgz -o cyclonedx-json > spark.json (note: newer versions of Syft use "scan" instead of "packages")</li></ul><p>Then I used jq to generate a CSV consisting of the dependencies found in the SBOM and the license detected for each, or "unknown-license" if no license was found:</p><ul style="text-align: left;"><li>jq -r '.components[] | .group + "/" + .name + ":" + .version + "," + try(.licenses[] | .license? | flatten | join(" ")) // .group + "/" + .name + ":" + .version + "," + .licenses?[]?.expression // .group + "/" + .name + ":" + .version + ",unknown-license"' spark.json</li></ul><h4 style="text-align: left;">Results</h4><p style="text-align: left;">For the Apache Spark distribution detailed above, these are the results:</p>
<table>
<tbody><tr>
<th>Syft version</th>
<th>Dependencies detected</th>
<th>Unknown licenses</th>
<th>% licenses detected</th>
</tr>
<tr>
<td>v0.92.0</td>
<td>440</td>
<td>306</td>
<td>30.4%</td>
</tr>
<tr>
<td>v0.93.0</td>
<td>442</td>
<td>245</td>
<td>44.5%</td>
</tr>
<tr>
<td>v0.94.0</td>
<td>470</td>
<td>203</td>
<td>56.8%</td>
</tr>
<tr>
<td>v0.95.0</td>
<td>444</td>
<td>157</td>
<td>64.6%</td>
</tr>
<tr>
<td>v0.96.0</td>
<td>468</td>
<td>32</td>
<td>93.1%</td>
</tr>
<tr>
<td>v0.97.0</td>
<td>468</td>
<td>27</td>
<td>94.2%</td>
</tr>
<tr>
<td>v0.103.1</td>
<td>467</td>
<td>11</td>
<td>97.4%</td>
</tr>
</tbody></table>
<p>Going from less than a third of dependencies getting their license detected correctly to almost 100% is pretty good! The remaining 11 dependencies don't contain a pom.xml, a pom.properties, or any other metadata that would allow Syft to find the correct pom.xml in Maven Central. Possibly some improvements could be made by looking at the package names to try to find the correct path in Maven Central.<br /></p>
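<p>As an aside, the jq query used above can be approximated in a few lines of Python, which may be easier to tweak. This is a minimal sketch using invented sample data (not taken from the real Spark SBOM); it shows how each CSV row is derived from the CycloneDX "components" array, falling back to "unknown-license" when no license information is present:</p>

```python
import json

# Invented sample CycloneDX data for illustration only.
SAMPLE_SBOM = json.dumps({
    "components": [
        {"group": "org.apache.commons", "name": "commons-lang3", "version": "3.12.0",
         "licenses": [{"license": {"id": "Apache-2.0"}}]},
        {"group": "com.example", "name": "mystery-lib", "version": "1.0"},
    ]
})

def license_rows(sbom_json: str):
    """Emit 'group/name:version,license' rows, mirroring the jq query."""
    rows = []
    for comp in json.loads(sbom_json).get("components", []):
        coords = f'{comp.get("group", "")}/{comp["name"]}:{comp.get("version", "")}'
        names = []
        for entry in comp.get("licenses", []):
            # Each entry is either a {"license": {...}} object or an SPDX expression.
            lic = entry.get("license")
            if lic:
                names.append(lic.get("id") or lic.get("name", ""))
            elif entry.get("expression"):
                names.append(entry["expression"])
        rows.append(f'{coords},{" ".join(names) if names else "unknown-license"}')
    return rows

for row in license_rows(SAMPLE_SBOM):
    print(row)
```

<p>Running this against a real Syft-generated SBOM is then just a matter of reading the file contents instead of the sample string.</p>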
</div>

<h2 style="text-align: left;">CVE-2023-44483 in Apache Santuario - XML Security for Java (2023-10-23)</h2><p>A new CVE has been published for the recent <a href="https://santuario.apache.org/">Apache Santuario</a> - XML Security for Java releases (4.0.0, 3.0.3, 2.3.4 and 2.2.6):</p><ul><li><a href="https://santuario.apache.org/secadv.data/CVE-2023-44483.txt.asc?version=1&modificationDate=1697782758000&api=v2">CVE-2023-44483</a>: Apache Santuario: Private Key disclosure in debug-log output</li></ul><p>"A private key may be disclosed in log files when generating an XML Signature and logging with debug level is enabled. Users are recommended to upgrade to version 2.2.6, 2.3.4, or 3.0.3, which fixes this issue."<br /></p>

<h2 style="text-align: left;">Publishing SBOMs for open-source projects (2023-10-10)</h2><p>Software Bill of Materials (SBOMs) are a recent hot topic, in part due to an <a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/">executive order</a> by the US government which references making an SBOM available on a public site. Making a signed SBOM available publicly allows downstream projects to consume the SBOM automatically using tooling to list dependency names and versions, what licenses they use, what vulnerabilities are known about them, etc.
</p><p>When looking at a recent <a href="https://commons.apache.org/proper/commons-codec/">Apache commons-codec</a> version in Maven Central, I noticed that it was <a href="https://repo.maven.apache.org/maven2/commons-codec/commons-codec/1.16.0/">publishing signed SBOMs</a> in both <a href="https://cyclonedx.org/">CycloneDX</a> and <a href="https://spdx.dev/">SPDX</a> formats. Inspired by this, I've started to add similar functionality to the other ASF projects I contribute to, starting with CycloneDX support. It's very easy to do this for Java-based projects by just adding Maven plugins (see <a href="https://github.com/apache/santuario-xml-security-java/pull/202/files">for example</a>).<br /></p><p>Last month, version 4.0.0-M1 of the Apache XML Security for Java library was released. The <a href="https://repo1.maven.org/maven2/org/apache/santuario/xmlsec/4.0.0-M1/xmlsec-4.0.0-M1-cyclonedx.json">SBOM</a> is available on Maven Central along with the released artifacts.</p><p>Let's see what we can do with this SBOM by hand. First, let's download it and the signature:</p><ul style="text-align: left;"><li>wget https://repo1.maven.org/maven2/org/apache/santuario/xmlsec/4.0.0-M1/xmlsec-4.0.0-M1-cyclonedx.json</li><li>wget https://repo1.maven.org/maven2/org/apache/santuario/xmlsec/4.0.0-M1/xmlsec-4.0.0-M1-cyclonedx.json.asc</li></ul><p>Validate the signature (using the <a href="https://repo1.maven.org/maven2/org/apache/santuario/KEYS">KEYS</a> file):<br /></p><ul style="text-align: left;"><li>gpg --verify xmlsec-4.0.0-M1-cyclonedx.json.asc</li></ul><p>Now we're ready to answer some questions about the library.</p><p>What library names, versions and licenses do the third-party dependencies of Apache XML Security for Java 4.0.0-M1 have?
We can extract this information using the jq tool and some hacking:</p><ul style="text-align: left;"><li>jq -r '.components[] | .group + "/" + .name + ":" + .version + "," + (.licenses?[]?.license | flatten | join(" "))' xmlsec-4.0.0-M1-cyclonedx.json<br /></li></ul><script src="https://gist.github.com/coheigea/4420847c44efd72df3a80d00ee8c3ac7.js"></script><p>Next we might want to know if any of these dependencies have publicly known vulnerabilities. For this we can use the excellent <a href="https://github.com/anchore/grype">Grype</a> tool, which parses the SBOM directly:</p><ul style="text-align: left;"><li>grype sbom:./xmlsec-4.0.0-M1-cyclonedx.json</li></ul><p>For now at least, it outputs that no vulnerabilities were found. So by using the SBOM with some third-party open-source tools we can find out what the third-party dependencies are, reassure ourselves that they are available under a business-friendly open-source license, and ensure that there are no known vulnerabilities associated with them. Hopefully more open-source projects will publish SBOMs for their releases to make answering these questions easier.<br /></p>

<h2 style="text-align: left;">Open Source Software Composition Analysis (2023-04-14)</h2><p>Software Composition Analysis (SCA) is the process of figuring out which third-party dependencies are used in your project.
It's an essential part of the software security process as it helps you to answer questions like:</p><ul style="text-align: left;"><li>Does my project contain third-party dependencies with known vulnerabilities (CVEs)?</li><li>Does my project contain third-party dependencies with risky licenses?</li><li>Does my project comply with all legal requirements imposed by the upstream projects?</li></ul><p>In this post we'll look at some popular open-source SCA options. It's not intended to be comprehensive; let me know if I missed anything! Adding one or more of these projects to your CI/CD process will really improve your supply chain security.<br /></p><h3 style="text-align: left;">GitHub Dependabot</h3><p style="text-align: left;">If your project is hosted on GitHub, then the first port of call for SCA is to enable <a href="https://docs.github.com/en/code-security/dependabot">GitHub Dependabot</a>. You have the option to just enable alerts, which let you know if your dependencies have known CVEs, and also to have Dependabot automatically create pull requests to upgrade the dependencies in question to fix the vulnerability. Adding CI using GitHub Actions to this process to verify that the updates don't break any of the tests/build means fixing CVEs is a straightforward process. Dependabot has support for a wide range of software ecosystems.</p><p style="text-align: left;">GitHub recently added support as well to download an <a href="https://spdx.dev/">SPDX</a> SBOM for a GitHub repository, e.g. for Apache CXF: https://github.com/apache/cxf/dependency-graph/sbom.<br /></p><h3 style="text-align: left;">OWASP Dependency-Check</h3><p style="text-align: left;"><a href="https://owasp.org/www-project-dependency-check/">OWASP Dependency-Check</a> is another tool that can help you find CVEs in your dependencies. It's useful as an alternative to Dependabot if you don't have access to the security tab of a GitHub project, or if Dependabot is otherwise not enabled.
You can run it on a <a href="https://maven.apache.org/">Maven</a> project via:</p><ul style="text-align: left;"><li>mvn org.owasp:dependency-check-maven:check</li></ul><h3 style="text-align: left;">Trivy</h3><p style="text-align: left;">Aqua <a href="https://github.com/aquasecurity/trivy">Trivy</a> is a really useful tool for SCA as it can help with a wide range of scenarios:</p><ul style="text-align: left;"><li>Scan a docker image for CVEs/secrets: trivy image tomcat:9.0<ul><li>Exclude secret scanning: trivy --security-checks vuln image tomcat:9.0</li><li>Exclude OS level CVEs: trivy --security-checks vuln --vuln-type library image tomcat:9.0</li></ul></li><li>Scan a GitHub repository: trivy repository https://github.com/apache/cxf</li><li>Scan the filesystem at the current working directory: trivy fs .</li></ul><h3 style="text-align: left;">Syft</h3><p style="text-align: left;">Anchore <a href="https://github.com/anchore/syft">Syft</a> is a tool which can help you with generating an SBOM from an image or filesystem:</p><ul style="text-align: left;"><li>Generate a <a href="https://cyclonedx.org/">CycloneDX</a> SBOM from a docker image: syft -o cyclonedx-json tomcat:9.0</li><li>Generate an SBOM from a war file: syft packages ./fedizhelloworld.war</li></ul><h3 style="text-align: left;">Grype</h3><p style="text-align: left;">Anchore <a href="https://github.com/anchore/grype">Grype</a> is another super-useful tool that works well with Syft:</p><ul style="text-align: left;"><li>Scan the current working directory for CVEs: grype dir:.</li><li>Scan a docker image for CVEs: grype tomcat:9.0</li><li>Scan a CycloneDX SBOM produced by Syft for CVEs: grype sbom:./sbom.json<br /></li></ul><h3 style="text-align: left;">OSV-Scanner</h3><p style="text-align: left;">Yet another tool is Google's <a href="https://github.com/google/osv-scanner">OSV-Scanner</a>:<br /></p><ul style="text-align: left;"><li>Scan a docker image for CVEs: osv-scanner --docker tomcat:9.0</li><li>Scan the local filesystem for CVEs: osv-scanner -r .<br /></li></ul>

<h2 style="text-align: left;">OpenSSF Allstar (2023-03-16)</h2><p>In the previous <a href="https://coheigea.blogspot.com/2023/02/openssf-scorecard.html">blog post</a>, I looked at how to use <a href="https://github.com/ossf/scorecard">OpenSSF Scorecard</a> to improve the security posture of your open-source GitHub projects. This is a really useful tool when working at the level of individual repositories. However, what if you want to apply security policies to many repositories in a GitHub organization? This is where <a href="https://github.com/ossf/allstar">OpenSSF Allstar</a> comes in.</p><h2 style="text-align: left;"><span style="font-weight: normal;">Getting Started</span></h2><p style="text-align: left;">Detailed installation instructions are available <a href="https://github.com/ossf/allstar#installation-options">here</a>. The easiest way of getting started is to install the <a href="https://github.com/apps/allstar-app">OpenSSF Allstar GitHub app</a> in your organization. However, you may not wish to grant access to your internal/private repositories to this instance, in which case it's pretty easy to <a href="https://github.com/ossf/allstar/blob/main/operator.md">manually install it</a>.</p><h2 style="text-align: left;"><span style="font-weight: normal;">General Configuration</span></h2><p style="text-align: left;"><span style="font-weight: normal;">Allstar reads configuration from a GitHub repo called ".allstar" in your GitHub organization. Here, an "allstar.yaml" file defines the general configuration for the tool, e.g.:</span></p><p>
<script src="https://gist.github.com/coheigea/08a2f1b812ab76d87296a9f667dd2c8c.js"></script>This configuration uses the <a href="https://github.com/ossf/allstar#org-level-options">"Opt out" strategy</a>, meaning that all repositories in the organization are included unless you explicitly opt them out. Archived and forked repos are excluded, as you may not care about applying security policies to these types of repositories. Finally, the configuration blocks individual repositories from overriding the Allstar configuration.</p><h2 style="text-align: left;"><span style="font-weight: normal;">Policies</span></h2><p><span style="font-weight: normal;">Allstar policies are added by checking in the corresponding yaml file to the .allstar repository. Each policy allows you to define whether to just log the issue or whether to create a GitHub issue for it in the repository where a policy violation was found. GitHub issues are labelled with "allstar", making it easy to search for them across all repositories in your organization.<br /></span></p><p><span style="font-weight: normal;">Here are some of the policies Allstar currently supports:</span></p><ul style="text-align: left;"><li><span style="font-weight: normal;"><a href="https://github.com/ossf/allstar#binary-artifacts">binary_artifacts.yaml</a>: Enforce that binary artifacts aren't checked in to source control.
</span></li><li><span style="font-weight: normal;"><a href="https://github.com/ossf/allstar#branch-protection">branch_protection.yaml</a>: Enforce branch protection requirements on repos, for example:<br /></span><ul><li><span style="font-weight: normal;">Default branches are covered by branch protection.</span><span style="font-weight: normal;"> </span></li><li><span style="font-weight: normal;">Approval is required for pull requests</span><span style="font-weight: normal;"> </span></li><li><span style="font-weight: normal;">Block force pushes</span><span style="font-weight: normal;"> </span></li><li><span style="font-weight: normal;">Require the branch is up to date before merging</span></li></ul></li><li><span style="font-weight: normal;"><a href="https://github.com/ossf/allstar#dangerous-workflow">dangerous_workflow.yaml</a>: Flag dangerous things in github actions workflows.<br /></span></li><li><span style="font-weight: normal;"><a href="https://github.com/ossf/allstar#outside-collaborators">outside.yaml</a>: Enforce that outside collaborators can't be an admin on a repository. </span></li><li><span style="font-weight: normal;"><a href="https://github.com/ossf/allstar#securitymd">security.yaml</a>: Enforce that repositories have a security policy. I use it with "optOutPrivateRepos: true" to only apply this policy to public repos. This helps to let external users of your software know how to report security issues to the project. 
</span></li></ul><h2 style="text-align: left;"><span style="font-weight: normal;">Contributions</span></h2><p style="text-align: left;"><span style="font-weight: normal;">I've found Allstar pretty useful and submitted a few contributions to it in the spirit of open source, which were included in the recent v3.0 release:</span></p><ul style="text-align: left;"><li><span style="font-weight: normal;">Fixed an issue with repo-level exemptions for the binary artifacts policy: <a href="https://github.com/ossf/allstar/pull/341">https://github.com/ossf/allstar/pull/341</a><br /></span></li><li><span style="font-weight: normal;">Added support to opt out forked repositories: <a href="https://github.com/ossf/allstar/pull/342">https://github.com/ossf/allstar/pull/342</a></span></li><li><span style="font-weight: normal;">Added support to require code owner reviews for the branch protection policy: <a href="https://github.com/ossf/allstar/pull/343">https://github.com/ossf/allstar/pull/343</a></span></li><li><span style="font-weight: normal;">Skipped policy evaluation if the repository wasn't enabled: <a href="https://github.com/ossf/allstar/pull/355">https://github.com/ossf/allstar/pull/355</a><br /></span></li></ul>

<h2 style="text-align: left;">OpenSSF Scorecard (2023-02-21)</h2><p><a href="https://github.com/ossf/scorecard">OpenSSF Scorecard</a> is a tool that assesses your project against a number of security best practices and assigns a score (out of 10). It is a really useful thing to run on any open-source project you might contribute to, to try to improve the overall security posture of the project, or even to assess how secure a third-party project is that you might want to use.
In this post I'll describe how I improved the security posture of a number of ASF projects I contribute to using OpenSSF Scorecard.</p><h2 style="text-align: left;">Getting Started</h2><p style="text-align: left;">The first step is to install the OpenSSF Scorecard GitHub Action. This can be done in the GitHub dashboard, by going to "Actions", then "New Workflow" and searching for "OpenSSF Scorecard". Once this is committed to source control and runs successfully, the findings appear in the GitHub dashboard under "Security" and then "Code scanning". After the first run, you can add a Scorecard badge to the README of your project to display the current score. For example, <a href="https://api.securityscorecards.dev/projects/github.com/apache/santuario-xml-security-java/badge">for Apache Santuario</a>.</p><h2 style="text-align: left;">Improving the score</h2><p style="text-align: left;">After doing the initial run to get the base score, it's time to try to improve the score a bit. Here are some of the actions I performed:</p><ul style="text-align: left;"><li>Enable dependabot. This involves adding dependabot.yml (<a href="https://github.com/apache/santuario-xml-security-java/blob/main/.github/dependabot.yml">for example</a>) to your project to automatically create PRs for updated dependencies. As in the example, it should cover both the package ecosystem of the project (e.g. Maven) as well as GitHub Actions, to keep any GitHub actions up to date as well.</li><li>Automated builds. Any pull request should have the full suite of project tests run on it before being committed. I made sure that all of the projects had Jenkins projects set up to build both maintained branches whenever new commits were made, as well as dedicated jobs to run on PRs. 
Note that at the ASF, the dependabot user needs to be <a href="https://cwiki.apache.org/confluence/display/INFRA/Git+-+.asf.yaml+features#Git.asf.yamlfeatures-JenkinsPRwhitelisting">explicitly allow-listed</a> in a .asf.yaml file to automatically run Jenkins jobs on submitted PRs. The combination of dependabot and automated builds makes it easy to have confidence in automatically updating your project dependencies, assuming a good test-suite.</li><li>Adding CodeQL (and fixing the findings). CodeQL is a SAST tool that can be run on your project via a GitHub action by searching for "CodeQL". It should be run on the maintained branches of the project, as well as on any pull requests for the maintained branches.</li><li>Adding SECURITY.md. A SECURITY.md (<a href="https://github.com/apache/santuario-xml-security-java/blob/main/SECURITY.md">for example</a>) should be added to source control to describe the supported versions of the project, and how to submit security issues.</li><li>Pin GitHub action commits. It's best practice to pin GitHub action commits so that new updates don't break your project or even introduce a security regression. <a href="https://app.stepsecurity.io/securerepo">https://app.stepsecurity.io/securerepo</a> can be used as a tool to analyse the GitHub actions of your project and to create pull requests with the correct versions pinned. Dependabot is then clever enough to be able to update your GitHub actions based on the pinned commit.</li><li>Adding OpenSSF Best Practices Badge. 
<a href="https://bestpractices.coreinfrastructure.org/en">https://bestpractices.coreinfrastructure.org/en</a> allows you to obtain a best practices badge for your project and to embed it in the README.<br /></li></ul><h2 style="text-align: left;">ASF Projects</h2><p style="text-align: left;">Here are some of the ASF projects I applied the above to, and their current OpenSSF Scorecard result at the time of writing:</p><ul style="text-align: left;"><li><a href="https://github.com/apache/santuario-xml-security-java">Apache Santuario</a> - 8.3</li><li><a href="https://github.com/apache/ws-wss4j">Apache WSS4J</a> - 9.1</li><li><a href="https://github.com/apache/cxf">Apache CXF</a> - 7.3</li><li><a href="https://github.com/apache/cxf-fediz">Apache CXF Fediz</a> - 8.2</li><li><a href="https://github.com/apache/directory-kerby">Apache Directory Kerby</a> - 8.1</li></ul><p>Remaining findings that would improve the score are as follows:<br /></p><ul style="text-align: left;"><li>No fuzzing. <a href="https://google.github.io/oss-fuzz/">https://google.github.io/oss-fuzz/</a> could be used to fuzz the projects.</li><li>No branch protection. Branch protection is not enabled on these projects, as traditionally we have followed a CTR (commit-then-review) approach to development. OpenSSF Scorecard also penalises committing directly to the main branch without an approved PR, so adding branch protection would greatly improve the score of all projects above.</li><li>No packaging. OpenSSF Scorecard's packaging check doesn't support Maven Central, which is where the releases of all the above projects go.</li><li>No signed releases.
Again, OpenSSF Scorecard doesn't check Maven Central for signed releases.<br /></li></ul>

<h2 style="text-align: left;">New Apache CXF releases and CVEs published (2022-12-14)</h2><p>Apache <a href="https://cxf.apache.org/">CXF</a> has <a href="https://cxf.apache.org/download.html">released</a> versions 3.5.5 and 3.4.10. Notable security upgrades in these releases include picking up a fix for <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-40152">CVE-2022-40152</a> in Woodstox, and a fix for <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-40150">CVE-2022-40150</a> in Jettison. In addition, two new CVEs are <a href="https://cxf.apache.org/security-advisories.html">published</a> for issues found directly in Apache CXF itself:</p><ul style="text-align: left;"><li><a href="https://cxf.apache.org/security-advisories.data/CVE-2022-46363.txt?version=1&modificationDate=1670942001000&api=v2">CVE-2022-46363</a>: Apache CXF directory listing / code exfiltration. A vulnerability in Apache CXF before versions 3.5.5 and 3.4.10 allows an attacker to perform a remote directory listing or code exfiltration. The vulnerability only applies when the CXFServlet is configured with both the static-resources-list and redirect-query-check attributes. These attributes are not supposed to be used together, and so the vulnerability can only arise if the CXF service is misconfigured.</li><li><a href="https://cxf.apache.org/security-advisories.data/CVE-2022-46364.txt?version=1&modificationDate=1670944472739&api=v2">CVE-2022-46364</a>: Apache CXF SSRF Vulnerability.
An SSRF vulnerability in parsing the href attribute of XOP:Include in MTOM requests in versions of Apache CXF before 3.5.5 and 3.4.10 allows an attacker to perform SSRF-style attacks on web services that take at least one parameter of any type.</li></ul>Thanks to thanat0s from Beijin Qihoo 360 adlab for reporting both issues to the project. The first issue is not really applicable in practice as it only arises on a misconfiguration. For the second issue, we restricted following MTOM URLs only to message attachments by default. It can be controlled via a new property "org.apache.cxf.attachment.xop.follow.urls" (which of course defaults to false).<br />

<h2 style="text-align: left;">New CVE (CVE-2021-40690) released for Apache Santuario - XML Security for Java (2021-09-20)</h2><p>A new CVE has been released for Apache <a href="http://santuario.apache.org/">Santuario</a> - XML Security for Java which is fixed in the latest 2.2.3 and 2.1.7 releases:</p><ul style="text-align: left;"><li>Bypass of the secureValidation property (CVE-2021-40690) - All versions of Apache Santuario - XML Security for Java prior to 2.2.3 and 2.1.7 are vulnerable to an issue where the "secureValidation" property is not passed correctly when creating a KeyInfo from a KeyInfoReference element. This allows an attacker to abuse an XPath Transform to extract any local .xml files in a RetrievalMethod element.</li></ul><p>As part of this fix we do not allow unsigned References to "http" or "file" URIs any more.
This is controlled by a new system property:</p><ul style="text-align: left;"><li>org.apache.xml.security.allowUnsafeResourceResolving</li></ul><p>The next major release (2.3.0) <a href="https://issues.apache.org/jira/browse/SANTUARIO-573">won't support</a> "http" or "file" URIs by default even when they are signed; it will be necessary to manually add the ResourceResolvers instead (<a href="https://github.com/apache/santuario-xml-security-java/blob/861798760e2a52f7d25d5d208a9006129d73a03b/src/test/java/javax/xml/crypto/test/dsig/BaltimoreIaik2Test.java#L48">for example</a>).<br /></p><p>An important point is to make sure that you are setting the "secure validation" property to "true" in your project. We have decided for the next major release (2.3.0) to enable the "secure validation" property <a href="https://issues.apache.org/jira/browse/SANTUARIO-574">by default</a>.</p><p>We would like to thank An Trinh for alerting us to this security issue.<br /></p>

<h2 style="text-align: left;">Configuring Kerberos for Kafka in Talend Open Studio for ESB (2020-06-29)</h2><p>A few years back I wrote a <a href="http://coheigea.blogspot.com/2017/05/configuring-kerberos-for-kafka-in.html">blog post</a> about how to create a job in Talend Open Studio for Big Data to read data from an Apache Kafka topic using kerberos. This job made use of the "tKafkaConnection" and "tKafkaInput" components. In Talend Open Studio for ESB, there is a component based on Apache <a href="https://camel.apache.org/">Camel</a> called "cKafka" that can also be used for the same purpose, but configuring it with kerberos is slightly different. In this post, we will show how to use the cKafka component in Talend Open Studio for ESB to read from a Kafka topic using kerberos.<br />
<br />
<b>1) Kafka setup</b><br />
<br />
Follow a <a href="http://coheigea.blogspot.ie/2017/05/securing-apache-kafka-with-kerberos.html">previous tutorial</a>
to set up an Apache Kerby-based KDC testcase and to configure Apache
Kafka to require kerberos for authentication. Kafka 2.5.0 was used for the purpose of this tutorial. Create a "test" topic and
write some data to it, and verify with the command-line consumer that
the data can be read correctly.<br />
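For reference, reading from the topic with the command-line consumer over kerberos requires a client configuration file. The property names below are standard Kafka client settings, but the bootstrap address and file name are assumptions - use the values from your own setup:

```
# client.properties (hypothetical values - adjust to your setup)
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka

# Then, for example:
# bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
#     --topic test --from-beginning --consumer.config client.properties
```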
<br />
<b>2) Download Talend Open Studio for ESB and create a route</b> <br />
<br />
Now we will <a href="https://www.talend.com/products/talend-open-studio/">download</a>
Talend Open Studio for ESB (7.3.1 was used for the purposes of
this tutorial). Unzip the file when it is downloaded and then start the
Studio using one of the platform-specific scripts. It will prompt you to
download some additional dependencies and to accept the licenses. Right click on "Routes" and select "Create Route", entering a name for the route.<br />
<br />
In the search bar under "Palette" on the right hand side enter "kafka"
and hit enter. Drag the "cKafka" component that should appear into the route designer. Next find the "cLog" component under "Miscellaneous" and drag it to the right of the "cKafka" component. Right-click the "cKafka" component, select "Row / Route", and connect the resulting arrow to the "cLog" component.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiptoglJ1oMJHRVMm-LZli2TNbbiqUqKl_QeInec66k-564D2RqEQH2g3L3ruAHaNxmjnGeyY4w_dTgceM-T_0XB32z4J768GUX116besoJiYhG9a7VsTkwa7ImmMTbaXlJnBNaqpIEgbRo/s1600/route1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="159" data-original-width="397" height="160" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiptoglJ1oMJHRVMm-LZli2TNbbiqUqKl_QeInec66k-564D2RqEQH2g3L3ruAHaNxmjnGeyY4w_dTgceM-T_0XB32z4J768GUX116besoJiYhG9a7VsTkwa7ImmMTbaXlJnBNaqpIEgbRo/s400/route1.png" width="400" /></a><b> </b><br />
<b>3) Configure the components</b><br />
<br />Now let's configure the individual components. Double-click on the "cKafka" component and enter "test" for the topic. Next, select "Advanced Settings" and scroll down to the kerberos configuration. For "Kerberos Service Name" enter "kafka". Then for "Security Protocol" select "SASL over Plaintext":<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg98o-RbfrdDf40NAz35cKiVjj5jr-zso8uAFRrScBhqgLmjgCUOoNbwntstRUUpf1GNfWO_kjAP9nVeBLtz_1Tm6whdqFtr0zESCvGrKxhTQ-ynzlOfwX5lU1BzRQeAQ6wl_i9DzSOo6Yq/s1600/route3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="102" data-original-width="364" height="89" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg98o-RbfrdDf40NAz35cKiVjj5jr-zso8uAFRrScBhqgLmjgCUOoNbwntstRUUpf1GNfWO_kjAP9nVeBLtz_1Tm6whdqFtr0zESCvGrKxhTQ-ynzlOfwX5lU1BzRQeAQ6wl_i9DzSOo6Yq/s320/route3.png" width="320" /></a></div>
<br />
Next click on the "Run" tab and go to "Advanced Settings". Under "JVM Settings" select the checkbox for "Use specific JVM arguments", and add new arguments as follows:<br />
<ul>
<li>-Djava.security.auth.login.config=&lt;path.to.kafka&gt;/config/client.jaas</li>
<li>-Djava.security.krb5.conf=&lt;path.to.kerby.project&gt;/target/krb5.conf</li>
</ul>
For the first argument, you need to enter the path of the "client.jaas" file as described in the <a href="http://coheigea.blogspot.ie/2017/05/securing-apache-kafka-with-kerberos.html">tutorial</a>
to set up the Kafka test-case. For the second argument, you need to specify the path of the
"krb5.conf" file supplied in the target directory of the Apache Kerby
test-case:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkHZbFSZQuXsbnW_8MqmLpZLBrPE03krYoh7faaov0Mv8QZhmxs43hl26C1MFV18dQZC8_GpnJfCTmKxbRvg_Yc2WQ0GCJ0Bst9xZ22JSHjIK8Vmdz56X8hPByVBXIC3LVDxeqMjPeJNOp/s1600/route2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="409" data-original-width="831" height="196" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkHZbFSZQuXsbnW_8MqmLpZLBrPE03krYoh7faaov0Mv8QZhmxs43hl26C1MFV18dQZC8_GpnJfCTmKxbRvg_Yc2WQ0GCJ0Bst9xZ22JSHjIK8Vmdz56X8hPByVBXIC3LVDxeqMjPeJNOp/s400/route2.png" width="400" /></a></div>
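For reference, the "client.jaas" file referred to by the first argument typically looks something like the following - the keytab path and principal here are placeholders, so use the values from your own KDC setup:

```
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/path/to/client.keytab"
    principal="client@EXAMPLE.COM";
};
```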
Now we are ready to run the job. Click on the "Run" tab and then hit the
"Run" button. Send some data via the producer to the "test" topic and
you should see the data appear in the Run Window in the Studio.
Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-47205723237994976152020-05-27T02:40:00.000-07:002020-05-27T02:40:37.764-07:00SSH improvements in Apache KarafLast year I contributed a number of SSH improvements to Apache <a href="http://karaf.apache.org/">Karaf</a>, which I never found the time to blog about. In this post I'll cover how to use SSH with Apache Karaf, and also what the improvements were.<br />
<br />
<b>1) Using SSH with Apache Karaf</b><br />
<br />
<a href="http://karaf.apache.org/download.html">Download</a> and extract Apache Karaf (4.2.8 was used for the purposes of this post). Start it by running "bin/karaf". By default, Karaf starts an SSH service which is configured in 'etc/org.apache.karaf.shell.cfg'. Here you can see that the default port is 8101. Karaf uses JAAS to authenticate SSH credentials - the default realm is "karaf". Associated with this realm is a PropertiesLoginModule, which authenticates users against the credentials stored in 'etc/users.properties'. Also note that the user must have a group defined that matches the value for "sshRole" in 'etc/org.apache.karaf.shell.cfg'. So let's try to SSH into Karaf using the default admin credentials, and it should work:<br />
<ul>
<li>ssh karaf@localhost -p 8101 </li>
</ul>
<br />
<b>2) SSH algorithm update </b><br />
<br />
The first improvement, which was <a href="https://github.com/apache/karaf/pull/885">merged</a> for the 4.2.7 release, was to remove support by default for a number of outdated algorithms:<br />
<ul>
<li>SHA-1 algorithms were removed</li>
<li>CBC ciphers were removed</li>
<li>Old ciphers such as 3-DES, Blowfish, Arcfour were removed</li>
</ul>
These can all be configured in 'etc/org.apache.karaf.shell.cfg' if necessary. The configuration values and their defaults are now as follows:<br />
<ul>
<li>ciphers = aes256-ctr,aes192-ctr,aes128-ctr</li>
<li>macs = hmac-sha2-512,hmac-sha2-256</li>
<li>kexAlgorithms = ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256</li>
</ul>
So for example, the following now fails with Karaf 4.2.8 using the default configuration:<br />
<ul>
<li>ssh karaf@localhost -p 8101 -c 3des-cbc</li>
</ul>
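If an older SSH client genuinely cannot negotiate the newer algorithms, a removed algorithm can be explicitly re-added in 'etc/org.apache.karaf.shell.cfg' - shown here with a hypothetical legacy cipher appended (this weakens security and is not recommended):

```
# etc/org.apache.karaf.shell.cfg - re-adding a legacy CBC cipher (not recommended)
ciphers = aes256-ctr,aes192-ctr,aes128-ctr,3des-cbc
```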
<br />
<b>3) Elliptic curve support for SSH server keys </b><br />
<br />
The second improvement, which was also <a href="https://github.com/apache/karaf/pull/886">merged</a> for 4.2.7, was to add support to configure Karaf with an elliptic curve SSH key. Previously only RSA keys were supported. When you start Karaf, it will generate an SSH key if one does not already exist, according to the "algorithm" (RSA) and "keySize" (2048) defined in 'etc/org.apache.karaf.shell.cfg', and store it in the "hostKey" (etc/host.key) file. As part of the improvement, the public key is also written out to a new configuration property "hostKeyPub" (etc/host.key.pub).<br />
<br />
To see this in action, delete 'etc/host.key.*' and edit 'etc/org.apache.karaf.shell.cfg' and change:<br />
<ul>
<li>keySize = 256</li>
<li>algorithm = EC</li>
</ul>
Now restart Karaf and try to SSH in using the "-v" parameter. You will see something like: "debug1: Server host key: ecdsa-sha2-nistp256 SHA256:sDa1k...".<br />
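As an illustration of what this configuration produces, here is a minimal stdlib sketch (not Karaf's actual code) of generating an elliptic curve key pair with the "algorithm" and "keySize" values above:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.ECPublicKey;

public class EcHostKeyDemo {

    // Generate an EC key pair, mirroring Karaf's "algorithm = EC" / "keySize = 256" settings
    static KeyPair generate(int keySize) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(keySize);
        return kpg.generateKeyPair();
    }

    public static void main(String[] args) throws Exception {
        ECPublicKey pub = (ECPublicKey) generate(256).getPublic();
        // The curve's field size matches the configured keySize (NIST P-256 here)
        System.out.println(pub.getParams().getCurve().getField().getFieldSize());
    }
}
```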
<br />
<b>4) Support for elliptic keys in the PublicKeyLoginModule</b><br />
<br />
As well as supporting authentication using a password via the PropertiesLoginModule, Karaf also supports authentication using a public key via the PublickeyLoginModule. The PublickeyLoginModule authenticates a public key for SSH by comparing it to keys stored in 'etc/keys.properties'. I added <a href="https://issues.apache.org/jira/browse/KARAF-6350">support</a> in Karaf 4.2.7 for authenticating using elliptic curve keys stored in 'etc/keys.properties'; previously only RSA public keys were supported.<br />
<br />
To see how this works, generate a new elliptic curve key with an empty password:<br />
<ul>
<li>ssh-keygen -t ecdsa -f karaf.id_ec</li>
</ul>
Now edit 'etc/keys.properties' and copy in the public key that was written to "karaf.id_ec.pub". For example:<br />
<ul>
<li>colm=AAAAE2VjZHNhLXNoY...0=,_g_:sshgroup</li>
<li>_g_\:sshgroup = group,ssh</li>
</ul>
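The long base64 value is the standard OpenSSH public key blob, whose first length-prefixed field names the key type. Here is a small stdlib sketch (not part of Karaf) that decodes that leading field - demonstrated below against just the type-string prefix of a blob:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SshKeyTypeDemo {

    // Return the key-type string at the start of an OpenSSH public key blob:
    // the blob begins with a 4-byte big-endian length, then that many ASCII bytes
    static String keyType(String base64Blob) {
        ByteBuffer buf = ByteBuffer.wrap(Base64.getDecoder().decode(base64Blob));
        byte[] type = new byte[buf.getInt()];
        buf.get(type);
        return new String(type, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        // Just the type-string prefix of an ecdsa-sha2-nistp256 blob
        System.out.println(keyType("AAAAE2VjZHNhLXNoYTItbmlzdHAyNTY="));
    }
}
```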
Now we can SSH into Karaf without a password prompt via:<br />
<ul>
<li>ssh colm@localhost -p 8101 -i karaf.id_ec</li>
</ul>
<br />
<b>5) Support for encrypted key password for SSH</b><br />
<br />
Finally, I added <a href="https://issues.apache.org/jira/browse/KARAF-6384">support</a> for encrypted key passwords for SSH. This change <a href="https://issues.apache.org/jira/browse/KARAF-6383">necessitated</a> moving from not-yet-commons-ssl to BouncyCastle for parsing SSH keys, as the former does not support encrypted keys or newer security algorithms in general. As a result, encrypted key passwords for SSH are not available in Karaf 4.2.x, but will be in the next major release (4.3.0). Note as well that encrypted key passwords only work when Karaf is reading an externally generated encrypted private key.<br />
<br />
To test this out, grab Karaf 4.3.x and generate a new encrypted RSA private key as follows (specifying a password of "security"):<br />
<ul>
<li>openssl genpkey -out rsa.pem -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -aes256</li>
</ul>
Edit 'etc/org.apache.karaf.shell.cfg' and change it as follows:<br />
<ul>
<li>hostKey = ${karaf.etc}/rsa.pem</li>
<li>hostKeyPassword = security</li>
</ul>
Before starting Karaf, it's also necessary to register BouncyCastle as a security provider. Edit 'etc/config.properties' and add:<br />
<ul>
<li>org.apache.karaf.security.providers = org.bouncycastle.jce.provider.BouncyCastleProvider</li>
</ul>
Now copy the BouncyCastle provider jar (e.g. bcprov-jdk15on-1.65.jar) to lib/ext and restart Karaf. It should then be possible to SSH into Karaf.<br />
Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-82910957608357195012020-05-19T05:23:00.000-07:002020-05-19T05:23:46.914-07:00Recent deserialization CVEs in Apache CamelThree security advisories have been published recently for Apache <a href="https://camel.apache.org/">Camel</a> that were found partly or entirely by me. <a href="https://camel.apache.org/security/CVE-2020-11971.html">CVE-2020-11971</a> relates to a JMX rebind flaw that'll be the subject of a future blog post. In this post I'll take a quick look at the other two issues, which both relate to Java object deserialization.<br />
<br />
<b>1) CVE-2020-11973</b><br />
<br />
<a href="https://camel.apache.org/security/CVE-2020-11973.html">CVE-2020-11973</a> was caused by the fact that the Camel Netty component enabled Java Object serialization by default, without any whitelisting of acceptable packages associated with classes that are being deserialized. This is problematic as a malicious user could exploit a "gadget chain" to achieve remote code execution, or mount a denial-of-service attack by crafting a recursive object graph.<br />
<br />
The fix was to remove object deserialization by default for the Netty component in 2.25.1 + 3.2.0. Users who still wish to avail of object serialization can explicitly enable object encoders/decoders on the component.<br />
<br />
<b>2) CVE-2020-11972</b><br />
<br />
<a href="https://camel.apache.org/security/CVE-2020-11972.html">CVE-2020-11972</a> is for the exact same issue as CVE-2020-11973 above, but in the Camel RabbitMQ component. From Camel 2.25.1 and 3.2.0, a new configuration option called "allowMessageBodySerialization" is introduced, which defaults to false.<br />
<br />
Users who wish to avail of object serialization/deserialization can set this configuration option to true (bearing in mind that this is not secure!), which enables the following behaviour:<br />
<ul>
<li>The outbound message will be serialized on the producer side using Java serialization, if no type converter is available to handle the message body.</li>
<li>On the consumer side, the message body will be deserialized using Java deserialization if the message contains a "CamelSerialize" header.</li>
</ul>
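To illustrate the underlying risk, and the kind of package-allowlisting defence mentioned above, here is a stdlib-only sketch using Java's built-in deserialization filtering (JEP 290). This is not the mechanism the Camel components use - it is just an illustration of the principle:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class DeserializationFilterDemo {

    // Serialize an object, then deserialize it, permitting only classes
    // that match the given filter pattern (e.g. "java.lang.*;!*")
    static Object roundTrip(Object obj, String filterPattern) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            ois.setObjectInputFilter(ObjectInputFilter.Config.createFilter(filterPattern));
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // java.lang.String is on the allowlist, so this succeeds
        System.out.println(roundTrip("hello", "java.lang.*;!*"));

        // java.util.ArrayList is not, so deserialization is rejected
        try {
            roundTrip(new java.util.ArrayList<String>(), "java.lang.*;!*");
        } catch (InvalidClassException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```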
If you are using either the Netty or RabbitMQ components with Camel, then please make sure to update to the latest releases ASAP.<br />
Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-10620510958950745002020-04-06T02:33:00.001-07:002020-04-06T02:33:59.433-07:00Improving XML Decryption performanceApache <a href="http://santuario.apache.org/">Santuario</a> - XML Security for Java is a library that provides the ability to sign and encrypt XML. In this post we'll focus solely on encryption. Apache Santuario has two APIs for XML Encryption, an in-memory (DOM) based approach and a streaming (StAX based) API. The streaming API offers very good memory performance as the size of the XML being encrypted scales up. However there is not much difference in speed, particularly for decryption. The streaming API is also less flexible than the DOM API. In this post we will look at different ways to speed up the DOM based API using Serializers.<br />
<br />
<b>1) Serializers in Apache Santuario</b><br />
<br />
The <a href="https://github.com/apache/santuario-java/blob/0707abdd696047a375ce4ebc7d821eadc0895936/src/main/java/org/apache/xml/security/encryption/XMLCipher.java">XMLCipher</a> class in Santuario is the main entry point for encryption and decryption. The XMLCipher class makes use of the <a href="https://github.com/apache/santuario-java/blob/0707abdd696047a375ce4ebc7d821eadc0895936/src/main/java/org/apache/xml/security/encryption/Serializer.java">Serializer</a> interface to serialize DOM elements to byte arrays for encryption, and to deserialize byte arrays to DOM elements for decryption. Santuario ships with two different implementations. The default is <a href="https://github.com/apache/santuario-java/blob/0707abdd696047a375ce4ebc7d821eadc0895936/src/main/java/org/apache/xml/security/encryption/TransformSerializer.java">TransformSerializer</a>, which makes use of the "javax.xml.transform.Transformer" API and requires Apache <a href="https://xalan.apache.org/">Xalan</a> to work properly. The other alternative is the <a href="https://github.com/apache/santuario-java/blob/0707abdd696047a375ce4ebc7d821eadc0895936/src/main/java/org/apache/xml/security/encryption/DocumentSerializer.java">DocumentSerializer</a>, which uses the standard DOM API. These implementations perform very similarly.<br />
<br />
Apache <a href="http://cxf.apache.org/">CXF</a> is a web services framework that heavily leverages Apache Santuario to perform XML encryption and decryption of web service messages. CXF is fully streaming based, and so it made sense to try to re-use some of this functionality in a custom Serializer implementation. The result is the <a href="https://github.com/apache/cxf/blob/4e981b1f5bb19bc85e3b92e325216148ef043e8c/rt/ws/security/src/main/java/org/apache/cxf/ws/security/wss4j/StaxSerializer.java">StaxSerializer</a>. This largely re-uses the DOM implementation for encryption, but uses a streaming-based approach for deserialization. StaxSerializer is used by default in Apache CXF when working with XML encryption/decryption.<br />
<br />
<b>2) Benchmarking</b><br />
<br />
To benchmark the StaxSerializer, I adapted the Santuario benchmarking test-suite just to compare encryption performance using the different Serializers, and put it into github here:<br />
<ul>
<li><a href="https://github.com/coheigea/testcases/tree/master/apache/santuario/santuario-serializer">santuario-serializer-benchmark</a>: This project contains two JUnit tests used for benchmarking XML Encryption. In particular, they measure memory and timing performance for both encryption and decryption, ranging from small to very large XML files.</li>
</ul>
<b>a) Heap memory consumption during decryption</b><br />
<br />
Here is the result for the (default) TransformSerializer:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyLIRtV8vqt_vRL3YnlViqelAeurs77g4o32Y_wl3QnMBaQpP0rA54hCZRu7xyXxH1suyfsxpYKSAkfkw9VlbeLAoaUKAHyb24fhdnk9_P8d8oi_-gcMzQinXmTZfSKvjiRzYs6jB5bmCM/s1600/encryption-memory-inbound.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="800" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyLIRtV8vqt_vRL3YnlViqelAeurs77g4o32Y_wl3QnMBaQpP0rA54hCZRu7xyXxH1suyfsxpYKSAkfkw9VlbeLAoaUKAHyb24fhdnk9_P8d8oi_-gcMzQinXmTZfSKvjiRzYs6jB5bmCM/s400/encryption-memory-inbound.png" width="400" /></a></div>
<br />
In comparison, here is the result for the StaxSerializer:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdG3D8CObRSs74HRkFJ4NP7kA9RDsk7C0KZj_J44uBNi8rG05-DFTeAV_ejdPDKuwkWnkrrdBGbZTf1UBVJtS3SG_oU74DKC3GpLsy1mo9psTLL76vH_KnCzM2nZEhpkHaaqty11rO34Z9/s1600/encryption-memory-inbound.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="800" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdG3D8CObRSs74HRkFJ4NP7kA9RDsk7C0KZj_J44uBNi8rG05-DFTeAV_ejdPDKuwkWnkrrdBGbZTf1UBVJtS3SG_oU74DKC3GpLsy1mo9psTLL76vH_KnCzM2nZEhpkHaaqty11rO34Z9/s400/encryption-memory-inbound.png" width="400" /></a></div>
The most obvious conclusion is that the streaming API is far more efficient in terms of memory consumption, especially when we scale to larger documents. However, one would also expect to see less memory consumed by the StaxSerializer than by the TransformSerializer. Indeed, one can see that the TransformSerializer consumes almost 600MB for the largest case, whereas the StaxSerializer comes in under 450MB.<br />
<br />
<b>b) Time needed for decryption</b><br />
<br />
Turning now to execution time, here is the result for the (default) TransformSerializer:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0YWg1KuRPVHe435CSbjnxpk_lu8XqXMpuWpyZkGh83RAokRB4RAyq64OVGtprd1fCm6sX9wE-5ByI1TKScXMCbqo_vxayaFCAWI2ovs6ICPwyVlrbUxHLFCJGutuR1wIgwYTxxcsa0CbE/s1600/encryption-times-inbound.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="800" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0YWg1KuRPVHe435CSbjnxpk_lu8XqXMpuWpyZkGh83RAokRB4RAyq64OVGtprd1fCm6sX9wE-5ByI1TKScXMCbqo_vxayaFCAWI2ovs6ICPwyVlrbUxHLFCJGutuR1wIgwYTxxcsa0CbE/s400/encryption-times-inbound.png" width="400" /></a></div>
As stated above, the performance is almost identical to the streaming layer. This is because the streaming implementation needs to make several passes over the XML for consistency reasons. Here is the result for the StaxSerializer:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMGZmvIIkG4mpEU263pGZ4GXx1AMZi17lXnwVN9WISMBq0Q7W8H7EKZUBQ2rwzzAb0lPNzeew_wNGabX69dGpzSz-dWC-8WHmq9-fIXA2QYTrk5oCik648eT3n8bWVitOvt_vkP__kCUTu/s1600/encryption-times-inbound.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="800" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMGZmvIIkG4mpEU263pGZ4GXx1AMZi17lXnwVN9WISMBq0Q7W8H7EKZUBQ2rwzzAb0lPNzeew_wNGabX69dGpzSz-dWC-8WHmq9-fIXA2QYTrk5oCik648eT3n8bWVitOvt_vkP__kCUTu/s400/encryption-times-inbound.png" width="400" /></a></div>
Here we can see that the StaxSerializer actually offers superior performance to the streaming API. So if XML decryption performance is an issue for you, it might be worth considering using CXF's StaxSerializer.<br /><b></b>Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-77632725436942171722020-01-16T06:19:00.001-08:002020-01-16T06:19:06.487-08:00Two final 2019 CVEs for Apache CXFApache CXF 3.3.5 and 3.2.12 have been <a href="http://cxf.apache.org/download.html">released</a>. These releases contain fixes for two new security advisories:<br />
<ul>
<li><a href="http://cxf.apache.org/security-advisories.data/CVE-2019-12423.txt.asc?version=1&modificationDate=1579178393000&api=v2">CVE-2019-12423</a>: Apache CXF OpenId Connect JWK Keys service returns private/secret credentials if configured with a jwk keystore. </li>
</ul>
<ul>
<li><a href="http://cxf.apache.org/security-advisories.data/CVE-2019-17573.txt.asc?version=1&modificationDate=1579178542000&api=v2">CVE-2019-17573</a>: Apache CXF Reflected XSS in the services listing page. Note that this attack exploits a feature which is not typically present in modern browsers, which remove dot segments before sending the request. However, mobile applications may be vulnerable.</li>
</ul>
Please see the CXF <a href="http://cxf.apache.org/security-advisories.html">security advisories</a> page for information on all of the CVEs issued for Apache CXF over the years. Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-53119854686155416182019-11-05T09:50:00.001-08:002019-11-05T09:50:41.849-08:00Two new CVEs released for Apache CXFApache <a href="http://cxf.apache.org/">CXF</a> 3.3.4 and 3.2.11 have been <a href="http://cxf.apache.org/download.html">released</a>. Along with the usual bug fixes and dependency updates, these releases contain fixes for two new CVEs:<br />
<ul>
<li><a href="http://cxf.apache.org/security-advisories.data/CVE-2019-12419.txt.asc?version=2&modificationDate=1572961201241&api=v2">CVE-2019-12419</a>: Apache CXF OpenId Connect token service does not properly validate the clientId. The problem here is that the OAuth access token service didn't validate that the submitted clientId matches that of the authenticated principal, thus allowing a malicious client to obtain an access token using a code issued to another client. Of course, this requires the malicious client to actually obtain the authorization code for the other client somehow.</li>
<li><a data-linked-resource-container-id="27837502" data-linked-resource-container-version="33" data-linked-resource-content-type="text/plain" data-linked-resource-default-alias="CVE-2019-12406.txt.asc" data-linked-resource-id="135859607" data-linked-resource-type="attachment" data-linked-resource-version="1" data-nice-type="Text File" href="http://cxf.apache.org/security-advisories.data/CVE-2019-12406.txt.asc?version=1&modificationDate=1572957147000&api=v2" shape="rect">CVE-2019-12406</a>: Apache CXF does not restrict the number of message attachments. Essentially here CXF did not impose any restrictions on the number of message attachments, meaning that a malicious entity could attempt a denial of service attack by generating a message with a huge number of message attachments.</li>
</ul>
Please update to the latest CXF releases to pick up fixes for these advisories. Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-21918203506482967192019-08-26T09:10:00.000-07:002019-08-26T09:10:35.108-07:00Annotation support with Apache ShiroApache <a href="http://shiro.apache.org/">Shiro</a> is a Java framework to simplify authentication, authorization, etc. I previously blogged about a test-case I wrote that shows how to use Shiro with Apache <a href="http://cxf.apache.org/">CXF</a> to authenticate and authorize a username and password received as part of a web service request. This post extends the previous post by showing how to use Shiro to enable authorization via annotations on the service implementation.<br />
<br />
The previous post defined some required roles for an endpoint in Spring, and passed them through to a ShiroUTValidator class which checks that the authenticated subject has all of the defined roles:<br />
<br />
<script src="https://gist.github.com/coheigea/18ae0c55995947fa8727171af75b6d69.js"></script>
The problem with this approach is that it's not possible to specify individual roles for different methods in the service implementation - the user must have the defined roles to invoke any of the methods.<br />
<br />
An alternative is to use Shiro's <a href="http://shiro.apache.org/authorization.html#Authorization-AnnotationbasedAuthorization">annotation</a> support. Here we can add annotations to the service endpoint implementation to require that the authenticated user has the correct role (@RequiresRoles) or permissions (@RequiresPermissions). Note that these annotations are specific to Shiro; support for the standard javax.annotation.security annotations has not yet been added (see <a href="https://issues.apache.org/jira/projects/SHIRO/issues/SHIRO-671?filter=allopenissues">here</a>).<br />
<br />
So to change our test-case to use annotations, instead of defining the roles in Spring, we instead define the following annotation in the service <a href="https://github.com/coheigea/testcases/blob/master/apache/cxf/cxf-shiro/src/test/java/org/apache/coheigea/cxf/shiro/annotation/DoubleItPortTypeImpl.java">implementation</a>:<br />
<br />
<script src="https://gist.github.com/coheigea/e7c66183ab5834dac7a4e10fa9150c32.js"></script>In the <a href="https://github.com/coheigea/testcases/blob/master/apache/cxf/cxf-shiro/src/test/resources/org/apache/coheigea/cxf/shiro/annotation/cxf-service.xml">spring configuration</a> for the service, we need to add a few additional interceptors so that the annotation gets processed:<br />
<br />
<script src="https://gist.github.com/coheigea/5f75bfe0e98bd891069395451a753e88.js"></script>That's all that's required to get Shiro annotations working with CXF service implementations. The full test source is available <a href="https://github.com/coheigea/testcases/blob/master/apache/cxf/cxf-shiro/src/test/java/org/apache/coheigea/cxf/shiro/annotation/AnnotationTest.java">here</a>.Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-85187541434714674422019-05-09T06:54:00.001-07:002019-05-09T06:54:56.560-07:00CoAP support in Apache CamelThe Constrained Application Protocol (CoAP) is standardized in <a href="https://tools.ietf.org/html/rfc7252">RFC-7252</a>. It offers REST-like functionality over UDP for constrained devices in the Internet of Things. Apache <a href="http://camel.apache.org/">Camel</a> has had support for the <a href="https://coap.technology/">CoAP</a> protocol since the 2.16 release, by using the eclipse <a href="https://www.eclipse.org/californium/">Californium</a> framework. It offers support for using CoAP in both producer and consumer mode, and also offers integration with the Camel <a href="https://camel.apache.org/rest-dsl.html">REST DSL</a>. In this post, we will cover a number of significant improvements to the Camel CoAP component for the forthcoming 3.0.0 release.<br />
<br />
<b>1) DTLS support</b><br />
<br />
The first significant improvement is that the CoAP component has been <a href="https://issues.apache.org/jira/browse/CAMEL-13402">updated</a> to support DTLS, something that necessitated a major upgrade of the Californium dependency. CoAP supports TLS over UDP (DTLS) using a "coaps" scheme, which it is now possible to use in Camel. To see how this all works, take a look at the following github test-case I put together:<br />
<ul>
<li><a href="https://github.com/coheigea/testcases/tree/master/apache/camel/camel-coap">camel-coap</a> - A test-case for the camel-coap component. It shows how to use the coap component with the Camel REST DSL + TLS.</li>
</ul>
It follows the same approach as the <a href="http://coheigea.blogspot.com/2019/04/securing-apache-camel-rest-dsl.html">previous tutorial</a> I wrote on securing the Jetty component in Camel:<br />
<br />
<script src="https://gist.github.com/coheigea/a861999ca651df5827a43a9b047671e6.js"></script>
It uses the Camel REST DSL to create a simple REST service on the "/data" path that produces an XML response when invoked with a GET request. The actual route is omitted above; it just returns an XML document read in from a file. Note that the component of the REST DSL is "coap" and it uses a scheme of "coaps". When using a scheme of "coaps", we need to supply the relevant TLS configuration, something that is done by referring to an "<a href="https://camel.apache.org/camel-configuration-utilities.html">sslContextParameters</a>" bean, which in this case contains a reference to a keystore used to retrieve the TLS key. Note that when using a certificate for TLS with CoAP, an elliptic curve key is required - RSA is not supported.<br />
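For reference, an elliptic curve key suitable for such a service keystore can be generated with the JDK keytool. The alias, passwords and DN below are placeholders, not the values used in the test-case:

```
keytool -genkeypair -keyalg EC -keysize 256 \
    -alias service -dname "CN=localhost" \
    -keystore servicestore.jks -storepass security -keypass security \
    -validity 365
```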
<br />
On the client side, the test-case shows how to configure a Camel producer to invoke on the coaps REST API:<br />
<br />
<script src="https://gist.github.com/coheigea/e60bf16b6367dce05d16151841f852d9.js"></script>
<br />
Note that a scheme of "coaps" is used, and that it refers to an sslContextParameters bean containing the truststore to use, as well as a specific CipherSuite (this is optional; I just put it in to show how to use it). As the logging output does not really show what is going on, here is the Wireshark output which shows that DTLS is in use:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqmINXibHN2-MgC80hsDWoFWtB5NwFlv2KoL7pljYg4dav_DLrA-EqnXijzJ3Qn7qzUpa4ubMwMKaYcr1qpNU0JTDmdb-9yTDMKzSUZc__Ovo8T-TF_HFrnFo3N8WyDcaWG78qLxVFHqOG/s1600/dtls.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="114" data-original-width="984" height="45" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqmINXibHN2-MgC80hsDWoFWtB5NwFlv2KoL7pljYg4dav_DLrA-EqnXijzJ3Qn7qzUpa4ubMwMKaYcr1qpNU0JTDmdb-9yTDMKzSUZc__Ovo8T-TF_HFrnFo3N8WyDcaWG78qLxVFHqOG/s400/dtls.png" width="400" /></a></div>
<br />
<br />
<b>2) Support for Raw Public Keys and Pre-Shared Keys</b><br />
<br />
In the section above, we saw how to configure CoAP with DTLS by referring to an sslContextParameters bean, which in turn refers to keystores from which private keys and certificates are extracted. This is one option for supporting DTLS. However, there are also two other options that do not involve certificates.<br />
<br />
The first is called Raw Public Keys. This applies when we may not have access to a certificate containing the (trusted) public key of the endpoint. In this case, we can configure TLS using PrivateKey and/or PublicKey objects; both are required for a service. The client needs to be configured with a trustedRpkStore parameter, an interface supplied by Californium that determines trust in an identifier. If the service is configured with "clientAuthentication" of "REQUIRE", then the service must also configure trustedRpkStore, and the client must also specify a privateKey parameter. Here is a sample code snippet from the Camel tests:<br />
<br />
<script src="https://gist.github.com/coheigea/eb93405c71d6cf98f1e391b2b8f6a49d.js"></script>
The second option is called Pre-Shared Keys. This applies when we don't have access to either certificates or public keys, but instead have some symmetric keys that are shared between the client and service. In this case, we can use these keys for TLS. Both the client and service are configured with a "pskStore" parameter, which is an interface in Californium that associates a (byte[]) key with an identity. Here is a sample code snippet from the Camel tests:<br />
<br />
<script src="https://gist.github.com/coheigea/4eb3b0d6c452809b95f68bc9cc94b630.js"></script>
<br />
<b>3) Support for TCP / TLS</b><br />
<br />
A newer RFC (<a href="https://tools.ietf.org/html/rfc8323">RFC-8323</a>) extends the original RFC to add support for CoAP over TCP and WebSockets. Camel 3.0.0 has added <a href="https://issues.apache.org/jira/browse/CAMEL-13471">support</a> for using CoAP over both TCP and TLS over TCP; WebSocket support is not currently available. RFC-8323 defines two new schemes for TCP, both of which are supported in Camel: "coap+tcp" for CoAP over TCP, and "coaps+tcp" for CoAP over TCP with TLS. Only the certificate method of configuring TLS is supported, and it works in exactly the same way as for DTLS above. Pre-Shared Keys and Raw Public Keys are currently only supported over UDP, not TCP.<br />
<br />
To see how it works, simply alter the <a href="https://github.com/coheigea/testcases/blob/master/apache/camel/camel-coap/src/test/resources/camel-coap.xml">configuration</a> in the github testcase and change "coaps" to "coaps+tcp" (in both locations). Now run the test-case again and it should work seamlessly:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkM68jMmwoxTX4rkBSOmbvx2uhVspBkpM-IeCUOw-x0-05CV8p61nT4UpFPW4G4QWqQ_SGBJ9PwjhNN3EGAMnHmpwS7fsxTxsRi7RqmWEl9NgO4sDgHfGeh6-xxA71Lksi8cc2TJexTc6K/s1600/tls.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="117" data-original-width="1142" height="40" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkM68jMmwoxTX4rkBSOmbvx2uhVspBkpM-IeCUOw-x0-05CV8p61nT4UpFPW4G4QWqQ_SGBJ9PwjhNN3EGAMnHmpwS7fsxTxsRi7RqmWEl9NgO4sDgHfGeh6-xxA71Lksi8cc2TJexTc6K/s400/tls.png" width="400" /></a></div>
<br />
<b> </b>Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-37849769751257135282019-04-30T08:42:00.000-07:002019-04-30T08:42:16.765-07:00Securing the Apache Camel REST DSLRecently I put together a simple test-case for Apache <a href="http://camel.apache.org/">Camel</a>'s <a href="https://camel.apache.org/rest-dsl.html">REST DSL</a> and realised that it illustrated quite a few security concepts, as well as various Camel components, that might be interesting to blog about. The test-case is a simple spring-based project, which is available on github here:<br />
<ul>
<li><a href="https://github.com/coheigea/testcases/tree/master/apache/camel/camel-jetty">camel-jetty</a>: A test-case for the Camel Jetty component, TLS, the REST DSL + Jasypt.</li>
</ul>
In particular, the Camel spring configuration is <a href="https://github.com/coheigea/testcases/blob/master/apache/camel/camel-jetty/src/test/resources/camel-jetty.xml">here</a>. Let's take a look at the different pieces one by one.<br />
<br />
<b>1) The Apache Camel REST DSL</b><br />
<br />
Apache Camel offers a <a href="https://camel.apache.org/rest-dsl.html">REST DSL</a> which makes it really easy to create a simple REST service.<br />
<script src="https://gist.github.com/coheigea/6fda5b9302b8ff61746f28e45732d4ca.js"></script>Here we are creating a simple REST service on the "/data" path that produces an XML response when invoked with a GET request. The actual functionality is delegated to a Camel route called "direct:get":<br />
<br />
<script src="https://gist.github.com/coheigea/913c8419759dd5871c1a3a06236ec397.js"></script>
Here we are reading in some files from a directory using the Camel File component and using "pollEnrich" to include the contents of that directory into the message that is returned to the user. Finally we need to tell Camel how to create the REST DSL. Camel supports a wide range of components, but for the purposes of this example we are using the Camel Jetty component:<br />
<br />
<script src="https://gist.github.com/coheigea/991d5b1be724a5e0ffeffc837fc837cd.js"></script>
Note that the port is not hard-coded, but instead retrieved from a property generated in the pom using the "reserve-network-port" goal of the "build-helper-maven-plugin", which reserves a random free port.<br />
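The "reserve-network-port" goal essentially asks the operating system for a free ephemeral port and stores the result in a Maven property. A rough sketch of the equivalent logic (the helper name here is made up for illustration):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortReserver {
    // Roughly what the "reserve-network-port" goal does: bind a ServerSocket
    // to port 0 so the OS picks a free ephemeral port, record the port number,
    // then release the socket so the build can use that port later.
    static int reserveFreePort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Reserved port: " + reserveFreePort());
    }
}
```

Note there is a small race window between releasing the socket and the build binding to the port, which is usually acceptable for tests.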
<br />
<b>2) Getting TLS to work with the Camel REST DSL</b> <br />
<br />
To support TLS with the Camel REST DSL, we need to set the scheme to "https" as above in the "restConfiguration". The REST configuration also refers to a property called "sslContextParameters", which is where we obtain the keys required to support TLS. See the Camel <a href="https://camel.apache.org/camel-configuration-utilities.html">JSSE documentation</a> for more information on this property.<br />
<br />
<script src="https://gist.github.com/coheigea/d04b31fbafc2f830a38c2a92cbb6e739.js"></script>
The sslContextParameters bean definition allows us to define the key managers and trust managers for the TLS endpoint by referring to Java keystore files with the relevant passwords. If we are not supporting client authentication, the trustManagers portion can be omitted.<br />
<br />
<b>3) Using Jasypt to decrypt keystore passwords for use in TLS</b><br />
<br />
Note above that we have not hard-coded the TLS keystore passwords in our Camel spring configuration, but are instead retrieving them from a property. Camel offers the ability to store the passwords in encrypted form, by using the Camel <a href="http://camel.apache.org/jasypt.html">Jasypt component</a> to decrypt them given a master password. The encrypted passwords themselves are stored in a passwords.properties file:<br />
<br />
<script src="https://gist.github.com/coheigea/1676b578a8dc7b45bbaa7e4387e4f075.js"></script>These encrypted passwords are obtained using the camel-jasypt jar (shipped for example in the Camel distribution):<br />
<ul>
<li>java -jar camel-jasypt-2.23.1.jar -c encrypt -p master-secret -i storepass </li>
</ul>
To decrypt the passwords at runtime, we define the following bean in our Camel spring configuration:<br />
<script src="https://gist.github.com/coheigea/c9ec5d553ba9965b341ce148318dc0ec.js"></script>
This retrieves the master password from a system property. For the purposes of this demo, the password is set as a system property in the "maven-surefire-plugin" defined in the pom.<br />
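Under the hood this is password-based encryption: a key is derived from the master password and used to decrypt the stored value. The following sketch uses the JDK's PBEWithMD5AndDES cipher to illustrate the idea - it is not Jasypt's exact output format (Jasypt manages the salt itself and encodes it into the ciphertext), so it will not interoperate with values produced by the command above:

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.PBEParameterSpec;

public class PbeSketch {
    // Derive a key from the master password and encrypt or decrypt with it.
    static byte[] transform(int mode, char[] password, byte[] salt, byte[] data) throws Exception {
        SecretKey key = SecretKeyFactory.getInstance("PBEWithMD5AndDES")
                .generateSecret(new PBEKeySpec(password));
        Cipher cipher = Cipher.getInstance("PBEWithMD5AndDES");
        cipher.init(mode, key, new PBEParameterSpec(salt, 1000));
        return cipher.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        char[] master = "master-secret".toCharArray();
        byte[] salt = new byte[8];
        new SecureRandom().nextBytes(salt); // in practice the salt is stored alongside the ciphertext
        byte[] encrypted = transform(Cipher.ENCRYPT_MODE, master, salt, "storepass".getBytes("UTF-8"));
        System.out.println("ENC(" + Base64.getEncoder().encodeToString(encrypted) + ")");
        String decrypted = new String(transform(Cipher.DECRYPT_MODE, master, salt, encrypted), "UTF-8");
        System.out.println("Decrypted: " + decrypted); // Decrypted: storepass
    }
}
```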
<br />
<b>4) Invoking on our secured REST service using the Camel HTTP4 component</b><br />
<br />
The demo also includes a client route which invokes on the secured REST service we have created. We use the Camel <a href="https://camel.apache.org/http4.html">HTTP4 component</a> for this:<br />
<br />
<script src="https://gist.github.com/coheigea/e5b0f511533819d90c7ec879cfdd20d5.js"></script>
We start the route using the Camel <a href="https://camel.apache.org/timer.html">Timer component</a>, before calling the HTTP4 component. As we have included a query String in the request URI, the http4 component will issue a GET request. As for the REST service, we need to configure the TLS keys using the "sslContextParameters" parameter.<br />
<br />
<script src="https://gist.github.com/coheigea/24fc4cd79877755fe077dbe42bae3a7c.js"></script>
On the client side we only need the trustManagers configuration, unless of course we want to support client authentication. For the purposes of this demo, we also need to configure a custom x509HostnameVerifier property. This is because the TLS certificate the service is using will not be accepted by the client by default, as the common name of the certificate does not match the domain name of the service. We can circumvent this (for testing purposes only, it is not secure!) by using the following hostname verifier:<br />
<br />
<script src="https://gist.github.com/coheigea/c5040841829f4a5e0842bb9e509168cb.js"></script>
Finally we log the service response to the console so we can see the test-case working.Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-77438202553813041132019-04-05T04:52:00.000-07:002019-04-05T04:52:24.731-07:00Performance gain for web service requests in Apache CXFIn this post I want to talk about a recent performance gain for JAX-WS web service requests I made in Apache <a href="http://cxf.apache.org/">CXF</a>. It was prompted by a <a href="https://lists.apache.org/thread.html/484ef0bdb7609fed46890a854fa4c37dd319ab66875caf5aef61afa5@%3Cusers.cxf.apache.org%3E">mail</a> to the CXF users list. The scenario was for a JAX-WS web service where certain requests are secured using WS-SecurityPolicy, and other requests are not. The problem was that the user observed that the security interceptors were always invoked in CXF, even for the requests that had no security applied to the message, and that this resulted in a noticeable performance penalty for large requests.<br />
<br />
The reason for the performance penalty is that CXF needs to convert the request into a Document Object Model to apply WS-Security (note there is also a streaming WS-Security implementation available, but the performance is roughly similar). CXF needs to perform this conversion as it requires access to the full Document to perform XML Signature verification, etc. on the request. So even for the insecure request, it would apply CXF's <a href="https://github.com/apache/cxf/blob/02fbea2b202db56b45efb69b48389c7ad30db391/rt/bindings/soap/src/main/java/org/apache/cxf/binding/soap/saaj/SAAJInInterceptor.java">SAAJInInterceptor</a>. Then it would iterate through the security headers of the request, find that there was none present, and skip security processing.<br />
<br />
However when thinking about this problem, I realised that before invoking the SAAJInInterceptor, we could check to see whether a security header is actually present in the request (and whether it matches the configured "actor" if one is configured). CXF makes the message headers available in DOM form, but not the SOAP Body (unless SAAJInInterceptor is called). If no matching security header is available, then we can skip security processing, and instead just perform WS-SecurityPolicy assertion using a set of empty results.<br />
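The gist of the check can be sketched as follows. This is an illustrative DOM-based version rather than the actual CXF code, relying on the fact that the message headers are already available in DOM form:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class SecurityHeaderCheck {
    static final String WSSE_NS =
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

    // Hypothetical version of the check: since the SOAP headers are available
    // in DOM form, look for a wsse:Security element before paying the cost of
    // converting the Body into a Document Object Model.
    static boolean hasSecurityHeader(Element soapHeader) {
        return soapHeader != null
            && soapHeader.getElementsByTagNameNS(WSSE_NS, "Security").getLength() > 0;
    }

    public static void main(String[] args) throws Exception {
        String soap =
            "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'>"
          + "<soap:Header><wsse:Security xmlns:wsse='" + WSSE_NS + "'/></soap:Header>"
          + "<soap:Body/></soap:Envelope>";
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
            .parse(new ByteArrayInputStream(soap.getBytes("UTF-8")));
        Element header = (Element) doc.getDocumentElement()
            .getElementsByTagNameNS("http://schemas.xmlsoap.org/soap/envelope/", "Header").item(0);
        System.out.println(hasSecurityHeader(header)); // true
    }
}
```

The real implementation also has to honour the configured "actor"/"role" attribute when deciding whether a header matches.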
<br />
This idea is implemented in CXF for the 3.3.2 release via the task <a href="https://issues.apache.org/jira/browse/CXF-8010">CXF-8010</a>. To test what happens, I added a test-case to github <a href="https://github.com/coheigea/testcases/tree/master/apache/cxf/cxf-benchmarks/cxf-security-policy">here</a>. This creates a war file with a service with two operations, one that is not secured, and one that has a WS-SecurityPolicy asymmetric binding applied to the operations. Both operations contain two parameters, an integer and a String description.<br />
<br />
To test it, I added a JMeter test-case <a href="https://github.com/coheigea/testcases/blob/master/apache/cxf/cxf-benchmarks/cxf-security-policy/DoubleItPolicy.jmx">here</a>. It uses 10 threads to call the insecure operation 30,000 times. The description String in each request contains the URL encoded version of the <a href="http://docs.oasis-open.org/wss-m/wss/v1.1.1/wss-SOAPMessageSecurity-v1.1.1.html">WS-Security specification</a> to test what happens with a somewhat large request.<br />
<br />
Here are the results using CXF 3.3.1:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbLMCElJA43m77e3gWa5KnlG2dNAOrKSccgHqNZ1taD6HBlV4_vALwR6qdfGzSHUyu9aweRjc-dqVGDEIetmrswbH5UY4p5uTV5Egy81ILawlot3rSeaDWWqoJT0gSWePTj8zHvf8MaPlt/s1600/cxf-3.3.1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="186" data-original-width="1259" height="58" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbLMCElJA43m77e3gWa5KnlG2dNAOrKSccgHqNZ1taD6HBlV4_vALwR6qdfGzSHUyu9aweRjc-dqVGDEIetmrswbH5UY4p5uTV5Egy81ILawlot3rSeaDWWqoJT0gSWePTj8zHvf8MaPlt/s400/cxf-3.3.1.png" width="400" /></a></div>
and here are the results using the CXF 3.3.2-SNAPSHOT code with the fix for CXF-8010 applied:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhinjCLoIcjkv7NZ-YQicq0Npt2D_UgEoxvjanCbvrIMsDQJbQI7SemFxnPkoVos0o95iZBJ3RmmbuTvKRTjyKK7vf5_bNsQcg5LIsCoQSHucsRcKFHYmkvyY58PbAOi72t1imM1wQaDjTi/s1600/cxf-3.3.2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="187" data-original-width="1258" height="58" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhinjCLoIcjkv7NZ-YQicq0Npt2D_UgEoxvjanCbvrIMsDQJbQI7SemFxnPkoVos0o95iZBJ3RmmbuTvKRTjyKK7vf5_bNsQcg5LIsCoQSHucsRcKFHYmkvyY58PbAOi72t1imM1wQaDjTi/s400/cxf-3.3.2.png" width="400" /></a></div>
Using CXF 3.3.1 the throughput is 1604.25 requests per second, whereas with CXF 3.3.2 the throughput is 1795.26 requests per second, a gain of roughly 9%. For a more complex SOAP Body I would expect the gain to be a lot greater.Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-17032765588645825352019-03-29T05:02:00.001-07:002019-04-25T05:01:25.356-07:00HTTP Signature support in Apache CXFApache <a href="http://cxf.apache.org/">CXF</a> has provided support for the <a href="https://tools.ietf.org/html/draft-cavage-http-signatures-10">HTTP Signatures</a> draft spec since the 3.3.0 release. Up to this point, JAX-RS message payloads could be signed using either <a href="http://cxf.apache.org/docs/jax-rs-xml-security.html">XML Security</a> or else using <a href="http://cxf.apache.org/docs/jax-rs-jose.html">JOSE</a>. In particular, the JOSE functionality can be used to also sign HTTP headers. However, it does not allow signing the HTTP method and path, something that HTTP Signature supports. In this post we'll look at how to use HTTP Signatures with Apache CXF.<br />
<br />
I uploaded a sample project to github to see how HTTP Signature can be used with CXF:<br />
<ul>
<li><a href="https://github.com/coheigea/testcases/tree/master/apache/cxf/cxf-jaxrs-httpsig">cxf-jaxrs-httpsig</a>: This project contains a test that shows how to use the HTTP Signature
functionality in Apache CXF to sign a message to/from a JAX-RS service.</li>
</ul>
<br />
<b>1) Client configuration</b><br />
<br />
The client configuration to both sign the outbound request and verify the service response is configured in the test code:<br />
<br />
<script src="https://gist.github.com/coheigea/8a63ce0da0f2403eb8e2f15a78b5b34e.js"></script>
Two JAX-RS providers are added: <a href="https://github.com/apache/cxf/blob/master/rt/rs/security/http-signature/src/main/java/org/apache/cxf/rs/security/httpsignature/filters/CreateSignatureInterceptor.java">CreateSignatureInterceptor</a> creates a signature on the outbound request, and <a href="https://github.com/apache/cxf/blob/master/rt/rs/security/http-signature/src/main/java/org/apache/cxf/rs/security/httpsignature/filters/VerifySignatureClientFilter.java">VerifySignatureClientFilter</a> verifies the signature on the response. The keys used to sign the request and verify the response are configured in properties files, which are referenced via the "rs.security.signature.out.properties" and "rs.security.signature.in.properties" configuration tags:<br />
<br />
<script src="https://gist.github.com/coheigea/a18c454864bcd6b3111f79289fbd8d1a.js"></script>
Here we can see that a keystore is being used to retrieve the private key for signing the outbound request. If you wish to retrieve keys from some other source, then instead of using configuration properties it's best to configure the <a href="https://github.com/apache/cxf/blob/6a4b3777d6432c7d44fdd8ce91b495e51ba1b218/rt/rs/security/http-signature/src/main/java/org/apache/cxf/rs/security/httpsignature/MessageSigner.java">MessageSigner</a> class directly on the CreateSignatureInterceptor.<br />
<br />
By default, CXF adds all HTTP headers to the signature. In addition, a client will also include the HTTP method and path using the "(request-target)" header. If the payload is not empty, it is digested, and the digest is added to a "Digest" HTTP header which is itself signed - this provides payload integrity. By default, the signature algorithm is "rsa-sha256", although it is possible to configure this. A secured request using HTTP Signature looks like the following:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHtAR9adpAiiLaUpk4n_vVa9-WDE2Wr0brbt436wsiMwRt0fl1wuImMlDjjopA5-xx8ntHS4xD5SDjiz58Fl3ILuDVfLphVUhlNyE_kDmIIq5Fory-CFtr1rPXunimSE-po-G44qFv-16n/s1600/httpsig.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="296" data-original-width="727" height="162" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHtAR9adpAiiLaUpk4n_vVa9-WDE2Wr0brbt436wsiMwRt0fl1wuImMlDjjopA5-xx8ntHS4xD5SDjiz58Fl3ILuDVfLphVUhlNyE_kDmIIq5Fory-CFtr1rPXunimSE-po-G44qFv-16n/s400/httpsig.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
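To make the screenshot above more concrete, here is a sketch of how the signing string and the Digest header value are assembled per the draft spec. The header set, path, and payload used here are illustrative, not taken from the test project:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import java.util.LinkedHashMap;
import java.util.Map;

public class SigningStringDemo {
    // Per the draft spec, the signature is computed over a "signing string":
    // one line per signed header (lowercased names, in order), plus the special
    // "(request-target)" pseudo-header covering the HTTP method and path.
    static String signingString(String method, String path, Map<String, String> headers) {
        StringBuilder sb = new StringBuilder();
        sb.append("(request-target): ").append(method.toLowerCase()).append(' ').append(path);
        for (Map.Entry<String, String> e : headers.entrySet()) {
            sb.append('\n').append(e.getKey().toLowerCase()).append(": ").append(e.getValue());
        }
        return sb.toString();
    }

    // The Digest header carries a base64-encoded SHA-256 hash of the payload,
    // so signing the Digest header covers the payload as well.
    static String digestHeader(byte[] payload) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256").digest(payload);
        return "SHA-256=" + Base64.getEncoder().encodeToString(hash);
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("Host", "localhost:8080");
        headers.put("Digest", digestHeader("{\"name\":\"value\"}".getBytes(StandardCharsets.UTF_8)));
        System.out.println(signingString("GET", "/doubleit/services", headers));
    }
}
```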
<br />
<b>2) Service configuration</b><br />
<br />
The <a href="https://github.com/coheigea/testcases/blob/master/apache/cxf/cxf-jaxrs-httpsig/src/test/resources/org/apache/coheigea/cxf/jaxrs/httpsig/cxf-service.xml">service configuration</a> is defined in spring. Two different JAX-RS providers are used on the service side - <a href="https://github.com/apache/cxf/blob/master/rt/rs/security/http-signature/src/main/java/org/apache/cxf/rs/security/httpsignature/filters/VerifySignatureFilter.java">VerifySignatureFilter</a> is used to verify a signature on the client request, and <span class="pl-smi"><a href="https://github.com/apache/cxf/blob/master/rt/rs/security/http-signature/src/main/java/org/apache/cxf/rs/security/httpsignature/filters/CreateSignatureInterceptor.java">CreateSignatureInterceptor</a></span> is used to sign the response message as per the client request.<br />
<br />
<script src="https://gist.github.com/coheigea/73b9b4059cdf1e5ab4ae2b36277af269.js"></script>
For more information on how to use HTTP Signatures with Apache CXF, refer to the <a href="http://cxf.apache.org/docs/jax-rs-http-signature.html">CXF documentation</a>.Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-31789794457543331742019-03-21T09:26:00.000-07:002019-03-21T09:26:35.614-07:00Using authorization with JWT tokens in Apache CXFJSON Web Tokens (<a href="https://jwt.io/">JWT</a>) have been covered extensively on this blog (for example <a href="http://coheigea.blogspot.com/2015/12/javascript-object-signing-and_7.html">here</a>). In this post we will cover how JWT tokens can be used for authorization when sent to a JAX-RS web service in Apache <a href="http://cxf.apache.org/">CXF</a>. In particular, we will show how Apache CXF 3.3.0 supports claims based access control with JWT tokens.<br />
<br />
<b>1) JWT with RBAC</b><br />
<br />
JWT tokens can be used for the purpose of authentication in a web service context, by verifying the signature on the token and taking the "sub" claim as the authenticated principal. This assumes no proof of possession of the token, something we will revisit in a future blog post. Once this is done we have the option of performing an authorization check on the authenticated principal. This can be done easily via <a href="http://cxf.apache.org/docs/jax-rs-token-authorization.html#JAX-RSTokenAuthorization-Rolebasedaccesscontrol">RBAC</a> by using a claim in the token to represent a role.<br />
<br />
Apache CXF has a <a href="https://github.com/apache/cxf/blob/c715942d412a0e487dc51ec59e87f7cb17b85b72/core/src/main/java/org/apache/cxf/interceptor/security/SimpleAuthorizingInterceptor.java">SimpleAuthorizingInterceptor</a> class, which can map web service operations to role names. If the authenticated principal is not associated with the role that is required to access the operation, then an exception is thrown. <a href="https://github.com/coheigea/testcases/blob/master/apache/cxf/cxf-jaxrs-jose/src/test/resources/org/apache/coheigea/cxf/jaxrs/jwt/authorization/cxf-service.xml">Here</a> is an example of how to configure a JAX-RS web service in CXF with the SimpleAuthorizingInterceptor for JWT:<br />
<script src="https://gist.github.com/coheigea/ae3cc835c4760cefb21eda371355c875.js"></script>Here the JwtAuthenticationFilter has been configured with a "roleClaim" property of "role". It then extracts the configured claim from the authenticated token and uses it for the RBAC authorization decision. To see this functionality in action, look at the corresponding <a href="https://github.com/coheigea/testcases/blob/master/apache/cxf/cxf-jaxrs-jose/src/test/java/org/apache/coheigea/cxf/jaxrs/jwt/authorization/JWTAuthorizationRoleTest.java">test-case</a> in my github repo.<br />
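For illustration, recall that a JWT is three base64url-encoded parts separated by dots, and the role claim lives in the JSON payload (the second part). The following sketch shows where the claim comes from - it deliberately skips signature verification and uses a naive regex instead of a JSON parser, neither of which the real JwtAuthenticationFilter does:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JwtClaimDemo {
    // Decode the payload (second dot-separated part) of a JWT. No signature
    // verification is done here - in CXF the token is verified before any
    // claim is consulted.
    static String payload(String jwt) {
        String[] parts = jwt.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
    }

    // Naive claim extraction for illustration only; a real implementation
    // uses a proper JSON parser.
    static String claim(String json, String name) {
        Matcher m = Pattern.compile("\"" + name + "\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String header = Base64.getUrlEncoder().withoutPadding()
            .encodeToString("{\"alg\":\"RS256\"}".getBytes(StandardCharsets.UTF_8));
        String body = Base64.getUrlEncoder().withoutPadding()
            .encodeToString("{\"sub\":\"alice\",\"role\":\"boss\"}".getBytes(StandardCharsets.UTF_8));
        String jwt = header + "." + body + ".c2lnbmF0dXJl"; // fake signature part
        System.out.println(claim(payload(jwt), "role")); // boss
    }
}
```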
<br />
<b>2) JWT with CBAC</b><br />
<br />
Since CXF 3.3.0, we can also use the <a href="http://cxf.apache.org/docs/jax-rs-token-authorization.html#JAX-RSTokenAuthorization-Claimsbasedaccesscontrol">Claims</a> annotations in CXF (that previously only worked with SAML tokens) to perform authorization checks on requests that contain JWT tokens. This allows us to specify more fine-grained authorization requirements on the token, as opposed to the RBAC approach above. For example, we can annotate our service endpoint as follows:<br />
<br />
<script src="https://gist.github.com/coheigea/26c1f971e665ca96c150a3d09cecdb8c.js"></script>
Here we can see a "role" claim is required which must match either the value "boss" or "ceo". We can enable claims based authorization by adding the <a href="https://github.com/apache/cxf/blob/b8fc2da105029c10cf24ed344198fbefdd190648/rt/frontend/jaxrs/src/main/java/org/apache/cxf/jaxrs/security/ClaimsAuthorizingFilter.java">ClaimsAuthorizingFilter</a> as a provider of the endpoint, with the "securedObject" parameter being the service implementation:<br />
<br />
<script src="https://gist.github.com/coheigea/71da76d154a0de4eb311fb933a356004.js"></script>
We can specify multiple claims annotations and combine them in different ways; please see the CXF <a href="http://cxf.apache.org/docs/jax-rs-token-authorization.html#JAX-RSTokenAuthorization-Claimsbasedaccesscontrol">webpage</a> for more information. To see this functionality in action, look at the corresponding <a href="https://github.com/coheigea/testcases/blob/master/apache/cxf/cxf-jaxrs-jose/src/test/java/org/apache/coheigea/cxf/jaxrs/jwt/authorization/JWTAuthorizationClaimsTest.java">test-case</a> in my github repo.Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-5374801226753152652019-02-12T08:54:00.001-08:002019-02-12T08:54:30.376-08:00Deploying an Apache Camel route to Apache KarafIn the <a href="http://coheigea.blogspot.com/2019/02/using-apache-camel-kafka-component-with.html">previous blog post</a>, we showed how to use Apache <a href="http://camel.apache.org/">Camel</a> to query an Apache <a href="https://kafka.apache.org/">Kafka</a> broker, which is secured using kerberos. In this post, we will build on the previous blog post by showing how to deploy our Camel route to Apache <a href="http://karaf.apache.org/">Karaf</a>. Karaf is an application runtime container that makes it incredibly easy to deploy simple applications via its "hot deploy" feature. As always, there are a few slightly tricky considerations when using kerberos, which is the purpose of this post.<br />
<br />
As a pre-requisite to this article, please follow the <a href="http://coheigea.blogspot.com/2019/02/using-apache-camel-kafka-component-with.html">previous blog post</a> to set up Apache Kafka using kerberos, and test that the Camel route can successfully retrieve messages from the topic we created. <br />
<b><br /></b>
<b>1) Configuring the Kerberos JAAS Login Module in Karaf</b><br />
<br />
<a href="http://karaf.apache.org/download.html">Download</a> and extract the latest version of the Apache Karaf runtime (4.2.3 was used in this post). Before starting Karaf, we need to pass through a system property pointing to the krb5.conf file created in our Kerby KDC. This step is not necessary if you are using the standard location in the filesystem for krb5.conf. Open 'bin/karaf' and add the following to the list of system properties:<br />
<ul>
<li>-Djava.security.krb5.conf=/path.to.kerby.project/target/krb5.conf \</li>
</ul>
Now start Karaf via "bin/karaf". Karaf uses JAAS for authentication (see the documentation <a href="https://karaf.apache.org/manual/latest/security">here</a>). In the console, enter "jaas:" and hit 'tab' to see the possibilities. For example, "jaas:realm-list" displays the JAAS realms that are currently configured.<br />
<br />
Recall that our Camel route needs to configure a JAAS LoginModule for Kerberos. In the example given in the previous post, this was configured by setting the Java System property "java.security.auth.login.config" to point to the JAAS configuration file. We don't want to do that with Karaf, as otherwise we will end up overriding the other JAAS LoginModules that are installed.<br />
<br />
Instead, we will take advantage of Karaf's "hot deploy" <a href="http://karaf.apache.org/manual/latest/#_deployer">feature</a> to add the Kerberos Login Module we need to Karaf. Drop the following blueprint XML file into Karaf's deploy directory, changing the keytab location with the correct path to the keytab file:<br />
<br />
<script src="https://gist.github.com/coheigea/89ed9b4883d5c069e63b8f7e5d583b01.js"></script>
For Karaf to pick this up, we need to register the blueprint feature via "feature:install aries-blueprint". Now we should see our LoginModule configured via "jaas:realm-list":<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhiPY0FhzfS2wnnjJWVULtlqAJn4Wo1j_PNhkeVL-LI2eVadUt9m4PwpYQrsP3ga8PR-Yd7LU-nyPleAhpuy8Lk4F_JAKwKtOvIxKnZf2Ts0ezlQxcwD5UxkCMxHPVdLbLA3t8hEcI5fPjy/s1600/kafka-jaas.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="534" data-original-width="734" height="290" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhiPY0FhzfS2wnnjJWVULtlqAJn4Wo1j_PNhkeVL-LI2eVadUt9m4PwpYQrsP3ga8PR-Yd7LU-nyPleAhpuy8Lk4F_JAKwKtOvIxKnZf2Ts0ezlQxcwD5UxkCMxHPVdLbLA3t8hEcI5fPjy/s400/kafka-jaas.png" width="400" /></a></div>
<b><br /></b>
<b>2) Configuring the Camel route in Karaf</b><br />
<br />
Next we will hot deploy our Camel route as a blueprint file in Karaf. Copy the following file into the deploy directory:<br />
<br />
<script src="https://gist.github.com/coheigea/99103c02d24a0db136bdb0171f3d02f3.js"></script>
Then we need to install a few dependencies in Karaf. Add the Camel repo via "repo-add camel 2.23.1", and install the relevant Camel dependencies via: "feature:install camel camel-kafka". Our Camel route should then automatically start, and will retrieve the messages from the Kafka topic and write them to the filesystem, as configured in the route. The message payload and headers are logged in "data/log/karaf.log".Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-42418256788583032062019-02-07T10:15:00.003-08:002019-02-07T10:15:53.006-08:00Using the Apache Camel Kafka component with KerberosApache <a href="http://camel.apache.org/">Camel</a> is a well-known integration framework available at the Apache Software Foundation. It comes with a huge number of components to integrate with pretty much anything you can think of. Naturally, it has a dedicated component to communicate with the popular Apache <a href="http://kafka.apache.org/">Kafka</a> project. In this blog entry, we'll show first how to use Apache Camel as a consumer for a Kafka topic. Then we will show how to configure things when we are securing the Kafka broker with kerberos, something that often causes problems.<br />
<b><br /></b>
<b>1) Setting up Apache Kafka</b><br />
<br />
First let's set up Apache Kafka. <a href="https://kafka.apache.org/downloads">Download</a> and install it (this blog post uses Kafka 2.0.0), and then start up Zookeeper and the broker, as well as creating a "test" topic and a producer for that topic as follows:<br />
<ul>
<li>bin/zookeeper-server-start.sh config/zookeeper.properties</li>
<li>bin/kafka-server-start.sh config/server.properties</li>
<li>bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test</li>
<li>bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties</li>
</ul>
Type a few messages into the producer console to make sure that it is working.<br />
<b><br /></b>
<b>2) Consuming from Kafka using Apache Camel</b><br />
<br />
Now we'll look at how to set up Apache Camel to consume from Kafka. I put a project up on github <a href="https://github.com/coheigea/testcases/tree/master/apache/camel/camel-bigdata/camel-kafka">here</a> for this purpose. The Camel route is defined in Spring, and uses the Camel Kafka component to retrieve messages from the broker, and to write them out to the target/results folder:<br />
<script src="https://gist.github.com/coheigea/c329b1c96270e2c17432f7173a903e0b.js"></script>
Simply run "mvn clean install" and observe the logs indicating that Camel has retrieved the messages you put into the topic with the producer above. Then check "target/results" to see the files containing the message bodies.<br />
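As a rough sketch of what the gist contains (the linked gist is authoritative; the endpoint options here are illustrative), the route consumes from the "test" topic and writes each message body to a file:

```xml
<!-- Hypothetical sketch of the Camel route; see the linked gist for the real version -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <!-- Consume from the "test" topic on the local broker -->
        <from uri="kafka:test?brokers=localhost:9092&amp;groupId=camel-consumer"/>
        <!-- Write each message body out to the target/results folder -->
        <to uri="file:target/results"/>
    </route>
</camelContext>
```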
<br />
<b>3) Securing Apache Kafka with Kerberos</b><br />
<b><br /></b>
So far so good. Now let's look at securing the Kafka broker using kerberos. I wrote a previous <a href="http://coheigea.blogspot.com/2017/05/securing-apache-kafka-with-kerberos.html">blog post</a> to show how to use Apache Kerby as a KDC with Kafka, so please follow the steps outlined here, skipping the parts about configuring the consumer.<br />
<br /><b></b>
<b>4) Consuming from Kafka using Apache Camel and Kerberos</b><br />
<br />
To make our Camel route work with Kafka and Kerberos, a few changes are required. Just as we did for the Kafka producer, we need to set the "java.security.auth.login.config" and "java.security.krb5.conf" system properties for Camel. You can do this in the example by editing the "pom.xml" and adding something like this under "systemPropertyVariables" of the surefire configuration:<br />
<ul>
<li>&lt;java.security.auth.login.config&gt;/path.to.kafka.project/config/client.jaas&lt;/java.security.auth.login.config&gt;</li>
<li>&lt;java.security.krb5.conf&gt;/path.to.kerby.project/target/krb5.conf&lt;/java.security.krb5.conf&gt;</li>
</ul>
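Putting these together, the relevant part of the surefire configuration in the pom.xml might look something like the following (the paths are placeholders to replace):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <systemPropertyVariables>
            <!-- JAAS configuration with the Kafka client login entry -->
            <java.security.auth.login.config>/path.to.kafka.project/config/client.jaas</java.security.auth.login.config>
            <!-- krb5.conf generated by the Kerby KDC project -->
            <java.security.krb5.conf>/path.to.kerby.project/target/krb5.conf</java.security.krb5.conf>
        </systemPropertyVariables>
    </configuration>
</plugin>
```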
Replace the paths to Kafka and Kerby as appropriate (refer to the <a href="http://coheigea.blogspot.com/2017/05/securing-apache-kafka-with-kerberos.html">previous blog post</a> on Kafka + Kerberos if this does not make sense). Next we need to make some changes to the Camel route itself. Add the following parameters to the Kafka component URI in the Camel route:<br />
<ul>
<li>&amp;saslKerberosServiceName=kafka&amp;securityProtocol=SASL_PLAINTEXT</li>
</ul>
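The resulting Kafka endpoint URI in the Spring route file might then look something like this (the "groupId" option is illustrative; note that "&amp;" must be escaped as "&amp;amp;" in the XML file):

```xml
<from uri="kafka:test?brokers=localhost:9092&amp;groupId=camel-consumer&amp;saslKerberosServiceName=kafka&amp;securityProtocol=SASL_PLAINTEXT"/>
```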
Camel uses "GSSAPI" as the default SASL mechanism, and so we don't have to configure that. Now re-run "mvn clean install" and you will see the Camel route get a ticket from the Kerby KDC and consume messages successfully from the Kafka topic.Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-67625954523203855882019-02-06T03:50:00.001-08:002019-02-06T03:50:35.555-08:00Validating kerberos tokens from different realms in Apache CXFWe've covered on this blog <a href="http://coheigea.blogspot.com/2011/10/using-kerberos-with-web-services-part-i.html">before</a> how to configure an Apache <a href="http://cxf.apache.org/">CXF</a> service to validate kerberos tokens. However, what if we have a use-case where we want to have multiple endpoints validate kerberos tokens that are in different realms? As Java uses system properties to configure kerberos, things can get a bit tricky if we want to co-locate the services in the same JVM. In this article we'll show how it's done.<br />
<b><br /></b>
<b>1) The test scenario</b><br />
<br />
The scenario is that we have two KDCs. The first KDC has realm "realma.apache.org", with users "alice" and "bob/service.realma.apache.org". The second KDC has realm "realmb.apache.org", with users "carol" and "dave/service.realmb.apache.org". We have a single service with two different endpoints - one which will authenticate users in "realma.apache.org", and the second that will authenticate users in "realmb.apache.org". Both endpoints have keytabs that we have exported from the KDC for "bob" and "dave".<br />
<br />
<b>2) Kerberos configuration</b><br />
<br />
Both endpoints have to share the same Kerberos configuration, because Java uses system properties to set up JAAS with the Krb5LoginModule. We need to set the following system properties:<br />
<ul>
<li>java.security.auth.login.config - The path to the JAAS configuration file for the Krb5LoginModule</li>
<li>java.security.krb5.conf - The path to the krb5.conf kerberos configuration file</li>
</ul>
The JAAS configuration file for our service looks like the following:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFwyl8cyx53zfaQ01OwL3d2vnfN851SzpJd8u26nj7cpq28wK8-f0v4agaMnouAl3CZkwQYjsPvlFr7F5M7uHhtU-C7kY_hc7dWQ38x3wM4fnfkONYBMLjtsqAcTodMTcLu38uC6zuBb8D/s1600/jaas.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="202" data-original-width="772" height="103" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFwyl8cyx53zfaQ01OwL3d2vnfN851SzpJd8u26nj7cpq28wK8-f0v4agaMnouAl3CZkwQYjsPvlFr7F5M7uHhtU-C7kY_hc7dWQ38x3wM4fnfkONYBMLjtsqAcTodMTcLu38uC6zuBb8D/s400/jaas.png" width="400" /></a></div>
<br />
Here we have two entries for "bob" and "dave", each pointing to a keytab file. Note that the principal contains the realm name. This is important as we have no default_realm in the krb5.conf file. The krb5.conf file looks like this:<br />
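A JAAS configuration along these lines (the keytab paths and login-module options here are illustrative; the screenshot above shows the actual file) would look like:

```
bob {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true storeKey=true
    keyTab="/path/to/bob.keytab"
    principal="bob/service.realma.apache.org@realma.apache.org";
};

dave {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true storeKey=true
    keyTab="/path/to/dave.keytab"
    principal="dave/service.realmb.apache.org@realmb.apache.org";
};
```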
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinFq0ub_0e8DNmBcptUWOYxeFu3x_uK5xVSDvq37z1DRzKRHO3fxLfG0xtpVJqdg6w-M9Sw1gnEbWoJtQ3rP3Ryq6Y0IPTGkCCA7tenXlo5ln-K1zwxf_zXFElcagp5RB9KWWyl_HmtjBF/s1600/krb5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="211" data-original-width="285" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinFq0ub_0e8DNmBcptUWOYxeFu3x_uK5xVSDvq37z1DRzKRHO3fxLfG0xtpVJqdg6w-M9Sw1gnEbWoJtQ3rP3Ryq6Y0IPTGkCCA7tenXlo5ln-K1zwxf_zXFElcagp5RB9KWWyl_HmtjBF/s1600/krb5.png" /></a></div>
<br />
Here we configure how to reach both KDCs for our different realms.<br />
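A krb5.conf along these lines (the KDC ports are illustrative; the screenshot above shows the actual file) lists both realms without setting a default_realm:

```
[realms]
    realma.apache.org = {
        kdc = localhost:12345
    }
    realmb.apache.org = {
        kdc = localhost:12346
    }
```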
<br />
<b>3) Service configuration</b><br />
<br />
Next, we'll look at how to configure the services. We will show how it's done for a JAX-WS service, but similar configuration exists for JAX-RS. The client will pass the kerberos token in a BinarySecurityToken security header in the message, according to the <a href="https://en.wikipedia.org/wiki/WS-Security">WS-Security</a> specs. We'll assume the service is using a WS-SecurityPolicy that requires a kerberos token (for more details see <a href="http://coheigea.blogspot.com/2011/10/using-kerberos-with-web-services-part-i.html">here</a>). Here is a sample spring configuration for an endpoint for "dave":<br />
<br />
<script src="https://gist.github.com/coheigea/58db6866a107102efb3e4d84f2b99005.js"></script>
We have a JAX-WS endpoint with a "ws-security.bst.validator" property which points to a <a href="https://github.com/apache/wss4j/blob/trunk/ws-security-dom/src/main/java/org/apache/wss4j/dom/validate/KerberosTokenValidator.java">KerberosTokenValidator</a> instance. This tells CXF to process a received BinarySecurityToken with the KerberosTokenValidator.<br />
<br />
The KerberosTokenValidator is configured with a CallbackHandler implementation, to supply a username and password (see <a href="https://github.com/apache/cxf/blob/master/systests/kerberos/src/test/java/org/apache/cxf/systest/kerberos/common/KerberosServicePasswordCallback.java">here</a> for a sample implementation). Note that this is normally not required when we have a keytab file, but it appears to be necessary when we do not define a default realm. The KerberosTokenValidator instance also defines the JAAS context name, as well as the fully qualified principal name. As this is in service name form, we have to set the property "usernameServiceNameForm" to "true" as well.<br />
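The linked CXF sample uses WSS4J's WSPasswordCallback; the sketch below approximates the same idea with the JDK's own PasswordCallback so that it is self-contained and dependency-free. The principal names and passwords are illustrative and must match your KDC setup:

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import java.util.HashMap;
import java.util.Map;

// Hypothetical CallbackHandler supplying the password for a given service principal.
// The real CXF sample uses WSS4J's WSPasswordCallback instead of PasswordCallback.
class KerberosServicePasswordCallback implements CallbackHandler {

    private final Map<String, String> passwords = new HashMap<>();

    KerberosServicePasswordCallback() {
        // Illustrative principals/passwords - align these with your KDC
        passwords.put("bob/service.realma.apache.org@realma.apache.org", "bob-password");
        passwords.put("dave/service.realmb.apache.org@realmb.apache.org", "dave-password");
    }

    @Override
    public void handle(Callback[] callbacks) {
        // First pass: find the principal name being asked about
        String name = null;
        for (Callback callback : callbacks) {
            if (callback instanceof NameCallback) {
                name = ((NameCallback) callback).getDefaultName();
            }
        }
        // Second pass: supply the matching password, if we know it
        for (Callback callback : callbacks) {
            if (callback instanceof PasswordCallback && name != null
                    && passwords.containsKey(name)) {
                ((PasswordCallback) callback).setPassword(passwords.get(name).toCharArray());
            }
        }
    }
}
```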
<br />
If we set up the endpoint for "bob" with similar configuration, then our krb5.conf doesn't need the "default_realm" property and we can successfully validate tickets for both realms.<br />
<br />Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-25818593073509706582018-09-21T03:33:00.000-07:002018-09-21T03:33:24.511-07:00Exploring Apache Knox - part VIIIThis is the eighth and final post in a series of blog posts exploring some of the security features of Apache <a href="http://knox.apache.org/">Knox</a>. The <a href="http://coheigea.blogspot.com/2018/09/exploring-apache-knox-part-vii.html">previous post</a> looked at how to authorize access to Apache Knox using Apache <a href="http://ranger.apache.org/">Ranger</a>. We have also <a href="http://coheigea.blogspot.com/2018/09/exploring-apache-knox-part-iv.html">previously looked</a> at how to achieve single sign-on using the Knox SSO service. In this post we will combine aspects of both, to show how we can use Knox SSO to achieve single sign-on for the Apache Ranger admin service UI.<br />
<br />
As a prerequisite to this tutorial, follow the <a href="http://coheigea.blogspot.com/2018/08/exploring-apache-knox-part-i.html">first tutorial</a> to set up and run Apache Knox.<br />
<br />
<b>1) Configure the Apache Knox SSO service</b><br />
<br />
First we'll make a few changes to the Apache Knox SSO Service to get it working with Apache Ranger. Copy "conf/topologies/knoxsso.xml" to "conf/topologies/knoxsso-ranger.xml". Change the "redirectToUrl" parameter in the "ShiroProvider" to redirect to "knoxsso-ranger" instead of "knoxsso". We also need to make some changes to the "KNOXSSO" service configuration, due to the fact that we have not configured the Ranger Admin Service to run on TLS. Change the "KNOXSSO" service in the topology file as follows (note: this should not be done in production as it is not secure to set "knoxsso.cookie.secure.only" to "false"):<br />
<script src="https://gist.github.com/coheigea/2334af96aad74305f5f080100ca8c715.js"></script>
Apache Ranger must be configured to trust the signing certificate of the Knox SSO service. In ${knox.home}/data/security/keystores, export the certificate from the jks file as follows (specifying the master secret as the password):<br />
<ul>
<li>keytool -keystore gateway.jks -export-cert -file gateway.cer -alias gateway-identity -rfc</li>
</ul>
<b>2) Configure Apache Ranger to use the Knox SSO service</b><br />
<br />
Next we'll look at configuring Apache Ranger to use the Knox SSO Service. Edit 'conf/ranger-admin-site.xml' and add/edit the following properties:<br />
<ul>
<li>ranger.truststore.file - ${knox.home}/data/security/keystores/gateway.jks</li>
<li>ranger.truststore.password - the truststore password</li>
<li>ranger.sso.enabled - true</li>
<li>ranger.sso.providerurl - https://localhost:8443/gateway/knoxsso-ranger/api/v1/websso</li>
<li>ranger.sso.publicKey - Edit gateway.cer we exported above and paste in the content between the BEGIN + END part here.</li>
</ul>
<b>3) Log in to the Ranger Admin Service UI using Knox SSO</b><br />
<br />
Now we're ready to log in to the Ranger Admin Service UI. Start Ranger via "sudo ranger-admin start" and open a browser at "http://localhost:6080". You will be re-directed to the Knox SSO login page. Login with credentials of "admin/admin-password". We will be redirected back to the Ranger Admin UI and logged in automatically as the "admin" user.<br />
<br />
<b>4) Some additional configuration parameters</b><br />
<br />
Finally, there are some additional configuration parameters we can set on both the Knox and Ranger sides. It's possible to enforce that the KNOX SSO (JWT) token has a required audience claim in Ranger, by setting the "ranger.sso.audiences" configuration parameter in "conf/ranger-admin-site.xml". The audience claim can be set in the "KNOXSSO" service configuration via the "knoxsso.token.audiences" configuration property. It is also possible to change the default signature algorithm by specifying "ranger.sso.expected.sigalg" in Ranger (for example "RS512") and "knoxsso.token.sigalg" in Knox.<br />
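For example, an audience claim could be enforced by adding a parameter like the following to the "KNOXSSO" service in the topology file, with a matching value in "ranger.sso.audiences" on the Ranger side (the audience value here is illustrative):

```xml
<param>
    <name>knoxsso.token.audiences</name>
    <value>ranger</value>
</param>
```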
<br />
<b> </b>Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0tag:blogger.com,1999:blog-7391783704166348052.post-36686086179747949612018-09-19T06:50:00.000-07:002018-09-19T06:50:51.697-07:00Exploring Apache Knox - part VIIThis is the seventh in a series of blog posts exploring some of the security features of Apache <a href="http://knox.apache.org/">Knox</a>. The <a href="http://coheigea.blogspot.com/2018/09/exploring-apache-knox-part-vi.html">previous post</a>
looked at how to achieve single sign-on using the Knox SSO service,
where the Knox SSO service was configured to authenticate the user to a third party SAML SSO provider. In this post we are going to move away from authenticating users, and look at how we can authorize access to Apache Knox using Apache <a href="http://ranger.apache.org/">Ranger</a>.<br />
<br />
As a prerequisite to this tutorial, follow the <a href="http://coheigea.blogspot.com/2018/08/exploring-apache-knox-part-i.html">first tutorial</a> to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from. <br />
<br />
<b>1) Install the Apache Ranger Knox plugin</b><br />
<br />
First we will install the Apache Ranger Knox plugin. <a href="http://ranger.apache.org/download.html">Download</a>
Apache Ranger and verify that the signature is valid and that the
message digests match. Now extract and build the source, and copy the resulting plugin
to a location where you will configure and install it:<br />
<ul>
<li>mvn clean package assembly:assembly -DskipTests</li>
<li>tar zxvf target/ranger-${version}-knox-plugin.tar.gz</li>
<li>mv ranger-${version}-knox-plugin ${ranger.knox.home}</li>
</ul>
Now go to ${ranger.knox.home} and edit "install.properties". You need to specify the following properties:<br />
<ul>
<li>POLICY_MGR_URL: Set this to "http://localhost:6080"</li>
<li>REPOSITORY_NAME: Set this to "KnoxTest". </li>
<li>KNOX_HOME: The location of your Apache Knox installation </li>
</ul>
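The relevant lines in "install.properties" would then read something like this (the Knox path is a placeholder):

```
POLICY_MGR_URL=http://localhost:6080
REPOSITORY_NAME=KnoxTest
KNOX_HOME=/path/to/knox
```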
Save "install.properties" and install the plugin as root via "sudo
./enable-knox-plugin.sh". The Apache Ranger Knox plugin should now be
successfully installed. One thing to check for is that the user who is running Apache Knox has the correct permissions to read the policy cache ("/etc/ranger/KnoxTest/policycache"). Now restart Apache Knox before proceeding.<br />
<br />
<b>2) Create a topology in Apache Knox for authorization</b><br />
<br />
Even though we have installed the Apache Ranger plugin in Knox, we need to enable it explicitly in a topology. Copy "conf/topologies/sandbox.xml" to "conf/topologies/sandbox-ranger.xml" and add the following provider:<br />
<script src="https://gist.github.com/coheigea/49f84fd72d3176e9806d91d474edfcc4.js"></script>
Now let's try to access the file using the admin credentials:<br />
<ul>
<li>curl -u admin:admin-password -kL https://localhost:8443/gateway/sandbox-ranger/webhdfs/v1/data/LICENSE.txt?op=OPEN</li>
</ul>
You should get a 403 Forbidden error due to an authorization failure.<br /><br />
<b>3) Create authorization policies in the Apache Ranger Admin console</b><br />
<br />
Next we will use the Apache Ranger admin console to create authorization policies for Apache Knox. Follow the steps in <a href="http://coheigea.blogspot.com/2016/07/installing-apache-ranger-admin-ui.html">this tutorial</a> to install the Apache Ranger admin service. Before starting the Ranger admin service, edit 'conf/ranger-admin-site.xml' and add the following properties:<br />
<ul>
<li>ranger.truststore.file - ${knox.home}/data/security/keystores/gateway.jks</li>
<li>ranger.truststore.password - security</li>
</ul>
Start the Apache Ranger admin
service with "sudo ranger-admin start" and open a browser and go to
"http://localhost:6080/" and log on with "admin/admin". Add a new Knox service in the Ranger admin UI with the following configuration values:<br />
<ul>
<li>Service Name: KnoxTest</li>
<li>Username: admin</li>
<li>Password: admin-password</li>
<li>knox.url: https://localhost:8443/gateway/admin/api/v1/topologies</li>
</ul>
Now click on the "KnoxTest" service
that we have created. Click on the policy that is automatically created, and note that the "admin" user already has the "Allow" permission for all Knox topologies and services. Wait for the policy to sync to the plugin, and the curl call we executed above should now work:<br />
<ul>
<li>curl -u admin:admin-password -kL https://localhost:8443/gateway/sandbox-ranger/webhdfs/v1/data/LICENSE.txt?op=OPEN</li>
</ul>
whereas using the "guest" credentials ("guest"/"guest-password") should be denied, as we have not created a matching authorization policy in Ranger. Colm O hEigeartaighhttp://www.blogger.com/profile/10711987281965801793noreply@blogger.com0