Friday, April 14, 2023

Open Source Software Composition Analysis

Software Composition Analysis (SCA) is the process of figuring out which third-party dependencies are used in your project. It's an essential part of the software security process as it helps you to answer questions like:

  • Does my project contain third-party dependencies with known vulnerabilities (CVEs)?
  • Does my project contain third-party dependencies with risky licenses?
  • Does my project comply with all legal requirements imposed by the upstream projects?

In this post we'll look at some popular open-source SCA options. It's not intended to be comprehensive, so let me know if I missed anything! Adding one or more of these tools to your CI/CD pipeline will really improve your supply chain security posture.

GitHub Dependabot 

If your project is hosted on GitHub, then the first port of call for SCA is to enable GitHub Dependabot. You have the option to just enable alerts, which let you know if your dependencies have known CVEs, or to also have Dependabot automatically create pull requests that upgrade the dependencies in question to fix the vulnerability. Adding CI via GitHub Actions to verify that the updates don't break the build or tests makes fixing CVEs a straightforward process. Dependabot supports a wide range of software ecosystems.

GitHub also recently added support for downloading an SPDX SBOM for a repository, e.g. for Apache CXF: https://github.com/apache/cxf/dependency-graph/sbom.
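The SBOM can also be retrieved from the GitHub REST API. For example, something like the following (a sketch using the documented dependency-graph endpoint; an authentication token may be required depending on the repository and rate limits):

  • curl -L -H "Accept: application/vnd.github+json" https://api.github.com/repos/apache/cxf/dependency-graph/sbom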

OWASP Dependency-Check

OWASP Dependency-Check is another tool that can help you find CVEs in your dependencies. It's useful as an alternative to Dependabot if you don't have access to the Security tab of a GitHub project, or if Dependabot is otherwise not enabled. You can run it on a Maven project via:

  • mvn org.owasp:dependency-check-maven:check
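To gate a CI build on the results, the plugin's "failBuildOnCVSS" parameter can be used to fail the build if a CVE at or above a given CVSS score is found, e.g.:

  • mvn org.owasp:dependency-check-maven:check -DfailBuildOnCVSS=7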

Trivy

Aqua Trivy is a really useful tool for SCA as it can help with a wide range of scenarios:

  • Scan a docker image for CVEs/Secrets: trivy image tomcat:9.0 
    • Exclude secret scanning: trivy --security-checks vuln image tomcat:9.0
    • Exclude OS level CVEs: trivy --security-checks vuln --vuln-type library image tomcat:9.0
  • Scan a GitHub repository: trivy repository https://github.com/apache/cxf
  • Scan the filesystem at the current working directory: trivy fs .
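Trivy also works well as a CI/CD gate. For example, something like the following (a sketch using the --exit-code and --severity flags) returns a non-zero exit code if any HIGH or CRITICAL CVEs are found:

  • trivy image --exit-code 1 --severity HIGH,CRITICAL tomcat:9.0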

Syft

Anchore Syft is a tool that can help you generate an SBOM from an image or filesystem:

  • Generate a CycloneDX SBOM from a docker image: syft -o cyclonedx-json tomcat:9.0
  • Generate an SBOM from a war file: syft packages ./fedizhelloworld.war
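To feed the output into another tool, the SBOM can simply be redirected to a file, which we'll use with Grype below:

  • syft -o cyclonedx-json tomcat:9.0 > sbom.json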

Grype

Anchore Grype is another super-useful tool that works well with Syft:

  • Scan the current working directory for CVEs: grype dir:.
  • Scan a docker image for CVEs: grype tomcat:9.0
  • Scan a CycloneDX SBOM produced by Syft for CVEs: grype sbom:./sbom.json
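Like Trivy, Grype can be used to gate a CI build. For example, something like the following (a sketch using the --fail-on flag) exits with a non-zero code if a CVE of severity "high" or above is found:

  • grype sbom:./sbom.json --fail-on high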

OSV-Scanner

Yet another tool is Google's OSV-Scanner:

  • Scan a docker image for CVEs: osv-scanner --docker tomcat:9.0
  • Scan the local filesystem for CVEs: osv-scanner -r .
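OSV-Scanner can also target a specific lockfile, e.g. (the lockfile path here is just an illustration):

  • osv-scanner --lockfile=./package-lock.json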
     

Thursday, March 16, 2023

OpenSSF Allstar

In the previous blog post, I looked at how to use OpenSSF Scorecard to improve the security posture of your open-source GitHub projects. This is a really useful tool when working at the level of individual repositories. However, what if you want to apply security policies to many repositories in a GitHub organization? This is where OpenSSF Allstar comes in.

Getting Started

Detailed installation instructions are available here. The easiest way of getting started is to install the OpenSSF Allstar GitHub app in your organization. However, you may not wish to grant this instance access to your internal/private repositories, in which case it's pretty easy to install it manually.

General Configuration

Allstar reads configuration from a GitHub repo called ".allstar" in your GitHub organization. Here an "allstar.yaml" file defines the general configuration for the tool, e.g.:
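A minimal sketch of such a configuration (assuming the current Allstar operator schema) might look like:

  optConfig:
    optOutStrategy: true
    optOutArchivedRepos: true
    optOutForkedRepos: true
  disableRepoOverride: true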

This configuration uses the "opt out" strategy, meaning that all repositories in the organization are included unless you explicitly opt them out. Archived and forked repos are excluded, as you may not care about applying security policies to these types of repositories. Finally, the configuration blocks individual repositories from overriding the Allstar configuration.

Policies

Allstar policies are added by checking in the corresponding yaml file to the .allstar repository. Each policy allows you to define whether to just log the issue or whether to create a GitHub issue for it in the repository where a policy violation was found. GitHub issues are labelled with "allstar", making it easy to search for them across all repositories in your organization.

Here are some of the policies Allstar currently supports:

  • binary_artifacts.yaml: Enforce that binary artifacts aren't checked in to source control.
  • branch_protection.yaml: Enforce branch protection requirements on repos, for example:
    • Default branches are covered by branch protection
    • Approval is required for pull requests
    • Force pushes are blocked
    • Branches must be up to date before merging
  • dangerous_workflow.yaml: Flag dangerous patterns in GitHub Actions workflows.
  • outside.yaml: Enforce that outside collaborators can't be an admin on a repository. 
  • security.yaml: Enforce that repositories have a security policy. I use it with "optOutPrivateRepos: true" to only apply this policy to public repos (see the sketch below). This helps to let external users of your software know how to report security issues to the project.
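For example, a security.yaml along these lines (a sketch assuming the Allstar policy schema, with "action: issue" telling Allstar to create a GitHub issue when a violation is found):

  optConfig:
    optOutPrivateRepos: true
  action: issue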

Contributions

I've found Allstar pretty useful and, in the spirit of open source, submitted a few contributions that were included in the recent v3.0 release.

Tuesday, February 21, 2023

OpenSSF Scorecard

OpenSSF Scorecard is a tool that assesses your project against a number of security best practices and assigns a score (out of 10). It is a really useful thing to run on any open-source project you contribute to, to try to improve the overall security posture of the project, or even to assess the security of a third-party project you might want to use. In this post I'll describe how I used OpenSSF Scorecard to improve the security posture of a number of ASF projects I contribute to.

Getting Started

The first step is to install the OpenSSF Scorecard GitHub Action. This can be done in the GitHub dashboard, by going to "Actions", then "New Workflow" and searching for "OpenSSF Scorecard". Once this is committed to source control and runs successfully, the findings appear in the GitHub dashboard under "Security" and then "Code scanning". After the first run, you can add a Scorecard badge to the README of your project to display the current score. For example, for Apache Santuario.
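The badge itself is just a markdown image link in the README. A sketch of what this looks like (assuming the api.securityscorecards.dev badge URL format; the repository path shown for Santuario is an assumption):

  [![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/apache/santuario-xml-security-java/badge)](https://api.securityscorecards.dev/projects/github.com/apache/santuario-xml-security-java)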

Improving the score

After doing the initial run to get the base score, it's time to try to improve the score a bit. Here are some of the actions I performed:

  • Enable Dependabot. This involves adding dependabot.yml (for example) to your project to automatically create PRs for updated dependencies. As in the example, it should cover both the package ecosystem of the project (e.g. Maven) and GitHub Actions, to keep any GitHub Actions up to date as well (see the sketch after this list).
  • Automated builds. Any pull request should have the full suite of project tests run on it before being merged. I made sure that all of the projects had Jenkins jobs set up to build both maintained branches whenever new commits were made, as well as dedicated jobs to run on PRs. Note that at the ASF, the dependabot user needs to be explicitly allow-listed in a .asf.yaml file for Jenkins jobs to run automatically on submitted PRs. The combination of Dependabot and automated builds makes it easy to have confidence in automatically updating your project dependencies, assuming a good test-suite.
  • Adding CodeQL (and fixing the findings). CodeQL is a SAST tool that can be run on your project via a GitHub action by searching for "CodeQL". It should be run on the maintained branches of the project, as well as on any pull requests for the maintained branches.
  • Adding SECURITY.md. A SECURITY.md (for example) should be added to source control to describe the supported versions of the project, and how to submit security issues.
  • Pin GitHub action commits. It's best practice to pin GitHub action commits so that new updates don't break your project or even introduce a security regression. https://app.stepsecurity.io/securerepo can be used as a tool to analyse the GitHub actions of your project and to create pull requests with the correct versions pinned. Dependabot is then clever enough to be able to update your GitHub actions based on the pinned commit.
  • Adding OpenSSF Best Practices Badge. https://bestpractices.coreinfrastructure.org/en allows you to obtain a best practices badge for your project and to embed it in the README.
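A minimal dependabot.yml sketch covering both of the ecosystems mentioned above (the weekly schedule is an assumption; tune it to taste):

  version: 2
  updates:
    - package-ecosystem: "maven"
      directory: "/"
      schedule:
        interval: "weekly"
    - package-ecosystem: "github-actions"
      directory: "/"
      schedule:
        interval: "weekly"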

ASF Projects

Here are some of the ASF projects I applied the above to, and their current OpenSSF Scorecard result at the time of writing:

The remaining issues that could be addressed to improve the score are as follows:

  • No fuzzing. https://google.github.io/oss-fuzz/ could be used to fuzz the projects.
  • No branch protection. Branch protection is not enabled on these projects, as traditionally we have followed a CTR (Commit-Then-Review) approach to development. OpenSSF Scorecard also penalises committing directly to the main branch without an approved PR, so adding branch protection would greatly improve the score of all projects above.
  • No packaging. OpenSSF Scorecard's packaging check doesn't support Maven Central, which is where the releases of all the above projects go.
  • No signed releases. Again OpenSSF Scorecard doesn't check Maven Central for signed releases.

 


Wednesday, December 14, 2022

New Apache CXF releases and CVEs published

Apache CXF has released versions 3.5.5 and 3.4.10. Notable security upgrades in these releases include picking up a fix for CVE-2022-40152 in Woodstox, and a fix for CVE-2022-40150 in Jettison. In addition, two new CVEs were published for issues found directly in Apache CXF itself:

  • CVE-2022-46363: Apache CXF directory listing / code exfiltration. A vulnerability in Apache CXF before versions 3.5.5 and 3.4.10 allows an attacker to perform a remote directory listing or code exfiltration. The vulnerability only applies when the CXFServlet is configured with both the static-resources-list and redirect-query-check attributes. These attributes are not supposed to be used together, and so the vulnerability can only arise if the CXF service is misconfigured.
  • CVE-2022-46364: Apache CXF SSRF Vulnerability. A SSRF vulnerability in parsing the href attribute of XOP:Include in MTOM requests in versions of Apache CXF before 3.5.5 and 3.4.10 allows an attacker to perform SSRF style attacks on webservices that take at least one parameter of any type.

Thanks to thanat0s from Beijin Qihoo 360 adlab for reporting both issues to the project. The first issue is not really applicable in practice, as it only arises from a misconfiguration. For the second issue, we restricted following MTOM URLs to message attachments only by default. This can be controlled via a new property "org.apache.cxf.attachment.xop.follow.urls" (which of course defaults to false).
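If a service genuinely needs the old behaviour, the property can be set as a contextual property on the endpoint. Here is a minimal sketch (the GreeterImpl service bean and the address are hypothetical placeholders):

  import java.util.HashMap;
  import java.util.Map;
  import org.apache.cxf.jaxws.JaxWsServerFactoryBean;

  public class FollowXopUrlsServer {
      public static void main(String[] args) {
          JaxWsServerFactoryBean factory = new JaxWsServerFactoryBean();
          // Hypothetical service bean and address - replace with your own.
          factory.setServiceBean(new GreeterImpl());
          factory.setAddress("http://localhost:9000/greeter");
          // Opt back in to following XOP href URLs (defaults to false after the fix).
          Map<String, Object> props = new HashMap<>();
          props.put("org.apache.cxf.attachment.xop.follow.urls", Boolean.TRUE);
          factory.setProperties(props);
          factory.create();
      }
  }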

Monday, September 20, 2021

New CVE (CVE-2021-40690) released for Apache Santuario - XML Security for Java

A new CVE has been released for Apache Santuario - XML Security for Java which is fixed in the latest 2.2.3 and 2.1.7 releases:

  • Bypass of the secureValidation property (CVE-2021-40690) - All versions of Apache Santuario - XML Security for Java prior to 2.2.3 and 2.1.7 are vulnerable to an issue where the "secureValidation" property is not passed correctly when creating a KeyInfo from a KeyInfoReference element. This allows an attacker to abuse an XPath Transform to extract any local .xml files in a RetrievalMethod element.

As part of this fix, we no longer allow unsigned References to "http" or "file" URIs. This is controlled by a new system property:

  • org.apache.xml.security.allowUnsafeResourceResolving
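If you really need the legacy behaviour, it can be re-enabled by passing the property to the JVM. Note that this weakens the fix, so only use it if you understand the implications:

  • -Dorg.apache.xml.security.allowUnsafeResourceResolving=true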

The next major release (2.3.0) won't support "http" or "file" URIs by default even when they are signed; it will be necessary to manually add the ResourceResolvers instead (for example).

An important point is to make sure that you are setting the "secure validation" property to "true" in your project. We have decided to enable the "secure validation" property by default in the next major release (2.3.0).
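For reference, here is a minimal sketch of enabling secure validation when verifying a signature via the standard JSR-105 API:

  import java.security.PublicKey;
  import javax.xml.crypto.dsig.XMLSignature;
  import javax.xml.crypto.dsig.XMLSignatureFactory;
  import javax.xml.crypto.dsig.dom.DOMValidateContext;
  import org.w3c.dom.Element;

  public class SecureValidationExample {
      // Sketch: verify a signature with secure validation enabled.
      public static boolean verify(PublicKey publicKey, Element signatureElement) throws Exception {
          DOMValidateContext valContext = new DOMValidateContext(publicKey, signatureElement);
          // Enable secure validation (the default from Santuario 2.3.0 onwards).
          valContext.setProperty("org.jcp.xml.dsig.secureValidation", Boolean.TRUE);
          XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
          XMLSignature signature = fac.unmarshalXMLSignature(valContext);
          return signature.validate(valContext);
      }
  }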

We would like to thank An Trinh for alerting us to this security issue.

Monday, June 29, 2020

Configuring Kerberos for Kafka in Talend Open Studio for ESB

A few years back I wrote a blog post about how to create a job in Talend Open Studio for Big Data to read data from an Apache Kafka topic using Kerberos. This job made use of the "tKafkaConnection" and "tKafkaInput" components. In Talend Open Studio for ESB, there is a component based on Apache Camel called "cKafka" that can also be used for the same purpose, but configuring it with Kerberos is slightly different. In this post, we will show how to use the cKafka component in Talend Open Studio for ESB to read from a Kafka topic using Kerberos.

1) Kafka setup

Follow a previous tutorial to set up an Apache Kerby based KDC testcase and to configure Apache Kafka to require Kerberos for authentication. Kafka 2.5.0 was used for the purpose of this tutorial. Create a "test" topic and write some data to it, and verify with the command-line consumer that the data can be read correctly.
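For example, verification with the console consumer looks something like the following (a sketch; it assumes a consumer.properties configured for SASL_PLAINTEXT as in the referenced tutorial):

  • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties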

2) Download Talend Open Studio for ESB and create a route

Now we will download Talend Open Studio for ESB (7.3.1 was used for the purposes of this tutorial). Unzip the file when it is downloaded and then start the Studio using one of the platform-specific scripts. It will prompt you to download some additional dependencies and to accept the licenses. Right click on "Routes" and select "Create Route", entering a name for the route.

In the search bar under "Palette" on the right hand side enter "kafka" and hit enter. Drag the "cKafka" component that should appear into the route designer. Next find the "cLog" component under "Miscellaneous" and drag this to the right of the "cKafka" component. Right click the "cKafka" component and select "Row / Route" and connect the resulting arrow with the "cLog" component.

 
3) Configure the components

Now let's configure the individual components. Double-click on the "cKafka" component and enter "test" for the topic. Next, select "Advanced Settings" and scroll down to the Kerberos configuration. For "Kerberos Service Name" enter "kafka". Then for "Security Protocol" select "SASL over Plaintext".


Next click on the "Run" tab and go to "Advanced Settings". Under "JVM Settings" select the checkbox for "Use specific JVM arguments", and add new arguments as follows:
  • -Djava.security.auth.login.config=<path.to.kafka>/config/client.jaas 
  • -Djava.security.krb5.conf=<path.to.kerby.project>/target/krb5.conf
For the first argument, you need to enter the path of the "client.jaas" file as described in the tutorial to set up the Kafka test-case. For the second argument, you need to specify the path of the "krb5.conf" file supplied in the target directory of the Apache Kerby test-case.
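For reference, the "client.jaas" file looks something like the following (a sketch; the keytab path and principal are hypothetical and depend on your KDC setup):

  KafkaClient {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      keyTab="/path/to/client.keytab"
      principal="client@EXAMPLE.COM";
  };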

Now we are ready to run the job. Click on the "Run" tab and then hit the "Run" button. Send some data via the producer to the "test" topic and you should see the data appear in the Run Window in the Studio.

Wednesday, May 27, 2020

SSH improvements in Apache Karaf

Last year I contributed a number of SSH improvements to Apache Karaf, which I never found the time to blog about. In this post I'll cover how to use SSH with Apache Karaf, and also what the improvements were.

1) Using SSH with Apache Karaf

Download and extract Apache Karaf (4.2.8 was used for the purposes of this post). Start it by running "bin/karaf". By default, Karaf starts an SSH service which is configured in 'etc/org.apache.karaf.shell.cfg'. Here you can see that the default port is 8101. Karaf uses JAAS to authenticate SSH credentials - the default realm is "karaf". Associated with this realm is a PropertiesLoginModule, which authenticates users against the credentials stored in 'etc/users.properties'. Also note that the user must have a group defined that matches the value for "sshRole" in 'etc/org.apache.karaf.shell.cfg'. So let's try to SSH into Karaf using the default admin credentials, and it should work:
  • ssh karaf@localhost -p 8101

2) SSH algorithm update

The first improvement, which was merged for the 4.2.7 release, was to remove support by default for a number of outdated algorithms:
  • SHA-1 algorithms were removed
  • CBC ciphers were removed
  • Old ciphers such as 3-DES, Blowfish, Arcfour were removed
These can all be configured in 'etc/org.apache.karaf.shell.cfg' if necessary. The configuration values + defaults are now as follows:
  • ciphers = aes256-ctr,aes192-ctr,aes128-ctr
  • macs = hmac-sha2-512,hmac-sha2-256
  • kexAlgorithms = ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
So for example, the following now fails with Karaf 4.2.8 using the default configuration:
  • ssh karaf@localhost -p 8101 -c 3des-cbc
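If you really need one of the legacy algorithms, it can be explicitly added back in 'etc/org.apache.karaf.shell.cfg', e.g. (a sketch re-enabling an AES CBC cipher; not recommended):

  • ciphers = aes256-ctr,aes192-ctr,aes128-ctr,aes128-cbc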

3) Elliptic curve support for SSH server keys 

The second improvement, which was also merged for 4.2.7, was to add support to configure Karaf with an elliptic curve SSH key. Previously only RSA keys were supported. When you start Karaf, it will generate an SSH key if one does not already exist, according to the "algorithm" (RSA) and "keySize" (2048) defined in 'etc/org.apache.karaf.shell.cfg', and store it in the "hostKey" (etc/host.key) file. As part of the improvement, the public key is also written out to a new configuration property "hostKeyPub" (etc/host.key.pub).

To see this in action, delete 'etc/host.key.*' and edit 'etc/org.apache.karaf.shell.cfg' and change:
  • keySize = 256
  • algorithm = EC
Now restart Karaf + try to ssh in using the "-v" parameter. You will see something like: "debug1: Server host key: ecdsa-sha2-nistp256 SHA256:sDa1k...".

4) Support for elliptic keys in the PublicKeyLoginModule

As well as supporting authentication using a password via the PropertiesLoginModule, Karaf also supports authentication using a public key via the PublickeyLoginModule. The PublickeyLoginModule authenticates a public key for SSH by comparing it to keys stored in 'etc/keys.properties'. In Karaf 4.2.7 I added support for authenticating using elliptic curve keys stored in 'etc/keys.properties'; previously only RSA public keys were supported.

To see how this works, generate a new elliptic curve key with an empty password:
  • ssh-keygen -t ecdsa -f karaf.id_ec
Now edit 'etc/keys.properties' and copy in the public key that was written to "karaf.id_ec.pub". For example:
  • colm=AAAAE2VjZHNhLXNoY...0=,_g_:sshgroup
  • _g_\:sshgroup = group,ssh
Now we can SSH into Karaf without a password prompt via:
  • ssh colm@localhost -p 8101 -i karaf.id_ec

5) Support for encrypted key password for SSH

Finally, I added support for encrypted key passwords for SSH. This change necessitated moving from not-yet-commons-ssl to BouncyCastle for parsing SSH keys, as the former does not support encrypted keys or newer security algorithms in general. As a result, encrypted key passwords for SSH are not available in Karaf 4.2.x, but will be in the next major release (4.3.0). Note as well that encrypted key passwords only work when Karaf is reading an externally generated encrypted private key.

To test this out, grab Karaf 4.3.x and generate a new RSA encrypted private key as follows (specifying a password of "security"):
  • openssl genpkey -out rsa.pem -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -aes256
Edit 'etc/org.apache.karaf.shell.cfg' and change it as follows:
  • hostKey = ${karaf.etc}/rsa.pem
  • hostKeyPassword = security
Before starting Karaf, it's also necessary to register BouncyCastle as a security provider. Edit 'etc/config.properties' and add:
  • org.apache.karaf.security.providers = org.bouncycastle.jce.provider.BouncyCastleProvider
Now copy the BouncyCastle provider jar (e.g. bcprov-jdk15on-1.65.jar) to lib/ext and restart Karaf. It should be possible then to SSH into Karaf.