28.03.2023 - Amulya Bhatia - 10 min read

Part 2: Tools in your arsenal - Understanding the value of Software Supply Chain Security

In the first part, we discussed the software supply chain in general, the possible attack vectors that exist, and the actions the community is taking, albeit in a rather isolated manner. In this part, I’ll lay out specific actions that can be taken in an organized manner and the tools/frameworks/guidelines that are useful along the way.

What actions need to be / are being taken? (Organized)

I’ll start with a short overview of the actions that should be taken and the tools/frameworks/guidelines that can be used to achieve them, before diving deeper into each of these later on.

Collect metadata for your supply chain

Collect metadata across your whole software supply chain, for example:

  • How was the software artifact generated?
  • Which source code was used for the build, which build tool was used, what was the build command, and what were its input and output parameters?
  • Which dependencies are included in the software artifact, where did they come from, and what security standards do they meet?
  • What was the result of the vulnerability scan, unit tests, integration tests etc.?
  • Who approved the artifact to be deployed on production, which specific policy engine was used and what decisions were made during the deployment?

You can collect any further information that fits your security standards and tells you more about your supply chain. Use SBOMs, SLSA provenance, VEX, detailed vulnerability scan reports and verification policy logs for this.

Establish trust across your supply chain

If you don’t have mechanisms to establish trust in your supply chain, you can never be sure whether the metadata you see was created by your CI/CD pipeline or planted by an attacker. Use sigstore/TUF for this.

Establish integrity of the metadata across your supply chain

Once the metadata is available, you’ll need to establish its integrity, i.e. ensure that it is tamper-proof. Without this, you can’t trust the correctness and accuracy of the metadata. Use in-toto and SLSA for this.

Establish and follow guidelines to continuously monitor your status

Adopt frameworks like SLSA and follow guidelines from OpenSSF to continuously monitor your security hygiene in the DevSecOps loop.

Now it’s time for a deeper dive into the above-mentioned tools/frameworks/guidelines.

SBOM

A Software Bill of Materials (SBOM) is an inventory of all components, including open-source, first-party and third-party components, that are part of your software artifact. Having such an inventory is important to be able to respond to attacks similar to the Log4j attack in 2021, as you can easily and rapidly search all of your artifacts for the ones that use a specific dependency or a specific version. This drastically reduces your MTTR (Mean Time To Recovery) for future security incidents involving exploitable vulnerabilities in any of the third-party/open-source components included in your deployed artifacts. SBOMs can be represented in many machine-readable formats, including JSON, XML, protobuf and YAML, which is helpful as they can be fed into your security pipeline for further assessment.

CycloneDX vs. SPDX

Two major players in this space are CycloneDX and SPDX. CycloneDX is run under the leadership of the Open Web Application Security Project (OWASP), whereas SPDX is a Linux Foundation project. CycloneDX offers more Bill of Materials capabilities than just SBOM, including SaaSBOM (services), HBOM (hardware), OBOM (operations) and VEX.
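Here is an illustrative, minimal example of an SBOM in JSON using CycloneDX (the serial number and the component listed are placeholder values):

{
   "bomFormat":"CycloneDX",
   "specVersion":"1.4",
   "serialNumber":"urn:uuid:3e671687-395b-41f5-a30f-a58921a69b79",
   "version":1,
   "components":[
      {
         "type":"library",
         "group":"org.apache.logging.log4j",
         "name":"log4j-core",
         "version":"2.17.1",
         "purl":"pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"
      }
   ]
}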

A deeper comparison of the two projects is beyond the scope of this article, but suffice it to say that you can very quickly include SBOM generation in your DevSecOps pipeline using either of these projects, as they both support mainstream languages like Java, Python, .NET and JavaScript. For example, you can simply include a Maven plugin in your Java projects to quickly generate an SBOM, as sketched below.
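As a minimal sketch, assuming the CycloneDX Maven plugin (coordinates, goal and output paths per its documentation), you can even invoke it ad hoc without changing your pom.xml:

# Generate a CycloneDX SBOM covering the project and all of its modules
mvn org.cyclonedx:cyclonedx-maven-plugin:makeAggregateBom

# Recent plugin versions write the SBOM to target/bom.json and target/bom.xml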

SBOMs are great, but they don’t include the information needed to respond to build tampering and attacks like SolarWinds and Codecov. SBOMs also don’t provide any way to verify the accuracy and correctness of the information included in them.

SLSA

Supply-chain Levels for Software Artifacts (SLSA) is a framework that allows you to measure, evaluate, continuously monitor and improve the security of your software supply chain. It provides guiding principles and a common terminology for both software producers and consumers: producers use SLSA to convey the maturity and security posture of their software supply chain, whereas consumers can make vendor decisions based on the security requirements specific to them.

SLSA defines four maturity levels for a project’s security practices, designed to be incremental and actionable. Before we jump deeper into these four levels, let me define two important terms used in SLSA:

  • Software Attestation - A software attestation is nothing more than authenticated metadata about software artifacts - the keyword here being authenticated.
  • Provenance - The primary idea behind provenance is to link a particular software artifact to its source code. Provenance is a type of software attestation stating that some build system (Jenkins, GitLab CI, etc.) produced one or more software artifacts. It is metadata about how a particular software artifact was created: for example, who started the build, what source code was used (git commit, repo, etc.), which build system and build steps were involved, and what the inputs and outputs of the build were.
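The following schematic example shows such a provenance (a SLSA v0.2 predicate) wrapped in an in-toto attestation statement, with the concrete field values elided: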
{
   // Standard Attestation fields
   "_type":"https://in-toto.io/Statement/v0.1",
   "subject":[
      {
         "..."
      }
   ],
   // SLSA Predicate definition
   "predicateType": "https://slsa.dev/provenance/v0.2",
   "predicate":{   
      "builder":{
         "id":"<URI>"
      },
      "buildType":"<URI>",
      "buildConfig":"Object",
      "invocation":{
         "configSource":"Object",
         "parameters":"Object",
         "environment":"Object"
      },
      "metadata":{
         "buildStartedOn":"<TIMESTAMP>",
         "buildFinishedon":"<TIMESTAMP>",
         "completeness":{
            "parameters":"Boolean",
            "environment":"Boolean",
            "materials":"Boolean"
         }
      },
      "materials":[
         {
            "uri":"<URI>",
            "digest":{
               "..."
            }
         }
      ]
   }
}

Four Levels of SLSA

  1. Level 1 - Basic Assurance: Achieving level 1 means that you can provide provenance for your software artifacts and follow basic development, build and release hygiene.
  2. Level 2 - Credible Assurance: Achieving level 2 means that not only can you provide provenance, but the provenance is authenticated. Furthermore, you have a dedicated build environment and version-controlled source code.
  3. Level 3 - Resilient Assurance: You achieve level 3 when you can provide non-falsifiable provenance, an isolated build environment, builds defined as code and verified source history.
  4. Level 4 - Maximum Assurance: This level requires two-party review of all changes and parameterless, hermetic builds, including complete dependencies.

The great thing about SLSA is that it gives you a prescribed path to follow towards maximum assurance. You can decide which level you want to, or can, achieve based on the resources and budget available to you. Recognizing and adopting SLSA in your organization would help you improve your software supply chain not only as a producer but also as a consumer of open-source projects. You can define internal guidelines that only allow the use of third-party software that has, for example, at least level 3, or a badge like the one shown below.

SLSA Level 3 badge

Being SLSA compliant will help you avert a lot of the attacks mentioned in the beginning, as you are able to link your software artifact back to its source code, including information about the build. As you achieve maximum assurance, you create isolated build environments that produce parameterless, hermetic builds, providing very high integrity for your system. You can use any one of the many provenance generators available to get started, and verify the generated provenance as sketched below.
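As an illustrative sketch, assuming an artifact and provenance produced by one of these generators and checked with the slsa-verifier CLI (flag names per its documentation; artifact, provenance and repository names are placeholders):

# Check that the artifact's provenance was produced by a trusted builder and
# matches the expected source repository
slsa-verifier verify-artifact myapp.tar.gz \
  --provenance-path myapp.intoto.jsonl \
  --source-uri github.com/example-org/myapp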

Important: The goal of SLSA isn’t just to provide build provenance; its use of in-toto attestations allows it to be extended to other software metadata as well, e.g. attestations for SBOMs, vulnerability reports, test reports, policy engines, VEX and lots more. This helps you standardize artifact metadata of all kinds, without it being specific to any producer or consumer.
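As a practical sketch of what this looks like, assuming sigstore’s Cosign client (covered in the next section) with a conventional key pair, and with the image reference being a placeholder:

# Wrap a CycloneDX SBOM in an in-toto attestation and attach it to an image
cosign attest --key cosign.key --type cyclonedx --predicate bom.json \
  registry.example.com/myapp:1.0.0

# Consumers can later verify and inspect the attached attestation
cosign verify-attestation --key cosign.pub --type cyclonedx \
  registry.example.com/myapp:1.0.0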

Sigstore

Now that we have mechanisms for capturing artifact metadata and guidelines/frameworks for ensuring its integrity, it’s time to establish trust across the whole supply chain using digital signatures - this is where sigstore (an OpenSSF project) comes in. Here is how they define their mission:

Sigstore aims to be for software artifact signing what Let’s Encrypt is to TLS

Any source code/repository maintainer who has ever dealt with managing keys (distribution, storage, etc.) knows how hard it is. They also have no way of knowing when their keys are compromised, leaving the door open for attacks. Sigstore aims to solve this major issue by taking away the pain of key management altogether: it lets you sign using identities such as OIDC identities (Google, Twitter, etc.) or workload identities, e.g. SPIFFE/SPIRE. This doesn’t mean that you can’t use your own keys with sigstore in an enterprise setting - it just gives you the option of not having to do key management yourself. Sigstore helps you not only with signing but also with verification of these signatures, using the following major components (a short signing/verification sketch follows the list):

  • Rekor: Rekor is a transparency log (similar to a blockchain, but not one) and a timestamping service that keeps signed metadata in a tamper-proof, searchable ledger. The complete key-signing life cycle for each user is recorded here, so anybody (an artifact consumer, an end user, etc.) can very easily verify a signature and establish trust. Furthermore, you can also use Rekor if you want to use your own keys for signing, which would be the case in most enterprise settings.
  • Fulcio: Fulcio is a root CA / OIDC PKI that issues short-lived certificates to any identity (such as an OIDC identity) that has been authorized, and publishes the whole transaction in the Rekor transparency log.
  • Multiple clients: Sigstore provides clients for all mainstream programming languages like Java, JavaScript, Python, Go, Ruby, etc. It lets you sign your git commits and tags using Gitsign, and it has a special client called Cosign which enables container/artifact signing, verification and storage in an OCI registry and supports in-toto/SLSA attestations. You can also connect Cosign to your cloud KMS or Kubernetes secrets and use it to sign your Helm charts, Tekton bundles, pretty much anything that is stored in a container registry - more information here.
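As a minimal sketch of keyless signing and verification with Cosign (flag names as in recent Cosign 2.x releases; the image reference and identity are placeholder values):

# Keyless signing: Cosign gets a short-lived certificate from Fulcio via an
# interactive OIDC login and records the signature in the Rekor log
cosign sign registry.example.com/myapp:1.0.0

# Verification pins the identity and OIDC issuer the certificate must match
cosign verify \
  --certificate-identity dev@example.com \
  --certificate-oidc-issuer https://accounts.google.com \
  registry.example.com/myapp:1.0.0

# The corresponding log entry can also be looked up directly in Rekor
rekor-cli search --email dev@example.com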

Sigstore is already supported by Helm charts and Kubernetes, and it has a policy controller that lets you ensure only images signed by someone you trust (your build system, Ops lead, manager, etc.) get deployed. It has also found support in other important corners of the software ecosystem: for example, npm plans to use sigstore to sign not only its artifacts but also its build provenance, and Maven Central and PyPI have also started adopting it.
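As an illustrative sketch of such a policy, assuming the policy controller’s policy.sigstore.dev/v1beta1 API (the registry glob, issuer and subject are placeholder values):

# Admit only images from our registry that carry a matching keyless signature
kubectl apply -f - <<'EOF'
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-signed-images
spec:
  images:
    - glob: "registry.example.com/**"
  authorities:
    - keyless:
        identities:
          - issuer: https://accounts.google.com
            subject: dev@example.com
EOF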

You can deploy a private sigstore instance or use the community-managed, publicly available instances of Rekor, Fulcio and the public OIDC issuer to get started.

TUF - The Update Framework

TUF solves the same problem as sigstore - signing and verification of software artifacts - but has a greater focus on resilience in case of a compromise: it aims to reduce the impact of a compromise and to enable secure recovery from one.

Sigstore vs. TUF

While sigstore allows signing using multiple techniques, including identities, and focuses on making signing and verification easier, TUF typically requires developers to manage keys themselves but minimizes the impact of a compromise through a design that allows for explicit and implicit key revocation, multi-signature trust and segregation of responsibilities across roles. Sigstore actually uses TUF as the root of trust for Fulcio and Rekor; furthermore, the releases of Fulcio, Rekor and Cosign are signed using TUF.
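As an illustrative sketch of this role segregation, assuming the go-tuf reference implementation’s CLI (command names per its README; the target file is a placeholder):

# Each top-level TUF role gets its own key, so compromising a single role's
# key (e.g. timestamp) is not enough to tamper with the targets metadata
tuf init
tuf gen-key root
tuf gen-key targets
tuf gen-key snapshot
tuf gen-key timestamp

# Stage a target file (placed under staged/targets first), then publish the
# signed metadata for all roles
tuf add hello-world.txt
tuf snapshot
tuf timestamp
tuf commit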

TUF has been in the wild for much longer than sigstore, while sigstore has seen great adoption recently because of its usability. The two projects are collaborating, and you may well see co-deployments in the future.

Implementations and Deployments

You can find TUF implementations in many mainstream programming languages, including Python, Go and Rust. TUF has seen great use in the automotive and IoT space, where secure OTA updates are needed: for example, Uptane and Aktualizr (a C++ implementation of Uptane), which is also used in Automotive Grade Linux.

Final Words

If you have managed to read this far, I hope I was able to convince you of the value of software supply chain security and to give you an overview of the tools that will help you along the journey. While none of these tools alone will give you the security posture you need, in combination they are a potent defense against attacks: they reduce your attack surface to a minimum while providing an easier path to remediation in case an attack does get through. Be sure to also follow the updates and guidelines from OpenSSF as this community grows.

Amulya Bhatia

Senior Solution Architect