Structured results for OpenSSF Scorecard - An enabler for custom policies

6th November, 2023
This is the first in a series of blog posts where we will cover contributions to the open source security analysis tool OpenSSF Scorecard.
Ada Logics are excited to be collaborating with the OpenSSF Scorecard maintainers on improving the tooling for open source security. In this post, we introduce an area we’ve focused on recently: enabling consumers to write custom policies based on OpenSSF Scorecard results. Our work on the OpenSSF Scorecard project is funded by Amazon AWS, and we would like to express our gratitude for this contribution. We see improving the open source security ecosystem as a positive-sum game and are happy to participate in this project.
What is OpenSSF Scorecard?
OpenSSF Scorecard is a set of open source introspection capabilities used for assessing the supply-chain risk of a given software project. OpenSSF Scorecard considers different aspects of a project’s development practices - including its workflows, commit history and source tree - to assess the security practices the project follows. These aspects are broken down into different checks that each focus on a particular problem in supply-chain security, such as fuzzing, SAST, contributor metrics and more. The community uses OpenSSF Scorecard in different ways: users can manually search for a project on https://deps.dev and see its OpenSSF Scorecard results. Another way is to integrate OpenSSF Scorecard into a project’s GitHub/GitLab workflows and add a badge to the repository. Additionally, users can use the official Scorecard GitHub Action to run Scorecard in their CI.
Increased Flexibility with Custom Policies
The OpenSSF Scorecard project is working towards a new way to use OpenSSF Scorecard with user-written policies that make decisions against OpenSSF Scorecard results. In this way, consumers can define their own policies using declarative YAML files to make automated decisions on the results of an OpenSSF Scorecard run. At a high level, a decision could be to require 3rd-party libraries to use automated packaging workflows and disallow projects that package their releases manually. The OpenSSF Scorecard consumer could add the policy to a policy engine and run it as part of their production pipeline. In addition, with the new way of using OpenSSF Scorecard, consumers will have direct access to the Scorecard data at a lower level and can implement their own checks instead of relying on Scorecard’s. Adopters can also customize scoring at a granular level to align it with their own threat model.
Making it easy to write custom OpenSSF Scorecard policies is an important step towards hardening the software supply-chain of OpenSSF Scorecard consumers. The OpenSSF Scorecard project implements a series of best practice heuristics designed by the open source security community, and consumers can adopt these as-is to integrate industry best practice supply-chain security mitigations into their production pipelines. In addition, custom policies can be used to develop and fulfil internal and public compliance requirements in consumers’ infrastructure.
A custom policy could look as follows:
```yaml
version: 1
statements:
  - name: fuzzing
    require:
      or:
        - probe: fuzzedWithOneFuzz
        - probe: fuzzedWithClusterFuzzLite
        - probe: fuzzedWithGoNative
        - probe: fuzzedWithOSSFuzz
        - probe: fuzzedWithPropertyBasedHaskell
    text:
      positive: The project is fuzzed using one of the fuzzers
      negative: Configure one of the recognized fuzzers
    risk: Medium
    labels: [check:Fuzzing]
    motivation: |
      Fuzzing, or fuzz testing, is the practice of feeding unexpected or random
      data into a program to expose bugs. Regular fuzzing is important to detect
      vulnerabilities that may be exploited by others, especially since attackers
      can also use fuzzing to find the same flaws.
```
This policy requires the project to be fuzzed continuously with OSS-Fuzz, OneFuzz or ClusterFuzzLite, or to have Go fuzzers or Haskell fuzzers in its source tree. The policy adds an explanation for other users of the same policy, which is useful for knowing the reason behind the policy in case an organisation distributes its policy writing efforts across multiple team members.
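To make the semantics of the policy concrete, the `or` requirement above can be sketched as a small evaluation function: given the outcomes of a Scorecard run, the statement passes if at least one of the listed probes returned true. This is an illustrative sketch only - the probe names mirror the policy, but the evaluation logic and data shapes are our own, not Scorecard's actual policy engine.

```go
package main

import "fmt"

// probeOutcomes maps probe names to their boolean results, as a Scorecard
// run might produce them. The values here are example data.
var probeOutcomes = map[string]bool{
	"fuzzedWithOneFuzz":              false,
	"fuzzedWithClusterFuzzLite":      false,
	"fuzzedWithGoNative":             true,
	"fuzzedWithOSSFuzz":              false,
	"fuzzedWithPropertyBasedHaskell": false,
}

// anyProbeTrue models the policy's "or" requirement: the statement passes
// if at least one of the required probes returned true.
func anyProbeTrue(outcomes map[string]bool, probes []string) bool {
	for _, p := range probes {
		if outcomes[p] {
			return true
		}
	}
	return false
}

func main() {
	required := []string{
		"fuzzedWithOneFuzz",
		"fuzzedWithClusterFuzzLite",
		"fuzzedWithGoNative",
		"fuzzedWithOSSFuzz",
		"fuzzedWithPropertyBasedHaskell",
	}
	if anyProbeTrue(probeOutcomes, required) {
		fmt.Println("fuzzing statement: pass")
	} else {
		fmt.Println("fuzzing statement: fail")
	}
}
```

With the example outcomes above, the statement passes because the `fuzzedWithGoNative` probe returned true.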
How we are enabling custom policies
As part of the initiative to make it easier for OpenSSF Scorecard consumers to consume OpenSSF Scorecard results through custom policies, Ada Logics are working on migrating the existing checks into structured results. Structured results are so-called “probes” that return either true or false for a given heuristic. The custom policy above checks five probes: fuzzedWithOneFuzz, fuzzedWithClusterFuzzLite, fuzzedWithGoNative, fuzzedWithOSSFuzz and fuzzedWithPropertyBasedHaskell; at least one of these must return true for the project to pass.
Probes are tiny modules that each make an assessment about a particular, specific angle of a supply-chain security problem. Existing Scorecard consumers are familiar with the supply-chain problems that Scorecard considers: SAST, fuzzing, repository branch protection etc. Probes are specific angles on each of these checks and can concern an ecosystem, a particular tool, or a specific quantity (amount of pull requests, amount of branches, amount of fuzz harnesses).
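Conceptually, a probe can be pictured as a small function that inspects the raw data Scorecard has collected about a repository and answers exactly one narrow question. The sketch below illustrates that shape; the type names (`RawResults`, `Outcome`) and the probe's internals are hypothetical simplifications, not the actual Scorecard probe API.

```go
package main

import "fmt"

// Outcome is the boolean-style result a probe reports for one heuristic.
type Outcome string

const (
	OutcomePositive Outcome = "Positive" // the heuristic holds for the project
	OutcomeNegative Outcome = "Negative" // the heuristic does not hold
)

// RawResults is a hypothetical stand-in for the raw repository data that
// Scorecard collects (file listings, workflows, commit metadata, etc.).
type RawResults struct {
	Fuzzers []string // fuzzing integrations detected in the repository
}

// fuzzedWithOSSFuzz sketches a probe: it makes one specific assessment
// about the raw data and nothing else.
func fuzzedWithOSSFuzz(raw RawResults) Outcome {
	for _, f := range raw.Fuzzers {
		if f == "OSSFuzz" {
			return OutcomePositive
		}
	}
	return OutcomeNegative
}

func main() {
	raw := RawResults{Fuzzers: []string{"GoNativeFuzzing", "OSSFuzz"}}
	fmt.Println("fuzzedWithOSSFuzz:", fuzzedWithOSSFuzz(raw))
}
```

Because each probe is this small and single-purpose, a policy can combine many of them with `or`/`and` logic without any probe needing to know about the others.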
Ada Logics have undertaken the work of migrating the existing checks - minus the first two checks that the OpenSSF Scorecard maintainers migrated - into probes. As of now, we have migrated 10 checks into 23 unique probes that are currently undergoing review in the OpenSSF Scorecard repository:
| Probe name | Pull request |
|---|---|
| CII Best Practices | https://github.com/ossf/scorecard/pull/3520 |
We are excited to be contributing to making it easier to consume and make decisions against OpenSSF Scorecard results. We would like to thank Amazon AWS for funding the work that will bring best practice supply-chain security standards to the wider open-source community. We would also like to thank the Linux Foundation for facilitating our involvement in the OpenSSF Scorecard project.
Expanding Fuzzing Probes
We have also expanded Scorecard’s support for identifying fuzzing in a given open source project. Prior to our extensions, Scorecard was able to capture whether a given project was fuzzed by way of:
- Fuzzing by way of OSS-Fuzz
- Fuzzing by way of ClusterFuzzLite
- Fuzzing by way of OneFuzz
- Go-native fuzzing
- Haskell-based property fuzzing by way of tools such as QuickCheck and Hedgehog
- TypeScript and JavaScript fuzzing by way of fast-check
There exist, however, many more fuzzing frameworks in common use, and we added probes for a set of these:
- A probe for fuzzing by way of libFuzzer for C code.
- A probe for fuzzing by way of libFuzzer for C++ code.
- A probe for fuzzing by way of libFuzzer for Swift code.
- A probe for fuzzing by way of Atheris for Python code.
- A probe for fuzzing by way of Cargo-fuzz for Rust code.
- A probe for fuzzing by way of Jazzer for Java code.
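To give a feel for what these probes look for, the sketch below shows one plausible heuristic for the libFuzzer case: a C/C++ libFuzzer harness defines the entry point `LLVMFuzzerTestOneInput`, so scanning source files for that symbol is a reasonable signal. This is our illustrative simplification; the actual Scorecard probes may rely on different or additional signals.

```go
package main

import (
	"fmt"
	"strings"
)

// looksLikeLibFuzzerHarness applies a simple textual heuristic: libFuzzer
// harnesses must define the LLVMFuzzerTestOneInput entry point, so its
// presence in a source file suggests the project has a fuzz harness.
func looksLikeLibFuzzerHarness(source string) bool {
	return strings.Contains(source, "LLVMFuzzerTestOneInput")
}

func main() {
	harness := `#include <stdint.h>
#include <stddef.h>
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    // exercise the code under test with the fuzzer-provided input
    return 0;
}`
	fmt.Println("libFuzzer harness detected:", looksLikeLibFuzzerHarness(harness))
}
```

The other engines lend themselves to analogous signals - for example, Cargo-fuzz projects carry a `fuzz/` directory with `fuzz_target!` macros, and Atheris harnesses import the `atheris` Python module - though, again, the precise detection logic lives in the probes themselves.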
Ada Logics have many years of experience developing and using fuzzing and program analysis tools, and we are excited to continue expanding Scorecard’s support in this domain. Specifically, in the near future we are planning to expand support for identifying fuzzing with more engines, as well as support for the large set of static analysis tools available.