finder-guide.md

Guidance for Security Researchers to Coordinate Vulnerability Disclosures with Open Source Software Projects

Congratulations! You found a security vulnerability

Now what? This guide is intended to help security researchers (aka “Finders”) engage with open source software (OSS) project maintainers to kick off and participate in the coordinated vulnerability response process.

Before you begin

While engaging in the vulnerability disclosure process, please keep this in mind: no software is perfect. Software is written by people (at least today), and people sometimes make mistakes. This is true for closed-source (proprietary) software as well as open source software (OSS). This problem can be more challenging for open source software due to the factors that make OSS so powerful: highly distributed development by multiple contributors. At some point in the project lifecycle, someone – a user, a contributor, a security researcher, and likely the reader of this guide – will find a vulnerability that affects the security and usefulness of the project. Applying this guide will help everyone involved to be prepared to respond quickly and effectively.

About this guide

This guide was produced through the contributions of individuals and the Open Source Security Foundation (OpenSSF) Vulnerability Disclosure Working Group. The group is composed of individuals with experience in vulnerability disclosure who have learned from common pitfalls and developed this guide as a best-effort starting and reference point for the community.

Disclosure can mean many things to different people based on their context within the process. Ultimately, security defects should be responsibly reported to software maintainers so they can evaluate and correct them with patches and some form of notification to downstream consumers. This group encourages coordinated vulnerability disclosure (CVD) as the appropriate model for most open source projects, so the advice in this guide follows that model.

The following information should be treated precisely for what it is: a set of guidelines that gives Finders a high-level view of the many options available before, during, and after a coordinated vulnerability disclosure. Not all advice here will apply to every open source project or every vulnerability disclosure event, so recommendations should be adjusted to fit the needs of each particular project and disclosure. The Working Group has previously released a guide focused on helping open source maintainers prepare for the CVD process and intake vulnerability reports.

What are your goals in reporting the vulnerability you discovered?

The following framework was developed by I Am The Cavalry and covers the major categories of motivation for researching security vulnerabilities. While not comprehensive, it offers a basic understanding of why a researcher may have looked for the issue; the same motivations often drive the decision to report the vulnerability.

  1. Protect – make the world a safer place. These researchers are drawn to problems where they feel they can make a difference.
  2. Puzzle – tinker out of curiosity. This type of researcher is typically a hobbyist driven to understand how things work.
  3. Prestige – seek pride and notability. These researchers often want to be the best or well-known for their work.
  4. Profit – to earn money. These researchers trade on their skills as a primary or secondary income.
  5. Protest/Patriotism – ideological and principled. These researchers, whether patriots or protestors, strongly support or oppose causes.

Who is an open source maintainer?

An open source maintainer is often a software engineer and contributor to an open source project. They typically have elevated permissions on code repositories that allow them to manage the repository's settings and write to the main branch. They're usually the ones who merge pull requests and patches to the main branch and who control what's included in a release, including security patches. OSS projects can have anywhere from one to many maintainers depending on their size, maturity, and interests; larger open source projects can even have dedicated individuals or teams that solely conduct security-related tasks for that community.

Not all open source projects are organized or developed exactly the same way. The majority of open source code out there is developed by groups of one or two people. As they gain external interest and contributions, many of these projects grow into a bigger community, perhaps even to the point of having a whole foundation supporting the developers and community. Some OSS projects are created by commercial vendors, but most are not. Some projects are bundled within a "distro," or distribution, that collects and curates the content provided by that group and ideally provides security support. Still, because many of these communities are made up of volunteers or professionals donating their time to a passion project or hobby, there is no guarantee of security support.

As the Finder interacts with the software maintainer, it is important to understand the project's level of organization and its capabilities, and to tailor interactions accordingly. For example, a two-person project may not have the testing infrastructure or tooling to perform security scanning and validation, whereas a project with distro or commercial support might have a dedicated team of trained application security engineers. While reporting a security flaw to either is the start of a good interaction, the former may require more involvement to see the fix through to completion, and the latter may introduce concepts like long embargoes or the need to involve more individuals in creating and testing remediation steps.

What are open source maintainers' motivations?

Much like finders and security researchers, open source maintainers have a variety of motivations for maintaining a project, including:

  • They are solving a problem or writing an academic project
  • They are having fun and/or learning a new skill or technology
  • They started this project as a hobby and aren't paid to maintain it
  • They are seeking recognition from their peers or feel maintaining the project will further their careers
  • They use open source as part of their jobs and feel strongly about giving back to the community
  • They started this project because they’re paid to do so
  • They use a library, and were the only ones to step up as a maintainer

It’s important to emphasize the volunteer capacity of open source maintainers and projects. Expectations for interactions should be set with the understanding that timelines may be longer than when working with a commercial entity. The human interaction during the submission/triage/remediation process will also likely differ: open source maintainers may have a more direct and technical communication style. Researchers should remember that maintainers are not taking this approach out of malice or disrespect. Instead, assume positive intent and keep in mind that the maintainer is working hard, often in a volunteer capacity, to keep their project supported and healthy.

To read more about open source contributors, the Linux Foundation published a report that details many aspects of FOSS maintainers and contributors.

What is Coordinated Vulnerability Disclosure?

Coordinated vulnerability disclosure (CVD) is the process of sharing vulnerability details with the person or group who has the ability to respond to, fix, and/or remediate the vulnerability. This is typically the open source project maintainer, but could include project developers, collaborators, administrators, or other invested parties.

The CVD process involves privately disclosing the vulnerability details, creating and testing a fix, and then disclosing the fix and details to all downstream consumers simultaneously. This coordination ensures that all required parties are prepared with fixes, communications, and updates at the same time. The benefit of disclosing a vulnerability through this method is that a fix is available to all consumers at once, and no one group is put more at risk than others.

Maintaining an information embargo during the mitigation and patching phase is key to protecting the users here.

Understanding how the project handles vulnerabilities (aka “Security Policy”)

Open source projects should set expectations for contributors and finders about how the project handles defect reports, both from the operational/quality perspective and when security vulnerabilities are discovered. This is typically communicated through a stated policy on how the project triages and addresses reports. A security policy should outline a project's handling expectations for vulnerability-related information. This policy typically includes:

  • Information about how vulnerability information should be reported to the project (email, issue tracker, etc.)
  • How sensitive information should be handled (encryption, TLP markers)
  • What a reporter can expect after submitting a vulnerability report (response timelines, potential handling decisions, aka “Bug Bar”)
  • Legal safe harbor (an explicit statement that promises no legal action will be taken against vulnerability reports made in good faith)

A security policy can take the form of a SECURITY.md file in a repository, a security.txt file on the project’s website, or information on the company’s or project’s website itself. These files typically include contact information for the project’s security team as well as the information listed above.
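As an illustration, a minimal SECURITY.md might look like the following. The contact address, URLs, and timelines here are hypothetical placeholders; each project should substitute its own.

```markdown
# Security Policy

## Reporting a Vulnerability

Please report suspected vulnerabilities privately to
security@example.org (PGP key: https://example.org/pgp-key.asc).
Do NOT open a public issue for security reports.

## What to Expect

- Acknowledgement of your report within 3 business days.
- A triage decision (accepted/declined) within 14 days.
- Coordinated public disclosure after a fix is released, typically within 90 days.

## Safe Harbor

We will not pursue legal action against researchers who report
vulnerabilities in good faith and within the scope of this policy.
```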

While a security policy should explain how a report should be submitted and how the project will handle it, there are many report intake methods through which a vulnerability report can be submitted. This detail should be included explicitly within the security policy. Vulnerability submission intakes can range from simply listing a security-related email address (where the reporter submits a new/potential vulnerability via the security@ email alias), a defect/issue tracker, or submission through a third-party VDP or bug bounty platform. Following a project's publicly available handling expectations will show that you’re open to collaborating on this issue and increases the chance that your issue will be addressed quickly.

Another component of a security policy is clarification around rewards. A Vulnerability Disclosure Program (VDP) typically does not include a reward structure for reported vulnerabilities. When a program offers financial rewards, it is often called a Bug Bounty Program (BBP). At their core, BBPs serve the same purpose as a VDP: facilitating a channel to report vulnerabilities to the project maintainers. They will have a clearly communicated security policy containing the same components as a VDP, with the possible addition of a clearly defined asset scope. In either case, coordinating the timing of any public disclosure is an essential practice.

No matter the intake method the project chooses, it is critical to thoroughly read the security policy when reporting vulnerabilities. Some policies include non-disclosure agreements (NDAs) that may prevent you from disclosing the details of the vulnerability to the public or a third party without permission from the maintainer or maintainer organization. While exceptionally rare for BBPs covering open source, you may encounter this type of agreement when working with a commercial entity that supports an open source project and runs a private or invitation-only BBP; these may occasionally be subject to stricter disclosure requirements. As with any agreement, read the terms carefully.

When you’re attempting to locate the vulnerability reporting method or security policy for a specific project, open source databases like disclose.io are another possible source to reference. If the project is part of the CVE program as a CVE Numbering Authority (CNA), then its contact information and security policy are included in the CVE Program’s List of Partners.

Additional Sources for Finding the CVD Channel of a project

If you cannot find any clearly defined security contact information, try these solutions:

  1. Publicly request a security contact using an issue tracker
  2. Reach out to the project maintainer, owner, or most active contributors via email, social media, or the project’s real-time chat system
  3. Understand if the project is part of a larger community or a Foundation that may have contacts or even a dedicated security team
  4. Reach out to commercial vendors that may embed or support the project
  5. Contact relevant mailing lists, such as oss-security, and state you have a private issue you wish to disclose to the project
  6. Reach out to third-party organizations like MITRE/CVE or CERT/CC for assistance in coordination

When using any of these solutions, avoid sharing sensitive information until you’ve set up an appropriate communication channel with the right people.
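When checking a project's website for a security contact, the security.txt convention (RFC 9116) places machine-readable contact details at `/.well-known/security.txt`. Below is a minimal sketch, assuming you have already fetched the file's contents; the parser and the example file body are illustrative, not an official tool.

```python
def parse_security_txt(body: str) -> dict:
    """Parse RFC 9116 security.txt fields into a dict of lists."""
    fields: dict[str, list[str]] = {}
    for line in body.splitlines():
        line = line.strip()
        # Skip blank lines, comments, and anything that isn't "Name: value"
        if not line or line.startswith("#") or ":" not in line:
            continue
        # partition() splits on the FIRST colon, so URL values stay intact
        name, _, value = line.partition(":")
        fields.setdefault(name.strip().lower(), []).append(value.strip())
    return fields

# Hypothetical example file body
example = """\
# Our security policy
Contact: mailto:security@example.org
Contact: https://example.org/report
Policy: https://example.org/security-policy
Expires: 2026-12-31T23:59:59Z
"""

info = parse_security_txt(example)
print(info["contact"])  # → ['mailto:security@example.org', 'https://example.org/report']
```

Fields like `Contact` may repeat, which is why each field name maps to a list; RFC 9116 also allows the file to be PGP-signed, which a production parser would need to handle.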

Set up your Report for Easy Intake & Disclosure

Suggestion: Write your initial disclosure for public consumption as well as for the maintainer you are contacting. This will prevent you from needing to rewrite your report for public consumption at the end of the vulnerability handling process.

Provide Useful Information

Your vulnerability report should have enough vulnerability details in it so that the maintainer can reproduce the issue themselves. Be sure to include any public references as well. Some questions you should answer in your report include:

  • What problem did you find?
  • What versions do you believe are vulnerable?
  • What is the vulnerable product, package, or project?
  • What steps did you take to find the vulnerability?
    • Include any specific software and hardware requirements needed to reproduce the vulnerability.
    • If you have a proof-of-concept (POC) or example exploit, include it after you have established a secure channel to the maintainer or project security members.
  • What lines in the source code of the project are vulnerable?
    • Are you able to identify the root cause?
    • Can you identify when this was introduced?
  • What is the impact if exploited?
    • Can you estimate the severity (CVSS v3.1) score? Be aware that the maintainer/project may revise your scoring since they are experts in the code. This is an opportunity to educate them on how the flaw works and talk through impacts with them.
    • What CWE does the vulnerability fall under? CWE information helps categorize the attack - physical, memory safety (e.g., buffer overflow), ROP, etc. This helps show where the issue falls and gives the maintainer information to potentially find other related vulnerabilities.
  • Are you aware if this is actively being exploited?
  • Can you suggest any remediation or mitigation steps? This will almost always help speed the resolution of the vulnerability. Patches are always welcome.
  • Do you have any time constraints that affect this disclosure (submitted to a conference, existing expected date of disclosure, personal disclosure policy)?
  • Has this information been shared with anyone else, and if so, when and how?
  • Are you willing to meet virtually with the maintainer to demonstrate what you’ve found?

An example vulnerability report template is included in the appendix below.
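For the severity estimate, CVSS v3.1 scores are usually communicated as a vector string alongside the numeric score, so the maintainer can see exactly which assumptions produced it. For example, an unauthenticated, network-reachable flaw with high confidentiality, integrity, and availability impact would be written as:

```
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H    (base score 9.8, Critical)
```

Here AV:N means the attack vector is the network, AC:L low attack complexity, PR:N no privileges required, UI:N no user interaction, S:U unchanged scope, and C/I/A:H high impact on confidentiality, integrity, and availability. The exact score should be computed with the official FIRST CVSS calculator rather than by hand. Similarly, a CWE reference can be as simple as naming the weakness class, e.g. CWE-787 (Out-of-bounds Write) for a buffer overflow that corrupts memory.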

Disclosure

Although vulnerability disclosure has been normalized significantly over the past decade, there are still some parties who may resist disclosure of the vulnerabilities in their software. This may be because they are new to the process or do not have the capabilities or resources to manage a vulnerability disclosure process. Always assume positive intent when reporting to open source maintainers, especially since they may be unpaid volunteers or simply people who love writing code and may not have the training or background to understand what is being reported to them.

Properly setting expectations for the disclosure

When going into vulnerability disclosure, it’s advisable to declare your own goals and expectations early so that all parties understand the boundaries of the engagement.

While most vulnerability disclosures go well, sometimes maintainers are unresponsive, there are disagreements about the vulnerability's impact, or the maintainers may prioritize other work over fixing a vulnerability. Stating an up-front vulnerability disclosure timeline can set clear expectations around how and when disclosure will occur. These clear deadlines will help ensure that the maintainer has a reasonable amount of time to fix the vulnerability, that users aren’t left unaware for an extended period of time, and that a fix is released as soon as possible.

A note about software development

Typically, software developers follow a Software Development Life Cycle (SDLC) model consisting of at least some of the following phases, each leading to the next. The process is usually iterative and cycles back from the bottom to the top (more info in this article):

  • Gathering requirements - identify the problem(s) and prioritize.
  • Design - Analyze the gathered requirements and understand how to fulfill them.
  • Development - Write the code that implements the design, review and approve it.
  • Verification - Verify that the code solves the problem without creating new issues or regressing existing functionality.
  • Release - Build and package the new code so users can apply the solution toward their goals.
  • Maintenance/Monitoring/Feedback - Maintain the solution over time while collecting feedback that forms the basis for the next iteration’s requirements.

In general, a vulnerability finder's external collaboration with an open source project team will occur during the final maintenance/feedback phase, and the intent is that the vulnerability is given high enough priority that the next cycle includes its fix.

Disclosure Options

We have listed some of the most common disclosure methods below. These are not mutually exclusive and can sometimes overlap or chain together. Whichever route you adopt is up to you, but the OpenSSF offers its guidance here: at a high level, any form of vulnerability disclosure should be done in a coordinated fashion, following the CVD guidelines throughout this document. Below is a list of disclosure options that explains what each may look like. Unless specifically noted, each of these should be a joint effort between the finder and the project.

Coordinating vulnerability disclosures generally involves a healthy collaboration between the finder and the project maintainers. Many details need to be agreed upon for a smooth disclosure, so we suggest discussing those early in every CVD instance between both parties.

A small note about the disclosure text: we recommend researchers make the content of their report as complete as possible early in the process. A complete analysis and report saves considerable time and effort, as it can be reused as a security advisory, disclosure note, blog post, and initial report.

| Disclosure Option | Description | Example Scenarios | OpenSSF preferred |
| --- | --- | --- | --- |
| No Disclosure | The finder keeps the information to themselves and does not share it with the maintainer, the public, or with others in private. | Corporate situation | |
| Coordinated | CVD (Coordinated Vulnerability Disclosure) involves “gathering information from vulnerability finders, coordinating the sharing of that information between relevant stakeholders, and disclosing the existence of software vulnerabilities and their mitigations to various stakeholders, including the public” (definition from CERT/CC). | | X |
| Limited | Publicly disclosing part of the information around the vulnerability, but keeping some information private (i.e., not releasing a POC). | Non-Disclosure Agreement (NDA) in place, trusted partners, … | X |
| Full | Publicly announcing the full details (research, finding results, and in some cases proofs of concept) of the vulnerability. | Full disclosure on a third-party platform | X |
| 0-day | Not for money / full disclosure: researchers sometimes do not wish to be paid for their work and fully disclose vulnerabilities and proofs of concept, leaving impacted products vulnerable until fixed and patched. Selling: for-profit in general; the researcher sells their findings to private firms offering bounties for undisclosed vulnerabilities. | | |

Getting a CVE ID

A CVE ID is a unique identifier for a vulnerability. If a vulnerability exists in software that is, in any way, shipped to an end user, it should receive a CVE. Having a CVE identifier for your vulnerability helps ensure that it is not confused with another vulnerability and increases the likelihood that end-users and the security community will see and address the issue. Many vulnerability response processes rely on a CVE being assigned to a vulnerability to review and remediate it. CVEs help users learn about security risks in specific systems versions so they can choose to update to patched versions.

A reporter can obtain a CVE identifier from the CVE program at any point during the disclosure process. This can and should be done after reporting it to the maintainer, but ideally before public disclosure so the identifier can be included. To get a CVE, the reporter or maintainer should reach out to the appropriate CVE Numbering Authority (CNA). CNAs are organizations that are authorized to assign CVE identifiers to new vulnerabilities. CNAs have various scopes and do not issue CVEs outside of their scope. The project maintainers may have already established a relationship with one CNA whose scope covers the project and to whom they will go first for a CVE assignment. Reporters should collaborate with the maintainer to get a CVE whenever possible. MITRE, the organization that manages CVE administration, is also a “CNA of Last Resort” for open source projects and can be used if no more appropriately scoped CNA is available.

Alternative Vulnerability Identifiers, Formats, and Databases

In addition to CVE, many other vulnerability identifiers, formats, and databases exist that you may see used across the security industry. CVE exists as a vulnerability identifier, and the program also hosts a database of this information. Vulnerabilities can be assigned different or complementary identifiers depending on where the information has been shared.

Troubleshooting common challenges to Coordinated Vulnerability Disclosure

Sometimes, the coordinated vulnerability disclosure process does not go smoothly. In this section, we offer advice for a few potential challenges you may encounter.

The project maintainers did not consider my report a security issue

If your vulnerability report was not considered a security issue by the maintainers, you can ask direct questions about that decision to better understand the rationale. Check whether the maintainer engaged domain experts to gather additional views and opinions. Ask if any aspects of the report were unclear, and make sure the following components were included in your original report and/or any follow-up communications:

  • In which lines of code is the vulnerability located?
  • How specifically does this vulnerability create a security risk?
  • If the project leaves the code as it is, what could an attacker ultimately do?
  • How can the project replicate the issue? / Do you have a working proof-of-concept that you would be willing to show the team?
  • What would the project have to do to fix the issue?

If you are willing to invest additional time to help the project better understand the issue so that they can fix it, make this clear to the maintainer. If they accept the extra help, be respectful of their time and be collaborative and straightforward in your approach so that you can work together to remediate the vulnerability.

Notes on 0-day Vulnerabilities

According to Trend Micro, a zero-day vulnerability is a vulnerability in a system or device that has been disclosed but is not yet patched. An exploit that attacks a zero-day vulnerability is called a zero-day exploit. (source)

0-day vulnerabilities can be released via a public blog or mailing list, or shared via channels like Twitter and/or the Seclists Full Disclosure mailing list. When using this disclosure path, you should still attempt to get a CVE ID and include it in your initial disclosure. If the assisting CNA is unwilling to issue you a CVE ID, for whatever reason, the CVE system has a formal appeals process.

Acknowledgements

Thank you to the wider security and open source communities whose work informed this guide, including the Google Open Source Programs Office and Google security teams, the OpenStack Vulnerability Management Process, Project Zero's disclosure process, and the Kubernetes security and disclosure process. We also would like to highlight the many resources we leveraged from MITRE on the creation of this paper.

Appendices

Glossary

For a Glossary of terms and their definitions, please refer to the OpenSSF’s Education SIG’s Terminology repository.

Bibliography

You can also find this bibliography—along with webpage snapshots—in this Zotero group.

Templates/Examples